Today it is evident that recent surpluses were the result not only of hard choices made earlier in the 1990s, but also of fortuitous economic, demographic, and policy trends that are no longer working for us as we enter the 21st century. In retrospect, the nation emerged from deficits of nearly three decades only to find itself in what has been called “the eye of the storm.” The passage to surpluses was aided by a tailwind consisting of (1) extraordinarily strong economic growth, (2) a slowing of health care cost growth, (3) a demographic holiday stemming from low birth rates during the Depression and World War II paired with a large workforce resulting from the post-war baby boom—which together gave rise to a stable worker-to-beneficiary ratio in Social Security, and (4) the fall of the Soviet Union permitting a decline in defense spending as a share of the economy. The fiscal winds have now shifted—many of these fortunate trends have now reversed course and are making the choices harder. Although it appears the economy may have turned the corner, forecasters are not showing a return to the extremely rapid growth the nation enjoyed during the last half of the nineties. Health care costs have once again resumed growing at double-digit rates. Reductions in defense spending can no longer be used as a means to help fund other claims on the budget; indeed, spending on defense and homeland security will grow as we seek to defeat terrorism worldwide. Finally—and I know this is one of the reasons you invited me here today—the nation’s demographic holiday is ending. In 2008—only 6 years from now—demographic storm clouds will begin to shadow the baseline as the first wave of baby boomers become eligible to claim Social Security. However one allocates credit across the events and decisions that led to years of surpluses, we benefited from that achievement. 
These large surpluses not only helped in the short term by reducing debt and interest costs but also strengthened the budget and the economy for the longer term. The budgetary surpluses of recent years put us in a stronger position to respond both to the events of September 11 and to the economic slowdown than would otherwise have been the case. However, going forward, the nation’s commitment to surpluses will truly be tested. For the last few years surpluses were built into the baseline so that, given a lack of policy action, there would be a surplus. Last year, the Congressional Budget Office (CBO) baseline not only projected unified surpluses for at least the 10-year window but also substantial surpluses in the non-Social Security portion of the budget. Saving the Social Security surplus became an achievable and compelling fiscal policy goal for the nation in this context. This is no longer true. At least for the next several years the baseline does not return to unified surplus. A surplus in the non-Social Security portion of the budget is not projected under the baseline to emerge until 2010. As a result, explicit policy actions on spending and/or revenue will be necessary to return to and maintain surpluses over the next 10 years. Although in important ways you begin the task of crafting a budget this year in a very different place than you did last year, in other ways the responsibilities remain the same. We still have a stewardship obligation to future generations. By stewardship obligation I mean that in making budget decisions today, it is important to be mindful of their impact on the future. This means that in responding to the legitimate needs of today, we should take into account the longer-term fiscal pressures we face. 
The message of GAO’s long-term simulations, updated using CBO’s new budget estimates, is consistent with previous simulations: absent change, spending for federal health and retirement programs eventually overwhelms all other federal spending. As we look ahead we face an unprecedented demographic challenge. A nation that has prided itself on its youth will become older. Between now and 2035, the number of people who are 65 or over will double. As the share of the population over 65 climbs, federal spending on the elderly will absorb larger and ultimately unsustainable shares of the federal budget. Federal health and retirement spending are expected to surge as people live longer and spend more time in retirement. In addition, advances in medical technology are likely to keep pushing up the cost of providing health care. Moreover, the baby boomers will have left behind fewer workers to support them in retirement, prompting a slower rate of economic growth from which to finance these higher costs. Absent substantive change in related entitlement programs, large deficits return, requiring a combination of unprecedented spending cuts in other areas, and/or unprecedented tax increases, and/or substantially increased borrowing from the public (or correspondingly less debt reduction than would otherwise have been the case). These trends have widespread implications for our society, our culture, our economy, and—of most relevance here—our budget. Ultimately, as this Committee and its counterpart in the House recommended on October 4, the federal government should attempt to return to a position of surplus as the economy returns to a higher growth path. Returning to surpluses will take place against the backdrop of greater competition of claims within the budget. 
Although budget balance may have been the desired fiscal position in the past decade, surpluses would promote the level of savings and investment necessary to help future generations better afford the commitments of an aging society. Early action is important. We all recognize that we have urgent matters to address as a nation and our history shows we have been willing to run deficits during wars and recessions. However, it remains important to get on with the task of addressing the long-term pressures sooner rather than later. Some will suggest that early action may not be necessary—for example, that faster economic growth may enable a smaller pool of workers to more easily finance the baby boom retirement. While this might happen, the best estimates of the actuaries suggest it is unlikely. CBO has also said that the nation’s long-term fiscal outlook will largely be determined by federal spending for retirees, especially for health. Although long-term projections are inherently more uncertain than short-term forecasts, in some ways we can be surer about the outlook 20 years from now since it is driven by known demographics. The swing in 1-, 5-, and 10-year projections over the last 12 months has served to emphasize the extent to which short-term projections are subject to uncertainty. And CBO notes that this year the near-term projections are subject to unusual uncertainties as the nation wages war on terrorism and recovers from a recession. CBO pointed out that it is more difficult to forecast the economy when it is entering or exiting a recession. This year there are additional uncertainties in the near-term budget outlook. CBO’s reference case—the baseline—from which you begin your deliberations (and which in the first 10 years is the underpinning for our long-term model) is a representation of current laws and policies. 
Thus, by definition it does not account for the effects of future legislation, including likely increases in spending for defense and homeland security to which both parties have agreed in principle. Nor, as CBO noted, does it make assumptions about a number of issues, e.g., the extension of agriculture programs, Medicare prescription drug coverage, changes in the Alternative Minimum Tax, or the extension of various expiring tax provisions. Given this extreme uncertainty around the next 1 to 5 years, why look out 20 or 30 years? Absent some draconian or unexpected dramatic event, the long-term budget outlook is driven by factors already in motion—most notably the aging of the population. In previous testimonies before you, I have talked about a demographic tidal wave. Beginning about 2010, the share of the population that is age 65 or older will begin to climb, surpassing 20 percent by 2035. (See fig. 1.) Because of the coming demographic shift, the message from our simulations remains the same as last year—indeed the same as when we first published results from our long-term model in 1992: Absent policy change, in the long term, persistent deficits and escalating debt driven by entitlement spending will overwhelm the budget. This year we ran three different policy paths to illustrate the implications of a range of budgetary choices. I’d like to emphasize again that these simulations are not intended to endorse a particular policy but rather to illustrate the long-term implications of different scenarios. All three scenarios begin with CBO’s baseline estimates. The first starts with the baseline where for the first 10 years tax and entitlement laws are unchanged—including sunset provisions—and discretionary spending grows with inflation. After the first 10 years, we hold discretionary spending and revenues constant as a share of gross domestic product (GDP) and allow Social Security and Medicare to grow based on the actuaries’ intermediate estimates. 
In this path, the unified surpluses that emerge in 2004 are saved. Nevertheless, deficits return in 2036. At the other end is an alternative policy path in which discretionary spending grows with the economy in the first 10 years and in which last year’s tax cuts are extended. This yields a smaller period of surpluses with deficits returning in 2011. In both of these paths taxes remain constant as a share of GDP after 2012; this is, of course, a policy decision. To illustrate something in between these two paths, we simulated a third that tracks the CBO baseline until 2010. After 2010 we assume that the full Social Security surplus is saved through 2024—this requires some combination of tax and spending policy actions. In this simulation deficits reemerge in 2025. (See fig. 2.) In all three paths, surpluses eventually give way to large and persistent deficits. These simulations show that there is a benefit to fiscal discipline—it delays the return to deficits—but that even the most demanding path we simulated—a path that does not provide for funding Presidential or many Congressional initiatives—is structurally imbalanced over the long term. Although savings from higher surpluses are important, they must be coupled with action to slow the long-term drivers of projected deficits, i.e. Social Security and health programs. Surpluses can help—they could, for example, facilitate the needed reforms by providing resources to ease transition costs—but, by themselves, surpluses will not be sufficient. In the long term, under all three paths federal budgetary flexibility becomes increasingly constrained and eventually disappears. To move into the future with no changes in federal health and retirement programs is to envision a very different role for the federal government. 
Assuming, for example, that last year’s tax reductions are made permanent and discretionary spending keeps pace with the economy, spending for net interest, Social Security, Medicare, and Medicaid consumes nearly three-quarters of federal revenue by 2030, leaving little room for other federal priorities including defense and education. By 2050, total federal revenue is insufficient to fund entitlement spending and interest payments—and deficits are escalating out of control. (See fig. 3.) Reducing the relative future burdens of Social Security and federal health programs is critical to promoting a sustainable budget policy for the longer term. Absent reform, the impact of federal health and retirement programs on budget choices will be felt as the baby boom generation begins to retire. While much of the public debate concerning the Social Security and Medicare programs focuses on trust fund balances—that is, on the programs’ solvency—the larger issue concerns sustainability. The 2001 Trustees Reports estimate that the Old-Age and Survivors Insurance and Disability Insurance (OASDI) Trust Funds will remain solvent through 2038 and the Hospital Insurance (HI) Trust Fund through 2029. Furthermore, because of the nature of federal trust funds, HI and OASDI Trust Fund balances do not provide meaningful information about program sustainability—that is, the government’s fiscal capacity to pay benefits when the program’s cash income falls below benefit expenses. From this perspective, the net cash impact of the trust funds on the government as a whole—not trust fund solvency—is the important measure. Under the trustees’ intermediate assumptions, the OASDI Trust Funds are projected to have a cash deficit beginning in 2016 and the HI Trust Fund a deficit also beginning in 2016. (See fig. 4.) At that point, the programs become net claimants on the Treasury. 
In addition, as we have noted in other testimony, a focus on HI solvency presents an incomplete picture of the Medicare program’s expected future fiscal claims. The Supplementary Medical Insurance (SMI) portion of Medicare, which is not reflected in the HI solvency measure, is projected to grow even faster than HI in the near future. According to the best estimates of the Medicare trustees, Medicare HI and SMI together will double as a share of GDP between 2000 and 2030 (from 2.2 percent to 4.5 percent) and reach 8.5 percent of GDP in 2075. Under the trustees’ best estimates, Social Security spending will grow as a share of GDP from 4.2 to 6.5 percent between 2000 and 2030, reaching 6.7 percent in 2075. To finance these cash deficits, Social Security and the Hospital Insurance portion of Medicare will need to draw on their special issue Treasury securities acquired during the years when these programs generated cash surpluses. This negative cash flow will place increased pressure on the federal budget to raise the resources necessary to meet the programs’ ongoing costs. In essence, for OASDI or HI to “redeem” their securities, the government will need to obtain cash through increased taxes, and/or spending cuts, and/or increased borrowing from the public (or correspondingly less debt reduction than would have been the case had cash flow remained positive). Our long-term simulations illustrate the magnitude of the fiscal challenges associated with an aging society and the significance of the related challenges the government will be called upon to address. As we have stated elsewhere, early action to change these programs would yield the highest fiscal dividends for the federal budget and would provide a longer period for prospective beneficiaries to make adjustments in their own planning. Waiting to build economic resources and reform future claims entails risks. 
First, we lose an important window where today’s relatively large workforce can increase saving and enhance productivity, two elements critical to growing the future economy. We lose the opportunity to reduce the burden of interest in the federal budget, thereby creating a legacy of higher debt as well as elderly entitlement spending for the relatively smaller workforce of the future. Most critically, we risk losing the opportunity to phase in changes gradually so that all can make the adjustments needed in private and public plans to accommodate this historic shift. Unfortunately, the long-range challenge has become more difficult, and the window of opportunity to address the entitlement challenge is narrowing. It remains more important than ever to return to these issues over the next several years. Ultimately, the critical question is not how much a trust fund has in assets, but whether the government as a whole can afford the promised benefits now and in the future and at what cost to other claims on scarce resources. One of the reasons to address these longer-term pressures is their potential to crowd out the capacity to support other important priorities throughout the rest of the budget. The tragedy of September 11 made us all realize the benefits fiscal flexibility provides to our nation’s capacity to respond to urgent and newly emergent needs. Obviously we will allocate whatever resources are necessary to protect the nation. However, these new commitments will compete with and increase the pressure on other priorities within the budget. Financing these compelling new claims within an overall fiscal framework that eventually returns the budget to surplus is a tall order indeed. The budget process is the one place where we as a nation can conduct a healthy debate about competing claims and new priorities. However, such a debate will be needlessly constrained if only new proposals and activities are on the table. 
A fundamental review of existing programs and operations can create much-needed fiscal flexibility to address emerging needs by weeding out programs that have proven to be outdated, poorly targeted, or inefficient in their design and management. It is always easier to subject proposals for new activities or programs to greater scrutiny than that given to existing ones. It is easy to treat existing activities as “given” and force new proposals to compete only with each other. Such an approach would move us further, rather than nearer, to budgetary surpluses. Moreover, it is healthy for the nation periodically to review and update its programs, activities and priorities. As we have discussed previously, many programs were designed years ago to respond to earlier challenges. In the early years of a new century, we have been reminded how much things have changed. For perspective, students who started college this past fall were 9 years old when the Soviet Union broke apart and have no memory of the Cold War; their lifetimes have always known microcomputers and AIDS. In previous testimony, both before this Committee and elsewhere, I noted that it should be the norm to reconsider the relevance or “fit” of any federal program or activity in today’s world and for the future. Such a review might weed out programs that have proven to be outdated or persistently ineffective, or alternatively could prompt us to update and modernize activities through such actions as improving program targeting and efficiency, consolidation, or reengineering of processes and operations. Ultimately, we should strive to hand to the next generations the legacy of a government that is effective and relevant to a changing society—a government that is as free as possible of outmoded commitments and operations that can inappropriately encumber the future. We need to think about what government should do in the 21st century and how it should do business. 
The events of last fall have provided an impetus for some agencies to rethink approaches to long-standing problems and concerns. In particular, agencies will need to reassess their strategic goals and priorities to enable them to better target available resources to address urgent national preparedness needs. For instance, the threat to air travel has already prompted attention to chronic problems with airport security that we and others have been pointing to for years. Moreover, the crisis might prompt a healthy reassessment of the broader transportation policy framework with an eye to improving the integration of air, rail, and highway systems to better move people and goods. Other long-standing problems also take on increased relevance in today’s world. Take, for example, food safety. Problems such as overlapping and duplicative inspections across many federal agencies, poor coordination, and inefficient allocations of resources are not new and have hampered productivity and safety for years. However, they take on new meaning and urgency given the potential threat from bioterrorism. We have argued for a consolidated food safety initiative merging the separate programs of the multiple federal agencies involved. Such a consolidated approach can facilitate a concerted and effective response to the new threats. The federal role in law enforcement is another area that is ripe for reexamination following the events of September 11. In the past 20 years, the federal government has taken on a larger role in financing criminal justice activities that have traditionally been viewed as the province of the state and local sector. This is reflected in the growth of the federal share of financing—from 12 percent in 1982 to nearly 20 percent in 1999. 
Given the daunting new law enforcement responsibilities in the wake of September 11 and limited budgetary resources at all levels, the question is whether these additional responsibilities should prompt us to rethink the priorities and roles of federal, state, and local levels of government in the criminal justice area and ultimately whether some activities are affordable in this new setting. The Federal Bureau of Investigation has already begun thinking about reprioritization and how its investigative resources will shift, given the new challenges posed by the terrorism threat. With the Coast Guard’s focus on homeland security, it has de-emphasized some of its other critical missions in the short term, most notably fisheries enforcement and drug and migrant interdiction. The Coast Guard is currently developing a longer-term mission strategy, although it has no plans at present to revise the schedule or asset mix for its Deepwater Project (which will be awarded mid-2002). In rethinking federal missions and strategies, it is important to examine not only spending programs but the wide range of other more indirect tools of governance the federal government uses to address national objectives. These tools include loans and loan guarantees, tax expenditures, and regulations. For instance, in fiscal year 2000, the federal health care and Medicare budget functions included $37 billion in discretionary budget authority, $319 billion in entitlement outlays, $5 million in loan guarantees, and $91 billion in tax expenditures. The outcomes achieved by these various tools are in a very real sense highly interdependent and are predicated on the response by a wide range of third parties, such as states and localities and private employers, whose involvement has become more critical to the implementation of these federal initiatives. The choice and design of these tools is critical in determining whether and how federal objectives will be addressed by these third parties. 
Any review of the base of existing policy should address this broader picture of federal involvement. GAO has also identified a number of areas warranting reconsideration based on program performance, targeting, and costs. Every year, we issue a report identifying specific options, many scored by CBO, for congressional consideration stemming from our audit and evaluation work. This report provides opportunities for (1) reassessing objectives of specific federal programs, (2) improved targeting of benefits, and (3) improving the efficiency and management of federal initiatives. Just as long-standing areas of federal involvement need re-examination, so proposed new initiatives designed to address the new terrorism threat need appropriate review. With the focus on counterterrorism, you will undoubtedly face many proposals and claims redefined as counterterrorism activities, and the Congress will need to watch for such redefinition. It will be especially important to distinguish among these claims. In sorting through these proposals, we might apply investment criteria in making choices. Well-chosen enhancements to the nation’s infrastructure are an important part of our national preparedness strategy. Investments in human capital for certain areas such as public health or airport security will also be necessary to foster and maintain the skill sets needed to respond to the threats facing us. A variety of governmental tools will be proposed to address these challenges—grants, loans, tax expenditures, and/or direct federal administration. The involvement of a wide range of third parties—state and local governments, nonprofits, private corporations, and even other nations—will be a vital part of the national response as well. In the short term, we will do whatever is necessary to get this nation back on its feet and compassionately deal with the human tragedies left in its wake. 
However, as we think about our longer-term preparedness and develop a comprehensive homeland security strategy, we can and should select those programs and tools that promise to provide the most cost-effective approaches to achieve our goals. Today the Congress faces the challenge of sorting out these many claims on the federal budget without the fiscal benchmarks and rules that served as guides through the years of deficit reduction. Going forward, new rules and goals will be important both to ensure fiscal discipline as we sort through these new and compelling claims and to prompt policymakers to focus on the longer-term implications of current policies and programs. For more than a decade, budget process adaptations have been designed to reach a zero deficit. With the advent of surpluses, a new framework was needed—one that would permit accommodating pent-up demands but not eliminate all controls. A broad consensus seemed to develop to use saving the Social Security surplus or maintaining on-budget balance as a kind of benchmark. However, the combination of the economic slowdown and the need to respond to the events of September 11 has overtaken that measure. Once again, Congress faces the challenge of designing a budget control mechanism. Last October, Mr. Chairman, you and your colleague Senator Domenici and your House counterparts called for a return to budget surplus as a fiscal goal. This remains an important fiscal goal, but achieving it will not be easy. In the near term, limits on discretionary spending may be necessary to prompt the kind of reexamination of the base I discussed above. There are no easy choices. There will be disagreements about the merits of a given activity—reasonable people can disagree about federal priorities. There may also be disagreements about the appropriate response to program failure: Should the program be modified or terminated? Would the program work better with more money or should funding be cut? 
Spending limits can be used to force choices; they are more likely to do so, however, if they are set at levels viewed as reasonable by those who must comply with them. Spending limits alone cannot force a reexamination of existing programs and activities. However, the recognition that for most agencies the new responsibilities acquired since September 11 cannot merely be added to existing duties requires that decisions be made about priorities. In the last decade Congress and the Administration put in place a set of laws designed to improve information about cost and performance. This information can help inform the debate about what the federal government should do. In addition, the budget debate can benefit from the kind of framework I discussed above. In previous testimony before this committee, I suggested that Congress might equip itself to engage in this debate by developing a congressional performance resolution to target its oversight on certain governmentwide performance issues cutting across agencies and programs. Along with caps, this and other measures might help ensure that Congress becomes part of the debate over reprioritization and government performance. The dramatic shift in budget projections since last year has prompted discussion of shortening the budget window. This may well be a sensible approach to reducing uncertainty. However, such a change should be coupled with steps to provide a broader and longer-term fiscal horizon: goals and metrics to address the longer-term implications of today’s choices. This does not mean that we should budget for a 20- or 30-year period. It does mean considering establishing indicators and targets that bring a long-term perspective to budget deliberations and a process that prompts attention to the long-term implications of today’s decisions. Periodic simulations along the lines we and CBO have developed can and should become a regular feature of budget debate. 
We would be the first to say that the simulations are not predictions of the future or point estimates; rather, they serve as indicators—or warning lights—about the magnitude and direction of different policy profiles. These scenarios are particularly helpful in comparing long-term consequences of different fiscal paths or major reforms of entitlements using the same assumptions. As I said earlier, the demographic tidal wave that drives the long-term budget challenge is a known element with predictable consequences. Some kind of fiscal targets may be helpful. As a way to frame the debate, targets can remind us that today’s decisions are not only about current needs but also about how fiscal policy affects the choices over the longer term. Other nations have found it useful to embrace broader targets such as debt-to-GDP ratios, or surpluses equal to a percent of GDP over the business cycle. To work over time, targets should not be rigid—it is in the nature of things that they will sometimes be missed. It should be possible to make some sort of compelling argument for the target—and it should be relatively simple to explain. Reaching a target is not a straight line but an iterative process. The other nations we have studied have found that targets prompted them to take advantage of windows of opportunity to save for the future and that decisionmakers must have flexibility each year to weigh pressing short-term needs and adjust the fiscal path without abandoning the longer-term framework. In re-examining what I have called the “drivers” of the long-term budget, we need to think about new metrics. We have been locked into the artifacts of the trust funds, which do not serve as appropriate signals for timely action to address the growth in these programs. As I mentioned earlier, trust fund solvency does not answer the question of whether a program is sustainable. 
Although aggregate simulations are driven by these programs, the need for a longer-term focus is about more than Social Security and Medicare. In recent years there has been an increased recognition of the long-term costs of Social Security and Medicare. While these are the largest and most important long-term commitments—and the ones that drive the long-term outlook—they are not the only ones in the budget that affect future fiscal flexibility. For Congress, the President, and the public to make informed decisions about these other programs, it is important to understand their long-term cost implications. A longer time horizon is useful not only at the macro level but also at the micro-policy level. I am not suggesting that detailed budget estimates could be made for all programs with long-term cost implications. However, better information on the long-term costs of commitments like employee pension and health benefits and environmental cleanup could be made available. Here again, new concepts and metrics may be useful. We have been developing the concept of “fiscal exposures” to represent a range of federal commitments—from explicit liabilities to implicit commitments. Exactly how such information would be incorporated into the budget debate would need to be worked out—but it is worth serious examination. In one sense much has changed in the budget world since last February. There are even more compelling needs and demands on the federal budget than a year ago—and policymakers must deal with them absent the surpluses that were projected then. However, the demographic trends that drive the long-term outlook have not changed. The baby boom generation is still getting older and closer to retirement. 
Because of the coming demographic shift, the message from our simulations remains the same as last year and, indeed, the same as when we first published results from our long-term model in 1992: absent changes in Social Security and health programs, in the long term, persistent deficits and escalating debt driven by entitlement spending will overwhelm the budget.
Combating terrorism and ensuring homeland security have created urgent claims on the nation's attention and on the federal budget. Although an economic recovery seems to be underway, the recession that began last spring has had real consequences for the budget. At the same time, the fiscal pressures created by the retirement of the baby boomers and rising health care costs continue unchanged. However, the surpluses also put the nation in a stronger position to respond to the events of September 11 and to the economic slowdown. The nation's commitment to surpluses will be tested. A return to surplus will require sustained discipline and difficult choices. Because the longer-term outlook is driven in large part by known demographic trends, the outlook 20 years from now is more certain than the forecast for the next few years. The message of GAO's updated simulations remains the same as last year: absent structural changes in entitlement programs for the elderly, persistent deficits and escalating debt will overwhelm the budget in the long term. Both longer-term commitments and new commitments undertaken after September 11 sharpen the need to weigh competing claims and new priorities. A fundamental review of existing programs and activities is necessary both to increase fiscal flexibility and to make government fit the modern world. Stated differently, there is a need to consider the proper role of the federal government in the 21st century and how government should do business. The fiscal benchmarks and rules that moved the country from deficit to surplus expire this fiscal year. Any successor system should include a debate about reprioritization today and a better understanding of the long-term implications of different policy choices. Many things that the nation may be able to afford today may not be sustainable in the future.
Air ambulance providers inhabit a unique position at the intersection of aviation and medical services. Providing air ambulance service is capital intensive and requires both aviation and medical investments (see fig. 1). Air ambulances must be ready to deploy at a moment's notice in response to emergencies. Air ambulances are of two main types—rotor wing (helicopter) and fixed-wing aircraft. These two types of aircraft are generally used on different types of missions, with helicopters providing on-scene responses and shorter distance hospital-to-hospital transports and fixed-wing aircraft providing longer transports between airports. Because helicopter air ambulances make up approximately 74 percent of all air ambulances, this report focuses on helicopter air ambulance service. This report also focuses on air ambulance providers that are direct air carriers. Although most people may associate helicopter air ambulances with on-scene response to an emergency such as a car accident, the majority of transports are interfacility, or from hospital to hospital. For example, Air Methods, the largest air ambulance provider, reported in June 2016 that of its total flights in the first quarter of 2016, approximately 70 percent were interfacility and 30 percent were on-scene response. Unlike other aviation services that are scheduled ahead of time, air ambulance transports are initiated only in response to time-sensitive medical-related events. In the case of on-scene response transports, first responders decide when air ambulance service is needed, while hospital staff make decisions regarding when to initiate interfacility transports. Because air ambulance providers transport critically sick or injured patients facing time-sensitive emergencies, patients typically have little to no ability to make cost-saving decisions, such as selecting a provider that participates in the patient's insurance or electing to be transported by ground ambulance. 
On the other hand, air ambulance providers respond to emergencies without regard for a patient's ability to pay and provide the same service regardless of the amount the provider will ultimately be compensated for the transport. Air ambulance providers fall under three main types of business models, which vary in which entity makes business decisions, including setting prices and determining in-network agreements with private insurance. These business models are:

- Hospital-affiliated—may be a department of a hospital or owned by a consortium of hospitals, is typically non-profit, makes business decisions, and provides the medical crew. These providers may operate their own aviation services or contract for the services, often from companies that operate their own air ambulance service as independent providers.

- Independent—a company, typically for-profit, that handles both medical and aviation aspects and makes business decisions.

- Hybrid—a joint venture between a hospital and an independent provider in which the hospital typically provides the medical crew but (unlike the hospital-affiliated model) does not make business decisions, although the hospital name may be branded on the helicopter. Instead, the independent provider makes business decisions such as setting prices.

In 2007, we reported that a few large providers dominated the air ambulance industry, and in 2010, we reported that the industry had shifted since 1999 from mostly hospital-affiliated providers toward independent providers. These trends appear to have continued. For example, in 2015 three for-profit, independent providers together reported operating 692 helicopters, or about 66 percent of the total 1,045 helicopters in the industry that year. These three providers operate helicopters that span all three business model types across multiple states. 
As a result, some of these 692 helicopters may be under contract for aviation services to hospital-affiliated providers, in which case the hospital pays the independent provider a fixed rate and sets the prices charged for the service rather than the independent provider. Air ambulance providers, like other medical service providers, charge standard rates for all transports but receive payments from many sources, often at varying rates. Air ambulance providers charge patients based on a pre-established lift-off fee and per-mile fee, regardless of the medical services provided en route. Providers then receive payments from a mix of sources, depending on the transported patient's insurance coverage. The amount paid by private health insurance also depends on whether the provider has a contract in place with the insurer. Key payers of air ambulance service charges include:

- Medicare—a federal program for people who are 65 or older and certain younger people with disabilities, regardless of income level.

- Medicaid—a joint federal and state program for some people with limited income and resources.

- Private health insurance companies—may have a contractual in-network agreement with an air ambulance provider for a payment rate negotiated ahead of time. Without such a contract, air ambulance providers are considered out of network, and the insurance company's policies set its payment rates.

- Self-pay—patients not covered by insurance.

Whether or not an air ambulance provider may bill a patient for amounts in excess of the amount covered by insurance, as well as any deductibles, coinsurance, or copayment—called balance billing—varies based on the patient's insurance coverage. Under Medicare rules, for example, air ambulance providers are not permitted to balance bill Medicare patients for ambulance services beyond deductibles and coinsurance requirements. 
With respect to Medicaid, providers participating in a state's Medicaid program are required to accept Medicaid payment as payment in full and are prohibited from collecting any additional amounts from Medicaid patients, other than authorized cost-sharing amounts. Patients with private health insurance may be balance billed only when the insurer and provider lack an in-network agreement, while uninsured patients might be held responsible by the provider for the entire price charged (see table 1). CMS sets rates and pays claims for Medicare. Medicare payments, including beneficiary co-payments, for helicopter air ambulance service totaled approximately $460 million in 2014. Although CMS typically sets Medicare payment rates by considering whether payments are adequate for a relatively efficient provider, Medicare rates for air ambulance service were last updated in 2002 as part of a negotiated rulemaking that involved public and industry stakeholders. Between 2002 and 2006, beginning with that negotiated rulemaking, CMS phased in an air ambulance fee schedule as part of a series of Medicare reforms that were enacted into law in 1997. The fee schedule redistributed payments among various types of ambulance services and effectively raised the payment amounts for air ambulance service. Medicare air ambulance payments include three components: a base payment, a separate payment for mileage to the nearest appropriate facility, and a geographic adjustment factor. In addition, there is a permanent add-on payment that includes a 50 percent increase to both the base and mileage rate for rural air ambulance transports. Since 2006, CMS has adjusted rates annually, primarily based on inflation. Although the prices, routes, and services of the air ambulance industry are largely deregulated, DOT oversees certain aspects of the industry. 
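The Medicare payment components described above—a base payment, a mileage payment, a geographic adjustment, and the 50 percent rural add-on—can be sketched as a simple calculation. The dollar amounts and geographic factor below are illustrative assumptions, not actual fee schedule values, and applying the geographic adjustment uniformly to both components is a simplification.

```python
# Sketch of the Medicare air ambulance payment structure described above.
# Dollar amounts and the geographic factor are hypothetical placeholders,
# not actual fee schedule values; the geographic adjustment is applied
# uniformly here for simplicity.

def medicare_air_payment(miles, base_rate, mileage_rate, geo_factor, rural=False):
    """Base payment plus a per-mile payment, geographically adjusted,
    with a 50 percent add-on to both components for rural transports."""
    base = base_rate * geo_factor
    mileage = mileage_rate * miles * geo_factor
    if rural:
        base *= 1.5
        mileage *= 1.5
    return base + mileage

# A hypothetical 40-mile rural transport:
payment = medicare_air_payment(miles=40, base_rate=3000.0,
                               mileage_rate=20.0, geo_factor=1.0, rural=True)
print(round(payment, 2))  # 1.5 * (3000 + 20 * 40) = 5700.0
```

In this sketch, the annual inflation adjustments CMS has made since 2006 would simply scale the base and mileage rates.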
As air carriers, air ambulance providers fall under the ADA, which was designed to promote "maximum reliance on competitive market forces" as the means to best further "efficiency, innovation, and low prices" as well as "variety [and] quality… of air transportation." The ADA also contains a provision that explicitly precludes state-level regulation of matters related to air carrier rates, routes, and services. Beyond aviation safety, which DOT's Federal Aviation Administration (FAA) oversees in a variety of ways, DOT's involvement with the air ambulance industry falls within the Office of the Secretary of Transportation (OST). For example, as an air carrier, an air ambulance provider must obtain economic authority from DOT before offering service. In addition, OST's Office of the General Counsel issues guidance and opinion letters regarding the ADA provision that precludes state-level regulation of air ambulance prices, routes, and services. Furthermore, the Office of the Assistant General Counsel for Aviation Enforcement and Proceedings (Enforcement Office), within OST's Office of the General Counsel, has discretionary authority to investigate whether an air carrier, including an air ambulance provider, has been or is engaged in an unfair method of competition or an unfair or deceptive practice in air transportation or the sale of air transportation. States are involved with air ambulances in several ways, and some have taken action to bring awareness to air ambulance pricing. State emergency medical services offices are responsible for licensing medical services such as emergency medical technicians and ground and air ambulances. In addition, states have the authority to regulate the business of insurance and, as a part of this function, may review insurers' health insurance plans and premium rates. 
Furthermore, each state administers and operates its Medicaid program, including setting payment rates, within broad federal requirements. States across the country have attempted to gather information or raise awareness regarding air ambulance pricing. For example, state governments have held hearings, including in Maryland (2015) and Pennsylvania (2017); New Mexico recently completed a study, and Florida has convened a working group to examine air ambulance pricing issues. Meanwhile, Montana developed a public website that features "frequently asked questions" about air ambulance service and provides information on pricing and the extent of contracting with insurance by provider. Between 2010 and 2014, the median prices providers charged for helicopter air ambulance service approximately doubled. Specifically, according to Medicare data we analyzed, the median price providers charged for helicopter air ambulance transports increased 113 percent between 2010 and 2014. According to private health insurance data we analyzed, the median price charged increased 76 percent between 2010 and 2014 (see fig. 2). For comparison, the consumer price index increased by about 8.5 percent between 2010 and 2014. In 2010, a transport priced at approximately $30,000 was at the 95th percentile—meaning 95 percent of all prices charged were below that amount—according to both Medicare and private health insurance data. In 2014, a transport priced at the same amount—about $30,000—was the median, or 50th percentile, of all prices charged according to these data, while a transport of approximately $50,000 was at the 95th percentile. The increase in median prices charged from 2010 to 2014 may be part of a longer-term trend. For example, representatives from Air Methods, the largest air ambulance provider, reported that they have increased the average price charged per transport from $13,000 in 2007 to $49,800 in 2016—an increase of 283 percent over the past decade. 
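The price increases above can also be expressed in real terms with a standard inflation adjustment, using the figures already cited (113 percent and 76 percent nominal increases against the roughly 8.5 percent CPI increase over 2010 to 2014):

```python
# Convert the nominal price increases cited above into real (inflation-
# adjusted) increases using the ratio of growth factors.

def real_increase(nominal_increase, inflation):
    """Real percent change given nominal percent change and inflation,
    both expressed as decimals (e.g., 1.13 for a 113 percent increase)."""
    return (1 + nominal_increase) / (1 + inflation) - 1

# Medicare-data median price increase of 113 percent vs. 8.5 percent CPI:
print(round(real_increase(1.13, 0.085), 3))  # 0.963, i.e., ~96 percent real
# Private-insurance-data median price increase of 76 percent:
print(round(real_increase(0.76, 0.085), 3))  # 0.622, i.e., ~62 percent real
```

Even after netting out general inflation, both measures show median charges roughly doubling, or nearly so, over the period.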
Air ambulance transports, like many medical services, are generally paid at rates lower than the prices charged. Representatives from the eight providers we spoke to reported that transports of Medicare, Medicaid, and self-pay patients made up approximately 46 to 71 percent of their transports and were paid at particularly low rates. For example, according to Medicare data, median payments per transport increased only slightly between 2010 and 2014—from $6,267 in 2010 to $6,502 in 2014. According to provider representatives, Medicaid and self-pay payments are often lower than Medicare payments. See figure 3 for information on the proportion of provider transports and range of average payment amounts by key payer as reported to us by the eight selected providers in 2016. In contrast to the payments received, these selected providers reported average prices charged ranging from $13,200 to $49,800 per transport in 2016. Representatives of the providers we spoke to said that privately insured patients account for the highest percentage of their revenue. For example, seven of the eight providers indicated that the majority of their transport revenue comes from privately insured patients, who accounted for a minority (22 to 41 percent) of their overall transports in 2016. According to an HCCI report, which includes data from three large, national private health insurers, the median payment these insurers paid per transport increased by 70 percent from 2010 to 2014, from about $15,600 to $26,600. As with prices charged, the range of payment amounts widened between these years, with payments at the upper end increasing more substantially than those at the lower end. Although HCCI data include approximately 40 million individuals with employer-sponsored insurance, according to an HCCI representative, patients in rural areas may be underrepresented in the data. 
Even though the HCCI data show private insurance payments increasing largely in parallel to price increases from 2010 to 2014, representatives from five of the eight providers we spoke to noted that payment rates from private insurance have been declining. Representatives from one provider noted that low payments from insurers occur in certain geographical areas, particularly rural areas, where one insurer covers a large proportion of the population and has a large share of the insurance market. National data on balance billing and on the extent to which providers are contracted with insurers are unavailable. Due to a lack of such information, it is unclear to what extent patients with private health insurance are billed by providers for the difference between the air ambulance price charged and the insurer's payment (balance billing). Some states have attempted to collect balance billing information from patients. For example, Montana collected information on 39 instances of balance billing in 2015 and 2016. Likewise, Michigan reviewed 19 air ambulance balance billing cases between 2013 and 2016, which had an average balance bill of about $31,000. Selected providers reported that factors such as transport costs and volume, payer mix, and competition play a role in prices charged. Costs to provide air ambulance transports are high and relatively fixed. For example, according to Air Methods representatives, operating one air ambulance helicopter requires a staff of 13—4 pilots, 4 nurses, 4 paramedics, and a mechanic—in order to maintain around-the-clock readiness and be able to deploy at any time. In contrast, helicopter tour operators would generally need to employ a pilot only when flights are arranged. Air ambulance providers' costs for air ambulance service are relatively fixed—meaning they do not increase significantly when providers complete more transports. 
For example, personnel and the costs of helicopter ownership are the same regardless of how often the helicopter is used. Providers we spoke to noted that a small portion of their costs—such as fuel—are variable, meaning they increase with the number of transports completed. To be profitable, and thus be in business and provide service, providers must earn sufficient revenues to cover their costs, including their fixed costs. To increase revenue, a provider must increase its number of transports and/or its prices charged. A provider with a lower transport volume must therefore earn higher prices on average across transports in order to be profitable. Representatives from the eight selected providers we spoke to reported average costs per transport, given current transport volumes, of $6,000 to $13,000 in 2016. Representatives from the providers we spoke to agreed that average transport volume per helicopter has decreased but offered different perspectives on this change. According to the Atlas & Database of Air Medical Services, from 2010 to 2014, the number of air medical helicopters nationwide increased by more than 10 percent, from 900 to 1,020. Meanwhile, over the same time period, Medicare and HCCI data do not show a proportionate increase in the number of transports per Medicare or private health insurance beneficiary. Specifically, from 2010 to 2014, the number of air ambulance transports per 1,000 patients was flat for Medicare and decreased slightly among privately insured patients represented in the HCCI data. Representatives from three providers stated that there is an issue with overcapacity or oversaturation in the industry and that the helicopters being added to the industry are in areas with existing coverage and not serving additional demand, thereby reducing the average number of transports per helicopter rather than increasing access to patients previously not covered by the service. 
On the other hand, representatives from four other providers told us that the decrease in transports per helicopter is due to helicopters increasingly being located in rural areas where there is greater need, but less population density, leading to fewer transports per helicopter. As noted earlier, air ambulance providers are dispatched only in response to time-sensitive medical events and so have limited control over transport volume once providing service to an area. Providers we spoke to said their mix of payers also affects prices charged. As noted earlier, providers reported that the majority of their revenue comes from private insurance. In order to increase this revenue from private insurance, providers must increase their prices charged. Representatives from six of the eight providers we spoke to said that they adjust prices charged to receive sufficient revenue from private health insurance to account for lower-reimbursed transports. Providers have limited ability to control the payer mix—the proportion of transports reimbursed by, for example, Medicare or private health insurance—as they do not turn away patients based on insurance coverage. Representatives from three providers reported that the payer mix has shifted over time from private insurance to Medicare as the population ages. For example, one large independent provider reported a 13 percent shift in transport mix from private insurance to Medicare over a 10-year period, while a hospital-affiliated provider reported that since 2013, the percentage of its transports covered by Medicare has increased from 30 to 35 percent, while the percentage of privately insured patients has decreased from 39 to 33 percent. Price increases do not proportionately result in higher revenues when the majority of transports are paid at lower fixed reimbursement levels. For example, representatives from one provider explained that to increase revenue 3 percent, they have to increase prices charged by 15 percent. 
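The volume and payer-mix arithmetic in the preceding paragraphs can be sketched as follows. The cost figures and the revenue share are illustrative assumptions, not data from any provider: they are chosen so the break-even averages fall near the $6,000 to $13,000 per-transport cost range and so the pass-through matches the 15-percent-for-3-percent example reported above.

```python
# Two sketches of the arithmetic described above. All figures are
# illustrative assumptions, not reported provider data.

# 1. Fixed costs spread over volume: fewer transports per helicopter raise
#    the average price a provider must earn per transport to break even.
FIXED_COST_PER_YEAR = 2_500_000    # hypothetical: crew, aircraft, base
VARIABLE_COST_PER_TRANSPORT = 500  # hypothetical: fuel, supplies

def breakeven_avg_price(transports_per_year):
    return FIXED_COST_PER_YEAR / transports_per_year + VARIABLE_COST_PER_TRANSPORT

print(round(breakeven_avg_price(500)))  # 5500  (higher volume, lower average)
print(round(breakeven_avg_price(200)))  # 13000 (lower volume, higher average)

# 2. Payer-mix pass-through: if only a fraction of revenue actually moves
#    with the price charged (Medicare, Medicaid, and much self-pay revenue
#    being effectively fixed), a price increase raises revenue only partially.
def revenue_growth(price_increase, price_responsive_share):
    return price_increase * price_responsive_share

# A 15 percent price increase with a hypothetical 20 percent of revenue
# responsive to price yields roughly a 3 percent revenue increase:
print(round(revenue_growth(0.15, 0.20), 2))  # 0.03
```

Together the two sketches show why declining transports per helicopter and a payer mix dominated by fixed-rate payers both push charges upward.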
The overall competitive environment of the air ambulance industry may also play a role in air ambulance prices. As noted previously, patients do not have control over decisions allocating the use of emergency air ambulance service, such as the choice of air versus ground service or between providers. As a result, patients cannot avoid out-of-network air ambulance providers. In such an environment, providers may not lose transport volume as a result of raising prices or being out of network with private health insurance. Consequently, air ambulance providers are not subject to the price competition that typically occurs in competitive markets, where if prices are too high, consumers will find alternatives such as a lower-priced service or provider. Furthermore, the ADA preempts state-level regulation of prices, routes, and services of air carriers, including air ambulance providers. DOT's guidance notes that once DOT has granted economic authority to an air ambulance provider, "the competitive marketplace, rather than state regulations" controls the provider's prices, routes, and services. Based on our interviews with the eight selected providers, large independent providers may charge higher prices and be less likely to contract with insurers than hospital-affiliated providers. Representatives from the three large independent providers we spoke to reported average prices charged per transport of over $40,000 in 2016, while representatives from the five hospital-affiliated providers reported average prices that ranged from about $13,000 to about $31,000. In addition, representatives from the three large independent providers we spoke to noted they generally do not have contracts with insurers, which, as noted earlier, leaves patients vulnerable to balance billing. 
For example, representatives from one large independent provider noted that they have contracts in place with fewer than 10 of the approximately 1,000 private insurance payers they work with per year—in other words, around one percent. A representative from a large independent provider noted that being out of network with insurance is advantageous to the provider because a patient receiving a balance bill will ask for a higher payment from the insurance company, which often results in higher payment to the air ambulance provider than having a pre-negotiated payment rate with the insurer. On the other hand, a representative from a small hospital-affiliated provider told us that as a non-profit, they feel obligated to contract with the largest insurer in their service area, in part because one of the hospitals the provider is affiliated with contracts with the insurer. The nature of competition in the air ambulance industry may also be affected by the proportion of air ambulance helicopters operated by the three large independent providers, which is growing and may indicate increasing market concentration. Specifically, the three large independent providers reported operating 692 air ambulance helicopters in 2015 and 763 in 2016—an increase from about 66 to 73 percent of all helicopters in the industry. This growth may be due to mergers and acquisitions. For example, in January 2016, Air Methods acquired Tri-State Care Flight, which had a fleet of 22 helicopters. In April 2017, the second largest air ambulance provider, Air Medical Group Holdings (AMGH) announced that it had agreed to acquire Air Medical Resource Group, adding 62 bases across 15 states to its business. In addition, the three large independent air ambulance providers are for-profit and increasingly owned by private equity firms. 
For example, the largest provider in the industry, Air Methods, a publicly traded company, announced in March 2017 that it had entered into an agreement to be acquired by a private equity firm for a total transaction value of approximately $2.5 billion. Meanwhile, AMGH is also held by a private equity firm and was purchased in 2015 for a reported $2 billion. The presence of private equity in the air ambulance industry indicates that investors see profit opportunities in the industry. Despite the above indications of factors that affect prices, an in-depth analysis of these factors is not possible due to a lack of the following types of data.

- Costs to provide service: Data on providers' costs to provide service are not readily available. The Department of Health & Human Services has reported there is no national comprehensive database of ambulance service costs available. Additionally, although the Centers for Medicare & Medicaid Services (CMS) has cost data for a portion of ambulance providers—those owned by hospitals—CMS found that these data had limitations, such as not distinguishing between air and ground ambulance transports. Recently, a study on the costs to provide air ambulance service was prepared by Xcenda, a health care consulting firm, on behalf of the Association of Air Medical Services, an association of air ambulance providers, but this study has limitations. The study notes its findings represent 51 percent of all air ambulance bases nationwide and therefore may not be generalizable to the whole industry. Representatives from four hospital-affiliated providers we spoke to noted they had declined to participate due to concerns over the study's independence and data security. Furthermore, according to Xcenda representatives, the study was designed to assess the adequacy of Medicare's air ambulance payment rates, which may give respondents an incentive to report high costs to justify higher Medicare payments. 
- Number of transports: As noted above, transport volume is a key factor in determining the total revenues earned and costs incurred to provide service. The FAA Modernization and Reform Act of 2012 required that the FAA collect certain specified operations data for the air ambulance industry, including the number of annual transports, and report this information to Congress by 2014 and annually thereafter. In May 2017, FAA provided its first submission under the act to Congress. The submission contains a summary of data collected from helicopter air ambulance operators from April 1, 2015, to December 31, 2015.

- Provider information, including business model type and established prices charged: As noted above, the air ambulance industry may be increasingly concentrated, which could indicate a lack of competition in the industry, such as relatively few providers setting prices for a large portion of the total market. However, industry-wide data are not available, such as prices charged by provider or providers' business model types (hospital-affiliated, independent, or hybrid). Industry information may become more difficult to obtain as private equity firms increasingly own air ambulance providers. For example, upon completion of the acquisition noted above, Air Methods—the largest provider in the industry—will no longer be required to submit periodic reports to the U.S. Securities and Exchange Commission as is required of publicly held corporations, thereby eliminating a key source of publicly available information on the industry. As a result of the lack of industry-wide data, it is unknown how the approximately 73 percent of helicopters operated by the three large providers translates into market share. Likewise, it is difficult to assess the nature of competition in the industry or even determine relationships between providers, such as what entity sets pricing for a particular provider. 
As noted earlier, the helicopters operated by the large independent providers include those contracted to hospital-affiliated providers that set their own prices, as well as hybrid programs where the helicopter is branded as part of the hospital system but the independent provider sets the prices. For example, representatives from AMGH noted that one of their subsidiaries—Air Evac Lifeteam—is not generally contracted with hospitals, but another AMGH subsidiary—Med-Trans—operates mostly hospital-affiliated bases. However, according to these representatives, AMGH handles its own pricing, billing, and collections across 97 percent of all of its bases. The 26 stakeholders we interviewed identified three types of potential actions to address air ambulance pricing, as shown in table 2. Stakeholders expressed mixed views on two of these actions—modifying the ADA and raising Medicare rates. None disagreed with the third action—increased data collection for purposes of investigations (such as of unfair or deceptive practices) or increased transparency regarding prices. Half of the 26 stakeholders we interviewed supported modifying or reevaluating the ADA as it pertains to the air ambulance industry in order to allow states to have more of an oversight role. Some stakeholders said that when the ADA was enacted in 1978, the air ambulance industry was in its infancy, and so the ADA was not formulated with the unique aspects of the air ambulance industry in mind. In addition, some stakeholders told us states are best suited to regulate air ambulance service, noting that states have an incentive to protect patients from large balance bills while also ensuring access to the service. Some states, such as North Dakota, have passed legislation designed to address air ambulance billing that has subsequently been struck down in court. In particular, in 2015, the North Dakota legislature passed a statute intended to protect patients from large balance bills. 
It required, among other things, that air ambulance providers submit documentation indicating that they are participating providers with health insurers in the state that cover a certain proportion of the state’s population in order to be listed on a “primary call list” for dispatching. The state statute also required that air ambulance providers make their fee schedules available to certain requesters, including potential patients, upon request. However, the state statute was challenged in federal district court and in March 2016, the U.S. District Court for the District of North Dakota struck it down as being preempted, ruling that the call list requirement was “precisely the type of state regulation Congress sought to prevent…in the ADA.” North Dakota officials said they would like to see the ADA modified so they could implement such legislation. Subsequent to the ruling, North Dakota enacted a new statute in 2017 which is designed to increase transparency regarding an air ambulance provider’s health insurance network status in non-emergency situations. North Dakota officials we spoke to noted that the most recent legislation, which was signed by the governor in April 2017, will likely be challenged in court. Eight stakeholders opposed modifying the ADA and seven instead suggested raising Medicare rates as an approach to addressing high air ambulance prices. An air ambulance provider association noted that modifying the ADA could create a “patchwork” of regulations nationwide, disrupting the regulatory certainty the industry has been built upon. This association also noted that since air ambulance providers do not turn away patients based on their ability to pay, providers have limited options to cover their costs. Two stakeholders noted that increased Medicare rates would reduce the need for air ambulance providers to increase prices, thereby alleviating pressure on patients with private health insurance. 
Although raising Medicare rates across the board for air ambulance service is being promoted by some stakeholders, particularly the large independent providers, 10 other stakeholders we spoke with disagreed with this approach as a solution to air ambulance pricing issues. Raising air ambulance Medicare rates is being promoted through a national campaign sponsored by several providers and a key industry association, which supports legislation to raise those rates. Meanwhile, the 10 stakeholders who disagreed—2 providers, 1 association representing providers, 4 groups familiar with air ambulance business and billing, 1 selected state, 1 association of state officials, and 1 insurance association—did not view raising Medicare rates as a way to address balance billing. Some of these stakeholders noted that increasing Medicare rates could incentivize further growth in the industry, which could reduce the average number of transports per helicopter, putting pressure on providers to increase prices charged—thereby exacerbating the problem. Further, industry growth may be an indication that Medicare rates are not too low. We have previously reported that when rates are set too low, access to appropriate care for patients covered by Medicare may be adversely affected. However, the growth in the number of air ambulance helicopters indicates that providers are still deciding to provide service under existing Medicare rates. Five of the 26 stakeholders we spoke to—including three groups familiar with air ambulance business and billing and two providers—told us that DOT should collect information to better understand the air ambulance industry. In addition, the federal standards for internal control state that management should identify information needed to achieve objectives and address risks. 
Such risks in the air ambulance industry could include the following:

Matters related to industry concentration: The ADA provides that in carrying out its economic regulation authorities, DOT should consider, among other things, “avoiding unreasonable industry concentration” when determining what is in the public interest and consistent with public convenience and necessity. As noted earlier, three large air ambulance providers operated 73 percent of the total helicopters in the industry in 2016, although the extent to which this translates into market share is unknown.

Unfair or deceptive practices: DOT could potentially use such information to investigate concerns or complaints of unfair or deceptive practices, which, as noted previously, DOT through its Enforcement Office has discretionary authority to investigate.

An official from DOT’s Enforcement Office noted that DOT has not exercised its discretionary authority to investigate air ambulance providers. DOT officials noted that DOT needs additional information about the overall industry and that there is a dearth of information about the industry generally. For example, the officials told us DOT has received very few air ambulance complaints. In particular, the officials said they searched their database and found a small number of complaints since 2006. DOT officials noted they believe the small number of complaints may be due to consumers not thinking of DOT when encountering issues with air ambulance pricing, and instead filing complaints with other entities such as their health insurer or state department of insurance. As noted earlier, some states have collected such information—for example, Michigan and Montana together collected 58 instances of balance billing since 2013. 
For the commercial airline industry, which is also subject to the ADA, DOT’s Enforcement Office gives consumers multiple options for filing airline service complaints, including through an online form that allows users to select from an extensive drop-down menu of U.S. and foreign air carriers. In May 2017, DOT added “air ambulance (all)” as an option on the online complaint form, with a field to manually enter an air ambulance provider’s name. However, the DOT website does not have online instructions on how to file air ambulance complaints. DOT’s Enforcement Office has a website with information such as how to file a complaint (by phone, mail, or online), what types of complaints to file (those about service other than safety or security issues), and how DOT uses the submitted information. It is unclear how, if at all, such information applies to air ambulance complaints. For example, DOT officials noted they would like consumers to know the boundaries of DOT’s jurisdiction regarding air ambulance service. Without communicating this and other aspects such as how to file an air ambulance complaint, DOT has limited ability to understand the industry including the nature of competition that could affect its decisions on whether to pursue investigations into potential unfair or deceptive practices. Some selected stakeholders suggested increased price transparency as a solution to address air ambulance pricing issues. In particular, nine stakeholders we spoke to—two providers, two groups familiar with air ambulance business and billing, two insurance associations, two selected states, and one group involved with consumer policy or research—noted that price transparency could provide benefits to providers and the industry, patients, insurance companies and other payers, hospitals, and first responders. Furthermore, the federal standards for internal control state that management should externally communicate information needed to achieve objectives and address risks. 
According to a DOT official, DOT enforces disclosure requirements because of the ADA’s focus on competitive market forces, which relies on consumers having accurate and timely information on which to base decisions. For example, for commercial airlines, DOT’s Enforcement Office compiles a monthly Air Travel Consumer Report that includes consumer complaints submitted to DOT and is made available to the general public so consumers and others can compare the complaint records of individual airlines. Such consumer disclosure requirements are intended to enable consumers to make informed decisions on tradeoffs when selecting flights, considering such factors as provider, service quality, and price. DOT officials noted that air ambulance providers have not yet been included in the Air Travel Consumer Report but may be included if a provider reaches a certain threshold, such as five complaints received in a month. However, as mentioned above, air ambulance patients may be unaware that they can file complaints with DOT. Furthermore, DOT lacks industry information to group complaints together and to identify patterns, particularly for large nationwide providers, which, as mentioned previously, operate across many states and may set prices for hybrid programs where the helicopter may be branded with a hospital’s name rather than the large provider’s name. Without more industry information, DOT is unable to put any such complaints into the context of the overall industry, both for the purposes of potentially investigating unfair and deceptive practices and for accurately compiling complaints for public reporting. Although DOT has required consumer disclosures for the commercial airline industry, it has not done so for the air ambulance industry, such as disclosing to patients the prices charged for services. 
For example, DOT recently issued a final rule that, among other things, requires air carriers to disclose when flights involve any code-sharing arrangements, and it issued a supplemental notice of proposed rulemaking regarding the disclosure of baggage fees wherever fare and schedule information is provided to consumers. DOT officials note they have not imposed such requirements on the air ambulance industry due in part to the emergency nature of air ambulance transports. In particular, DOT officials questioned the value of consumer disclosure requirements for the air ambulance industry given that patients have little to no choice or ability to “shop.” However, a representative from a group involved with consumer policy or research noted that it is important that the public understands the price variation that exists among air ambulance providers, along with any potential limits of their insurance coverage. Furthermore, patients are not the only “consumers” of air ambulance service. Other stakeholders make decisions regarding air ambulance service, such as insurance companies that pay for transports or hospitals and first responders that initiate transports. According to a representative from an association of insurers, transparency is the first step toward a rational approach to air ambulance costs—all stakeholders need to know such fundamental aspects as average prices charged, transport frequency, and the amount insurance and patients may pay. Without such information, stakeholders may not be able to make decisions that serve patients’ best interests. For example, if hospital staff had information on the extent to which providers in the area were contracted with insurance, such staff could make more informed decisions in selecting providers that best serve the financial interests of the patient while still maintaining the same level of care. 
The ADA largely deregulated the domestic air carrier industry, including the air ambulance industry, and was intended to promote reliance on competitive market forces in order to best further quality service at low prices, among other things. Despite growth in the number of helicopters offering air ambulance service in recent years, lower air ambulance prices have not materialized. In fact, air ambulance prices have increased—approximately doubling between 2010 and 2014—and large providers report average prices charged of over $40,000 per transport in 2016. These increases pose risks to privately insured patients who may be held responsible by the provider for a portion of these charges (balance billing). Despite media reports of balance billing, DOT officials note they have received very few air ambulance complaints since 2006, possibly because consumers do not think of DOT as a place to file such complaints. DOT recently modified its online form to include air ambulance complaints, but the website does not have instructions on how to file such a complaint. Without taking such steps, DOT is missing potential information both for its understanding of the industry and for public disclosure to enable informed decision making. Furthermore, DOT lacks data needed to assess several key aspects of the industry, ranging from basic aspects—such as the composition of the industry by provider type, prices charged by provider, or number of overall air ambulance transports—to the more complex, such as the extent of contracting between providers and insurers or extent of balance billing to patients. However, information from the eight selected providers we interviewed indicates concerning trends about the nature of competition in the industry, such as increasing prices and growing concentration of the market among three large providers. Further, the increasing role of private equity in the industry could further exacerbate these trends while also reducing transparency. 
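The balance-billing exposure described here is simple arithmetic. As a minimal sketch, the following uses the report's roughly $30,000 median 2014 price charged, but the insurer payment and the function itself are purely illustrative assumptions, not figures or methods from the report:

```python
def balance_bill(price_charged: float, insurer_paid: float) -> float:
    # An out-of-network provider may bill the patient for the gap
    # between its price charged and what the insurer paid.
    return max(price_charged - insurer_paid, 0.0)

# Illustrative only: ~$30,000 median price charged (2014) against a
# hypothetical $10,000 private-insurer payment.
print(balance_bill(30_000.0, 10_000.0))  # 20000.0
```

Any cost sharing the patient owes on the insurer-allowed amount (copays, deductibles) would come on top of this gap.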
Without better information, DOT has limited ability to understand the industry, including the nature of competition, which could affect its decisions on whether to take investigative or enforcement actions. Likewise, without information on the industry and pricing, stakeholders such as hospital staff have limited ability to make air ambulance decisions, such as selecting a provider that best serves the patient’s financial interests without compromising the medical benefits air ambulance transports provide. To increase transparency and obtain information to better inform decisions on whether to investigate potentially unfair or deceptive practices in the air ambulance industry, we recommend the Secretary of Transportation take the following four actions: (1) communicate a method to receive air ambulance-related complaints, including those regarding balance billing, such as through a dedicated web page that contains instructions on how to submit air ambulance complaints and includes information on how DOT uses the complaints; (2) take steps, once complaints are collected, to make pertinent aggregated complaint information publicly available for stakeholders, such as the number of complaints received by provider, on a monthly basis; (3) assess available federal and industry data and determine what further information could assist in the evaluation of future complaints or concerns regarding unfair or deceptive practices; and (4) consider consumer disclosure requirements for air ambulance providers, which could include information such as established prices charged, business model and entity that establishes prices, and extent of contracting with insurance. We provided a draft of this report to DOT and CMS for review and comment. CMS and DOT provided technical comments, which we incorporated as appropriate. In written comments provided by DOT (see app. 
II), DOT agreed with three of our recommendations but did not concur with our recommendation that DOT assess available federal and industry data and determine what information could assist in the evaluation of future complaints or concerns regarding unfair or deceptive practices. DOT noted that its analysis of a given complaint is based on the unique facts of each individual case, rather than on aggregate data. Therefore, DOT noted it does not believe an assessment of federal or industry data would yield information relevant to its determinations in future cases. We appreciate DOT’s comments; however, we believe that this recommendation is justified. As we note in the report, the federal standards for internal control state that management should identify information needed to achieve objectives and address risks. DOT has discretionary authority to investigate air ambulance providers but to date has not done so. Although collecting consumer complaints will help DOT identify areas for further investigation, additional information will help put complaints into the context of the larger industry. DOT is currently limited in its ability to conduct such an analysis for the air ambulance industry due to the data limitations noted in our report, involving such basic elements as the relationships between providers (including which entity sets prices), market share, and the number of transports per provider. As we noted in the report, there have been some efforts to collect data, such as FAA’s efforts regarding the number of transports. These and other data are potential sources for DOT to better understand the helicopter air ambulance industry and to evaluate whether consumer complaints indicate larger patterns of unfair and deceptive practices. We are sending copies of this report to the Secretary of the Department of Transportation, the CMS Administrator, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. For this work we examined: (1) what is known about the prices charged for air ambulance service, (2) what is known about the factors that affect the prices charged for air ambulance service, and (3) what actions, if any, selected stakeholders believe the federal government should take regarding air ambulance pricing. To describe what is known about air ambulance prices charged, we analyzed and assessed the reliability of information on prices charged and amounts paid for 2010 and 2014 from Medicare claims data from the Centers for Medicare & Medicaid Services (CMS) and private health insurance data published by the Health Care Cost Institute (HCCI). Although the HCCI data come from three large national insurers and cover approximately 40 million individuals with employer-sponsored insurance, they may not be generalizable to all privately insured patients. The years 2010 and 2014 were selected because they were the earliest and most recent years available from HCCI, could be compared across both data sets, and allowed us to analyze changes over time. The price charged is the amount providers claim for two components: (1) the transport and (2) each mile that the patient is transported. The amount paid is the total amount allowed by the payer (i.e., the private insurer or Medicare); the patient may be responsible for a portion of that total amount through copays or deductibles. For Medicare, we excluded claims without both the transport and mileage components and excluded claims with mileage amounts over the 99th percentile. 
To address possible issues with payments by secondary payers, we excluded claims where the payment amount was less than 10 percent of the price charged and claims that otherwise may have indicated a secondary payer. We also excluded any claims for transports with multiple patients or where an ambulance was dispatched but the patient died before being transported; these types of transports are paid at reduced rates. We also excluded outliers based on the price charged—excluding claims that were above the 99th percentile or below the 1st percentile for that year. We assessed the reliability of the data published by HCCI by reviewing related documentation, discussing the methods used with knowledgeable officials, performing data reliability checks, and comparing the findings to published information, and we determined the data were sufficiently reliable for the purposes of this report. Medicare claims data, which are used by the Medicare program as a record of payments made to health care providers, are closely monitored by both CMS and contractors that process, review, and pay claims for Medicare-covered services. The data are subject to various internal controls, including checks and edits performed by the contractors before claims are submitted to CMS for payment approval. Although we did not review these internal controls, we assessed the reliability of Medicare claims and enrollment data by reviewing related CMS documentation and comparing our results to published sources. We determined that the Medicare claims and enrollment data were sufficiently reliable for the purposes of our reporting objectives. To describe what is known about the factors affecting prices of air ambulance service, we also reviewed previous reports on costs to provide air ambulance service from the Department of Health and Human Services and by the contractor Xcenda as commissioned by the Association of Air Medical Services. 
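The exclusion rules described above amount to a sequence of row filters on the claims table. The following is a minimal sketch in pandas, assuming hypothetical column names (the actual CMS claims layout differs):

```python
import pandas as pd

def filter_medicare_claims(claims: pd.DataFrame) -> pd.DataFrame:
    """Apply the exclusion rules described above.

    Column names are hypothetical stand-ins for the CMS claim fields.
    """
    df = claims.copy()
    # Keep only claims with both the transport and mileage components.
    df = df[df["has_transport"] & df["has_mileage"]]
    # Drop claims with mileage above the 99th percentile.
    df = df[df["miles"] <= df["miles"].quantile(0.99)]
    # Drop likely secondary-payer claims: payment under 10 percent of
    # the price charged, or otherwise flagged as secondary payer.
    df = df[df["amount_paid"] >= 0.10 * df["price_charged"]]
    df = df[~df["secondary_payer_flag"]]
    # Drop reduced-rate transports: multiple patients, or the patient
    # died before being transported.
    df = df[~df["multiple_patients"] & ~df["died_before_transport"]]
    # Drop price outliers within each year: above the 99th or below
    # the 1st percentile of price charged.
    lo = df.groupby("year")["price_charged"].transform(lambda s: s.quantile(0.01))
    hi = df.groupby("year")["price_charged"].transform(lambda s: s.quantile(0.99))
    return df[(df["price_charged"] >= lo) & (df["price_charged"] <= hi)]
```

The order of the filters follows the order in which the report lists the exclusions; because the percentile cutoffs are recomputed on the remaining rows, reordering them would change the result slightly.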
We also interviewed eight selected providers and other stakeholders (as described below) regarding the prices and costs associated with their service and the factors affecting prices and costs. To gauge the scope of the air ambulance industry, we also analyzed available information from the Atlas & Database of Air Medical Services, 2010–2016. Lastly, we interviewed representatives from associations regarding the costs to operate air ambulance and other on-demand air services, including the Helicopter Association International, Air and Surface Transport Nurses Association, National EMS Pilots Association, and International Association of Flight & Critical Care Paramedics. These associations were selected because they represent major cost categories of providing air ambulance service, including the aircraft and personnel. To describe potential federal government actions to address the issue of air ambulance pricing, we reviewed documentation and interviewed officials from the U.S. Department of Transportation (DOT) and the Department of Health & Human Services’ Centers for Medicare & Medicaid Services (CMS). In particular, we reviewed CMS documents regarding Medicare payments for air ambulance transports. We reviewed pertinent laws, regulations, DOT guidance, enforcement actions, and legal opinions, including the Airline Deregulation Act of 1978 and the federal statute (49 U.S.C. § 41712(a)) providing DOT the authority to investigate unfair and deceptive practices. We also examined whether there were any actions by the DOT Enforcement Office regarding air ambulances, and we reviewed DOT Enforcement Office activities regarding commercial airlines, such as the consumer complaint online web form. We compared DOT’s practices and procedures for aspects of its air ambulance oversight to federal internal control standards related to information collection and external communication. 
To describe stakeholder views on potential federal actions, we selected and interviewed 26 stakeholders, including representatives from: 8 air ambulance providers (3 large independent providers and 5 hospital-affiliated providers); 2 associations representing air ambulance providers; 6 groups familiar with air ambulance business and billing, such as analysts and consultants; 4 states active in assessing air ambulance costs and prices charged; 2 associations of state officials; 2 associations representing health insurers; and 2 groups involved with consumer policy or research. Although the views of these selected stakeholders are not generalizable to all air ambulance stakeholders, the stakeholders were selected to represent a range of perspectives. For example, we selected air ambulance providers that represented a range of business model types (hospital-affiliated and independent), a range of sizes (large and small), a range of known perspectives in the industry, and geographical dispersion. There are no available data on all providers that indicate industry-wide characteristics, such as a breakdown of providers by business model type or size. In addition, we selected stakeholders familiar with air ambulance business and billing to capture expertise on business aspects as well as a range of views, and we selected states that are active in assessing air ambulance prices and costs and, to the extent possible, geographically dispersed. In addition to the contact named above, Heather MacLeod (Assistant Director), Melissa Bodeau, Stephen Brown, Christine Brudevold, James Cosgrove, Danielle Ellingston, Geoff Hamilton, Corissa Kiyan-Fukumoto, Emily Larson, Malika Rice, Oliver Richard, Daniel Ries, Daniela Rudstein, and Monica Savoy made key contributions to this report.
Helicopter air ambulances reduce transport times for critically ill patients during life-threatening emergencies. Although patients typically have little to no choice over the service or provider given the often emergency nature of the transports, they might be billed for charges that have potentially devastating financial impacts. GAO was asked to review air ambulance pricing. This report examines: (1) the prices charged for air ambulance service, (2) the factors that affect prices, and (3) stakeholders' views on any actions the federal government could take to address air ambulance pricing. To answer these questions GAO analyzed 2 years of data (2010 and 2014—the latest available) on prices from CMS and a private health insurance database; interviewed 26 stakeholders, such as 8 air ambulance providers chosen to represent a range of types (hospital-affiliated and independent) and sizes; and interviewed DOT and CMS officials. Between 2010 and 2014, the median prices providers charged for helicopter air ambulance service approximately doubled, from around $15,000 to about $30,000 per transport, according to Medicare data from the Centers for Medicare & Medicaid Services (CMS) and private health insurance data. Air ambulance providers do not turn away patients based on their ability to pay and receive payments from many sources depending on the patient's coverage, often at rates lower than the price charged. For example, the Medicare median payment was $6,502 per transport in 2014. Air ambulance providers might bill a privately-insured patient for the difference between the price charged and the insurance payment—a practice called balance billing—when the provider lacks an in-network contract with the insurer. However, due to a lack of information it is unclear to what extent patients are balance billed. 
Factors such as a provider's proportion of transports provided by payer and competition may play a role in air ambulance prices charged, but data to assess these factors are not available. For example, selected providers reported that they adjust prices to receive sufficient revenue from private health insurance to account for certain lower-paid transports, such as those covered by Medicare. Price increases may also be tied to the industry's characteristics such as apparent market concentration—the three large independent providers reported operating 73 percent of the industry's total helicopters in 2016. An analysis of these factors is not possible due to a lack of currently available data such as the number of transports or the industry's composition by provider. Selected stakeholders we spoke to proposed actions to address air ambulance pricing issues, including (1) raising Medicare rates, (2) allowing state-level regulation of air ambulance prices, and (3) improving data collection for the purposes of investigations and transparency regarding prices. Stakeholders expressed mixed views on the first two proposals but none disagreed with the third. Federal internal control standards state that management should identify and communicate information needed to achieve objectives and address risks. The Department of Transportation (DOT) has discretionary authority to investigate potentially unfair practices in air transportation or the sale of air transportation, but has not exercised this authority in regards to helicopter air ambulances. DOT officials said they need additional information about the air ambulance industry. For example, DOT officials note that they have received few air ambulance complaints since 2006 and report that consumers may not think of DOT as the place to complain. Although DOT recently modified its online form to include air ambulance complaints, it has not communicated how to file complaints. 
Without doing so and obtaining more industry data, DOT is missing important information needed to put complaints into the context of the overall industry that could affect its assessment on whether to pursue investigations. Further, stakeholders such as hospital staff could benefit from greater transparency as they currently have limited ability to make air ambulance decisions that serve both the financial interests and medical needs of the patient. The Secretary of Transportation should: (1) communicate a method to receive air ambulance, including balance billing, complaints; (2) take steps to make complaint information publicly available; (3) assess available data and determine what information could assist in the evaluation of future complaints; and (4) consider air ambulance consumer disclosure requirements. DOT concurred with all but the third recommendation, stating additional information is not needed for such purposes. GAO stands by the recommendation, as discussed in this report. DOT and CMS also provided technical comments which were incorporated as appropriate.
In the last 10 years, the Congress has expanded federal efforts to promote employment for people with more severe disabilities by creating new programs, expanding existing programs, and providing employment protections. In the past, social attitudes toward people with mental retardation or psychiatric conditions often labeled them as unemployable outside of institutions or sheltered workshops and thus unable to benefit from job training or vocational rehabilitation. However, recent advances in assistive technology, particularly in computers, have made many personal limitations less prohibitive barriers to work. Voice recognition software, for example, allows those who do not have use of their hands to produce documents on a computer. In addition, the development of supported employment, in which ongoing on-the-job support is provided to people with disabilities through a job coach, has demonstrated that many people previously considered unemployable could work alongside people without disabilities. In response to these developments, the Congress has created new programs to promote the increased use of assistive technology and to provide states with funding specifically designated for supported employment. In addition, the Congress has amended the Rehabilitation Act to strengthen the requirement that states serve individuals with severe disabilities. In 1990, the Congress provided educational and employment protections to people with disabilities. 
For example, ADA prohibited employment discrimination on the basis of disability by state and local governments and many private-sector employers, as long as the person was qualified and able to perform the essential job functions “with or without reasonable accommodation.” Similarly, in the Individuals With Disabilities Education Act (IDEA), the Congress mandated that all children with disabilities be provided a “free, appropriate public education,” and courts interpreting the law have required that this education be provided in “the least restrictive environment.” This provision emphasized a clear presumption that children with disabilities should be mainstreamed—that is, taught in regular classrooms when possible. Over many years, public concern and congressional action have produced a broad continuum of services and policies designed to help people with disabilities. We identified 19 different federal departments or agencies that administered 130 programs targeting people with disabilities in 1994. These programs ranged from those for toddlers with disabilities (for example, Early Intervention State Grants for Infants and Toddlers With Disabilities) to those for the elderly with disabilities (for example, Independent Living Services for Older Blind Individuals). These many programs provided education, health care, and books and assisted with employment. (For a list of these programs, as well as targeting and funding information, see app. II.) Of the 130 programs, 69 were wholly targeted (targeted exclusively) to people with disabilities; the others were partially targeted—that is, they provided services to a wider clientele but nonetheless gave some priority or preference to people with disabilities. In 1994, the federal government spent over $60 billion through these 69 wholly targeted programs, including efforts such as the Disabled Veterans’ Outreach program, which helps disabled veterans. 
In addition, people with disabilities benefited from between $81 billion and $184 billion in federal spending through 61 partially targeted programs in areas such as income support, housing, and transportation. The federal commitment to helping people with disabilities has also attempted to facilitate their employment both directly and indirectly. Of the 130 programs available in 1994, 26 provided direct employment services such as skills training and job search assistance. For example, the Supported Employment program established by the Rehabilitation Act is employment focused because it provided training and placement services to people with severe disabilities. (Apps. II and III provide details about these programs.) Employment-focused programs in 1994 provided between $2.5 billion and $6.1 billion in services targeted to people with disabilities. In addition, we identified 57 of 130 programs as related to employment—that is, although not directed specifically at employment, these programs may have indirectly affected employment outcomes. These include federal programs that help finance purchases of assistive technology, such as specially designed wheelchairs or computer software, which are employment related because they can enable an individual with a disability to enter the workplace. In 1994, employment-related programs provided between $62 billion and $156 billion in services targeted to people with disabilities. The remaining 47 of the 130 federal programs were unrelated to employment. Federal efforts to promote early intervention services for toddlers with disabilities are an example of these types of programs. (See fig. 1.) Employment-related programs indirectly facilitate work through services such as assistive technology, transportation, health insurance, and the like. Many of these employment-focused and -related programs provided a specific service rather than a broad range of services to people with disabilities. 
For example, the Department of Transportation (DOT) funds capital improvements for local transit systems and also provides funds for paratransit services. The Job Training Partnership Act (JTPA) program, although only partially targeted to people with disabilities, emphasizes shorter term skill training and provides only a limited range and amount of support services. Important exceptions to this are the vocational rehabilitation programs; both the federal-state Vocational Rehabilitation program and the Veterans’ Vocational Rehabilitation program can provide a wide variety of services designed to promote employment. Although the federal government provides funds for all 130 programs, the extent of the federal role in their administration varies considerably. Federal programs provide assistance directly to the individual or indirectly through other public or private service providers at the state and local levels. Programs that provide assistance indirectly often involve limited responsibilities for the federal government in administering services. For some programs, assistance or services flow directly from the federal government to the individual with a disability. For example, income support payments under the Social Security Disability Insurance (DI) program flow directly to beneficiaries, and phone calls requesting information from the Education Department’s Information Clearinghouse are a direct service from the federal government. The largest federal programs in terms of spending—the income maintenance and health care programs—generally deliver assistance directly to individuals; however, if these programs are excluded, states receive a substantial amount of the funds provided through disability programs. For many programs, assistance or services flow indirectly from the federal government through state governments, which are responsible for delivering services to individuals with disabilities. 
For example, under the federal-state Vocational Rehabilitation program, the federal government allocates program funds to the states, which have authority to deliver services. For some programs, the states may provide funds to other entities, such as local governments or nonprofit or private agencies, to administer services. In the states we visited, funds from federal disability programs were further distributed to a wide range of state agencies—departments of rehabilitation services, employment and training, developmental disabilities, mental health, and education, as well as departments serving the deaf and hard of hearing or the blind, for example. For many other programs, assistance or services flow indirectly from the federal government to other organizations such as state or local agencies or nonprofit or private organizations. For example, the Projects With Industry (PWI) program may be administered through other public, private, or nonprofit agencies. Under these programs, federal agencies allocate grants on the basis of the application or proposal submitted by an organization or agency, which is then responsible for providing services. Federal funds allocated through these programs provide support for special projects in delivering disability services; others support research or train state or local professionals to work more effectively with people with disabilities. (See fig. 2.) Although many federal programs have decentralized the provision of services to state governments, the programs have adopted a variety of funding mechanisms to do so, including funding formulas based on different criteria as well as varying procedures for awarding grants. The variation in these funding mechanisms affects the distribution of federal funds to states. States may receive more or less money depending on the size and characteristics of their targeted population as well as their success in pursuing grants and other awarded monies.
In our analysis of statewide 1990 funding data for the eight wholly targeted employment-focused programs and statewide 1990 census data, we found that the disabled working-age population as a percentage of the total working-age population varied between 7 and 15 percent. Federal programs distributed to states between $200 and $1,100 per working-age person with a disability. Some states like Florida, Georgia, and South Carolina received between $200 and $350 per working-age person with a disability; sparsely populated states like Wyoming and Alaska received between $800 and $1,100 per working-age person with a disability. (See app. IV.) Promoting employment is one of the most important challenges confronting federal assistance to people with disabilities. People with disabilities constitute an underutilized workforce and a potential resource to the U.S. economy. Surveys have estimated that 18 to 40 percent of people with disabilities have jobs—far below the 73 percent employment rate of people without impairments. Yet in these surveys, most individuals with disabilities indicated that their disability did not prevent them from working. For example, although 8.2 percent of individuals were identified as having a work disability in the 1990 census, only a little over half of those said that they could not work. Increased employment would alleviate the poor economic condition of people with disabilities, many of whom struggle to get by on marginal resources. According to 1990 census estimates, 22 percent of working-age people with disabilities live on or below the poverty line, and an additional 12 percent can be classified as “near poor” (with incomes between 101 and 150 percent of the poverty line). Not surprisingly, many people with disabilities turn to public assistance.
In 1992, approximately 3.5 million disabled workers participated in the Social Security Administration’s (SSA) DI program, and approximately 4 million people with disabilities participated in the Supplemental Security Income (SSI) program. Aside from Social Security income, census figures indicate that people with disabilities were also more likely to receive other forms of public assistance. Only 2 percent of working-age people without disabilities—those aged 16 to 64—reported receiving public assistance income, compared with 15 percent of working-age people with disabilities. Our discussions with disability experts, consumers, and officials from public and private agencies identified multiple barriers that contribute to the relatively low employment rates for people with disabilities. Some of the major employment barriers they identified are listed in table 1, which also includes examples of federal efforts addressing each barrier. For many individuals with disabilities, employment barriers can restrict the range of employment opportunities available. For example, the 40 percent of people with disabilities who have less than a high school education may find the job market particularly difficult, especially with the general decline in the number of lower skill jobs available in many industries. According to 1990 census figures, people with disabilities were nearly twice as likely to have less than a high school education (40 percent versus 21 percent); similarly, individuals without impairments were more than twice as likely to have a college degree or more (21 percent versus 9 percent). In contrast, although a lack of skill training can limit employment opportunities, access to appropriate technology can expand the range of possibilities. In a National Council on Disability (NCD) report, users of assistive technology reported that such equipment enabled them to work more productively for more hours, increase their earnings, and either keep their jobs or obtain employment.
Obtaining access to supportive technologies, however, is often difficult for many people with disabilities. The Council reported that a person with severe disabilities may be considered eligible for, and benefit from, more than 20 federal programs in the area of assistive technology. Yet the report notes that the many inconsistencies between and within these programs lead to an extraordinary amount of confusion and frustration for individuals with disabilities and their families. Moreover, even if a person is clearly eligible for all services, he or she must negotiate multiple eligibility requirements—perhaps including medical examinations, additional documentation, and interviews with officials from multiple agencies—to get access to services under several narrowly focused programs. People with disabilities also face problems with accessing nongovernment-supported health care due to preexisting conditions. For example, while ADA does not allow employers to discriminate on the basis of health care costs, the President’s Committee on Employment of Persons With Disabilities cited employer discrimination in accessing nongovernment-supported health insurance as a major employment barrier. In particular, employers, especially small businesses, may find that sometimes the premiums for employee group health insurance will increase significantly if an employee with a disability is included in the policy. Although measuring the extent of discrimination is difficult, several research studies have found that wages and hiring rates are lower for individuals with disabilities than for those without impairments, even after differences in education, experience, and other factors are accounted for. In addition to these barriers, people with disabilities face other obstacles in taking advantage of available employment opportunities.
For example, many of the federal and state officials we spoke with, along with other experts, identified the lack of accessible transportation as especially problematic. The U.S. transportation system is heavily automobile based, but people with disabilities are less able to rely on cars than individuals without impairments. According to census data, 14 percent of people with disabilities did not have an automobile in the household, compared with 6 percent of people without disabilities. Some disabilities (such as blindness) make driving impractical; others require costly adjustments, such as hand controls or a lift, to a standard automobile. In addition, financial considerations may limit access to automobiles for many people with disabilities, especially for the over 10 million people with disabilities who reported incomes of less than $10,000 in 1990. The need to rely on public transportation may especially restrict employment options for people with disabilities who live in rural areas. People with disabilities who rely on income support programs such as Social Security DI or SSI may also be discouraged from attempting to work by the prospect of losing their benefits, particularly their health insurance coverage. Disability advocates and rehabilitation counselors believe that the fear of losing medical coverage is one of the most significant barriers to the participation of SSI and DI beneficiaries in the Vocational Rehabilitation program, their return to work, or both. In recent years, other initiatives have adopted additional procedures to mitigate these work disincentives, but relatively few beneficiaries have taken advantage of these provisions. Because people with disabilities may need a variety of services to seek or retain employment, and because federal assistance is dispersed among many programs and agencies, coordination of these activities is especially important.
Programs and agencies may coordinate in different ways, from sharing basic program information to establishing compatible eligibility criteria to cooperating in service provision. (See table 2.) Our review raised questions about the extent to which federal disability programs achieve coordination in any of these areas. Many of the agencies responsible for federal disability programs did not engage, or engaged very little, in basic informational coordination with each other or with state and local agencies, the private sector, or the disability community. Eligibility coordination was also lacking; similarly, service coordination appeared to be uncommon. Coordination in any of these areas appeared to be a formidable task for several reasons. First, many of the recent initiatives targeted to people with disabilities added to or expanded an already existing program structure organized to address the needs of nondisabled people. As a result, administrators, particularly those who manage partially targeted programs, often do not fully understand the needs of people with disabilities and do not place a high priority on coordinating with organizations serving their special needs. Second, many federal programs rely on service providers at the state and local levels for direct service delivery. In addition to the 130 federal programs overseen by 19 agencies in 1994, states distributed program administration and authority to a variety of agencies: state departments of employment and training; state rehabilitation departments; state education departments; state departments for the blind, deaf, or developmentally disabled; state health departments; and others. Many of these different agencies also apply their own eligibility criteria, creating even greater variation. One disability researcher reported that when he surveyed states and asked which departments provided disability-related services, he received almost as many different responses as there were respondents.
Finally, federal and state officials also identified turf battles, different orientations and approaches, and competing program objectives as other impediments to coordination. Although federal assistance to people with disabilities is dispersed among many programs and service delivery agencies at the state and local level, limited informational coordination exists among agencies about these programs and how they fit together. Federal officials did not systematically share program information and ongoing developments with their counterparts at other federal, state, local, and nonprofit agencies or with the private sector or the disability community. Some federal officials we interviewed did not know of the existence of other federal programs helping people with disabilities. Although others knew of these programs, they seldom or never talked with agency officials from other programs nor did they keep up with ongoing program developments. Limited informational coordination by federal administrators was common, particularly among those who manage partially targeted programs. For example, one Labor official, commenting on the department’s lack of outreach to the disability community, said that the Department “does not even talk to its customers.” Similarly, we consistently heard from disability advocates, state and local officials, service providers, and private employers that JTPA does not effectively serve the needs of people with disabilities. State officials told us that, in fact, some JTPA offices were situated in locations that were inaccessible to people with mobility limitations. One consequence of this limited informational coordination was the difficulty people with disabilities experienced in getting reliable information about federal services. 
In particular, according to our 1991 study, although the majority of SSI field offices and their staff reported that they spent time providing program information, state and local officials told us that consumers often received inconsistent answers to commonly asked questions about SSI, work, and rehabilitation. The lack of consistent, accurate information about SSI, work, and rehabilitation could magnify some of the work disincentives created by provisions of income support programs. One consumer we interviewed stated that getting answers to questions about work and rehabilitation was difficult because SSI/DI administrators did not understand the needs that were specific to her disability. She also told us that getting incomplete information made employment a risky proposition because she could lose her health benefits and not have the earning power to replace them. Lack of informational coordination also has negative consequences for employers and service providers, both public and private. For example, in some states, counselors from vocational rehabilitation programs do not have access to job listings from agencies that administer employment and training programs. The absence of such linkages for sharing information can place undue burdens on employers. For example, without such information sharing, counselors from separate agencies may independently contact the same employer to develop employment opportunities for people with disabilities. Having different service providers—a vocational rehabilitation counselor, an employment training specialist, a supported employment job developer, or a representative from PWI—contact one employer can undermine the relationship between service providers, employers, and the disability community. Eligibility coordination is similarly limited among federal programs and agencies. Each federal program has congressionally authorized eligibility and scope-of-service requirements.
Differences in eligibility criteria can make access to services a complex process, however, and could confuse people with disabilities as well as those who serve them. We identified at least 14 different definitions of disability used by federal programs alone, and many of these definitions provided considerable agency and state discretion in eligibility determination. For example, in assessing eligibility for services, one program permitted each of its 300 field offices considerable discretion in defining disability. State officials who serve people with disabilities told us that the requirements for participating in this program are very stringent and that applying requires a “paper chase.” Even when programs may have well-defined criteria within their own departments, these criteria may differ from those used by other agencies. For example, programs administered through the Department of Education, such as Vocational Education and Vocational Rehabilitation, defined eligibility in terms of physical or mental impairments, whereas the programs administered through Social Security (DI and SSI) defined disability in terms of the inability to work. (See app. V.) In addition to the federal eligibility definitions, many states have the flexibility to develop and apply additional eligibility criteria and standards. For example, according to federal officials, theoretically, each state can have its own definition of developmental disabilities. State agencies may use the federal definition of developmental disabilities or the state’s definition. In one state, for example, mental retardation must be the primary disability for someone to be eligible for services. Other states define developmental disability in terms of intelligence quotient with differing thresholds.
A 1988 report from the Training and Research Institute for People With Disabilities found that among state agencies serving the mentally retarded or developmentally disabled population, only 40 percent evaluated their consumers using the relevant federal definitions and standards and none of the state vocational rehabilitation agencies evaluated their consumers according to federal criteria. We also found in a recent study that state vocational rehabilitation agencies used criteria that were more restrictive than federal standards in screening SSI/DI participants. Restrictive standards allowed state rehabilitation agencies to limit the referrals they received from the Social Security offices to those they considered to be the best rehabilitation candidates. The wide variation in eligibility standards limits the possibilities for linkages among programs, such as reciprocal referrals or eligibility agreements, in which agencies or programs can establish that eligibility for one program would expedite service provision from another. Such linkages could reduce confusion and service delays to consumers, despite the variation in eligibility, yet we found few examples of such reciprocal agreements. For example, few linkages exist between state vocational rehabilitation programs and federal or state employment and training agencies. In our 1992 study of support services under JTPA, only 24 percent (131 of 557) of local organizations surveyed said that they had coordination agreements with the state rehabilitation agencies. Although in many cases variation in eligibility requirements may be appropriate or necessary, collectively these differences make federal programs difficult for consumers to use. For example, in the area of assistive technology, consumers testified at public forums convened by NCD that one device or piece of equipment has to be defined in different ways to meet eligibility requirements under different programs, each with its own funding limitations.
Different rules are further complicated by differences in interpreting guidelines within an agency in a state and across states. Even if a person is clearly eligible for all services, he or she must negotiate multiple eligibility requirements—perhaps including medical examinations, documentation, and interviews with officials from multiple programs—to access services under several narrowly focused programs. Routinely, people with disabilities must go to several different offices to get services. Similarly, different standards and criteria also increase costs for service providers and can limit their participation. For example, an international nonprofit organization that provides a variety of employment and rehabilitation services for the disabled told us that some local chapters of the organization choose not to participate in some programs that have a federal and state component. These local chapters would prefer to spend their resources on delivering services instead of negotiating different processes in a variety of agencies. Established, well-maintained service coordination among programs also appears uncommon, resulting in inefficiencies and limiting private-sector participation and support. For example, many experts believe that increased access to regular fixed-route transportation facilitates the employment of people with disabilities. Transportation continues to be problematic, however, particularly in rural areas. Although different federal and state programs provided separate transportation funding for the elderly and the disabled, these services were not required to be coordinated at the local level. Thus, federal and state officials told us that, for example, in one county a half-empty van providing transportation for the elderly and another half-empty van providing transportation for the disabled may be traveling the same routes at the same time.
Poor service coordination can also discourage employer efforts to work with programs and help people with disabilities. Private-sector partners involved with government programs told us that service coordination is essential for them. Officials from one corporate partner told us that having a single point of contact—rather than having to deal with multiple programs and administrators—is crucial to the company’s ability to participate in a program employing individuals with psychiatric disabilities. Another corporate official explained that lack of responsiveness and service coordination among multiple employment programs—along with reductions in financial incentives—contributed to her company’s decision to discontinue its efforts to participate in job training programs for the disadvantaged and for people with disabilities. The diffusion of federal assistance to people with disabilities is not unique to these programs, and efforts to address the resulting problems are not new. For more than 30 years, the Congress, federal agencies, and others have recognized that most public and private human service agencies are organized to address a narrow range of issues and individuals. Nevertheless, their periodic attempts to reorganize and reshape the way human services are delivered have met with only marginal success. Public and private officials from all levels of government and service delivery have tried different approaches to change the way human services are planned, funded, and administered. As we identified in a previous study, however, broad-based efforts to eliminate fragmentation by creating a new service delivery system have faced many obstacles and met with limited success. Mandates alone are unlikely to secure the significant time and resource commitments needed from officials to initiate and sustain systemwide reform. In contrast, less ambitious efforts to improve coordination among service providers have succeeded somewhat in enhancing services. 
These efforts did not try to reorganize agencies’ administrative structures; they improved services by taking a more modest, practical approach, focusing on the point of delivery and adapting to local conditions. Specifically, they linked individuals, services, and programs by (1) convincing service providers and officials of the need to cooperate and developing incentives for them to participate in the effort, (2) getting key participants to agree to the goals of the initiative and the role of each party in implementing changes, and (3) establishing a forum to institutionalize changes and continue ongoing communication. Some states have developed strategies that use the practical and modest approach that we had previously identified as improving coordination. For example, in California, one rural county we visited appeared to be improving services and reducing program costs. Despite significant barriers to coordination, state and local officials were improving communication among service providers and linking people with disabilities to the services they need in a comprehensive manner. Officials reported that their coordination efforts had reduced time and expense for administrators and consumers by 40 to 50 percent. In this case, state and local officials created a collaborative forum—the School-to-Work Interagency Transition Partnership (SWITP)—that uses interagency linkages at the local level to help students with disabilities successfully transition from school to work. Officials formed a transition team composed of the student, parents, school counselors, representatives from the local JTPA program, and the state vocational rehabilitation agency. The team meets to identify a student’s employment goals and devises a plan to tailor available services to the student’s aspirations for achieving independent living. 
Representatives from almost all of the necessary services usually attend the meeting, and they work with the student and his or her family to identify priorities and overcome barriers. For example, one student could not take needed computer classes because of a lack of access to public transportation, but buying a car would have jeopardized the student’s income maintenance and health care benefits. Because the agencies were working together in a team format to coordinate services, they quickly identified and implemented a solution. Transition team members said that students liked being part of the team because it gave them greater personal independence. One of the students who participated in the transition team told us it had been indispensable in guiding him from high school to independent living. The student’s only work experience was as a janitor, but the team helped him to identify his skill strengths and weaknesses as well as his own aspirations for other vocations. School counselors provided insights about the student’s disability, and the JTPA staff identified relevant training the student needed. The end product of the meeting was a strategic plan, which gave the employment specialist a basis on which to approach employers, emphasizing the student’s skills and their benefits to employers. Despite initial employer reluctance, the team placed the student temporarily for on-the-job training and continued to support both the employer and the student after the placement. The student’s enthusiasm and willingness to learn impressed the employer, and, 4 years later, the student was still employed there as a custom upholsterer—at well above the minimum wage. Although SWITP’s comprehensive team assessment and planning process is targeted to youth with disabilities, it mirrors the challenges and strategies that have faced other programmatic efforts to improve services.
Like administrators of adolescent drug prevention programs, SWITP service providers faced the task of coordinating diverse external agency procedures, documentation, and personalities. SWITP service providers noted that they often need conflict management skills, a strong focus on the student’s needs, and patience to overcome the turf concerns of specialized professionals and their agencies to provide the full range of services necessary for their participants. SWITP providers also found that coordination was enhanced by using a master document containing the information necessary for each agency to meet each program’s data requirements. Although the master document does not replace all other documentation, it condenses the multiple intake documents previously required from students. In addition, service providers regularly consult each other about changes in their programs and consumers, which has enhanced their ability to follow up with their students long after they have left the program. SWITP service providers reported strong support for the process because it fosters trust (“we don’t feel threatened by one another”), noting that this trust has given them greater flexibility in helping their students achieve their goals. States are also exploring other strategies to improve communication and overcome organizational barriers. For example, Massachusetts has created interagency agreements establishing forums in which state agency personnel can discuss and systematically train each other about their respective missions, procedures, standards, and target populations. Nevada and Massachusetts have also reported arrangements for exchanging electronic information between vocational rehabilitation and employment and training agencies, which has facilitated reciprocal referrals. While the variety of programs and agencies engaged in serving people with disabilities raises questions about the efficiency of federal efforts, the effectiveness of these efforts is also unclear.
Most of the 26 employment-focused programs we examined have not been formally evaluated. For many of the employment-focused programs, no statutory or agency data collection requirements exist. Federal officials explained that few formal evaluations have been conducted because of the lack of data collection, limited resources, and in many instances the data collection problems posed by federal and state program flexibility. The absence of legislative and agency data collection requirements, coupled with limited available resources, precludes effectiveness studies for many of the programs we visited. Many of the agencies administering these 26 employment-focused programs did not require or collect data on program outcomes—specifically, data on whether participants got jobs and kept them, what wages they received, and whether they received employee benefits such as health insurance. For example, JTPA has no statutory requirement for service delivery areas to report the characteristics of the services delivered to people with disabilities or how they are delivered. Program officials told us that, with the limited resources of most agencies, they lack the capabilities to initiate data collection efforts. For some of the programs that did collect outcome data, the information collected was not sufficient to adequately link outcomes to the services provided. For example, although service providers for the Supported Employment program provided detailed information on program participant performance and initial placement, they were not required to track consumers after an 18-month period, making any long-term assessment of the linkage between training and employment difficult. Without a concurrent effort to improve coordination at all service levels, however, imposing reporting or assessment requirements may not improve the basis for evaluation. Given the flexibility each state has in choosing its own standards and definitions, outcome tracking can be a formidable task.
In many instances, service providers, both public and private, use different intake data, eligibility criteria, paperwork requirements, software, and confidentiality rules. Consequently, “people aren’t talking the same language,” as one state official summarized, and considerable investments would be required to develop more uniform documentation and data to accommodate the many definitions and standards used. For example, different agencies and organizations at the state level provide funds for supported employment services. Federal officials told us, however, that mental health agencies have a different definition of services that constitute supported employment than do the vocational rehabilitation agencies. Without better coordination, data collection and tracking will remain a costly endeavor, and program administrators will lack confidence that their programs are effective, either individually or in combination with other services. The Congress has in the past directed agencies involved in research and evaluation of programs serving people with disabilities to improve their coordination. For example, according to a report from the Office of Technology Assessment (OTA), Education’s Rehabilitation Services Administration signed a memorandum of understanding in 1993 with other agencies involved in similar research and evaluation. The memorandum was intended to initiate collaboration of service delivery, staff training, and evaluation activities for the rehabilitation and employment of people with psychiatric disabilities. Similarly, the National Task Force on Rehabilitation and Employment of Psychiatric Disabilities tried to promote collaboration in the research and evaluation of federal rehabilitation and employment efforts. The task force met quarterly for 3 years, but attendance declined significantly, with many members complaining about its voluntary nature and limited impact on policies. 
The OTA report stated that experts and advocates commented to them that such efforts had achieved only mixed success, leading to OTA’s conclusion that “while mechanisms for communicating across agencies have or do exist, they lie moribund at the present time.” Our review raises questions about the efficiency of federal efforts to help people with disabilities. In 1994, the federal government provided a broad range of services to people with disabilities through 130 different programs, 19 federal agencies, and a multitude of public and private agencies at the state and local levels. Although research groups and independent panels have stressed the need to simplify and streamline programs serving people with disabilities, suggestions for creating a new system to deliver services may be difficult to implement. In 1992, we urged caution when the Congress considered initiatives for federal, state, and local organizations to make fundamental changes in human service delivery systems, and we also urge caution for programs serving people with disabilities. Although the potential benefits of creating a new system to deliver services more comprehensively to people with disabilities may be great, so are the barriers and the risks of failure. Obstacles preventing officials from reorganizing service agencies, creating new funding and service agreements, and divesting authority from their own agencies are difficult to overcome. Mandates alone are unlikely to secure the significant time and resource commitments needed from officials—whether they are charged with directing reforms or have responsibility for administering services. In the current fiscal environment, a renewed focus by federal agencies on improving coordination would be a useful step toward improving services and enhancing the customer orientation of their programs. 
Given the multifaceted federal effort, better coordination is crucial to any strategy to eliminate duplication and service gaps and to enhance the efficiency of programs administered by the many public agencies at all levels of government. Without such an effort, assessing the impact of the federal commitment to people with disabilities and the relevance of improvement measures, such as program consolidation, becomes virtually impossible. We have identified several state and local initiatives that have shown promise in meeting the challenges of coordination; other initiatives most likely exist throughout the nation. These efforts appear to have succeeded somewhat in reducing duplication and service gaps, while saving agencies money. In light of these initiatives, the major Departments serving people with disabilities—Education, Labor, and Health and Human Services (HHS)—have an opportunity to identify, encourage, support, and learn from the innovative solutions being developed at the state and local levels. The Departments of Labor, Education, and Transportation provided comments on our draft report, agreeing with our findings and conclusions. (See app. VI for a copy of written comments from the Department of Labor.) Each of these agencies also provided technical comments, which we incorporated in the report as appropriate. HHS did not provide comments on the report within the time available. As arranged with your office, we are sending copies of this report to the Secretaries of Labor, Education, and Health and Human Services. GAO contacts and staff acknowledgments for this report appear in appendix VII. Please call me on (202) 512-7014 if you or your staff have any questions. This report identifies and describes federal programs designed to assist people with disabilities, with a special emphasis on programs promoting employment. 
Specifically, we focused on the following questions: (1) Which federal programs target people with disabilities, and how many of these programs provide employment-related services? (2) To what extent are information, eligibility, and services coordinated under these programs? (3) What does available evidence suggest about the efficiency or effectiveness of federal programs in promoting employment among people with disabilities? To accomplish these objectives, we integrated evidence from the literature, from analyses of available databases, and from interviews with consumers and public and private organizations. We interviewed officials of federal agencies that administer programs targeted to people with disabilities. We also interviewed disability advocates; officials of nonprofit groups; and state and local officials in Massachusetts, California, Virginia, and Nevada. We chose these states on the basis of the opinions of experts and agency officials to obtain a variety of geographic locations, program sizes, and administrative structures. In addition, we interviewed consumers and private-sector participants in several of these states to obtain their perspectives on how these programs promote employment of people with disabilities. We reviewed the literature on labor economics and employment programs, generally, and on people with disabilities, in particular, to obtain information on the problems and employment barriers such individuals face and on federal efforts to surmount these barriers. We also reviewed agency documents and legislation to help determine the purpose, eligibility requirements, and services authorized under these programs. To profile the population of people with disabilities, we used several databases. 
In addition to relying on previously published results from the Current Population Survey (CPS), the Survey of Income and Program Participation (SIPP), and the 1995 National Organization on Disability/Louis Harris Survey on Employment of Persons With Disabilities, we analyzed information from the 1990 census and from the 1993 National Health Interview Survey (NHIS). Our estimates from the 1990 census were based on a 5-percent subset of the full census sample (which covered approximately 15.9 percent of all U.S. housing units); the subset consisted of over 12 million people and 5 million housing units. These households received the long form of the census questionnaire, which collects detailed information on many variables, including several different ways of measuring disability status. The 1993 NHIS is a personal interview household survey using a nationwide sample of 109,671 civilian noninstitutionalized people in the United States. The two surveys differ in the content of their disability-related questions as well as in the other information gathered. For example, NHIS was useful in estimating the prevalence of chronic conditions, information the census does not gather. The census database provided more precise information on the geographic distribution of people with disabilities. The major sources used to identify federal programs were the Catalog of Federal Domestic Assistance (CFDA), agency documents, and interviews with federal officials. We defined a federal program as a function of a federal agency that provides assistance or benefits to a state or states, territorial possession, county, city, other political subdivision, or grouping or instrumentality thereof; or to any domestic profit or nonprofit corporation, institution, or individual, other than an agency of the federal government. 
We defined the scope of our review to include those programs meeting one or more of the following criteria: (1) people with disabilities are specifically mentioned in the legislation as a targeted group; (2) people are eligible for the program wholly or partially because of a disability; (3) people with disabilities are given special consideration in eligibility assessments; or (4) program officials are directed to give priority to serving people with disabilities. In general, we included all programs that explicitly recognized disability or handicap, regardless of how (or whether) the program or legislation defined disability. Programs that serve individuals without respect to disability but that serve some individuals with disabilities (for example, Aid to Families With Dependent Children) are beyond the scope of this report. We also omitted those programs that exclusively funded medical research. Our definition of federal programs also excluded federal legislation that does not authorize the direct expenditure of federal funds but instead provides indirect support or imposes mandates on federal or nonfederal entities. For example, the Javits-Wagner-O’Day Act of 1971 authorizes federal agencies to procure selected goods and services from sheltered workshops for blind or severely disabled individuals. Although we excluded these types of federal efforts from our analysis of federal programs, we described some of the most important of these efforts in the report. 
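The four criteria above amount to a simple "one or more" screening test. The sketch below expresses that screen in Python; the record field names are illustrative assumptions for this sketch, not actual CFDA data elements.

```python
# A minimal sketch of the report's program-inclusion screen: a program is in
# scope if it meets one or more of the four criteria. The dictionary fields
# are invented for illustration, not actual CFDA data elements.

def in_scope(program: dict) -> bool:
    """Return True if a program meets at least one inclusion criterion."""
    return any((
        program.get("disability_named_in_statute", False),           # criterion 1
        program.get("eligibility_based_on_disability", False),       # criterion 2
        program.get("special_consideration_in_eligibility", False),  # criterion 3
        program.get("priority_for_disability", False),               # criterion 4
    ))

# A program whose authorizing statute names people with disabilities is in
# scope even if disability is not itself an eligibility condition.
print(in_scope({"disability_named_in_statute": True}))  # True
# A program that merely happens to serve some people with disabilities
# (such as AFDC) meets none of the criteria and is out of scope.
print(in_scope({}))  # False
```

Note that the screen is deliberately inclusive: meeting any single criterion suffices, which matches the report's decision to count programs regardless of how (or whether) they define disability.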
To analyze in more detail those programs that affect employment issues, we divided these federal programs into three groups: (1) employment-focused programs that provide services such as job training, supported employment, job placement, and employment counseling; (2) employment-related programs that provide services that could reduce barriers to employment—such as transportation, health care, or assistive technology; and (3) programs unrelated to employment that provide services that are unconnected (or could have only a remote connection) to employment—such as services to infants and toddlers. We gathered information on 1990 and 1994 expenditures using the Consolidated Federal Funds Report (CFFR) compiled by the Bureau of the Census. The CFFR tracks the majority of federal domestic outlays and is the best information available on expenditures or obligations. For some programs, agencies had not reported information to the Census Bureau; we attempted to gather the information from the agencies. In other cases, this information was not available. For many of these cases, the agency performed the program’s activity in conjunction with other agency activities, and we could not distinguish funds spent for one activity from funds spent for the other. For this reason, our estimates of total expenditures on disability-related programs likely understate actual spending. In addition, our estimates reflect federal outlays only and exclude any supplements from states and localities. (These estimates, which appear in table II.1 in app. II, reflect the federal expenditures/obligations for the entire program unless noted otherwise.) Many federal programs are partially targeted toward people with disabilities—that is, the programs target multiple groups of individuals, with people with disabilities being only one and not necessarily the most important one. For some of these programs, agency officials track program expenditures by target group. 
For example, the Health Care Financing Administration tracks Medicare expenditures for the aged and for the disabled. Many partially targeted programs, however, do not track expenditures by targeted group. For example, the Transportation Department’s Federal Transit Administration finances public transit systems, along with capital improvement funds to make mass transit more accessible to people with disabilities. Agency officials have found it impractical to track disability-related expenditures under this program, particularly since it is impossible to know riders’ disability status and whether or not they are using public transportation for work or some other activity. Because we could not distinguish expenditures under many partially targeted programs, we created an interval estimate of disability-related expenditures. At the lower bound, none of the expenditures for these programs were included; at the upper bound, all expenditures for these programs were included. This appendix presents an overview in table II.1 of the 130 federal programs that we identified as targeted to people with disabilities. Each program’s administering department or agency, services, and the individuals or groups who ultimately benefit from these services are included. Each program’s 1994 funding, the degree of targeting, and the type of applicant are also included. The order we used to list programs corresponds to the five-digit program identification number assigned by the Catalog of Federal Domestic Assistance (CFDA). The first column of table II.1 contains the CFDA five-digit program identification number. The first two digits identify the federal department or agency that administers the program, and the last three digits are unique codes identifying a program. For example, programs starting with “14” are administered by the Department of Housing and Urban Development (HUD) and those starting with “96” by the Social Security Administration (SSA). 
For programs not listed in the CFDA, the table uses the alphanumeric code the Bureau of the Census has assigned. For example, Funding for the American Printing House for the Blind is allocated through the Department of Education. All Education programs start with “84” as a program identification, and the additional alpha codes “JJJ” or “JAW” are assigned by the Bureau of the Census. Column 2 identifies the descriptive title listed in the CFDA. Column 3 shows the federal department, agency, commission, council, or instrumentality of the government with direct responsibility for program management. Column 4 provides the most prominent services authorized under each program. Although other services may also be available, the table cites those services relevant to people with disabilities. Column 5 describes the ultimate beneficiaries of federal assistance. Although other groups or individuals may benefit from a program, the table only describes characteristics relevant to people with disabilities. Column 6 shows information about targeting: All programs that are partially targeted have a “P” in column 6. A partially targeted program is one that serves people with disabilities and others; a wholly targeted program provides assistance only to people with disabilities. Programs with a “W” in this column are considered wholly targeted. Column 7 shows federal expenditures and/or obligations for the entire program in 1994, unless noted otherwise. Broadly, the CFDA specifies three categories of federal assistance: financial, nonfinancial, or a combination of both. For programs that provide any financial assistance, the table shows the total amount spent or obligated in 1994 as identified through the Bureau of the Census. Programs that provide nonfinancial assistance have “NF” in column 7 because the census only tracks financial assistance for each program. Some programs have “NA” in column 7 because expenditure information was unavailable. 
Column 8 identifies the applicant for each program. The CFDA defines applicants as any entity or individual eligible to receive funds from a federal program. Generally, the applicant and the beneficiary will be the same individual or group for programs that provide assistance directly from a federal agency. However, financial assistance that passes through state or local governments will have different applicants and beneficiaries. We classified applicants into the following five groups: individuals, nonfederal governmental entities, nongovernmental entities, other, and the general public. (Table II.1, which lists each of the 130 programs and its 1994 expenditures or obligations in dollars, is not reproduced here.) 
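The interval estimate of disability-related expenditures described in the scope and methodology discussion reduces to two sums over the targeting codes shown in column 6. The sketch below illustrates the computation; the program names and dollar figures are invented for illustration and are not drawn from table II.1.

```python
# Sketch of the lower-bound/upper-bound ("interval") estimate of
# disability-related spending. Wholly targeted ("W") program outlays count
# toward both bounds; partially targeted ("P") outlays are excluded from the
# lower bound and counted in full at the upper bound. Figures are invented.

programs = [
    {"name": "Program A", "targeting": "W", "outlays": 2_000_000_000},
    {"name": "Program B", "targeting": "P", "outlays": 6_000_000_000},
    {"name": "Program C", "targeting": "W", "outlays": 500_000_000},
]

lower_bound = sum(p["outlays"] for p in programs if p["targeting"] == "W")
upper_bound = sum(p["outlays"] for p in programs)

print(f"Lower bound: ${lower_bound:,}")  # $2,500,000,000
print(f"Upper bound: ${upper_bound:,}")  # $8,500,000,000
```

Because partially targeted programs could not allocate spending by target group, any figure between the two bounds is consistent with the available data; the true disability-related total lies somewhere in the interval.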
Over several decades, congressional concern about employment opportunities for people with disabilities has led to more than two dozen federal employment-focused programs. In addition, the Congress has provided certain employment protections to people with disabilities, for example, by barring discrimination in employment on the basis of disability. Finally, several laws provide a variety of mechanisms that indirectly support the employment of people with disabilities, for example, by authorizing federal purchases from nonprofit organizations that employ people with disabilities. These federal employment initiatives incorporate three approaches toward employing individuals with disabilities: sheltered, supported, and competitive employment. In sheltered employment, individuals with disabilities work in a “sheltered workshop,” a controlled environment in which work involves a limited set of tasks. Sheltered employment is most frequently used with individuals with severe functional limitations, although the blind have a long history of working in sheltered employment operations. Under supported employment, individuals with disabilities are integrated into a work setting but are provided postemployment services, frequently including job coaches or on-the-job training, to help facilitate the transition to employment. Federal initiatives for supported employment are intended for individuals with relatively severe disabilities. Competitive employment most often refers to a regular job, in which an individual does not receive postemployment services. The majority of federal placement initiatives for people with disabilities are aimed at competitive employment. Services provided under such federal efforts include job training, educational support, counseling, assessment, and placement. 
Some of the federal programs with a goal of competitive employment are designed exclusively for people with disabilities. Others, however, are part of the wider federal effort to promote job opportunities for people who are disadvantaged in the labor market. For example, the Job Training Partnership Act (JTPA) provides job training services mainly to the economically disadvantaged, but people with disabilities who are not economically disadvantaged may also qualify. Both wholly and partially targeted federal employment programs rely heavily on leveraging support from the private sector to place individuals in jobs and move them toward economic self-sufficiency. The largest federal effort focused exclusively on facilitating employment of people with disabilities is the Vocational Rehabilitation program. Vocational rehabilitation formula grants are provided to a state on the basis of the state’s per capita income and overall population. States are required to submit a plan for providing services to the Commissioner of the Rehabilitation Services Administration and to match 21.3 percent of federal funds. Services that can be provided with these grant funds include job training, assessment, counseling, maintenance during rehabilitation, personal assistance, placement or rehabilitation technology, and assistance in operating a business. Vocational rehabilitation counselors must draw up an Individual Employment Plan for each client to specify what that client needs to move toward employability. The program provides services as specified in the plan; these services can include virtually anything deemed necessary to facilitate a positive employment outcome. The emphasis of the program remains on competitive employment, but it can place individuals in supported or sheltered employment as well. A number of other programs support the basic state grants for vocational rehabilitation. 
For example, several programs provide funding to train vocational rehabilitation personnel through state agencies or other public or private organizations. An estimated 44,034 people participated in training (including continuing education programs) in fiscal year 1993. Another support program for the vocational rehabilitation system provides funding for special projects and demonstration efforts. In fiscal year 1994, this program funded 11 new grants and 87 continuation projects in supported employment. These efforts emphasized (among other areas) services to individuals with specific learning disabilities, services to individuals with long-term mental illness, and transition services for youths with special needs. In addition, one federal program provides grants to Native American tribes for vocational rehabilitation services to individuals living on reservations, and another provides vocational rehabilitation services to migrant and seasonal farmworkers. The Projects With Industry (PWI) program is one of the few federal efforts that engages the private sector as a partner in expanding employment opportunities for people with disabilities. Services provided to individuals with disabilities vary with different projects but generally include evaluation, counseling, training, job development, and job placement. Services may also be provided to employers, sometimes including job-site or equipment modification. The PWI program may involve grants or contracts with individual employers, state vocational rehabilitation units, or other public or private organizations. Each grantee must develop and work with a Business Advisory Council, with representatives from private industry and organized labor, and individuals with disabilities. The Department of Veterans Affairs (VA) has established two programs to provide vocational rehabilitation services to veterans with disabilities. 
VA generally provides services to honorably discharged veterans who have received a 20-percent or higher VA disability rating for a service-connected disability. Under a second program, veterans who are receiving a VA pension may also qualify for vocational rehabilitation services. Case managers in these programs can provide whatever services the veteran needs to facilitate employment. Some of these services include evaluation, counseling, education, training, and job placement assistance; many veterans with disabilities receive financing for higher education. However, these vocational services are time-limited. Veterans must generally complete the training portion of their vocational rehabilitation plan within 48 months, and participants generally cannot receive services more than 12 years after the date on which their eligibility was established. In program year 1995, approximately 48,000 veterans with disabilities received vocational rehabilitation services. JTPA provides job training and employment-seeking skills. It is primarily directed to economically disadvantaged people but also includes others who face employment barriers. JTPA features a unique partnership of the federal government, the states, and the private sector. Although JTPA does provide support services such as child care and transportation, local JTPA providers are restricted in the amount they can spend, and they often spend less than permitted. Thus, people with disabilities who require more extensive support services may need to access other programs to supplement JTPA services. In addition to its primary training program, JTPA also encompasses the residential Job Corps program and research, pilot, and demonstration efforts. Individuals with disabilities can be served under all these JTPA programs, and the needs of individuals with disabilities receive special consideration in the awarding of discretionary JTPA projects. 
For example, in 1995, special project grants were awarded to organizations, such as Goodwill Industries and the American Rehabilitation Association, to provide job search assistance and job placement to people with disabilities. In addition, people with disabilities can sometimes qualify for JTPA without meeting income guidelines because they face a barrier to employment. JTPA’s focus for its clients with disabilities remains competitive employment, although JTPA funds can be used for supported employment efforts as well. Established by the Depression-era Wagner-Peyser Act, the state-federal employment service (ES) provides employment offices to assist individuals looking for jobs and employers looking for workers. Through many local offices, the ES program offers an array of services, including job counseling, skills assessment and testing, job search workshops, job opening identification, and referrals to employers. Services provided by ES, however, are frequently limited to job listings and some counseling. Although these services are available to everyone, states are required to give special consideration to people with disabilities by requiring every local ES office to designate at least one staff member to help individuals with disabilities locate employment or training. In program year 1994, ES provided assistance to an estimated 625,133 people with disabilities, which accounted for approximately 3.3 percent of ES’ total clientele. With a joint Education/Vocational Rehabilitation plan approved by the Department of Education, states can receive project grants to provide school-to-work transition services to secondary students (14 and older) with disabilities. Many of these projects implement the Individuals With Disabilities Education Act’s (IDEA) requirement to provide transition services. 
Other institutions, such as colleges and universities and other nonprofit organizations, are also eligible for project grants to improve the school-to-work transition for students with disabilities. Although vocational rehabilitation programs can also provide financing for small businesses, competitive employment remains their primary emphasis. The Handicapped Assistance Loan program awards construction or working-capital loans to small businesses that are 100-percent owned by individuals with disabilities. Under this program, the Small Business Administration (SBA) guarantees commercial loans with extended repayment periods to businesses. This program can also be used for sheltered workshops, as described in more detail later. Since 1947, the President’s Committee on Employment of Persons With Disabilities has made efforts to develop public-private partnerships and encourage businesses to hire individuals with disabilities. The committee’s activities include information dissemination and coordination as well as operating the Job Accommodation Network (JAN). JAN provides information on workplace accommodations to employers, rehabilitation professionals, and individuals through a toll-free number. The Office of Personnel Management operates the federal Selective Placement Program, which provides federal agencies with assistance in placing federal employees who have become disabled and in recruiting employees with disabilities to federal service. Under this program, people with disabilities can apply for federal employment without going through the normal competitive process. The Americans With Disabilities Act (ADA) of 1990 established a clear and comprehensive prohibition against discrimination on the basis of disability. Among other protections, ADA established regulations focused on removing architectural, communications, and transportation barriers. 
Regarding employment, ADA essentially prohibits organizations employing 15 or more employees from discriminating against a qualified individual with a disability because of the disability in the job application or hiring process, in advancement or discharge of employees, employee compensation, job training, or other conditions of employment. ADA protects individual applicants or employees as long as they can perform all essential functions of the job with or without reasonable accommodation. To be reasonable, an accommodation must not impose an undue hardship on the employer and must enable the individual with a disability to perform the necessary work. For example, a reasonable accommodation for an individual in a wheelchair might be to raise his or her desk so that the wheelchair can fit comfortably beneath it. ADA is a mandate and not a federal program as such, although programs have been set up to enforce the provisions of the law. Nonetheless, the ADA remains a key part of the federal commitment to promote employment of people with disabilities. Through several different legislative actions, the federal government has prohibited employment discrimination solely on the basis of disability for federal contractors, state and local governments, and private businesses with 15 or more employees. Several programs have been set up to enforce these provisions. The Office of Federal Contract Compliance Programs is responsible for investigating complaints against federal contractors. The Department of Justice is responsible for investigating and prosecuting cases of employment discrimination under ADA against state and local governments, and the Equal Employment Opportunity Commission is responsible for ADA cases involving private-sector employees. These bodies may prosecute a case, decide that no cause for suit exists, or give clearance for an individual to file the case in federal court on his or her own. 
The Randolph-Sheppard Act of 1936 set up a program for blind individuals that gives organizations working with the blind preference in operating vending facilities on federal property. Under this program, these organizations may be granted rights to place vending machines or sell other items in federal buildings. The gross receipts of Randolph-Sheppard vending facilities totaled $388.8 million during fiscal year 1990. The Targeted Jobs Tax Credit (TJTC), which expired in 1994, was established by the Congress to promote employment for disadvantaged people. The Congress authorized this special tax credit to induce private businesses to employ people who were chronically unemployed, disadvantaged youth, welfare recipients, and people with disabilities. The tax credit amounted to 40 percent of the first $6,000 in wages during the first year of employment. For an employer to qualify for the tax credit, the worker must have been employed for at least 90 days or have completed at least 120 hours of work. Approximately 8 percent of the individuals benefiting from the TJTC were people with disabilities. Two federal programs provide financing for supported employment programs: one provides aid to state programs; the other finances projects directly. In addition, many states finance some supported employment services through state grant programs that receive funds from HHS to provide services to individuals with developmental disabilities. These programs provide ongoing (although generally time-limited) postemployment support to individuals with disabilities to help them maintain community employment. Under the first of these programs, states are given formula grants to provide supported employment services. This program is intended to provide services to individuals with severe disabilities to allow them to get jobs. 
These services can include job coaches, ongoing supports, training for coworkers, and a variety of other services designed to enable individuals to adjust to the workplace. Services provided under this program are generally limited to 18 months; after this time, states must either find additional funds to pay for continuing services or discontinue the services and see if the individual can continue without the additional support. This program awards grants to public and nonprofit agencies, including states, to conduct special projects and demonstrations to expand or assist supported employment services to individuals with the most severe disabilities. In fiscal year 1993, this program supported 13 new community-based projects, 14 continuing community-based projects, and 16 grants to states for systems-change projects. Services under this program are like services provided under the state grant and can include assistance to employers in training coworkers, assistive technology, and job coaches. Like the formula grants to states program, this program allows recipients to use these funds to build community capacity to provide these services. Federal financial help also supports sheltered workshop employment for people with disabilities. This support, however, is generally somewhat indirect, coming from federal purchases, exemptions from federal wage laws, and some business loans. The Javits-Wagner-O’Day Act established an initiative under which federal agencies may purchase selected goods and services from sheltered workshop providers. In fiscal year 1991, $431.55 million in contracts were awarded to 497 such workshops. Under the Fair Labor Standards Act, sheltered workshops may apply to the Secretary of Labor for exemptions from the minimum wage law. SBA may award handicapped assistance loans to sheltered workshops for construction or working capital. 
For workshops to be eligible, at least 75 percent of the direct-production work hours must be performed by people with disabilities. The Handicapped Assistance Loan program also provides loans to small businesses wholly owned by people with disabilities. The disability programs we examined differ in their services, objectives, size, and scope, and in how they distribute program dollars. Although many of these federal programs allocate their funding to state governments or local providers, the programs generally have established different mechanisms for doing so. For example, the federal-state Vocational Rehabilitation program allocates its funding to state governments on the basis of a formula that includes state population and per capita income. By contrast, the Labor Department’s Special Projects for Employment of Persons With Disabilities program awards grants to states or local providers on the basis of applications and proposals. Thus, the aggregate distribution of funds among states and geographic areas reflects these different allocation mechanisms in combination and may not resemble the distribution that would result from any one mechanism in particular. In addition, the aggregate distribution of funds among states under a multiple program structure may not represent the distribution that would have been chosen under a more integrated system. To illustrate the distribution effects of the allocation mechanisms currently used by disability programs, we examined the state distribution of funds for those wholly targeted, employment-focused programs that channel funds to locations nationwide. We compared this distribution with the distribution of people with disabilities by state and then looked at the per capita amounts available to each state under these programs. Our analysis focused on eight programs that represent the majority of funds distributed under employment-focused programs. 
We chose to limit our illustration to wholly targeted programs because people with disabilities represent a relatively small portion of clients served by many of the partially targeted programs. Without reliable data on state-by-state spending on people with disabilities only, we could not incorporate partially targeted programs without distorting the analysis. Of the 26 employment-focused programs that we identified, 9 were partially targeted and thus excluded from our analysis. An additional four programs provided advice to people with disabilities and their employers from central locations, and five programs did not report state-by-state spending information. Thus, eight programs remained for our analysis. Table IV.1 shows 1990 federal expenditures of the eight wholly targeted, employment-focused programs and, for each program, the funding mechanism (formula grants, project grants, direct loans, or direct payments) and whether cost sharing or matching is required. We obtained the information in this appendix from publicly available data through the Bureau of the Census. Specifically, we derived summary statistics from the Census of Population and Housing, 1990, and we derived expenditure data from the Consolidated Federal Funds Report (CFFR). Our selection of employment programs was based on the availability of expenditure data for wholly targeted programs. As shown in table IV.1, approximately $2.3 billion was distributed in 1990 through eight employment programs that were wholly targeted to people with disabilities. Many of these programs funded state, local, private, or nonprofit entities that administered services in their area. These organizations included institutions of higher learning, state vocational rehabilitation agencies, job training councils, local educational agencies, and other appropriate public or private nonprofit institutions. 
No single agency or department had both the responsibility and authority to administer these employment programs. Of the eight we selected, the Department of Labor, the Small Business Administration (SBA), and the Department of Veterans Affairs (VA) administered one program each. The remaining five programs were administered through the Department of Education, including the largest, Vocational Rehabilitation (see table IV.1). Programs with a decentralized structure distribute funds through a formula or through project proposals (see table IV.1). The largest program, Rehabilitation Services—Vocational Rehabilitation Grants to States, distributes funds through a formula and accounts for over 80 percent of the total amount available to people with disabilities in employment assistance. Formulas are also used by the Supported Employment and the Disabled Veterans’ Outreach programs. Under these programs, a state’s annual allotment is based on population characteristics such as per capita income, total population, or the number of disabled veterans in the state. Programs that use project grants to make allotments include Rehabilitation Services—Service Projects, Rehabilitation Long-Term Training, and Secondary Education and Transitional Services for Youth With Disabilities. For each of these, the state or service provider must apply for funding. Consequently, the variation in expenditures may reflect the success of local organizations in pursuing additional funds as well as population characteristics. As shown in figure IV.1, in 1990, the disabled working-age population as a percentage of the total working-age population varied across states between 7 and 15 percent. Southern states had the highest concentration of people with disabilities. For example, the percentage of working-age disabled people in West Virginia was around 15 percent of the total working-age population. Other southern states were also in the higher end of the distribution. 
States such as Kentucky and Alabama registered a disabled working-age population around 13 percent. In highly populated states like California, New York, Texas, Florida, and Illinois, the disabled working-age population was generally between 10 and 11 percent. In contrast, sparsely populated states, such as Wyoming, and states in the High Plains, such as North Dakota and South Dakota, had disabled working-age populations of less than 10 percent. As shown in figure IV.2, in 1990, these programs distributed to states between $200 and $1,100 per working-age person with a disability. Approximately 40 states had less than $500 available per person in the working-age disabled population. Although southern states had higher percentages of people with disabilities in their working-age population, these states were in the lower end of the expenditure distribution. Florida, Georgia, and South Carolina, for example, had between $200 and $350 available per disabled person. Large, highly populated states such as California and New York were also in the lower end of the distribution, although sparsely populated states, such as Wyoming and North Dakota, were in the higher end. The distribution of federal dollars must be interpreted cautiously because of several data limitations. First, data derived from the CFFR are the best estimates of federal obligations or outlays available. Because these data are estimates, however, in any given year, actual outlays may be higher or lower because program funds may be deobligated at any time. Similarly, although this analysis accounts for the majority of federal expenditures on employment-focused programs, we could not obtain sufficiently reliable data to allow us to analyze expenditures on partially targeted programs. Moreover, per capita amounts may conceal reasonable underlying factors not captured by our data sets, such as money that states or local jurisdictions raise. 
For example, the largest employment program, Vocational Rehabilitation, requires state and local jurisdictions to provide a matching component. While these funds increase the overall expenditures available, our estimates reflect only federal outlays. One of the most contentious aspects of disability research is also the most basic—the definition of disability. Different federal programs use different operational definitions of disability, as do researchers, advocacy groups, and other interested parties. Some of this variation occurs because many groups define disability for different purposes and thus use different criteria for evaluating a definition. For example, a relatively broad definition of disability would encompass a wide array of people with disabilities; however, some broader definitions can be quite subjective. Researchers may prefer a definition that can be used with existing data sources; program officials must be concerned with definitions that can be measured and verified. Three fundamental issues about the nature of disability contribute to these definition and measurement differences:

Scope of definition—Defining disability involves distinguishing between normal variations among individuals and conditions that are disabling.

Duration of a condition—Because a person’s disability status may change over time, some researchers argue that disability should be continually re-evaluated and remeasured and that temporary or sporadic conditions should be considered in evaluating disability. Others contend that only permanent conditions should be considered. A condition (such as rheumatoid arthritis) may be limiting but may have only sporadic impact on an individual’s ability to function—so even differentiating between permanent and temporary disabilities can be difficult.

Variation in application—Even with the most clear-cut definitions of disability, applying the criteria involves an inherent judgment. 
Two parties may agree on a definition of disability but may then classify different people as disabled. For example, a significant difference exists in the number of people identified as eligible for disability insurance by the state disability determination services and by administrative law judges. Not only is disability hard to determine under any given definition, but definitions of disability vary widely. We identified many different definitions used by programs, researchers, and advocacy groups. (Table V.1, at the end of this appendix, lists some examples of these definitions and their sources.) Relying on functional assessment, medical criteria, or individual perception, these definitions emphasize different aspects of disability—from the individual’s ability to work, for example, to the role of the person’s physical environment in shaping the degree of disability. The number of disability definitions, combined with differences in measurement techniques, has resulted in estimates of the number of people with disabilities that range from 3.5 million to 49 million. Although many definitions are similar, even subtle differences in the population included, the survey used, or the definition of disability can have far-reaching effects on how many individuals are counted as having a disability. For example, estimates from the 1990 Survey of Income and Program Participation (SIPP) indicated that 8.6 million Americans aged 16 to 67 were “unable to work” due to disability; the 1990 census estimated that 6.6 million Americans aged 16 to 64 were “unable to work” due to disability—a difference of nearly one-third. When the definition of disability is widened to include individuals who are “limited in work,” 1990 to 1993 estimates range from 12.9 million to 19.5 million. Table V.2, at the end of this appendix, shows the differences in the estimated disability prevalence in the United States using different definitions and sources. 
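To make concrete how definitional choices drive these counts, the following is a minimal illustrative sketch in Python. The survey records, field names, and age cutoffs here are invented for illustration; they do not come from SIPP, the census, or any actual survey.

```python
# Hypothetical survey records (invented for illustration). Each record notes age
# and whether the respondent reports being unable to work, limited in work, or
# limited in some other activity.
respondents = [
    {"age": 30, "unable_to_work": True,  "limited_in_work": True,  "limited_other": True},
    {"age": 45, "unable_to_work": False, "limited_in_work": True,  "limited_other": False},
    {"age": 52, "unable_to_work": False, "limited_in_work": False, "limited_other": True},
    {"age": 70, "unable_to_work": True,  "limited_in_work": True,  "limited_other": True},
    {"age": 25, "unable_to_work": False, "limited_in_work": False, "limited_other": False},
]

def count(definition, lo=16, hi=64):
    """Count respondents in the age window who meet a given definition."""
    return sum(1 for r in respondents if lo <= r["age"] <= hi and definition(r))

narrow = count(lambda r: r["unable_to_work"])                          # "can't work"
wider  = count(lambda r: r["unable_to_work"] or r["limited_in_work"])  # "work disability"
widest = count(lambda r: r["limited_in_work"] or r["limited_other"])   # "limited in any activity"

print(narrow, wider, widest)  # prints: 1 2 3
```

Each broadening of the definition, and any change to the age window, counts at least as many people as the narrower version, which is the mechanism behind the wide spread of published prevalence estimates.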
The most common method of defining disability—both for researchers and under federal programs—is based on functional limitation. Under this type of definition, an individual is considered to have a disability if he or she is limited in, or unable to perform, a certain activity or activities. The definition can be broad or narrow, depending on whether activities are specified narrowly or widely and on whether the individual must be unable to do the activity or must only be limited. The term “limited” may refer to the type or amount of activity. For instance, a person with arthritis may be unable to perform some types of household chores (such as sewing) but may be able to do other tasks (like laundry) without any problem. Similarly, a person with another condition may be able to do any chore for a short period of time but may need to rest before attempting to complete the task. Activities can also be specified widely or narrowly. For example, some survey questions leave the term “activities” to be defined by the respondent. Other instruments confine the definition of activities to a specific list, like the activities of daily living (ADL) or the instrumental activities of daily living (IADL). As an example, a broader definition of disability could characterize individuals with a disability as “limited in performing any of their usual activities”; a narrower definition could characterize individuals with a disability as being “unable to work at a full-time job.” Some disability advocates find a wide-ranging functional approach to disability definition appealing because it measures the impact of an individual’s condition without regard to the cause of that condition. Others have criticized many of these definitions, however, as being too general to make effective distinctions among individual cases. 
Although narrowing the scope of the activities considered would make the definition more specific, it would also increase the probability that individuals would be arbitrarily excluded. In addition, even when the activities are defined fairly narrowly—with ADLs or IADLs, for example—measuring or verifying disability can be difficult. Survey evidence demonstrates the effect of adopting a widely ranging functional definition as opposed to a narrower one. For example, when the 1990 census asked individuals if they were limited in their mobility (for example, going out of the house or to a store by themselves), an estimated 3.5 million individuals aged 16 to 64 were identified as disabled. However, in the 1991 SIPP, an estimated 27 million individuals aged 21 to 64 were identified as disabled when the definition included having a functional limitation in any activity; and an estimated 49 million individuals of all ages were considered disabled when the definition included having a functional limitation in any activity and when examples were provided. Several disability definitions take a narrow view of activity limitation, with employment as the only activity. For example, income maintenance and pension programs often define disability to include only those individuals who cannot work because of their impairment. These definitions allow programs to focus on individuals for whom employment is deemed unfeasible and thus may be in greater need. A “can’t work” definition, however, requires judgment not only of an individual’s physical conditions, but also of his or her capabilities in a wide variety of potential employment situations. This makes implementing the definition problematic, especially in recent years because improvements in information technology, an increased emphasis on accommodation in the workplace, and new models of working with individuals with disabilities (such as supported employment) have complicated assessments of the ability to work. 
Medical and legal determination of the ability to work is thus labor intensive. The emphasis on ability to work has also been criticized by analysts who believe that this definition creates a strong disincentive to employment. Because applicants must prove that they cannot work to receive benefits and may risk losing these benefits if they become employed, they may be reluctant to look for a job. In addition, having proved to the authorities that they are unable to work, disability beneficiaries may agree with this assessment and thus not try to enter the labor force. Many household surveys include questions that reflect this kind of broad “can’t work” definition, for example, “Do you have a health condition that limits your ability to work?” or “Are you unable to work because of a disability?” or “Do you have a condition that limits the type or amount of work you can do?” In 1990, the number of working-age individuals who reported they were unable to work or limited in work ranged from 12.9 million to 19.5 million; the number reported as unable to work ranged from 6.6 million to 14.2 million. Some definitions consider an individual disabled if he or she has one or more of a specified list of medical conditions. For example, vocational education programs define students as having a disability if they are “mentally retarded, hard of hearing, deaf, speech impaired, visually handicapped, seriously emotionally disturbed, orthopedically impaired, other health impaired, deaf-blind, multihandicapped, or have specific learning disabilities, who because of these impairments, need special education and related services, and cannot succeed in the regular vocational education program without special education assistance.” In addition, some functional definitions of disability specifically exclude certain conditions. 
For example, the definition of disability in the Americans With Disabilities Act (ADA) excludes psychoactive substance abuse, transsexualism, pedophilia, compulsive gambling, and kleptomania (among other disorders). These definitions are presumably relatively straightforward because they require only an assessment of a medical condition, not an evaluation of an individual’s ability to function with this impairment. However, a medical definition generally contains no information on the severity of the condition and ignores potentially debilitating conditions not included on the list. Thus, a medically based approach may sometimes be as arbitrary as a more subjective definition. In addition, medically based definitions would presumably require certification and may be expensive to verify. Relatively little up-to-date information on the prevalence of specific medical disorders exists in the United States. The data that are available, however, suggest that definitions of disability based on medical conditions may be quite distinct from definitions based on an individual’s functional ability, and may classify large numbers of individuals as having a disability. For example, the 1993 National Health Interview Survey (NHIS) reported that, of Americans aged 18 to 64, 13.2 million were hearing impaired; 5.8 million were visually impaired; and 0.9 million had palsy, cerebral palsy, or mental retardation. Fully 61 percent of the visually impaired and 65 percent of the hearing impaired reported no limitation in the kind or amount of work they could do—indicating that medical condition and self-perception of ability to work are distinct concepts. Results from the National Comorbidity Survey administered between 1990 and 1992 indicated that during the previous 12 months as many as 29 percent of individuals may have had at least 1 of 14 psychiatric disorders, including major depression, anxiety disorders, and substance abuse. 
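The list-based approach described above can be sketched as follows. The qualifying list and the cases are hypothetical, not any program's actual criteria; the point is only that membership on the list, not severity, drives the classification.

```python
# Hypothetical sketch: a list-based medical definition classifies solely by
# membership in a specified condition list, ignoring severity and ignoring any
# condition missing from the list. The list below is invented for illustration.
QUALIFYING_CONDITIONS = {"hearing impairment", "visual impairment", "cerebral palsy"}

def medically_disabled(conditions):
    """True if any reported condition appears on the qualifying list."""
    return any(c in QUALIFYING_CONDITIONS for c in conditions)

# A mild listed condition counts ...
print(medically_disabled({"hearing impairment"}))          # prints: True
# ... while a severe condition absent from the list does not.
print(medically_disabled({"chronic back pain (severe)"}))  # prints: False
```

This is why a medically based approach can be as arbitrary as a more subjective definition: the classification turns entirely on how the list was drawn up.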
Researchers have used two other types of disability definitions that are less practical for programmatic purposes. For example, individually defined disability is used in some survey data. This measure classifies an individual as disabled on the basis of self-assessment or on the opinions of others. No explicit definition of disability is used, so each individual answers the question using his or her own concept of what it means to be disabled. An individually defined concept of disability could capture some people who would not be included under more restrictive definitions, but this definition is likely to be inconsistent and thus unreliable for distinguishing among individual cases. A second type of definition is an environmental/societal-based definition of disability, which emphasizes the role of the surrounding environment in determining the extent of an individual’s limitations; that is, it assesses whether the person can function independently given the environment he or she must face. These definitions require consideration of both the individual’s physical or mental condition and the surrounding environment. For example, under such a definition, an individual in a wheelchair may be considered disabled if he or she lives in a city with no public transportation and no curb cuts but might not be considered disabled in a city that had these features. Assessing functional ability in the context of both the individual and the environment or society is not only subjective but extremely difficult—the environment is all encompassing and frequently changing. However, this type of definition does raise the public’s awareness of the role of the environment in determining individuals’ capabilities.

Table V.1: Examples of Different Definitions of Disability, by Source and Type

- “. . . a limitation that affects an individual’s ability to perform certain functions.”
- “Had a disability or health problem that prevented him or her from participating fully in work, school, or other activities.” (Louis Harris Survey and DeJong)
- “Individuals with significant physical or mental impairments whose abilities to function independently in the families or communities or whose ability to obtain, maintain, or advance in employment is substantially limited. Eligibility shall not be based on the presence or absence of any one or more specific severe disabilities.” (Centers for Independent Living Program (84.132))
- “. . . departure from normal role functioning attributable to a health-related condition.” (World Health Organization (WHO); functional)
- Unable to perform at least three ADLs or IADLs without assistance. (HUD Congregate Housing (14.170))
- “. . . a physical or mental impairment which substantially limits one or more of the major life activities of such individual; a record of having such an impairment; or being regarded as having such an impairment” (excluding specific conditions, especially current substance abuse). (Americans With Disabilities Act)
- “. . . are incapable of regularly pursuing any substantially gainful employment due to a disability that is likely to be of long or indefinite duration or is likely to result in death.” Also, “. . . unable to perform their usual occupation due to a disability that is likely to be of long or indefinite duration or is likely to result in death.” (Canadian disability insurance/income maintenance program, as reported in Maki)
- “. . . individuals with mental or physical impairments that reduce their capacity to work by at least 50 percent; individuals who are at least 30 percent impaired and unemployed are also considered handicapped.”
- Either (a) receives benefits from a government disability program or (b) reports a limitation on his or her ability to work.
- Limited in the type or amount of work (or housekeeping if housekeeping is considered to be the “primary occupation”).
- The inability to engage in substantial gainful activity, by any medically determinable physical or mental impairment which can be expected to result in death or has lasted or is expected to last for a continuous period of not less than 12 months. (Social Security disability programs)
- Having one or more of the following physical conditions—“weakness/lack of strength; trouble with fingers; trouble walking, standing, or with stairs; in a wheelchair; trouble seeing/blind; trouble with leaving bed or leaving home; trouble lifting; deaf; trouble with stiffness or pain; trouble with seizures or spasms; mental illness; mental retardation.”
- “Mentally retarded, hard of hearing, deaf, speech impaired, visually handicapped, seriously emotionally disturbed, orthopedically impaired, other health impaired, deaf-blind, multihandicapped, or have specific learning disabilities, who because of these impairments, need special education and related services and cannot succeed in the regular vocational education program without special education assistance.” (vocational education programs)
- Having one or more of the following physical conditions—“major amputations, cerebral palsy, major head injury, Friedreich’s ataxia, muscular dystrophy, spina bifida, amyotrophic lateral sclerosis, cystic fibrosis, spinal cord injury, multiple sclerosis, post-polio, stroke.”
- Individuals with “mental retardation; hearing impairments; speech, or language impairments; visual impairments; serious emotional disturbance; orthopedic impairments; autism; traumatic brain injury; other health impairments; specific learning disabilities; . . . that need special education and related services.”
- “A person was defined as having a disability if he or she considered himself or herself to have a disability or said that other people would consider him or her to be a person with a disability.”
- How would you describe your health? (excellent, good, fair, poor)
- Disability is “. . . the expression of a physical or mental limitation in a social context—the gap between a person’s capabilities and the demands of the environment. People with such functional limitations are not inherently disabled, that is, incapable of carrying out their personal, familial, and social responsibilities. It is this interaction of their physical or mental limitations with social and environmental factors that determines whether they have a disability.”
- “The disadvantage or restriction of activity caused by a contemporary social organization which takes no or little account of people who have physical impairments and thus excludes them from the mainstream of social activities.”

Table V.2: Estimated Disability Prevalence in the United States, Using Different Definitions and Sources

[The row-by-row estimates of table V.2 are not reproducible here. The table lists, by source and year (1990-94), the definition used and the ages of the population included in each estimate (for example, 21-64 or all ages). Definitions shown include the Current Population Survey (CPS) definitions for 1993 and 1994 (either unable to work or receiving disability benefits from a government income maintenance program); work disability (either unable to work or limited in work, 1990-91 and 1993); a census composite (work disability or a mobility limitation or a self-care limitation); functional definitions (limited in work or in some other activity, 1993; limited in any activity, 1991-93); and a composite definition (limited in any activity or in self-care or has difficulty with one of listed tasks, 1990-91).]

No sources predating 1990 were included. The Survey of Disability and Work (1972 and 1978), the National Long-Term Care Survey (1982-84), the Epidemiological Catchment Area survey (1981), and the SSA New Beneficiary Survey (1982) also provide some disability information. 
Some information on specific disabling conditions is available from the National Comorbidity Survey administered between 1990 and 1992, the 1990-1991 SIPP, and the 1993 NHIS. The Epidemiological Catchment Area survey (1981) also provides data on specific conditions. The following individuals also contributed to this report: Richard Kelley, Steven Machlin, and Mary Reich. Anderson, K., J. Mitchell, and J.S. Butler. “Effect of Deviance During Adolescence on Choice of Jobs.” Southern Economic Journal, Vol. 60, No. 2 (1993), pp. 341-56. Baldwin, M., and W. Johnson. “Labor Market Discrimination Against Men With Disabilities.” Journal of Human Resources, Vol. 29, No. 1 (1994), pp. 1-19. Barker, P., et al. “Serious Mental Illness and Disability in the Adult Household Population: United States, 1989.” Advance Data, No. 218, Centers for Disease Control and Prevention: National Center for Health Statistics. Washington, D.C.: 1992, p. 11. Berkowitz, Edward D. Disabled Policy: America’s Programs for the Handicapped (A Twentieth Century Fund Report). Cambridge, U.K.: Cambridge University Press, 1987. Bound, J. “Health and Earnings of Rejected Disability Insurance Applicants: Reply.” American Economic Review, Vol. 81, No. 5 (1991), pp. 1427-34. _____. “The Health and Earnings of Rejected Disability Insurance Applicants.” American Economic Review, Vol. 79, No. 3 (1989), pp. 482-503. _____ and T. Waidman. “Disability Transfers, Self-Reported Health, and Labor Force Attachment of Older Men: Evidence from the Historical Record.” The Quarterly Journal of Economics (1992), pp. 1393-1419. Burkhauser, R.V., and P. Hirvonen. “United States Disability Policy in a Time of Economic Crisis: A Comparison with Sweden and the Federal Republic of Germany.” Milbank Quarterly, Vol. 67, Suppl. 2, Pt. 1 (1989), pp. 166-95. Chirikos, T. “Accounting for the Historical Rise in Work-Disability Prevalence.” Milbank Quarterly, Vol. 64, No. 2 (1986), pp. 271-301. DeJong, G., A. Batavia, and R. Griss. 
“America’s Neglected Health Minority: Working-age Persons with Disabilities.” Milbank Quarterly, Vol. 67, Suppl. 2, Pt. 2 (1989), pp. 311-51. Haveman, R., and B. Wolfe. “The Economic Well-Being of the Disabled 1962-84.” Journal of Human Resources, Vol. 25, No. 1 (1990), pp. 32-54. Iams, H.M. “Characteristics of the Longest Job for New Disabled Workers: Findings from the New Beneficiary Survey.” Social Security Bulletin, Vol. 49, No. 12 (1986), pp. 13-18. Johnson, W., and J. Lambrinos. “Wage Discrimination Against Handicapped Men and Women.” Journal of Human Resources, Vol. 20, No. 2 (1985), pp. 264-77. Maki, D.R. “The Economic Implications of Disability Insurance in Canada.” Journal of Labor Economics, Vol. 11, No. 1 (1993), pp. S148-S169. Manton, K. “Epidemiological, Demographic, and Social Correlates of Disability Among the Elderly.” Milbank Quarterly, Vol. 67, Suppl. 2, Pt. 1 (1989), pp. 13-58. Mitchell, J.M., and R. Burkhauser. “Disentangling the Effect of Arthritis on Earnings: A Simultaneous Estimate of Wage Rates and Hours Worked.” Applied Economics, Vol. 22 (1990), pp. 1291-1309. Oi, Walter Y. “Disability and a Workfare-Welfare Dilemma,” in Disability and Work: Incentives, Rights and Opportunities, ed. Carolyn Weaver. Washington, D.C.: American Enterprise Institute, 1991. Oliver, Michael. The Politics of Disablement: A Sociological Approach. New York: St. Martin’s Press, 1990. Parsons, Donald. “The Decline in Male Labor Force Participation.” Journal of Political Economy, Vol. 88, No. 1 (1980), pp. 117-34. Ravaud, J.F., B. Madiot, and I. Ville. “Discrimination Towards Disabled People Seeking Employment.” Social Science Medicine, Vol. 35, No. 8 (1992), pp. 951-58. Reisine, S., and J. Fifield. “Expanding the Definition of Disability: Implications for Planning, Policy, and Research.” Milbank Quarterly, Vol. 70, No. 3 (1992), pp. 491-508. Rones, P. “Can the Current Population Survey Be Used to Identify the Disabled?” Monthly Labor Review, June (1981), pp. 
37-39. Stern, S. “Measuring the Effect of Disability.” Journal of Human Resources, Vol. 24, No. 3 (1989), pp. 361-95. Wolfe, B.L. “How the Disabled Fare in the Labor Market.” Monthly Labor Review, (1980), pp. 48-52. Yelin, E. “Displaced Concern: The Social Context of the Work Disability Problem.” The Millibank Quarterly, Vol. 67, No. 2 (1989), pp. 114-65. _____, and P. Katz. “Labor Force Participation Among Persons With Musculoskeletal Conditions, 1970-1987.” Arthritis and Rheumatism, Vol. 34, No. 11 (1991), pp. 1361-70. Zeitzer, I.R. “Recent European Trends in Disability and Related Programs.” Social Security Bulletin, Vol. 57, No. 2 (1994), pp. 21-26. _____. “The Role of Assistive Technology in Promoting Return to Work for People with Disabilities: The U.S. and the Swedish Systems.” Social Security Bulletin, Vol. 54, No. 7 (1991), pp. 24-29. SSA Disability: Program Redesign Necessary to Encourage Return to Work (GAO/HEHS-96-62, Apr. 24, 1996). PASS Program: SSA Work Incentive for Disabled Beneficiaries Poorly Managed (GAO/HEHS-96-51, Feb. 28, 1996). Americans With Disabilities Act: Effect of the Law on Access to Goods and Services (GAO/PEMD-94-14, June 21, 1994). Social Security: Disability Rolls Keep Growing, While Explanations Remain Elusive (GAO/HEHS-94-34, Feb. 8, 1994). Vocational Rehabilitation: Evidence for Federal Program’s Effectiveness Is Mixed (GAO/PEMD-93-19, Aug. 27, 1993). Social Security: Disability: SSA Needs to Improve Continuing Disability Review Program (GAO/HRD-93-109, July 8, 1993). Americans With Disabilities Act: Initial Accessibility Good But Important Barriers Remain (GAO/PEMD-93-16, May 19, 1993). Vocational Rehabilitation: VA Needs to Emphasize Serving Veterans With Severe Employment Handicaps (GAO/HRD-92-133, Sept. 28, 1992). Integrating Human Services: Linking At-Risk Families With Services More Successful Than System Reform Efforts (GAO/HRD-92-108, Sept. 24, 1992). 
Vocational Rehabilitation: Better VA Management Needed to Help Disabled Veterans Find Jobs (GAO/HRD-92-100, Sept. 4, 1992). Job Training Partnership Act: Actions Needed to Improve Participant Support Services (GAO/HEHS-92-124, June 12, 1992). Adolescent Drug Use Prevention: Common Features of Promising Community Programs (GAO/PEMD-92-2, Jan. 16, 1992). Vocational Rehabilitation: Clearer Guidance Could Help Focus Services on Those With Severe Disabilities (GAO/HRD-92-12, Nov. 26, 1991). Social Security: District Managers’ Views on Outreach for Supplemental Security Income Programs (GAO/HRD-91-19FS, Oct. 30, 1990). The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed federal programs targeted to disabled persons, focusing on: (1) how many of the programs provide employment-related services; (2) coordination of information, eligibility criteria, and services among various programs; and (3) the programs' effectiveness in promoting employment among disabled persons. GAO found that: (1) 130 federal programs provide services to disabled persons; (2) in fiscal year 1994, federal agencies spent over $60 billion on 69 programs exclusively targeted to disabled persons and between $81 billion and $184 billion on 61 other programs targeted to a wider clientele that gave special consideration to disabled persons; (3) most program expenditures supported income maintenance and health care programs; (4) employment-oriented programs constituted only 26 of the 130 programs and received only 2.5 to 4 percent of total federal funding for such programs in 1994; (5) 57 other programs provided indirect employment assistance; (6) most programs provide services through states and local governments, and nonprofit and private organizations; (7) various program funding mechanisms affect the distribution of program funds among states; (8) the federal government funds a wide range of services to address major employment barriers; (9) disabled persons who need services from more than one program find the programs' differing eligibility criteria and numerous service providers burdensome; (10) the lack of program coordination and information sharing leads to service duplication and gaps, and past efforts to improve service coordination have only been marginally successful; (11) some state and local agencies have improved service delivery to disabled persons and reduced program costs; and (12) few programs have been evaluated for their effectiveness, since many agencies do not require or collect data on program outcomes.
FTA generally funds New Starts projects through FFGAs, which are required by statute to establish the terms and conditions for federal participation in a New Starts project. FFGAs also define a project’s scope, including the length of the system and the number of stations; its schedule, including the date when the system is expected to open for service; and its cost. For projects to obtain FFGAs, they must emerge from a regional, multimodal transportation planning process. The early stages of the New Starts project development process—alternatives analysis and much of preliminary engineering—are carried out in concert with the metropolitan planning process specified by SAFETEA-LU and the environmental review processes required by the National Environmental Policy Act of 1969 (NEPA). Alternatives analysis studies are corridor-level analyses of a range of alternatives designed to address locally identified mobility and other problems in a specific transportation corridor. The alternatives analysis phase culminates in the selection of a locally preferred alternative (LPA), which is the New Starts project that FTA evaluates for funding. After a locally preferred alternative is selected, the project sponsor submits an application to FTA for the project to enter the preliminary engineering phase. During the preliminary engineering phase, project sponsors refine the design of the locally preferred alternative, taking into consideration all reasonable design alternatives and estimating each alternative’s costs, benefits, and impacts (e.g., financial or environmental). Further, project sponsors are required to complete the NEPA environmental review process in order to receive federal funding. Specifically, FTA interprets NEPA to require, as part of the NEPA process for evaluation of the alternatives, an environmental review document with information on each alternative’s benefits and costs relating to the New Starts evaluation. 
When the preliminary engineering phase is completed and federal environmental requirements are satisfied, FTA may approve the project’s advancement into final design, after which FTA may recommend the project for a FFGA and proceed to construction. FTA oversees grantees’ management of projects from the preliminary engineering phase through the construction phase (see fig. 1). This project management oversight is conducted by FTA staff, working closely with its project management oversight contractors (PMOC), to provide continual monitoring and assessment of each project’s scope, schedule, and budget, and of its sponsor’s technical capacity. To help inform administration and congressional decisions about which projects should receive federal funds, FTA currently distinguishes among proposed projects by evaluating and assigning ratings to various statutory evaluation criteria—including both project justification and local financial commitment criteria—and then assigning an overall project rating. (See fig. 2.) These evaluation criteria reflect a broad range of benefits and effects of the proposed project, such as cost-effectiveness, as well as the ability of the project sponsor to fund the project and finance the continued operation of its transit system. FTA has developed specific measures for each of the criteria outlined in statute. However, FTA currently assigns a 50 percent weight to both the cost-effectiveness and the land use criteria when developing the project justification summary rating. The other project justification criteria are not weighted, although the mobility improvements criterion is used as a “tiebreaker.” On the basis of these evaluation measures, FTA assigns proposed projects a rating for each criterion and then assigns a summary rating for local financial commitment and project justification. 
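The weighting scheme described above can be sketched in code. This is only a hedged reading of the process, not FTA's actual procedure: the five-point numeric scale and the tiebreaker handling are assumptions made for illustration.

```python
# Illustrative sketch of the project justification summary rating:
# cost-effectiveness and land use each carry a 50 percent weight, and the
# mobility improvements criterion breaks a tie between adjacent ratings.
# The numeric scale below is an assumption, not FTA's published method.
RATING_SCALE = {"low": 1, "medium-low": 2, "medium": 3,
                "medium-high": 4, "high": 5}
RATING_NAMES = {v: k for k, v in RATING_SCALE.items()}

def project_justification_summary(cost_effectiveness, land_use, mobility):
    """Combine criterion ratings into a summary rating."""
    score = 0.5 * RATING_SCALE[cost_effectiveness] + 0.5 * RATING_SCALE[land_use]
    if score == int(score):
        return RATING_NAMES[int(score)]
    # Score falls halfway between two ratings: mobility improvements acts
    # as the tiebreaker, rounding up when it is "medium" or better.
    lower = int(score)
    return RATING_NAMES[lower + 1 if RATING_SCALE[mobility] >= 3 else lower]
```

Under this sketch, a project rated "high" on cost-effectiveness and "medium" on land use averages to "medium-high" regardless of its mobility rating, while a "high"/"medium-high" pair is resolved by the tiebreaker.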
These two ratings are averaged together, and FTA assigns each project a “high,” “medium-high,” “medium,” “medium-low,” or “low” overall rating, which is used to rank projects and determine which projects to recommend for funding. Projects are rated at several points during the New Starts process, including as part of the evaluation for entry into the preliminary engineering and final design phases, and they are rated yearly for inclusion in the New Starts Annual Report. The administration uses the FTA evaluation and rating process, along with the phase of development of New Starts projects, to decide which projects to recommend to Congress for funding. Although many projects receive a summary rating that would make them eligible for a FFGA, generally only a few are proposed for a FFGA in a given fiscal year. FTA proposes FFGAs for those projects that are projected to meet the following conditions during the fiscal year for which funding is proposed: Federal project funding must be committed and available for the project. The project must be in or near the final design phase and have progressed far enough for uncertainties about costs, benefits, and impacts (e.g., financial or environmental) to be minimized. The project must meet FTA’s tests for readiness and technical capacity, which confirm that there are no remaining cost, project scope, or local financial commitment issues. SAFETEA-LU introduced a number of changes to the New Starts program, including some that affect the evaluation and rating process. For example, given past concerns that the evaluation process did not account for a project’s impact on economic development and FTA’s lack of communication to sponsors about upcoming changes, the statute added economic development to the list of project justification criteria that FTA must use to evaluate and rate New Starts projects, and requires FTA to issue notice and guidance each time significant changes are made to the process and criteria. 
SAFETEA-LU also established the Small Starts program, a new capital investment grant program, simplifying the requirements imposed on those seeking funding for lower cost projects, such as bus rapid transit, streetcar, and commuter rail projects. This program is intended to advance smaller-scale projects through an expedited and streamlined evaluation and rating process. FTA also subsequently introduced a separate eligibility category within the Small Starts program called Very Small Starts, which is for projects with a capital cost of less than $50 million. Very Small Starts projects qualify for an even simpler and more expedited evaluation and rating process than other Small Starts projects. FTA, like most federal agencies, must document its activities, including work related to the New Starts program, in accordance with the Federal Records Act of 1950, as amended. Each federal agency must maintain a records management program and must preserve records that (1) document the organization, functions, policies, decisions, procedures, and essential transactions of the agency and (2) provide the information necessary to protect the legal and financial rights of the government and of persons directly affected by the agency’s activities. The National Archives and Records Administration (NARA) is given general oversight responsibilities for records management programs and practices. The activities of an agency records management program include, among other things, the development of a records schedule, which specifies, for all records created and received by the agency, where and how long records need to be retained and their final disposition (destruction or preservation) based on time, event, or a combination of time and event, subject to the approval of NARA. No record may be destroyed unless it has been scheduled and, for temporary records, the schedule is of critical importance because it provides the authority to dispose of the record after a specified time period. 
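The disposal rule just described reduces to a simple two-part check. The sketch below is illustrative only; the function and parameter names are our own assumptions, not NARA's or FTA's actual terms.

```python
from datetime import date, timedelta

def disposal_authorized(scheduled: bool, trigger_date: date,
                        retention_years: int, today: date) -> bool:
    """A record may be destroyed only if (1) it is covered by an approved
    records schedule and (2) its retention period has elapsed since the
    triggering event (e.g., the close of the project)."""
    if not scheduled:
        return False  # unscheduled records may never be destroyed
    elapsed = today - trigger_date
    return elapsed >= timedelta(days=int(retention_years * 365.25))
```

For instance, a scheduled record with a 2-year retention period triggered at the close of a project would become eligible for disposal only after those 2 years have passed; an unscheduled record is never eligible.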
There are insufficient data available to determine the time it takes for a project to move through the New Starts process. Nevertheless, 9 of the 40 projects that have received a FFGA since 1997 had complete milestone data available, which show that the project development phases took from about 4.5 to 14 years to complete. However, the data from these 9 projects are not generalizable to the 40 New Starts projects. FTA has not historically retained all milestone data for the 40 projects, such as the dates project sponsors apply to enter a project development phase, in a consistent manner. However, FTA has retained some milestone data from some projects and is taking steps to improve its New Starts data retention and collection. In addition, we found that project sponsors do not systematically retain milestone data for projects that have completed the New Starts process. Congress and FTA have taken action to expedite projects through the New Starts process through Penta-P and training workshops for project sponsors. FTA has not historically retained all milestone data, such as the dates that project sponsors apply to enter a project development phase, and FTA’s subsequent approval, in a consistent or comprehensive manner. According to FTA, its record schedule requires that FTA retain documents related to milestone approvals for 2 years after the close of the project, and FTA meets this requirement. For example, FTA retains documents that notify project sponsors of their approval to enter preliminary engineering and final design. Although not required, FTA has also retained milestone data from some, but not all, projects for longer than 2 years. We were unable to obtain complete and reliable project milestone data from FTA. 
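The difficulty can be stated concretely: a valid development timeline can be computed only when every milestone date survives. A minimal sketch, with field names that are illustrative assumptions rather than FTA's actual records schema:

```python
from datetime import date

# Field names are illustrative assumptions, not FTA's records schema.
MILESTONES = ("alternatives_analysis_start", "preliminary_engineering_approval",
              "final_design_approval", "ffga_approval")

def development_years(project):
    """Return elapsed years from the start of alternatives analysis to FFGA
    approval, or None when any milestone date is missing -- the situation
    encountered for most of the 40 projects."""
    if any(project.get(m) is None for m in MILESTONES):
        return None  # incomplete record: no valid timeline can be computed
    span = project["ffga_approval"] - project["alternatives_analysis_start"]
    return round(span.days / 365.25, 1)
```

A single missing approval letter or application date is enough to make the whole timeline incomputable, which is why inconsistent retention undermines any analysis of how long the process takes.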
FTA has historically retained milestone data from some projects using a variety of techniques, such as maintaining hard copies of milestone approval letters or internal memos in binders and saving electronic copies of some documents in a computer filing system. Using these sources, FTA provided us with milestone approval dates— preliminary engineering, final design, and FFGA—for the 40 projects that received a FFGA since 1997. However, when we attempted to verify the milestone approval dates from a random sample of 10 projects, we found that the data were unreliable and, in some cases, inaccurate. For example, the approval dates for some projects did not match the dates contained in the source documents (e.g., letters from FTA approving a project’s advancement into preliminary engineering); in other cases the source documents for some projects were missing from the project files. In addition to milestone approval dates, we asked FTA to provide the dates that these 40 projects began alternatives analysis and submitted applications for preliminary engineering, final design, and FFGA. Because FTA is not required by its record schedule to retain these dates, FTA was unable to provide these dates. In addition, FTA officials cited several challenges to collecting this information. First, FTA told us that it does not have records on when a project begins alternatives analysis because this phase is conducted at the local level, generally without FTA involvement. Second, FTA told us that it does not record when a project sponsor submits an application for preliminary engineering, final design, and FFGA because project sponsors almost never submit complete applications. According to FTA officials, they begin to review applications while simultaneously working with project sponsors to submit additional documentation to complete the application. 
However, according to FTA officials, because the application process is iterative, they have not historically assigned a date when the application was fully submitted. We have previously reported that federal agencies can use performance information to make various types of management decisions to improve programs and results. In particular, managers can use performance information to identify problems in existing programs, to try to identify the causes of problems, or to develop corrective actions. Further, GAO’s Standards for Internal Control in the Federal Government states that internal control activities include, among other activities, appropriate documentation of records. More specifically, internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. The documentation should appear in management directives, administrative policies, or operating manuals and may be in paper or electronic form. All documentation and records should be properly managed and maintained. FTA officials acknowledged that, while its record keeping has not historically been perfect, the agency has retained sufficient milestone data to help manage the New Starts program. For example, FTA officials noted that they have used the data to help identify “pain points” in the process and options to streamline the process. Furthermore, FTA officials stated that even with the most comprehensive information on the time it takes for New Starts projects to complete the project development process, each project represents a unique set of challenges stemming from local decision making, funding availability, and local legal structure that will affect the time it takes to pass through the FTA decision phases. 
Nevertheless, recognizing the importance of having complete milestone data to better understand and improve the project development process, FTA has taken several steps in recent years to more consistently collect and retain such data. For example, FTA officials told us that, since late 2006, they retain all letters that contain preliminary engineering and final design approval dates and electronically document in internal memos the date a project sponsor’s application to enter preliminary engineering is received. Also, according to FTA, the agency has begun to document the date when it considers project sponsors’ preliminary engineering applications complete. In addition, FTA officials said that in 2008 they began requiring project sponsors to submit a copy of their alternatives analysis initiation packages for FTA review and comment. Finally, FTA officials said they were in the process of developing a spreadsheet to record various project approval dates—including statutorily required approval dates and internal FTA review dates—and had just completed a yearlong pilot of an electronic case management system. Project sponsors also do not consistently retain milestone data for projects that have completed the New Starts process. Because of the limitations of FTA’s data, we attempted to collect data from project sponsors that have received a FFGA since 1997 on the time it takes for a project to move through the New Starts evaluation and rating process. We queried the project sponsors for several New Starts milestone dates. However, we found that some of the project sponsors do not consistently maintain records on completed projects. In addition, some projects had multiple project sponsors during the New Starts evaluation and rating process, which complicated record keeping. 
Nonetheless, we were able to gather some milestone dates for 30 of the 40 projects, but these data were not complete due to missing milestone dates, and therefore we were not able to calculate valid timelines for all projects. (See app. III for more information on these data.) However, of the 30 project sponsors that provided information to us, only 9 had complete sets of New Starts milestone dates. Figure 3 shows the time it took for each of these projects with complete data to move from the beginning of alternatives analysis to the approval for a FFGA, ranging from about 4.5 years for 3 projects to over 14 years for 2 projects. Due to the small number of projects with complete data, the data from these 9 projects are not generalizable to the 40 New Starts projects. The small sample size also makes it difficult to determine whether mode (i.e., heavy rail, light rail, or bus), cost, or the year that the completed projects entered the New Starts evaluation and rating process affects the time each project spends in each phase. Furthermore, FTA officials told us that each New Starts project’s experience in the evaluation and rating process is unique, making it difficult to identify trends or patterns. Some project sponsors, transit consultants, and a transportation industry association official told us that, over the years, the New Starts process has become too time consuming. Specifically, several project sponsors told us that the amount of time it takes for FTA to determine whether a project can advance into the next phase can be significant and causes additional costs. In addition, a 2007 Deloitte study on the New Starts program found that the New Starts process is perceived by project sponsors as intensive, lengthy, and burdensome. For example, one project sponsor believes that FTA reviews prolonged its project development by approximately 1 year, which it estimates cost an additional $24 million. 
FTA officials have acknowledged that the requirements of the New Starts process could add time to project development and have acted to streamline the process. For example, FTA has allowed projects to conduct additional engineering while FTA reviews applications for final design. FTA has also maintained that thorough reviews of project information can identify issues and challenges with proposed investments that may later prolong project development. However, FTA officials also noted that not all project delays can be attributed to FTA or the New Starts process. FTA officials cited a number of reasons that a project could be delayed during preliminary engineering or final design that are outside FTA’s control, such as changes to a project’s scope, changes in local political leadership, or the loss of local financial commitment. For example, according to FTA officials, the Northern Virginia (Dulles Corridor Metrorail Project—Extension to Wiehle Avenue) project was about to receive FTA approval to enter the final design phase when the Governor of Virginia requested a period of 6 months to evaluate a potential change in the project’s scope—digging a large-bore 4-mile tunnel for a portion of the project—that the project sponsor eventually discarded in favor of the original design. The lack of reliable, comprehensive data makes it difficult to develop a complete understanding of the time it takes projects to move through the New Starts process. The limited information available and anecdotal examples suggest that the process can be lengthy. But without complete and accurate data, it is difficult to know whether and to what extent the process has become more time consuming over the years or whether the addition of new requirements adds to the length of the process. Without such information, Congress and FTA cannot reliably identify the location, causes, or extent of the pain points or determine which options would be an appropriate response to expedite this process. 
Moreover, as we have previously reported, having such information can help agencies identify weaknesses in programs, assess factors causing the problems, and modify processes. SAFETEA-LU created what is commonly called the Small Starts program, a new capital investment grant program, simplifying the requirements imposed on those seeking funding for lower cost projects, such as bus rapid transit, streetcar, and commuter rail projects. This program is intended to advance smaller-scale projects through an expedited and streamlined evaluation and rating process. In July 2007, FTA established the eligibility parameters for the Small Starts program. FTA created a separate eligibility category within the Small Starts program called Very Small Starts, which is for projects with a total capital cost of less than $50 million. According to FTA, as of June 2009, one Small Starts project has been awarded a project construction grant agreement (PCGA), and another is currently being processed. In addition, two projects have received construction funding through standard grants rather than PCGAs. SAFETEA-LU also established the Public-Private Partnership Pilot Program (Penta-P) to demonstrate the advantages and disadvantages of public-private partnerships for certain new fixed guideway capital projects funded by FTA. In January 2007, FTA published the terms of Penta-P in the Federal Register. Penta-P projects may be eligible for a simplified and accelerated New Starts review process that is intended to reduce the time and cost to project sponsors. For example, under Penta-P, the projects are eligible for consideration, on a case-by-case basis, for accelerated design approvals. Specifically, FTA could issue concurrent approvals for preliminary engineering and final design to commence, thus allowing the project to proceed with final design immediately upon completion of preliminary engineering without requiring additional approval. 
To date, FTA has not issued such concurrent approvals. In 2007, FTA executed memorandums of understanding for three pilot Penta-P projects that are candidates for New Starts funding: Houston, Texas; Denver, Colorado; and Oakland, California. According to FTA officials, as of July 2009, FTA has not issued concurrent approvals of the type described above, as the Penta-P projects have not yet demonstrated the distribution of risk among the private and public sectors that would enable FTA to relax its normal due diligence for approvals into preliminary engineering or final design. FTA has also implemented administrative changes designed to expedite the New Starts process. Examples of these changes include the following: Regular training workshops: FTA has developed and offered regular training workshops for project sponsors and offered information to project sponsors to dispel misconceptions about the New Starts process. For example, in March 2009, FTA offered two New Starts workshops in Phoenix, Arizona, and Tampa, Florida, on travel forecasting and provided the materials from these workshops on its Web site. In addition, in June 2009, FTA offered a course on alternatives analysis in Los Angeles. FTA also offers New Starts roundtables, which are usually 2-day meetings between FTA staff and sponsors of projects in preliminary engineering and final design seeking New Starts funding. They consist of presentations by FTA staff and local project sponsors on topics related to New Starts planning, project development, and the evaluation and rating process. Project delivery tools: In addition to training, FTA has introduced project delivery tools to assist project sponsors with the New Starts evaluation and rating process. 
FTA now requires the submittal of an alternatives analysis initiation package summarizing corridor problems, conceptual alternatives, and preliminary evaluation measures to be used, which, according to FTA, can help to foster coordination among local participating agencies and FTA. FTA has also developed checklists for project sponsors to improve their understanding of the requirements of each phase of the New Starts process. Lastly, FTA has begun to use road maps with some project sponsors that include schedules and roles for both FTA and the sponsor. Project sponsors, transit consultants, transit industry associations, and academics we contacted identified several options for streamlining the New Starts project development process, including combining project development phases, using nonbinding or binding agreements, adopting a more risk-based approach, and promoting project development tools. Although each of these options could streamline the New Starts evaluation and rating process, each option has advantages and disadvantages to consider. Project sponsors and transit consultants cited combining project development phases, such as preliminary engineering and final design, as an option for expediting the New Starts project development process. Project sponsors and transit consultants told us that waiting for FTA’s approval to enter preliminary engineering, final design, and construction can prolong project development. According to project sponsors, while FTA determines whether a project can advance to the next project development phase, work on the project essentially stops. Project sponsors can advance the project at their own risk, meaning they could have to redo the work if FTA does not subsequently approve an aspect of the project. The amount of time it takes for FTA to determine whether a project can advance can be significant. 
For example, one project sponsor told us that FTA’s review of its application to advance from alternatives analysis to preliminary engineering took 8 months, about the same amount of time it took the project sponsor to complete alternatives analysis. FTA officials told us the length of time for reviews depends on a number of factors, most importantly the completeness and accuracy of the project sponsor’s submissions. To reduce the “start/stop” phenomenon project sponsors described, a legislative change would be necessary to eliminate the requirement that FTA approve advancement of a project into final design, which would effectively combine the preliminary engineering and final design phases into one “project development” phase, as was done in SAFETEA-LU when creating a more streamlined version of the process under the Small Starts program. Furthermore, another option for legislative change would be to replace the requirement that FTA approve the advancement of a project into the preliminary engineering phase with a requirement that FTA approve a project into the overall New Starts program, which would streamline and simplify the process. In addition, the Deloitte study recommended combining preliminary engineering and final design, while simultaneously adjusting the FFGA review date to occur in the middle of this expanded phase, rather than after final design, where it has traditionally occurred. In this regard, the Deloitte study reflected the sentiments of project sponsors and consultants we interviewed, who said that combining phases and/or creating a programmatic approval would allow FTA to signal its intent to recommend a project for funding at an earlier point than the current project development process allows. 
This would give sponsors more opportunity to pursue private financing arrangements and alternative project delivery methods, such as those being carried out under Penta-P, as this federal funding provides the certainty needed to encourage private sector participation. In addition to combining phases, the Deloitte study also recommended that FTA redefine or more clearly define the project phases to more accurately reflect FTA’s current requirements and to better accommodate alternative delivery methods. There are limitations to combining phases of the New Starts project development process. One limitation of combining or redefining phases is that a legislative change would be necessary. Another limitation is that, depending on how it is accomplished, combining phases could affect how FTA integrates NEPA requirements into the project development process. Finally, combining phases would reduce the opportunities for FTA to monitor and evaluate high-value projects at important interim points, thereby increasing the potential for issues or problems to go undetected. The linear, phased evaluation process of the New Starts program has historically hampered project sponsors’ ability to utilize alternative project delivery methods, such as design-build, according to project sponsors. These alternative project delivery methods have the potential to deliver a project more cheaply and quickly than traditional project delivery methods can. However, project sponsors told us it is difficult to attract private sector interest early enough in the project development process to use alternative project delivery methods because there is no guarantee that the project will ultimately receive federal funding through the New Starts program. The Deloitte study also noted that New Starts project sponsors miss the opportunity to use alternative project delivery methods because of the lack of early commitment of federal funding for the projects. 
To encourage the private sector involvement needed, project sponsors, consultants, and experts we interviewed suggested that FTA use letters of intent, which are nonbinding agreements, or early systems work agreements, which are binding agreements. Through a letter of intent, FTA announces its intention to obligate an amount from future available budget authority to a project. According to private sector entities we interviewed, such an intended obligation sends a signal of federal support for a project and, therefore, attaches more certainty to the project. A challenge of using letters of intent is that they can be misinterpreted as an obligation of federal funds, when in fact they only signal FTA’s intention to obligate future funds should the project meet all New Starts criteria and requirements and budget authority be available. In addition, because FTA reserves, or sets aside, commitment authority, or contract authority, when it issues letters of intent, issuing more such letters would reduce the availability of this authority at a faster pace than issuing more early systems work agreements. Letters of intent cover the project’s full federal share, while early systems work agreements, although they actually obligate federal funds, obligate only a portion of a project’s federal share. As such, it is possible that, with more frequent use of letters of intent, FTA’s commitment authority could be depleted earlier than expected, which could affect the anticipated funding stream for future projects. Finally, another challenge of using an early systems work agreement is that the law allows FTA to enter into this type of agreement only if a Record of Decision under NEPA has been issued and the Secretary finds both that a FFGA for the project will be made and that the terms of the agreement will promote ultimate completion of the project more rapidly and at less cost, thus limiting FTA’s ability to use these agreements. 
Project sponsors, consultants, and experts we interviewed suggested that FTA adopt a more risk-based evaluation process for New Starts projects based on a project’s cost or complexity, the federal share of the project’s cost, or the project sponsor’s New Starts experience. For example, FTA could align the level of oversight with the proposed federal share of the project—that is, the greater the financial exposure for the federal government, the greater the level of oversight. This approach was employed with the creation of the Small Starts program, which is intended to provide a more streamlined process for smaller and less costly projects. Similarly, FTA could reduce or eliminate certain reviews for project sponsors who have successfully developed New Starts projects in the past, while applying greater oversight to project sponsors who have no experience with the New Starts project development process. We have noted the value in using risk-based approaches to oversight. For example, we have previously reported that assessing risks can help agencies allocate finite resources and help policymakers make informed decisions. By adopting a more risk-based approach, based on, for example, project sponsor experience, project scope, total project cost, or federal share of the cost, FTA could allow select projects to move more quickly through the New Starts project development process and more efficiently use its scarce resources. However, a trade-off of not applying all evaluation measures to every project is that FTA could miss the opportunity to detect problems early in the project’s development. Further, this practice may move FTA away from its stated management objective of treating “all projects equitably across the U.S.” Project sponsors said that FTA should more consistently use road maps or similar tools to define the project sponsor’s and FTA’s expectations and responsibilities for moving the project forward. 
When these expectations are not established, project sponsors have historically had little information about how long it will take FTA to review, for example, their request to move from alternatives analysis to preliminary engineering. This lack of information makes it difficult for the project sponsor to effectively manage the project. Additionally, FTA previously identified an “adequate schedule” as a key factor of successful project implementation. Given the benefits of clearly setting these expectations, Deloitte recommended that FTA use road maps for all projects. The Deloitte study also observed that project sponsors would like to see FTA use more project development agreements, or similar vehicles, early in the development process because they help clarify expectations on both sides. The following project development tools could increase the transparency of and help project sponsors navigate the New Starts project development process: Road maps or similar project schedules: FTA has used road maps for select projects, but the agency does not consistently use them for all projects. According to FTA, the agency is currently working with project sponsors to establish road maps for all projects. However, according to some project sponsors, a limitation of using road maps is that expected time frames are subject to change—that is, project schedules often change as a project evolves throughout the development process. Furthermore, every project is unique, making it difficult to set a realistic time frame for each phase of development. Consequently, the road maps can provide only rough estimates of expected time frames. Project development agreements (PDA): FTA has used project development agreements, on a limited basis, to help streamline the New Starts project development process. 
PDAs require project sponsors and FTA to agree on three components: a delivery schedule, a review of key project development deliverables, and clear expectations from both sides for demonstrating project development progress, so that each would be held accountable for the advancement of a project. However, an FTA official stated that there are differences of opinion inside FTA as to the relative efficacy of road maps versus project development agreements. In addition, FTA told us that, as legal documents, PDAs take so long to negotiate with project sponsors that they may not offer a streamlining advantage. Because of that, some FTA staff members have stated a preference for road maps over PDAs and, as an alternative to PDAs, are currently using the informal road maps described above to establish milestones and timelines. Project sponsors told us that the frequent policy and guidance changes to the New Starts program can result in additional costs and delays as project sponsors are required to redo analyses to reflect the changes. In May 2006, FTA modified its policy so that a project that has been approved for entry into final design would no longer be subject to changes in New Starts policy and guidance. However, this policy change does not apply to projects approved for entry into preliminary engineering, which is the New Starts project development phase that has the most requirements for project sponsors and the phase where project sponsors told us that frequent changes result in additional costs and delays. For example, sponsor officials for one project told us that shortly after they submitted their preliminary engineering approval materials to FTA, FTA established a new, internal rule that required a risk assessment to take place prior to FTA’s approval to enter preliminary engineering, instead of during preliminary engineering. 
To protect the development schedule, the officials asked for, but were denied, approval for the project to proceed under the existing guidance that placed risk assessment activities during preliminary engineering, or at least to perform the risk assessment concurrently with preliminary engineering approval to maintain the schedule. The sponsor said the overall effect of the change was a delay of the preliminary engineering approval by about 4 months. According to FTA officials, FTA typically allows “grace periods” when implementing major policy changes to provide sponsors stability and time to adapt to those changes. Furthermore, another project sponsor noted that new requirements can prolong project development because each element of a proposed project is interrelated, so changing one requirement can stop momentum on a project. To avoid this rework, some project sponsors, consultants, and experts we interviewed suggested that FTA apply changes only to future projects, not projects currently in preliminary engineering. However, by not applying changes to projects in preliminary engineering, FTA could miss the opportunity to enhance its oversight of these projects. Also, applying changes to some projects but not to others would require FTA staff to create and apply multiple sets of rules to the project management process, which could create an administrative burden and move away from a consistent evaluation process. Project sponsors told us that FTA could minimize delays due to the stop/start nature of the development process by adjusting FTA staffing or contractor support levels to allow for multiple, simultaneous reviews of sponsors’ projects, and could reduce uncertainty by changing the way the agency selects and trains oversight contractors. Consultants and sponsors told us that FTA’s “first-in, first-out” approach to the review process, while not agency policy, sometimes can result in FTA reviewing only one project at a time, in the order in which they arrive. 
FTA told us this happens occasionally because of overlapping demands placed on oversight contractors, who are not able to perform simultaneous reviews. As a result, the development of low-risk projects is often prolonged if they happen to sit in the queue behind more complex projects that were submitted earlier. The Deloitte study recommended, and consultants and a sponsor we interviewed agreed, that FTA could adjust its process or staffing, as needed, to enable multiple reviews to be conducted in parallel. In addition, sponsors and consultants we interviewed told us some of FTA’s PMOCs have little experience with New Starts or Small Starts projects, leaving them uncertain about FTA requirements. As a consequence, inexperienced PMOCs sometimes provide inconsistent guidance, resulting in sponsors having to redo work, adding time to the development process. To reduce PMOCs’ uncertainty about FTA’s requirements, FTA could provide them with additional training, especially when regulatory and administrative requirements change. FTA could also streamline the process by using staff, instead of contractors, to oversee project sponsors. Because FTA staff possess more institutional knowledge than contractors, they could provide sponsors with more consistent guidance. However, shifting more oversight work inside FTA would add to the scope and complexity of FTA’s work and could, therefore, create staffing challenges. FTA’s New Starts program is often cited as a model for other federal transportation programs. FTA’s recommendations for funding are based on a rigorous examination of proposed projects, and Congress has generally followed FTA’s funding recommendations. However, there is concern among some Members of Congress and the transit industry about the project development process, namely that it has become too time consuming, costly, and complex. 
Despite congressional and FTA actions to streamline the New Starts project development process, it continues to be viewed as time consuming. However, the specific areas of concern that lead to delays are difficult to determine because of a paucity of information about the time it has taken projects to move through the New Starts process. Moreover, this lack of adequate data makes it difficult for Congress and FTA to assess the extent to which federal efforts designed to expedite the New Starts process are succeeding. Although each project is unique, providing this information could also help set general expectations about the length of the process for potential project sponsors. While FTA has taken some steps to improve its data collection and retention, additional work is needed. As stewards of the New Starts program, which provides millions of dollars to local communities for transit projects each year, it is FTA’s responsibility to ensure that program changes are based on accurate and reliable information. Through our interviews with project sponsors, transit industry consultants, and transportation experts, as well as our review of existing research, we identified a number of potential options to expedite project development within the New Starts program. However, FTA must also strike the appropriate balance between expediting project development and maintaining the rigor and accountability of the New Starts program. As FTA works to develop its proposal for the New Starts program for the upcoming surface transportation reauthorization, considering the advantages and disadvantages of these options, including any potential trade-offs, could help FTA select options that expedite project development while maintaining rigorous oversight. It is important that the length of project development within the New Starts program not serve as a deterrent as more communities turn to transit to solve their transportation challenges. 
To improve the New Starts program, we recommend that the Secretary of Transportation direct the FTA Administrator to take the following two actions: (1) continue to improve data collection and retention for statutorily defined milestones and determine whether additional data would help to better describe the time it takes for a project to move through the New Starts process. In doing so, FTA should establish mechanisms to ensure the accuracy of the data and routinely analyze the data in order to identify the length of time it takes projects to move through each phase, potential causes for perceived delays, and potential solutions; FTA should also make its analysis available to Congress and other interested parties. (2) Analyze the streamlining options identified in this report, along with any additional options, to determine which options, if any, to implement, seeking legislative change if necessary, to expedite project development within the New Starts program. We provided DOT, including FTA, with a draft of this report for review and comment. In e-mail comments, DOT agreed with our recommendation to consider options to expedite project development, noting that the options we identified to help expedite project development within the New Starts program are consistent with the options that FTA has been discussing with transit stakeholders and congressional staff. However, DOT disagreed with our recommendation on data, as originally drafted, because it did not recognize FTA’s ongoing efforts to improve its data collection. In addition, in its comments, FTA acknowledged that there are always opportunities to improve various aspects of the program, including some of the data collection efforts discussed in this report, but noted that the agency has maintained, and has access to, the information necessary to effectively track active projects and review progress through milestones for past projects. 
Furthermore, FTA expressed concern that the report uses a standard for data management that is neither intended nor necessary for effective project management. FTA officials also stated that, even with the most comprehensive milestone data, each project represents a unique set of challenges that will impact the time it takes for a project to pass through the New Starts process. More broadly, FTA officials stated that they use milestone data to manage the program and make changes to improve the program. To address these comments, we incorporated additional information about FTA’s ongoing efforts to strengthen its data management process in the report. We also revised the report’s recommendation on data collection to reflect FTA’s ongoing work while still emphasizing the need to improve the agency’s milestone data collection and retention, including the reliability and accuracy of the data. In addition, we agree that each New Starts project represents a unique set of circumstances that will affect the time it takes to pass through FTA decision phases, and we further recognized this fact in the report. However, we disagree with the assertion that we hold this information to a standard neither intended nor necessary for effective program management. An effective system of internal controls requires that managers have relevant and reliable information to better achieve agencies’ missions and program results. As we note in the report, FTA project milestone data are unreliable and, in some cases, inaccurate, which can jeopardize effective program management. We and others have recognized FTA’s New Starts process as providing a sound, rigorous, and systematic process for identifying projects worthy of federal discretionary funding for major transit investments, and analysis based on reliable data will only help strengthen FTA’s management of the program. We, therefore, believe that this recommendation, as revised, is valid. 
DOT also provided technical clarifications, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees and DOT. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at [email protected] or (202) 512-2834. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. The Federal Transit Administration evaluated and rated 14 New Starts projects in preliminary engineering and final design during the fiscal year 2010 cycle. FTA also reviewed the progress of, but did not rate, 5 projects that are statutorily exempt from being rated. (See table 4 for a full list of these projects.) Of the 14 New Starts projects evaluated and rated during the fiscal year 2010 cycle, FTA recommended five new projects for funding through full funding grant agreements (FFGA) or early system work agreements (ESWA) this year. In its annual report, FTA states that the projects recommended for funding are in final design or are expected to be approved into final design before the end of summer 2009, that the environmental process has been completed, and that any needed railroad agreements have been negotiated and are at or near completion. For these projects, FTA recommends a total of $430 million in New Starts funding in fiscal year 2010. The total capital cost of these projects is estimated to be approximately $11.14 billion. FTA also recommended, as part of the President’s budget request, reserving $81.79 million in New Starts funding for projects that may attain the FFGA milestone in the budget year but have not sufficiently progressed in project development for FTA to recommend them in the budget request. 
FTA has not specified which projects will be eligible for this funding or allocated a particular amount for any given project. According to the annual report and officials we spoke with at FTA, this approach will allow the agency to make “real time” funding recommendations as project uncertainties are mitigated, and Congress makes final appropriations decisions. FTA does not expect that all of the projects in preliminary engineering will advance to final design in fiscal year 2010. FTA evaluated and rated 21 eligible Small Starts and Very Small Starts projects for the fiscal year 2010 cycle. These include 1 project with a pending project construction grant agreement (PCGA), 16 projects that have demonstrated sufficient readiness to be considered for funding in the fiscal year 2010 President’s budget request, and 4 projects that have not yet demonstrated readiness to be considered for funding. (See table 5 for a full list of these projects.) FTA recommends a total of $174.27 million in funding for Small Starts and Very Small Starts projects. The total capital cost of the 16 projects that FTA recommended for funding is estimated to be $895.11 million. Most of these projects are proposed to be funded under a multiyear PCGA. However, if a project requests less than $25 million in Small Starts funding or has received its full appropriations, FTA will award funds in a single-year capital grant rather than a PCGA. The administration’s fiscal year 2010 budget proposal recommends that $1.83 billion be made available for the New Starts program. This amount is $81.093 million more than the program’s fiscal year 2009 appropriation. 
Figure 4 illustrates the planned uses of the administration’s proposed request for the New Starts fiscal year 2010 budget, including the following: $1,123.03 million would be allocated among the 19 projects with existing FFGAs, $430.0 million would be allocated among the 5 projects newly recommended for funding through FFGAs or ESWAs, $81.79 million would be allocated to projects that may attain the FFGA milestone in the budget year but have not sufficiently progressed for FTA to recommend them in the budget request, $174.25 million would be allocated among the 16 Small Starts projects newly recommended for funding, and $18.27 million would be allocated for management and oversight activities. The American Recovery and Reinvestment Act (Recovery Act) provided FTA with over $740 million in funding for the New Starts program. This funding for surface transportation projects allowed FTA to accelerate payments to transit projects with existing FFGAs and PCGAs. FTA distributed Recovery Act funding to 11 projects in construction with fiscal year 2010 commitments. More specifically, FTA distributed at least 40 percent of each project’s scheduled fiscal year 2010 payment in Recovery Act funding. According to FTA, 5 projects with demonstrated cash flow needs that exceeded this distribution received additional funding. This funding will not require amendments or significant changes to any FFGAs because the overall federal share of the total project costs did not change. The Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), like the Transportation Equity Act for the 21st Century, allowed FTA to make contingent commitments for funding to projects beyond what is authorized in law, subject to future authorizations and appropriations. 
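As a quick cross-check, the fiscal year 2010 allocation components listed above can be summed against the roughly $1.83 billion proposed for the program; the snippet below is illustrative arithmetic only, using the figures as reported (in millions of dollars):

```python
# Proposed FY2010 New Starts allocations, in millions of dollars, as listed above.
allocations = {
    "existing FFGAs (19 projects)": 1123.03,
    "new FFGA/ESWA recommendations (5 projects)": 430.00,
    "reserve for projects nearing the FFGA milestone": 81.79,
    "Small Starts recommendations (16 projects)": 174.25,
    "management and oversight": 18.27,
}
total = sum(allocations.values())
# The components sum to about $1,827.34 million, consistent with the
# roughly $1.83 billion requested for the program.
```

The small gap between the component sum and the rounded $1.83 billion total reflects rounding in the reported figures.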
According to FTA, SAFETEA-LU and the Recovery Act gave FTA a total contingent commitment authority of $14.37 billion, of which $12.684 billion has been committed through FFGAs and preliminary engineering and final design activities for projects through fiscal year 2009. FTA officials said that the agency has approximately $879 million remaining in contingent commitment authority after consideration of fiscal year 2010 funding recommendations. FTA officials also told us that they need additional authority for commitment beyond fiscal year 2010 because FTA is not permitted to spend money beyond its authorized level. FTA officials noted that the available level of contingent commitment authority did not influence their fiscal year 2010 recommendations. Further, they stated that they were able to recommend all of the projects deemed ready for funding because of the additional Recovery Act funding. To evaluate the time it has generally taken for projects to move through the New Starts process, we collected and attempted to verify FTA data on New Starts projects that have advanced through the New Starts process and received FFGAs after June 1997 to determine the length of time each project spent in each stage of the process. However, we found these data to be unreliable based on our reviews of FTA files on a random sample of 10 projects’ milestone dates. We also attempted to collect data from project sponsors to establish how long it has taken projects to move through the New Starts process. We contacted each project sponsor and requested seven milestone dates from alternatives analysis through FFGA. We received verifiable data for 30 of the 40 projects approved into a FFGA since June 1997. However, of the 30 projects, we received complete sets of milestone data for only 9 projects. Because complete data were available for so few projects, the data from these 9 projects are not generalizable to the 40 New Starts projects. 
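The contingent commitment authority figures reported above lend themselves to a simple back-of-the-envelope check. Note that the implied fiscal year 2010 commitment amount below is our own inference from the reported figures, not a number FTA provided:

```python
# Contingent commitment authority, in billions of dollars, as reported by FTA.
total_authority = 14.370         # SAFETEA-LU plus Recovery Act authority
committed_through_fy09 = 12.684  # FFGAs and PE/final design commitments through FY2009
remaining_after_fy10 = 0.879     # reported remainder after FY2010 recommendations

# Headroom before FY2010 recommendations: about $1.686 billion.
headroom_before_fy10 = total_authority - committed_through_fy09
# Commitments implied by the FY2010 recommendations: about $0.807 billion
# (an arithmetic inference, not an FTA figure).
implied_fy10_commitments = headroom_before_fy10 - remaining_after_fy10
```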
The verifiable data included the dated letters the project sponsors sent to or received from FTA. The data for the beginning of alternatives analysis are based on several documents, including local government board meeting minutes that record a decision for the locality to begin alternatives analysis. We also examined the 2007 Deloitte Development, LLC, report on FTA’s New Starts process and interviewed the project leader for this study to obtain information on the time it takes for New Starts projects to move through the process. To determine the steps Congress and FTA have taken to expedite the New Starts process, we reviewed documents including our reports on the New Starts program, federal legislation such as SAFETEA-LU, other applicable New Starts requirements, and FTA New Starts policy guidance. In addition, we interviewed FTA officials and attended the American Public Transportation Association’s March 11, 2009, legislative conference, at which FTA gave a presentation on the New Starts and Small Starts programs, to obtain information on steps taken by Congress and FTA to expedite the New Starts process. To assess the options that exist to expedite the process, we collected and analyzed information from relevant reports. In particular, we examined the recommendations from the 2007 Deloitte Development, LLC, report on FTA’s New Starts process and the American Public Transportation Association’s October 2008 report on transportation authorizing law. We interviewed FTA officials, transportation experts and consultants, industry groups, and project sponsors that chose not to enter the New Starts pipeline to identify factors contributing to New Starts project timeline challenges, as well as actions FTA and Congress have taken to expedite the New Starts process. We also interviewed these officials to identify additional changes that could streamline the project development process, as well as their advantages and disadvantages. 
Additionally, we interviewed 9 project sponsors about 10 projects, including those currently in the New Starts pipeline and those under a FFGA, regarding their experiences with and perceptions of the New Starts process. For each of these projects we interviewed the relevant project sponsor or contractor, as well as FTA officials with experience evaluating and overseeing the project. We selected these projects based on the following criteria: (1) timing (i.e., when projects received a FFGA); (2) mode (e.g., rail, light rail, or bus); (3) scope (i.e., the total cost of the project); and (4) projects from different geographic areas. Because the projects were selected as a nonprobability sample, the results cannot be generalized to all projects. Table 3 lists the New Starts and Small Starts project sponsors we interviewed for our review. To describe the New Starts and Small Starts projects evaluated, rated, and recommended for funding in fiscal year 2010 by FTA, we reviewed FTA’s Annual Report on New Starts for Fiscal Year 2010 and interviewed FTA officials. We spoke to the FTA officials about the number of projects evaluated, rated, and recommended for funding, the amount of funding requested for these projects, the total costs of proposed projects, as well as how FTA allocated its Recovery Act funding. In addition to the individual named above, A. Nicole Clowers, Acting Director; Kyle Browning; Lauren Calhoun; Gary Guggolz; Brandon Haller; and Carrie Wilks made key contributions to this report.
The New Starts program is an important source of new capital investment in mass transportation. To be eligible for federal funding, a project must advance through the different project development phases of the New Starts program, including alternatives analysis, preliminary engineering, and final design. The Federal Transit Administration (FTA) evaluates projects as a condition for advancement into each project development phase of the program. FTA has acted recently to streamline the process. This report discusses the (1) time it has generally taken for projects to move through the New Starts process and what Congress and FTA have done to expedite the process and (2) options that exist to expedite the process. In response to a legislative mandate, GAO reviewed statutes, FTA guidance and regulations, and project data. GAO also interviewed Department of Transportation (DOT) officials, project sponsors, and industry stakeholders. Insufficient data are available to describe the time it has taken for all projects to move through the New Starts process. Nevertheless, for the 9 of the 40 projects that have received full funding grant agreements since 1997 and had complete data available, the project development phases took from about 4 to 14 years to complete. However, the data from these 9 projects are not generalizable to the 40 New Starts projects. FTA has not historically retained all milestone data for every project, such as the dates that project sponsors apply to enter preliminary engineering and FTA's subsequent approval. Although not required by its records retention policy, FTA has retained milestone data from some projects longer than 2 years. However, GAO was unable to obtain complete and reliable project milestone data from FTA. FTA officials acknowledged that, while not historically perfect, the agency has retained sufficient milestone data to help manage the New Starts program. 
Nevertheless, recognizing the importance of having complete milestone data, FTA has taken several steps in recent years to more consistently collect and retain such data. In addition, GAO found that project sponsors do not consistently retain milestone data for projects that have completed the New Starts process. Congress and FTA have taken action to expedite projects through the New Starts process. For example, legislative action created the Public-Private Partnership Pilot Program (Penta-P) to study the benefits of using public-private partnerships for certain new fixed-guideway capital projects, such as accelerating project delivery. In addition, FTA has implemented administrative changes to expedite the New Starts process. For example, FTA has developed and offered training workshops for project sponsors and has introduced project delivery tools. These tools include checklists for project sponsors to improve their understanding of the requirements of each phase of the New Starts process. Project sponsors and industry stakeholders GAO interviewed identified options to help expedite project development within the New Starts program. These options include tailoring the New Starts evaluation process to risks posed by the projects, using letters of intent more frequently, and applying policy and guidance changes only to future projects. Each option has advantages and disadvantages to consider. In addition, FTA must also strike the appropriate balance between expediting project delivery and maintaining the accountability of the program. For example, by signaling early federal support of projects, letters of intent could help project sponsors use potentially less costly and time-consuming alternative project delivery methods, such as design-build. However, such early support poses some risk. 
It is possible that with more frequent use of letters of intent, FTA's commitment authority could be depleted earlier than expected, which could affect the anticipated funding stream for future projects. Furthermore, some options, like combining one or more statutorily required project development phases, would require legislative action.
Insurance is a mechanism for spreading risk over time, across large geographical areas, and among industries and individuals. While insurers assume some financial risk when they write policies, they employ various strategies to manage risk so that they earn profits, limit potential financial exposures, and build capital needed to pay claims. For example, they charge premiums for coverage and establish underwriting standards, such as refusing to insure customers who pose unacceptable levels of risk, or limiting coverage in particular geographic areas. Insurance companies may also purchase reinsurance to cover specific portions of their financial risk. Reinsurers use similar strategies to limit their risks, including charging premiums, establishing underwriting standards, and maintaining close, long-term business relationships with certain insurers. Both insurers and reinsurers must also predict the frequency and severity of insured losses with some reliability to best manage financial risk. In some cases, these losses may be fairly predictable. For example, the incidence of most automobile insurance claims is predictable, and losses generally do not occur to large numbers of policyholders at the same time. However, some infrequent weather-related events—hurricanes, for example—are so severe that they pose unique challenges for insurers and reinsurers. Commonly referred to as catastrophic or extreme events, these events are so unpredictable and so large—both in terms of geography and the number of insured parties affected—that they have the potential to overwhelm insurers’ and reinsurers’ capacity to pay claims. Catastrophic events may affect many households, businesses, and public infrastructure across large areas, resulting in substantial losses that deplete insurers’ and reinsurers’ capital. Given the higher levels of capital that reinsurers must hold to address catastrophic events, reinsurers generally charge higher premiums and restrict coverage for such events. 
Further, in the wake of catastrophic events, reinsurers and insurers may sharply increase premiums to rebuild capital reserves and may significantly restrict insurance and reinsurance coverage to limit exposure to similar events in the future. Under certain circumstances, the private sector may determine that a risk is uninsurable. For example, while homeowner insurance policies typically cover damage and losses from fire and other perils, they usually do not cover flood damage because private insurance companies are largely unwilling to bear the financial risks associated with its potentially catastrophic impact. In other instances, the private sector may be willing to insure a risk, but at rates that are not affordable to many property owners. Without insurance, affected property owners must rely on their own resources or seek out disaster assistance from local, state, and federal sources. In situations where the private sector will not insure a particular type of risk, the public sector may create markets to ensure the availability of insurance. For example, several states have established Fair Access to Insurance Requirements (FAIR) plans, which pool resources from insurers doing business in the state to make property insurance available to property owners who cannot obtain coverage in the private insurance market, or cannot do so at an affordable rate. In addition, six southern states have established windstorm insurance pools that pool resources from private insurers to make insurance available to property owners who cannot obtain it in the private insurance market. Similarly, at the federal level, the Congress established the NFIP and the FCIC to provide coverage where voluntary markets do not exist. The Congress established the NFIP in 1968, partly to provide an alternative to disaster assistance for flood damage. 
Participating communities are required to adopt and enforce floodplain management regulations, thereby reducing the risks of flooding and the costs of repairing flood damage. FEMA, within the Department of Homeland Security, is responsible for, among other things, oversight and management of the NFIP. Under the program, the federal government assumes the liability for covered losses and sets rates and coverage limitations. The Congress established the FCIC in 1938 to temper the economic impact of the Great Depression and the weather effects of the Dust Bowl. In 1980, the Congress expanded the program to provide an alternative to disaster assistance for farmers who suffer financial losses when crops are damaged by droughts, floods, or other natural disasters. Farmers’ participation is voluntary, but the federal government encourages it by subsidizing their insurance premiums. USDA’s RMA is responsible for administering the crop insurance program, including issuing new insurance products and expanding existing insurance products to new geographic regions. RMA administers the program in partnership with private insurance companies, which share a percentage of the risk of loss or the opportunity for gain associated with each insurance policy written. Global temperatures have increased in the last 100 years and are projected to continue to rise over the next century. Using observational data and computer modeling, climatologists and other scientists are assessing the likely effects of the temperature rise associated with climate change on precipitation patterns and on the frequency and severity of weather-related events. The key scientific assessments we reviewed generally found that warmer temperatures are expected to alter the frequency or severity of damaging weather-related events, such as flooding or drought, although the timing, magnitude, and duration of these changes are as yet undetermined. 
Additional research on the effect of increasing temperature on weather events is expected in the near future. Nevertheless, research suggests that the potential effects of climate change on damaging weather-related events could be significant. We reviewed the reports released by IPCC, NAS, and the federal Climate Change Science Program (CCSP) that are shown in figure 1. These leading scientific bodies report that the Earth warmed during the twentieth century—0.74 degrees Celsius from 1906 to 2005, according to a recent IPCC report—and is projected to continue to warm for the foreseeable future. IPCC, NAS, CCSP, and other scientific bodies report that this increase in temperature cannot be explained by natural variation alone. IPCC’s 2001 assessment of the impact of increasing temperatures on extreme weather events found it likely that the frequency and severity of several types of events would increase as greenhouse gas emissions continue. The earth’s climate system is driven by energy from the sun and is maintained by complex interactions between the atmosphere, the oceans, and the reflectivity of the earth’s surface, among other factors. Upon reaching the earth, the sun’s energy is either reflected back into space or absorbed by the earth and subsequently reemitted. However, certain gases in the earth’s atmosphere—such as carbon dioxide and methane—act like the glass in a greenhouse to trap some of the sun’s energy and prevent it from returning to space. While these gases play an important part in maintaining life on earth, their accumulation in the atmosphere can significantly increase global temperatures. The earth warmed by roughly 0.74 degrees Celsius over the past 100 years and is projected to continue warming for the foreseeable future. 
While temperatures have varied throughout history, triggered by natural factors such as volcanic eruptions or changes in the earth’s orbit, the key scientific assessments we reviewed have generally concluded that the observed increase in temperature in the past 100 years cannot be explained by natural variability alone. In recent years, major scientific bodies such as the IPCC, NAS, and the Royal Society (the United Kingdom’s national academy of science) have concluded that human activities, including the combustion of fossil fuels, industrial and agricultural processes, landfills, and some land use changes, are significantly increasing the concentrations of greenhouse gases and, in turn, global temperatures. Although climate models produce varying estimates of the extent of future changes in temperature, NAS and other scientific organizations have concluded that available evidence points toward continued global temperature rise. Assuming continued growth in atmospheric concentration of greenhouse gases, the latest assessment of computer climate models projects that average global temperatures will warm by an additional 1.8 to 4.0 degrees Celsius during the next century. Some scientists have questioned the significance of the earth’s present temperature rise relative to past fluctuations. To address this issue, the NAS recently assessed the scientific community’s efforts to reconstruct temperatures of the past 2,000 years and place the earth’s current warming in an historical context. Based on its review, the NAS concluded with a high level of confidence that global mean surface temperature was warmer during the last few decades of the twentieth century than during any comparable period in the preceding 400 years. Moreover, NAS cited evidence that temperatures at many, but not all, individual locations were higher during the past 25 years than during any period of comparable length over the past 1,100 years. 
Determining the precise nature and extent of the relationship between average global temperatures and weather-related events is an exceedingly challenging task. Several key assessments of the state of this science have addressed the large body of work on this topic. Using observational data and computer models, scientists are examining the effects of rising temperatures on precipitation patterns and the frequency and severity of extreme weather-related events. The complexity of weather systems, together with the limited statistical precision of projections of the extent of future temperature change, often produces different model results, and the results themselves represent a range of potential future conditions. Nonetheless, a key assessment of climate model projections indicates that an increase is likely in the frequency or severity of damaging extreme weather-related events. In 2001, the IPCC, a leading scientific authority on climate science, released its Third Assessment Report, which assessed the state of knowledge of, among other things, the potential for global changes in extreme weather-related events. The IPCC described the relationship between temperatures, precipitation, and weather-related events. Increased global mean surface temperatures are linked to global-scale oceanographic, meteorological, and biological changes. For example, as the earth warms, more water evaporates from oceans or lakes, eventually falling as rain or snow. IPCC reported that permafrost is thawing and that sea ice, snow cover, and mountain glaciers are generally shrinking in extent. The IPCC also noted that global sea level rose between 0.1 and 0.2 meters during the twentieth century through thermal expansion of seawater and widespread loss of land ice, and that this sea level rise could increase the magnitude of hurricane storm surge in some areas. Warming is expected to change rainfall patterns, partly because warmer air holds more moisture. 
Based on model projections and expert judgment, the IPCC reported that future increases in the earth’s temperature are likely to increase the frequency and severity of many damaging extreme weather-related events (summarized in table 1). For instance, IPCC reported that increased drought is likely across many regions of the globe, including the U.S. Great Plains. Also, IPCC concluded that the intensity of precipitation events is very likely to increase across almost all regions of the globe and that heavy precipitation events are expected to become more frequent. Compared with projected temperature increases, changes in the frequency and severity of extreme events can occur relatively rapidly, according to the IPCC. Much research has been done since the IPCC’s Third Assessment Report, but there has not been a similarly rigorous assessment of what is known with regard to temperature increase, precipitation, and weather-related events for the United States. However, significant assessments will be completed in the near future. In particular, the IPCC is expected to release its Fourth Assessment Report throughout 2007. While we were completing our review, the IPCC released a summary of the first of three components of its Fourth Assessment Report, which builds upon past IPCC assessments and incorporates new findings from the physical science research since the Third Assessment Report. The summary reports higher confidence in projected patterns of warming and other regional-scale features, including changes in wind patterns, precipitation, and some aspects of extreme events. In particular, the summary reports that it is very likely that hot extremes, heat waves, and heavy precipitation events will continue to become more frequent. 
Moreover, based on a range of models, IPCC’s summary states that it is likely that future tropical cyclones (typhoons and hurricanes) will become more intense, with larger peak wind speeds and more heavy precipitation associated with ongoing increases in tropical sea surface temperatures. IPCC reports less confidence in projections of a global decrease in the number of tropical cyclones, and that the apparent increase in the proportion of very intense storms since 1970 in some regions is much larger than simulated by current models for that period. The full first component report was not publicly released prior to the issuance of our report and is expected some time after May 2007. The other two components of the Fourth Assessment Report will cover impacts, adaptation, and vulnerability, and mitigation. These reports are expected to assess, among other things, key vulnerabilities and risks from climate change, including changes in extreme events. Additionally, the IPCC has committed to producing a capping report that is intended to synthesize and integrate material contained in the forthcoming reports, as well as other IPCC products. In addition to the IPCC’s work, CCSP is assessing potential changes in the frequency or intensity of weather-related events specific to North America in a report scheduled for release in 2008. According to a National Oceanic and Atmospheric Administration (NOAA) official and agency documents, the report will focus on weather extremes that have a significant societal impact, such as extreme cold or heat spells, tropical and extra-tropical storms, and droughts. Importantly, officials have said the report will provide an assessment of the observed changes in weather and climate extremes, as well as future projections. Extreme weather-related events impact communities and economic activity by damaging homes and vehicles (e.g., see fig. 2), interrupting electrical service and business operations, or destroying crops. 
IPCC reported that the insurance industry—especially the property and casualty segment—is sensitive to the effects of weather-related events. This was highlighted in the Department of Commerce’s comments on a draft of this report, which observed that altering either the frequency or severity of high-impact extreme weather-related events could result in a significant increase in the risk posed to an insurer. For example, the agency said that what had been considered a 500-year event (i.e., its probability of occurring in a given year is 1 in 500) could shift under climate change to become a 100-year event (i.e., its probability of occurring in a given year is 1 in 100). Consequently, more frequent or more severe events have a greater potential for damage and, in turn, insured losses. As an official from Aon Re Australia, a large global reinsurer, reported, “The most obvious impact of climate change on the insurance sector will be the increase in insured property losses from extreme weather events.” Notably, the economic damages associated with some extreme weather-related events could increase at a greater rate in comparison with changes in the events themselves. Seemingly small changes in the characteristics of certain weather-related events can lead to substantial increases in damage. For example, recent work on hurricanes by researchers at the University of Colorado, the National Weather Service, and other institutions examined losses associated with hurricanes that made landfall in the United States since 1900. Holding constant the increased population and development in coastal counties during this period, the study compared the economic damage of stronger storms with weaker storms, based on the Saffir-Simpson Hurricane Scale. The researchers found that stronger storms have caused many times more economic damages than weaker storms, as shown in figure 3. 
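The return-period shift that Commerce described can be made concrete with simple arithmetic. The sketch below assumes a hypothetical $40 billion event size (not a figure from this report) and shows how shortening the return period raises both the annual probability and the expected annual loss:

```python
# Illustrative sketch: how a shift in an event's return period changes
# its annual exceedance probability and, holding the loss per event
# fixed, the expected annual loss. The $40 billion event size is a
# hypothetical value chosen for illustration only.

def annual_probability(return_period_years: float) -> float:
    """Annual chance of an N-year event occurring."""
    return 1.0 / return_period_years

def expected_annual_loss(return_period_years: float, loss_per_event: float) -> float:
    """Probability-weighted loss per year for a single event size."""
    return annual_probability(return_period_years) * loss_per_event

LOSS = 40e9  # hypothetical insured loss for one extreme event, in dollars

before = expected_annual_loss(500, LOSS)  # 1-in-500-year event
after = expected_annual_loss(100, LOSS)   # same event, now 1-in-100-year

print(f"Expected annual loss before: ${before / 1e9:.2f}B")
print(f"Expected annual loss after:  ${after / 1e9:.2f}B")
print(f"Increase factor: {after / before:.0f}x")
```

A 500-year to 100-year shift is a fivefold increase in the annual probability, so the expected annual loss for that event size grows fivefold even though the event itself is unchanged.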
These findings are consistent with other independent analyses conducted by insurers and catastrophe modelers. Moreover, public reports from several of the world’s largest reinsurance companies and brokers underscore the potential for substantially increased losses. These reports note that, in addition to greater losses in absolute terms, the potential for greater variability in weather-related events could significantly enhance the volatility of losses. In total, insurers paid more than $320 billion in claims for weather-related losses between 1980 and 2005. Claims varied significantly from year to year—largely due to the effects of catastrophic weather events such as hurricanes and droughts—but generally increased during this period. The growth in population in hazard-prone areas, and consequent real estate development and increasing real estate values, have generally increased insurers’ exposure to weather-related events and help to explain their increased losses. Due to these and other factors, the federal insurance programs’ liabilities have grown significantly, leaving the federal government increasingly vulnerable to the financial impacts of extreme events. Based on an examination of loss data from several different sources, insurers incurred more than $320 billion in weather-related losses from 1980 through 2005 (see fig. 4). Weather-related losses accounted for 88 percent of all property losses paid by insurers during this period. All other property losses, including those associated with earthquakes and terrorist events, accounted for the remainder. Weather-related losses varied significantly from year to year, ranging from just over $2 billion in 1987 to more than $75 billion in 2005. Of the $321.2 billion in weather-related loss payments we reviewed, private insurers paid $243.5 billion—over three-quarters of the total. Figure 5 depicts the breakdown of these payments among key weather-related events. 
Of the $243.5 billion paid by private insurers, hurricanes accounted for $124.6 billion, or slightly more than half. Wind, tornados, and hail associated with severe thunderstorms accounted for $77 billion, or nearly one-third of the private total. Winter storms were associated with $25.1 billion, or about 10 percent. The two major federal insurance programs—NFIP and FCIC—paid the remaining $77.7 billion of the $321.2 billion in weather-related loss payments we reviewed. Although the performance of both NFIP and FCIC is sensitive to weather, the two programs insure fundamentally different risks and operate in very different ways. NFIP provides insurance for flood damage to homeowners and commercial property owners in more than 20,000 communities. Homeowners with mortgages from federally regulated lenders on property in communities identified as being in high flood risk areas are required to purchase flood insurance on their dwellings. Optional, lower cost flood insurance is also available under the NFIP for properties in areas of lower flood risk. NFIP offers coverage for both the property and its contents, which may be purchased separately. NFIP claims totaled about $34.1 billion, or about 11 percent of all weather- related insurance claims during this period. As shown in figure 6, NFIP covers only one cause of loss—flooding. Claims averaged about $1.3 billion per year, but ranged from $75.7 million in 1988 to $16.7 billion in 2005. FCIC insures commodities on a crop-by-crop and county-by-county basis based on farmer demand for coverage and the level of risk associated with the crop in a given region. Over 100 crops are covered by the program. Major crops, such as grains, are covered in almost every county where they are grown, and specialty crops, such as fruit, are covered only in some areas. Participating farmers can purchase different types of crop insurance, including yield and revenue insurance, and at different levels. 
For yield insurance, participating farmers select the percentage of yield of a covered crop to be insured and the percentage of the commodity price received as payment if the producer’s losses exceed the selected threshold. Revenue insurance pays if actual revenue falls short of an assigned target level regardless of whether the shortfall was due to low yield or low commodity market prices. Since 1980, FCIC claims totaled $43.6 billion, or about 14 percent of all weather-related claims during this period. FCIC losses averaged about $1.7 billion per year, ranging from $531.8 million in 1987 to $4.2 billion in 2002. Figure 7 shows the three causes of loss—drought, excess moisture, and hail––that accounted for more than three-quarters of crop insurance claims. In particular, drought accounted for $18.6 billion in losses, or more than 40 percent of all insured crop losses. Excess moisture totaled $11.2 billion, followed by hail with total claims of $4.2 billion. The remaining $9.6 billion in claims was spread among 27 different causes of loss, including frost and tornados. Importantly, the insured loss totals used in our analysis do not account for all economic damage associated with weather-related events. Specifically, data are not available for several categories of economic losses, including uninsured, underinsured, and self-insured losses. As we reported in 2005, FEMA estimates that one-half to two-thirds of structures in floodplains do not have flood insurance because the uninsured owners either are unaware that homeowners insurance does not cover flood damage, or they do not perceive a serious flood risk. Furthermore, industry analysts estimate that 58 percent of homeowners in the United States are underinsured—that is, they carry a policy below the replacement value of their property—by an average of 21 percent. Finally, some individuals and businesses have the means to “self-insure” their assets by assuming the full risk of any damage. 
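The difference between the two payout rules described above can be sketched as follows. The guarantee formulas are simplified illustrations of the general logic, not actual FCIC policy terms (real policies involve units, approved yields, and price elections with more detail), and all numbers are hypothetical:

```python
# Hedged sketch of the yield vs. revenue insurance payout logic
# described in the text. Simplified illustration only; not actual
# FCIC policy terms.

def yield_insurance_payout(actual_yield: float, approved_yield: float,
                           coverage_level: float, price_election: float) -> float:
    """Pays when actual yield falls below the insured share of the
    approved yield; the shortfall is valued at the price election."""
    guaranteed_yield = approved_yield * coverage_level
    shortfall = max(0.0, guaranteed_yield - actual_yield)
    return shortfall * price_election

def revenue_insurance_payout(actual_revenue: float, target_revenue: float,
                             coverage_level: float) -> float:
    """Pays when actual revenue falls below the insured share of the
    target, regardless of whether the shortfall came from low yield
    or low commodity prices."""
    guarantee = target_revenue * coverage_level
    return max(0.0, guarantee - actual_revenue)

# Hypothetical example: a farmer insures 75% of a 150-bushel/acre
# approved yield at a $4/bushel price election, then harvests only
# 90 bushels/acre (guarantee 112.5, shortfall 22.5 bushels).
print(yield_insurance_payout(90, 150, 0.75, 4.0))    # $90 per acre
print(revenue_insurance_payout(380.0, 600.0, 0.75))  # $70 per acre
```

The key distinction is that the revenue product looks only at the dollar outcome, so a price collapse with a normal harvest can still trigger a payment, while the yield product pays only on physical shortfalls.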
Various public and private disaster relief organizations provide assistance to communities and individuals who suffer noninsured economic losses, although it was beyond the scope of this report to collect data on these losses. In particular, since 1989, $78.6 billion in federal disaster assistance funds have been obligated through the Disaster Relief Fund administered by FEMA, the largest—but not only—conduit for federal disaster assistance money provided in the wake of presidentially declared disasters and emergencies. Overall, according to data obtained from Munich Re, one of the world’s largest reinsurers, the types of insured losses we reviewed account for no more than about 40 percent of the total losses attributable to weather-related events. NOAA’s National Hurricane Center (NHC) uses a similar proportion to produce the agency’s estimates of total economic damage attributable to hurricanes. Although we did not independently evaluate the reliability of these estimates, subject area experts we spoke with confirmed that this was the best such estimate available and is widely used as an approximation of the relative distribution of losses. The difficulties we and others faced in accounting for weather-related losses were the subject of the National Academies’ The Impacts of Natural Disasters: A Framework for Loss Estimation. In reporting on how best to account for the costs of natural disasters, including weather-related events, NAS found that there was no system in place in either the public or the private sector to consistently capture information about their economic impact. Specifically, the NAS report found no widely accepted framework, formula, or method for estimating these losses. Moreover, NAS found no comprehensive clearinghouse for the disaster loss information that is currently collected. 
To that end, NAS recommended that the Office of Management and Budget, in consultation with FEMA and other federal agencies, develop annual, comprehensive estimates of the payouts for disaster losses made by federal agencies. Reviewing the status of this recommendation was beyond the scope of this report. Nevertheless, our experience with trying to obtain comprehensive information on disaster costs and losses underscores the NAS findings. The largest insured losses in the data we reviewed were associated with catastrophic weather events. These events have a low probability of occurrence, but their consequences are severe. Notably, both crop insurers and other property insurers face the catastrophic risks posed by extreme events, although the nature of the events for each is very different. In the case of crop insurance, drought accounted for more than 40 percent of all insured losses from 1980 to 2005, and the years with the largest losses were associated with drought. Taken together, though, hurricanes were the most damaging event experienced by insurers in the data we reviewed. Although the United States experienced an average of only two hurricanes per year from 1980 through 2005, weather-related claims attributable to hurricanes totaled more than 45 percent of all weather-related insured losses—more than $146 billion. Moreover, these losses appear to be increasing. In the data we reviewed, the years with the largest insured losses were generally associated with major hurricanes, defined as Category Three, Four, or Five on the Saffir-Simpson Hurricane Scale. Table 2 shows that, while 29 Category One and Two storms account for nearly $18 billion in losses, the 21 major storms account for over $126 billion in losses. In fact, claims associated with major hurricanes comprised 40 percent of all weather-related insured losses since 1980. 
Importantly, hurricane severity is only one factor in determining the size of a particular loss—the location affected by the hurricane is also important. Generally, the more densely populated an area, the greater the extent of economic activity and accumulated value of the building stock. For instance, several studies have reviewed the economic impact of Hurricane Andrew, which tracked over Florida in 1992, in light of the dramatic real estate development that has occurred since then. Researchers have normalized losses associated with the storm to account for societal changes by holding constant the value of building materials, real estate, and other factors so that the storm’s impact could be adjusted to reflect contemporary conditions. Hurricane Andrew, which resulted in roughly $25 billion in total economic losses in 1992, would have resulted in more than twice that amount—$55 billion—were it to have occurred in 2005, given current asset values. Several recent studies have commented on the apparent increases in hurricane losses during this time period, and weather-related disaster losses generally, with markedly different interpretations. Some argue that loss trends are largely explained by changes in societal and economic factors, such as population density, cost of building materials, and the structure of insurance policies. Others argue that increases in losses have been driven by changes in climate. To address this issue, Munich Re and the University of Colorado’s Center for Science and Technology Policy Research jointly convened a workshop in Germany in May 2006 to assess factors leading to increasing weather-related loss trends. The workshop brought together a diverse group of international experts in the fields of climatology and disaster research. Among other things, the workshop sought to determine whether the costs of weather-related events were increasing and what factors account for increasing costs in recent decades. 
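The normalization approach described above is, at its core, a product of adjustment factors applied to the historical loss. The sketch below shows only that arithmetic; the factor values are invented placeholders for illustration, not the researchers' actual data:

```python
# Sketch of loss normalization: scale a historical loss by changes in
# inflation, population, and wealth per capita so it reflects what the
# same storm would cost under contemporary conditions. All factor
# values below are hypothetical placeholders.

def normalize_loss(historical_loss: float, inflation_factor: float,
                   population_factor: float, wealth_factor: float) -> float:
    """Adjust a past event's loss to contemporary societal conditions."""
    return historical_loss * inflation_factor * population_factor * wealth_factor

# Hypothetical adjustment of a $25 billion historical loss using
# made-up growth factors for prices, population, and wealth.
adjusted = normalize_loss(25e9, 1.4, 1.25, 1.26)
print(f"${adjusted / 1e9:.1f}B")
```

Because the factors multiply, modest growth in each of prices, population, and per-capita wealth can more than double a storm's normalized loss, which is the pattern the Hurricane Andrew comparisons illustrate.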
Workshop participants reached consensus on several points, including that analyses of long-term records of disaster losses indicate that societal change and economic development are the principal factors explaining observed increases in weather-related losses. However, participants also agreed that changing patterns of extreme events are drivers for recent increases in losses and that additional increases in losses are likely given IPCC’s projected increase in the frequency or severity of weather-related events. The growth in population in hazard-prone areas, and consequent real estate development and increasing real estate values, are leaving the nation increasingly exposed to higher insured losses. The close relationship between the value of the resource exposed to weather-related losses and the amount of damage incurred may have ominous implications for a nation experiencing rapid growth in some of its most disaster-prone areas. We reported in 2002 that the insurance industry faces potentially significant financial exposure due to natural catastrophes. Heavily populated areas along the Northeast, Southeast, and Texas coasts have among the highest value of insured properties in the United States and face the highest likelihood of major hurricanes. According to insurance industry estimates, a large hurricane in Miami could cause up to $110 billion in insured losses with total losses as high as $225 billion. Several states—including Florida, California, and Texas—have established programs to help ensure that coverage is available in areas particularly prone to these events. AIR Worldwide, a leading catastrophe modeling firm, recently reported that insured losses should be expected to double roughly every 10 years because of increases in construction costs, increases in the number of structures, and changes in their characteristics. 
AIR’s research estimates that, because of exposure growth, probable maximum catastrophe loss grew in constant dollars from $60 billion in 1995 to $110 billion in 2005, and it will likely grow to over $200 billion during the next 10 years. Data obtained from both the NFIP and FCIC programs indicate the federal government has grown markedly more exposed to weather-related losses regardless of the cause. For example, NFIP data show that the number of policyholders and the value of the properties insured have both increased since 1980. Figure 8 shows the growth of NFIP’s exposure in terms of both number of policies and the total coverage. The number of policies has more than doubled in this time period, from 1.9 million policies to more than 4.6 million. Moreover, although NFIP limits coverage to $250,000 for a personal structure and $100,000 for its contents, and to $500,000 for a business structure and $500,000 for its contents, more policyholders’ homes are approaching (or exceeding) these coverage limits. Accordingly, the total value covered by the program increased fourfold in constant dollars during this time, from about $207 billion to $875 billion in 2005. Similarly, RMA data show that FCIC has effectively increased its exposure base 26-fold during this period (in constant dollars). In particular, the program has significantly expanded the scope of crops covered and increased participation. Figure 9 shows the growth in FCIC exposure since 1980. A senior RMA official told us that the main implication of FCIC’s growth is that the magnitude of potential claims, in absolute terms, is much greater today than in the past. For example, if the Midwest floods of 1993 were to occur today, losses would be five times greater than the $2 billion paid in 1993, according to RMA officials. 
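AIR's "doubling roughly every 10 years" projection is ordinary compound growth, which implies an annual growth rate of about 7 percent. The sketch below shows the arithmetic only; it is an illustration of the doubling-time relationship, not AIR's actual model:

```python
# Sketch of the compound-growth arithmetic behind a "doubles every
# 10 years" projection. Illustration only, not AIR's model.

def implied_annual_rate(doubling_years: float) -> float:
    """Annual growth rate consistent with a given doubling time."""
    return 2 ** (1 / doubling_years) - 1

def project(value: float, annual_rate: float, years: int) -> float:
    """Compound a value forward at a constant annual rate."""
    return value * (1 + annual_rate) ** years

rate = implied_annual_rate(10)   # about 7.2% per year
projected = project(110e9, rate, 10)

print(f"Implied annual growth rate: {rate:.1%}")
print(f"$110B after 10 years: ${projected / 1e9:.0f}B")
```

At that implied rate, the $110 billion 2005 figure compounds to roughly $220 billion after a decade, consistent with the report's "over $200 billion" projection.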
Although the relative contribution of event intensity versus societal factors in explaining the rising losses associated with weather-related events is still under investigation, both major private and federal insurers are exposed to increases in the frequency or severity of weather-related events associated with climate change. Nonetheless, major private and federal insurers are responding to this prospect differently. Many large private insurers are incorporating some elements of near-term climate change into their risk management practices. Furthermore, some of the world’s largest insurers have also taken a long-term strategic approach toward changes in climate. On the other hand, for a variety of reasons, the federal insurance programs have done little to develop the kind of information needed to understand the programs’ long-term exposure to climate change. We acknowledge the different mandate and operating environment in which the major federal insurance programs operate but believe that better information about the federal government’s exposure to potential changes in weather-related risk would help the Congress identify and manage this emerging high-risk area, one that may not constitute an immediate crisis but may pose an important longer-term threat to the nation’s welfare. Extreme weather events pose a unique threat to private insurers’ financial success because a single event can cause insolvency or a precipitous drop in earnings, liquidation of assets to meet cash needs, or a downgrade in the market ratings used to evaluate the soundness of companies in the industry. To prevent these disruptions, the American Academy of Actuaries (AAA)—the professional society that establishes, maintains, and enforces standards of qualification, practice, and conduct for actuaries in the United States—has outlined a five-step process for private insurers to follow to manage their catastrophic risk.
These steps include the following:

- identifying catastrophic risk appetite by determining the maximum potential loss they are willing to accept;
- measuring catastrophic exposure by determining how vulnerable their total portfolio is to loss, both in absolute terms and relative to the company’s risk management goals;
- pricing for catastrophic exposure by setting rates to collect sufficient premiums to cover their expected catastrophic loss and other expenses;
- controlling catastrophic exposure by reducing their policies in areas where they have too much exposure, or transferring risk using reinsurance or other mechanisms; and
- evaluating their ability to pay claims by determining the sufficiency of their financial resources to cover claims in the event of a catastrophe.

Additionally, insurers monitor their exposure to catastrophic weather-related risk using sophisticated computer models called “catastrophe models.” AAA emphasizes the shortcomings of estimating future catastrophic risk by extrapolating solely from historical losses and endorses catastrophe models as a more rigorous approach. Catastrophe models incorporate the underlying trends and factors in weather phenomena and current demographic, financial, and scientific data to estimate losses associated with various weather-related events. According to an industry representative, catastrophe models assess a wider range of possible events than the historical loss record alone. These models simulate losses from thousands of potential catastrophic weather-related events that insurers use to better assess and control their exposure and inform pricing and capital management decisions. Figure 10 illustrates the difference between estimating future catastrophic losses using historical data versus catastrophe models.
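The event-simulation idea behind catastrophe models can be pictured with a toy Monte Carlo sketch. To be clear, this is not a real catastrophe model: real models build losses from hazard, exposure, and vulnerability data, whereas the Poisson event frequency and lognormal severity parameters below are invented purely for illustration of how simulated years yield the loss statistics insurers use:

```python
# Toy Monte Carlo sketch of the event-based idea behind catastrophe models.
# NOT a real catastrophe model: the Poisson event frequency and lognormal
# severity parameters are invented for illustration only.
import math
import random

random.seed(42)

def poisson(lam):
    """Sample an event count using Knuth's method (fine for small lambda)."""
    threshold, count, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return count
        count += 1

def simulate_annual_loss(freq=1.2, mu=2.0, sigma=1.5):
    """Total loss for one simulated year, in $ millions: a random number
    of events, each with a lognormally distributed severity."""
    return sum(random.lognormvariate(mu, sigma) for _ in range(poisson(freq)))

# Simulate many years, then read off summary statistics insurers care about.
years = sorted(simulate_annual_loss() for _ in range(100_000))
average = sum(years) / len(years)
pml_100 = years[int(0.99 * len(years))]  # 1-in-100-year (99th percentile) loss

print(f"Average annual loss: ${average:,.1f}M")
print(f"1-in-100-year loss:  ${pml_100:,.1f}M")
```

The point of the exercise is the one made in the text: simulating thousands of potential years exposes tail outcomes (such as the 1-in-100-year loss) that a short historical record alone would miss.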
To determine what major private insurers are doing to estimate and prepare for risks associated with potential changes in climate arising from natural or human factors, we contacted 11 of the largest private insurers operating in the U.S. property casualty insurance market. Representatives from each of the 11 major insurers we interviewed told us they use catastrophe models that incorporate a near-term higher frequency and intensity of hurricanes. Of the 11 private insurers, 6 specifically attributed the higher frequency and intensity of hurricanes to the Atlantic Multidecadal Oscillation, which—according to NOAA—is a 20- to 40-year climatic cycle of fluctuating temperatures in the north Atlantic Ocean. The remaining 5 insurers did not elaborate on the elements of climate change driving the differences in hurricane characteristics. Industry reports indicate that insurance companies’ perception of increased risk from hurricanes has prompted them to reduce their near-term catastrophic exposure in both reinsurance and primary insurance coverage along the Gulf Coast and eastern seaboard. For example, a recent industry analysis from a leading insurance broker reported that reinsurance coverage is substantially limited in the southeastern United States and that reinsurance prices have more than doubled from 2005 to 2006, following a record-setting hurricane season. According to the Insurance Information Institute, a leading source of information about the insurance industry, primary insurance companies have also raised prices in coastal states to cover rising reinsurance costs. Additionally, a recent report co-authored by a major international insurance company cites several examples of large primary insurers either limiting coverage or withdrawing from vulnerable areas such as Florida, the Gulf Coast, and Long Island. As private insurers limit their exposure, catastrophic risk is transferred to policyholders and the public sector.
Insurance companies transfer risk to policyholders by increasing premiums and deductibles, or by setting lower coverage limits for policies. Insurers can also transfer risk to policyholders by passing along the mandatory participation costs of state-sponsored insurance plans. For example, after the 2004 hurricane season, insurers assessed a surcharge of about 7 percent to every policyholder in Florida to recoup the cost of insurers’ participation in the state-sponsored wind insurance plan. The public sector assumes management of weather-related risk at the local, state, and national level by providing disaster relief and recovery, developing mitigation projects, appropriating funds, and, ultimately, providing insurance programs when private insurance markets are not sufficient or do not exist. In addition to managing their aggregate exposure on a near-term basis, some of the world’s largest insurers have also taken a long-term strategic approach to changes in catastrophic risk. For example, major insurance and reinsurance companies, such as Allianz, Swiss Re, Munich Re, and Lloyd’s of London, have published reports that advocate increased industry awareness of the potential risks of climate change and outline strategies to address the issue proactively. Moreover, when asked whether their companies address changes in climate through their weather-related risk management processes, 6 of the 11 private insurers we interviewed described one or more additional activities they have undertaken. These activities include monitoring scientific research (4 insurers), simulating the impact of a large loss event on their portfolios (3 insurers), and educating others in the industry about the risks of climate change (3 insurers), among others.
Furthermore, recent research on insurers’ activities to address climate change outlines several other actions that private sector companies are taking, such as developing specialized policies and new products, evaluating risks to company stock investments, and disclosing to shareholders information about company-specific risks due to climate change. Additionally, concern over the potential impacts of climate change on the availability and affordability of private insurance has led state insurance regulators to establish a task force to formally address the issue. The task force’s report, to be issued by the National Association of Insurance Commissioners (NAIC), is expected to be published in the summer of 2007. The goals of the major federal insurance programs are fundamentally different from those of private insurers. Specifically, whereas private insurers stress the financial success of their business operations, the statutes governing the NFIP and FCIC promote affordable coverage and broad participation by individuals at risk. Although both programs manage risk within their statutory guidelines, unlike the private sector, neither program is required to limit its catastrophic risk strictly within the programs’ ability to pay claims on an annual basis. One important implication of the federal insurers’ risk management approach is that they each have little reason to develop information on their long-term exposure to the potential risk of increased low-frequency, high-severity weather events associated with climate change. The statutes governing the NFIP and FCIC promote broad participation over financial self-sufficiency in two ways: (1) by offering discounted or subsidized premiums to encourage participation and (2) by making additional funds available during high-loss years. For example, discounted insurance premiums are available under the NFIP for some older homes situated within high flood risk areas where insurance would otherwise have been prohibitively expensive.
FEMA is also authorized to borrow additional federal funds for the NFIP on an as-needed basis, subject to statutory limits, to cope with catastrophes. One effect has been that the NFIP’s exposure has expanded well beyond the ability to pay claims in high-loss years. Similar to the discounted premiums offered by the NFIP, the FCIC’s subsidized premiums are designed to make crop insurance available and affordable to as many participants as possible. For example, the FCIC is mandated to provide fully subsidized catastrophic coverage for producers in exchange for a minimal administrative fee, as well as partial subsidies for additional levels of coverage. Also like the NFIP, the FCIC is authorized to use additional federal funds on an as-needed basis during high-loss years—although, unlike the NFIP, the FCIC is not required to reimburse those additional funds. Unlike the private sector, the NFIP and the FCIC can use additional federal funds, and so neither program is required to assess and limit its catastrophic risk strictly within its ability to pay claims on an annual basis. Instead, each program manages its risk to the extent possible, within the context of its broader purposes, in accordance with its authorizing statutes and implementing regulations. For example, the FCIC uses coverage limits, exclusions, and premium rates to meet its statutory goal of a long-term loss ratio no greater than 1.075—including premium subsidies. Although the program has experienced high-loss years that required additional federal funds, over time, these high-loss years have been offset by low-loss years, which have allowed the program to meet its goal and build reserves. By setting a goal of generating sufficient revenue to pay for an average loss year, the NFIP has also been able to generate a surplus in low-loss years despite borrowing funds in high-loss years.
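The loss-ratio arithmetic behind FCIC’s statutory target is straightforward: total indemnities paid divided by total premiums over a multiyear period. The yearly figures in this sketch are hypothetical, not actual FCIC data, and real premiums include the federal subsidy:

```python
# Sketch of the long-term loss-ratio arithmetic behind FCIC's statutory
# target. The (indemnities, premiums) pairs are hypothetical, in $ billions;
# in the actual program, premiums include the federal subsidy.
yearly_results = [(2.1, 2.4), (4.0, 2.5), (1.8, 2.6), (2.3, 2.7), (2.9, 2.8)]

indemnities = sum(i for i, _ in yearly_results)
premiums = sum(p for _, p in yearly_results)
loss_ratio = indemnities / premiums

print(f"Long-term loss ratio: {loss_ratio:.3f}")
print("Within 1.075 target" if loss_ratio <= 1.075 else "Exceeds 1.075 target")
```

Note how the hypothetical high-loss year (indemnities of 4.0 against premiums of 2.5) is offset by surrounding low-loss years, the same averaging mechanism the text describes for meeting the 1.075 goal over time.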
In the past, the program has been able to repay borrowed funds with interest to the Department of the Treasury; however, it is unlikely that FEMA will be able to repay the nearly $21 billion borrowed following the 2005 hurricane season based on the program’s current premium income. Although neither program faces the potential of financial ruin like the private sector, both programs have occasionally attempted to estimate their aggregate losses from potential catastrophic events. For example, FCIC officials stated that they had modeled past events, such as the 1993 Midwest floods, using current participation levels to inform negotiations with private crop insurers over reinsurance terms. NFIP and FCIC officials explained that these efforts were informal exercises and were not performed on a regular basis. FCIC officials also said they use a hurricane model developed by NOAA to inform pricing decisions for some commodities, such as citrus crops. However, unlike the catastrophic risk faced by private insurers, hurricane damages have not been a primary source of crop insurance claims. According to NFIP and FCIC officials, their risk management processes adapt to near-term changes in weather as they affect existing data. As one NFIP official explained, NFIP is designed to assess and insure against current—not future—risks. Over time, agency officials stated, this process has allowed their programs to operate as intended. However, unlike the private sector, neither program has conducted an analysis to assess the potential impacts of an increase in the frequency or severity of weather-related events on their program operations over the near or long term.
While comprehensive information on federal insurers’ long-term exposure to catastrophic risk associated with climate change may not inform the NFIP’s or FCIC’s annual operations, it could nonetheless provide valuable information for the Congress and other policymakers who need to understand and prepare for fiscal challenges that extend well beyond the two programs’ near-term operational horizons. We have highlighted the need for this kind of strategic information in recent reports that have expressed concern about the looming fiscal imbalances facing the nation. In one report, for example, we observed that “Our policy process will be challenged to act with more foresight to take early action on problems that may not constitute an urgent crisis but pose important long-term threats to the nation’s fiscal, economic, security, and societal future.” The prospect of increasing program exposure, coupled with expected increases in frequency and severity of weather events associated with climate change, would appear to pose such a problem. Agency officials identified several challenges that could complicate their efforts to assess these impacts at the program level. Both NFIP and FCIC officials stated that there was insufficient scientific information on projected impacts at the regional and local levels to accurately assess their effect on the flood and crop insurance programs. However, members of the insurance industry have analyzed and identified the potential risks climate change poses, despite similar challenges. Moreover, as previously discussed, both the IPCC and CCSP are expected to release significant assessments of the likely effect of increasing temperatures on weather events in coming months.
The experience of many private insurers, who must proactively respond to long-term changes in weather-related risk to remain solvent, suggests the kind of information that might be developed to help congressional and other policymakers in assessing current and alternative strategies. Specifically, to help ensure their future viability, a growing number of private insurers are actively incorporating the potential for climate change into their strategic level analyses. In particular, some private insurers have run a variety of simulation exercises to determine the potential business impact of an increase in the frequency and severity of weather events. For example, one insurer simulated the impact of large weather events occurring simultaneously. A similar analysis could provide the Congress with valuable information about the potential scale of losses facing the NFIP and FCIC in coming decades, particularly in light of the programs’ expansion since 1980. Recent assessments by leading scientific bodies provide sufficient cause for concern that climate change may have a broad range of long-term consequences for the United States and its citizens. While a number of key uncertainties regarding the timing, location, and magnitude of impacts remain, climate change has implications for the fiscal health of the federal government, which already faces other significant challenges in meeting its long-term fiscal obligations. NFIP and FCIC are two major federal programs which, as a consequence of both future climate change and substantial growth in exposure, may see their losses grow by many billions of dollars in coming decades. We acknowledge that to carry out their primary missions, these public insurance programs must focus on the near-term goals of ensuring affordable coverage for individuals in hazard-prone areas. 
Nonetheless, we believe the two programs are uniquely positioned to provide strategic information on the potential impacts of climate change—information that would be of value to key decision makers charged with such a long-term focus. Most notably, in exercising its oversight responsibilities, the Congress could use such information to examine whether the current structure and incentives of the federal insurance programs adequately address the challenges posed by potential increases in the frequency and severity of catastrophic weather events. While the precise content of these analyses can be debated, the activities of many private insurers already suggest a number of strong possibilities that may be applicable to assessing the potential implications of climate change on the federal insurance programs. We recommend that the Secretary of Agriculture and the Secretary of Homeland Security direct the Administrator of the Risk Management Agency and the Under Secretary of Homeland Security for Emergency Preparedness to analyze the potential long-term implications of climate change for the Federal Crop Insurance Corporation and the National Flood Insurance Program, respectively, and report their findings to the Congress. This analysis should use forthcoming assessments from the Climate Change Science Program and the Intergovernmental Panel on Climate Change to establish sound estimates of expected future conditions. Key components of this analysis may include (1) realistic scenarios of future losses under anticipated climatic conditions and expected exposure levels, including both potential budgetary implications and consequences for continued program operation, and (2) potential mitigation options that each program might use to reduce its exposure to loss. We provided a draft of this report to the Departments of Agriculture (USDA), Commerce, Energy, and Homeland Security (DHS) for their review.
DHS agreed via email with the report’s recommendation, noting that conducting an assessment of the impact of climate change beyond FEMA’s current statistical modeling (which is based on historical loss experience) could be helpful if resources were available to pursue such an analysis. USDA also agreed with the report’s recommendation and commented on the presentation of several findings. (See app. V for the letter from the Under Secretary for Farm and Foreign Agricultural Services and GAO’s point-by-point response.) In particular, USDA disagreed that it had thus far taken little action to prospectively assess potential increases in catastrophic risk associated with climate change. USDA explained that RMA does assess both the current and long-term exposure of the crop insurance program to catastrophic weather events, noting specifically that RMA (1) updates and publishes total program liability on a weekly basis and (2) estimates expected changes in liability up to 10 years ahead through its baseline projections. We acknowledge these activities, but believe it is important to note that they are limited in scope, focusing almost exclusively on retrospective measures of performance and not on the potential for increasingly frequent and intense weather-related events. These events, including drought and heavy precipitation events, are the key events acknowledged by USDA as posing catastrophic risk to the crop insurance program. Moreover, other RMA efforts to capture changes in weather-related risk rely on data reflecting what has been experienced in the past, not on what could be experienced in the future. The Department of Commerce neither agreed nor disagreed with the report’s findings, but instead offered comments on the presentation of several issues in the draft (particularly the depth at which they are discussed) as well as technical comments. We have incorporated these comments as appropriate and address them in detail in appendix VI.
Notably, the Department of Commerce underscored the vulnerability of high-risk coastal development, stating that such vulnerabilities will only be amplified by climate change-related increases in the frequency or severity of weather-related events. Finally, the Department of Energy elected not to provide comments on the draft. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of Agriculture, Commerce, Energy, and Homeland Security, as well as other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix VII. We were asked to (1) describe what is known about how climate change might affect insured and uninsured losses, (2) determine insured losses incurred by major federal agencies and private insurers and reinsurers resulting from weather-related events, and (3) determine what major federal agencies and private insurers and reinsurers are doing to assess and manage the potential risk of increased losses due to changes in the frequency and severity of weather-related events associated with climate change. To address the first objective, we reviewed and summarized existing literature from significant policy-oriented scientific assessments from reputable international and national research organizations including the Intergovernmental Panel on Climate Change, National Academy of Sciences, and the multifederal agency U.S. Climate Change Science Program, as specified in table 3.
It was beyond the scope of this report to independently evaluate the results of these studies. To address the second objective, we analyzed insured loss data from January 1, 1980, through December 31, 2005, from the Federal Emergency Management Agency (FEMA) for the National Flood Insurance Program (NFIP); the Department of Agriculture’s Risk Management Agency (RMA) for the Federal Crop Insurance Corporation (FCIC); and the Property Claim Services (PCS) for private property insurance. Through electronic testing and other means, we assessed the reliability of each of the data sets to determine whether the data were sufficiently reliable for our purposes. Specifically, we interviewed the sources for each of the data sets to gather information on how records were collected, processed, and maintained. Because not all catastrophes are weather-related, we excluded all events attributable to terrorist acts, tsunamis, earthquakes, and other nonweather-related losses, based on discussions with the data provider. To adjust for the general effects of inflation over time, we used the chain-weighted gross domestic product price index to express dollar amounts in inflation-adjusted 2005 dollars. We reviewed any changes in data collection methodologies that have occurred over time, and evaluated the effect of any changes on our ability to report losses. We believe that these data are sufficiently reliable for the purpose of describing insured losses. We note, however, that these data likely understate the actual insured losses. PCS data are estimates of insured losses, or claims paid by private insurance companies, for catastrophe loss events for the 50 states, as well as the District of Columbia, Puerto Rico, and the U.S. Virgin Islands. PCS defines “catastrophes” as events that, in their estimation, affect a significant number of policyholders and that cause more than $25 million in damages.
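The constant-dollar conversion described above rescales each nominal amount by the ratio of the base-year index value to the loss-year index value. The index values in this sketch are illustrative placeholders, not actual chain-weighted GDP price index data:

```python
# Sketch of the constant-dollar conversion described above: each nominal
# amount is rescaled by the ratio of the 2005 index value to the loss
# year's index value. Index values here are illustrative placeholders,
# not actual chain-weighted GDP price index data.
price_index = {1980: 47.8, 1993: 76.5, 2005: 100.0}  # hypothetical, 2005 = 100

def to_2005_dollars(nominal, year):
    return nominal * price_index[2005] / price_index[year]

# e.g., $2 billion of 1993 claims expressed in 2005 dollars
print(f"${to_2005_dollars(2.0, 1993):.2f} billion (2005 dollars)")
```

Rebasing every year’s losses this way is what allows loss totals from 1980 and 2005 to be compared on a common footing.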
To identify catastrophes, PCS reviews daily weather reports and wire service news stories to determine if potentially damaging weather has occurred anywhere in the nation. PCS contacts adjusters, insurance claims departments, or public officials to gather additional information about the scope of damage and potential insured losses for events. Damages associated with a single storm event are grouped together as a single catastrophe, even if they are separated by distance. PCS obtains its insured loss data from information reported by insurers. PCS estimates include losses under personal and commercial property insurance policies covering real property, contents, business interruption, vehicles, and boats. PCS estimates also typically include amounts paid by state wind pools, joint underwriting associations, and certain other residual market mechanisms, such as Fair Access to Insurance Requirements (FAIR) plans. However, PCS estimates do not include damage to uninsured or self-insured property including uninsured publicly owned property and utilities; losses involving agriculture, aircraft and property insured under NFIP or certain specialty lines (such as ocean marine), or loss adjustment expenses. Generally, PCS finalizes its estimates within 6 months of the occurrence of a PCS-identified catastrophe, according to company documents. PCS does not independently verify or audit the accuracy of the reported losses. Thus, loss totals are the best estimates of primary insurers compiled by PCS professionals, and may or may not accurately and completely reflect actual industry-insured losses. Nevertheless, PCS has determined its data to be very close to other independent estimates. PCS officials said that, when compared with state insurance commissioners’ estimates based on all loss data from insurance companies following particularly large catastrophes, PCS data are within 3 to 5 percent of actual amounts.
For the data used in our review, company officials told us that most estimates included in the data provided to us are final, except for the 2005 hurricanes. NFIP data are actual claim payment totals, not estimated amounts. NFIP data represent the budget outlays that satisfy claims submitted by NFIP policyholders to their participating program companies. The companies report these data to the NFIP on a monthly basis. According to a senior program official, the Department of Homeland Security performs periodic audits of company records reported to NFIP. Although nearly all claims in the NFIP data we reviewed are considered closed by the agency (and, therefore, final), a small portion of claims associated with the 2004 and 2005 hurricane seasons is not reflected in the data we reviewed, according to the agency’s database manager. The loss data provided by FCIC represent the actual amount paid to policyholders, not estimates. FCIC data represent the budget outlays that satisfy claims submitted by policyholders to their participating insurance companies. Participating insurance companies submit claims information for processing through a computerized validation system. Automated processing of claims information occurs annually for a period going back 5 years, but agency officials said that indemnities may have changed after automated processing closed in very specific cases, such as the settlement of litigation or arbitration. To determine the insured losses associated with major and nonmajor hurricanes, we identified losses associated with hurricanes in both the PCS and NFIP data sets. We used the name and year of each hurricane to link loss records to information from the National Oceanic and Atmospheric Administration (NOAA) on the peak intensity of each hurricane at or near landfall.
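The linkage step described above can be pictured as a simple join on hurricane name and year. The records and intensity values below are placeholders invented for illustration, not the actual PCS, NFIP, or NOAA data:

```python
# Hypothetical sketch of the record-linkage step described above: loss
# records are joined to storm-intensity data on (name, year). All values
# are invented for illustration.

loss_records = [
    {"name": "Andrew", "year": 1992, "insured_loss_millions": 15500},
    {"name": "Katrina", "year": 2005, "insured_loss_millions": 41100},
]

# Peak Saffir-Simpson category at or near landfall, keyed by (name, year)
peak_intensity = {("Andrew", 1992): 5, ("Katrina", 2005): 3}

linked = [
    {**rec, "category": peak_intensity[(rec["name"], rec["year"])]}
    for rec in loss_records
]

major = [r for r in linked if r["category"] >= 3]  # majors are Category 3+
print(f"{len(major)} of {len(linked)} hurricanes in this sample were major")
```

Joining on (name, year) rather than name alone matters because storm names are reused across seasons.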
We supplemented our descriptive analysis with a review of existing literature and the views of subject area experts on the primary drivers of changes in the weather-related loss record in general. Given the data challenges faced by natural hazard researchers, the data sets used in these studies are generally different. To address the third objective, we conducted semistructured interviews with officials from the NFIP, RMA, and a nonprobability sample of the largest private property/casualty primary insurance and reinsurance companies as defined by national market share. In the private sector, 11 out of 14 potential respondents elected to participate, with participants drawn from companies in the United States, Europe, and Bermuda. Although the results from this sample should not be generalized to represent all insurance companies, the companies we interviewed represent about 45 percent of the total domestic insurance market. In developing our semistructured questionnaire, we reviewed existing literature on risk assessment and management practices and GAO guidance on risk management, and we interviewed subject area experts knowledgeable about the insurance industry and federal insurance programs. Insurance industry experts included representatives from insurance brokers, catastrophe modeling firms, industry associations, the Insurance Information Institute, and academics. To reduce response error, we pretested our questions for clarity, relevancy, and sensitivity with representatives from several insurance industry associations, including the American Insurance Association, the National Association of Mutual Insurance Companies, the Property Casualty Insurance Association of America, and the Reinsurance Association of America. On the basis of feedback from the pretests, we modified the questions as appropriate.
We distinguished proactive risk management responses to climate change from other responses according to whether insurers indicated that they were adjusting their activities based on projected changes in underlying weather trends rather than adapting only as changes in weather conditions reveal themselves in historical data. During our interviews, some private insurers attributed their actions to changes in the Atlantic Multidecadal Oscillation (AMO). Because NOAA considers the AMO to be a climatic cycle, we categorized the actions of these insurers as responding to climate change. We asked the participating federal agencies and private insurance and reinsurance companies to identify individuals knowledgeable about their weather-related risk management practices for our interviews. On the basis of these referrals, we spoke with a range of senior officials and representatives, including actuaries, underwriters, catastrophe specialists, and regulatory affairs and legal staff. During the interviews, we asked a series of questions about risk assessment and management practices for weather-related risk, significant drivers of changes to past and future weather-related risk, respondents’ perception of and actions to address climate change in their risk management processes, and risk management best practices that might be transferable to federal insurers. We also interviewed officials from rating agencies, catastrophe modeling firms, insurance industry associations, the National Association of Insurance Commissioners, and universities to provide additional context for respondents’ statements. To supplement our interviews, we reviewed documentary evidence of risk management practices from federal agencies, studies from subject area experts, industry reports, publicly available insurance company documents, and previous work from GAO to provide context and support for respondents’ statements.
We performed our work between February 2006 and January 2007 in accordance with generally accepted government auditing standards. Floods are the most common and destructive natural disaster in the United States. According to NFIP statistics, 90 percent of all natural disasters in the United States involve flooding. Because of the catastrophic nature of flooding and the inability to adequately predict flood risks, private insurance companies largely have been unwilling to underwrite and bear the risk of flood insurance. As a result, flooding is generally excluded from homeowner policies that cover damages from other types of losses, such as wind, fire, and theft. The NFIP was established in 1968 to address uninsured losses due to floods. Prior to the establishment of the NFIP, structural flood controls on rivers and shorelines (e.g., dams and levees) and disaster assistance for flood victims were the federal government’s primary tools for addressing floods. The Mississippi River Commission, created in 1879 to oversee the development of a levee system to control the river’s flow, was the first of these federal efforts to address flooding. Due to the limited effectiveness of structural flood controls, continued development in flood-prone areas, and a desire to reduce postdisaster assistance payments, the Congress began examining the feasibility of prefunding flood disaster costs via federal insurance in the 1950s. Although the first federal flood insurance program authorized by the Congress in 1956 failed due to lack of funding, a series of powerful hurricanes and heavy flooding on the Mississippi River in the early 1960s prompted the Congress to revisit the issue and direct the Department of Housing and Urban Development (HUD) to conduct a feasibility study of a federal flood insurance program. The 1966 HUD feasibility study helped lead to the passage of the National Flood Insurance Act of 1968, which authorized the creation of the NFIP. 
Since its inception, the NFIP has undergone several major changes in response to significant flood events. Hurricane Agnes in 1972 led to the Flood Disaster Protection Act of 1973, which imposed mandatory flood insurance requirements on certain persons in flood-prone areas and also significantly increased coverage limits in a further effort to increase participation. Following the Midwest floods of 1993, the Congress enacted the National Flood Insurance Reform Act of 1994, which strengthened lender compliance with the mandatory purchase provisions requiring mortgage holders in flood-prone areas to purchase flood insurance and prohibited flood disaster assistance for properties that had not maintained their mandatory coverage. In 2004, recognizing that losses from repetitive flooding on some insured properties were straining the financial condition of the NFIP, the Congress passed the Flood Insurance Reform Act of 2004, which provided NFIP with additional tools to reduce the number and financial impact of these properties. These tools include, among others, increased authorized funding for mitigation of repetitive loss properties and statutory authority to penalize policyholders who refuse government assistance to mitigate certain structures that have been substantially or repetitively damaged by flooding. Recently, the Congress has begun exploring additional changes to the NFIP to address the financial and operational challenges presented by the 2005 hurricane season. FEMA, within the Department of Homeland Security (DHS), is responsible for the oversight and management of the NFIP. Under this program, the federal government assumes the liability for covered losses and sets rates and coverage limitations, among other responsibilities. 
The NFIP combines three elements: (1) property insurance for potential flood victims, (2) mapping to identify the boundaries of the areas at highest risk of flooding, and (3) incentives for communities to adopt and enforce floodplain management regulations and building standards (such as elevating structures) to reduce future flood damage. The effective integration of all three of these elements is needed for the NFIP to achieve its goals of providing property flood insurance coverage for a high proportion of property owners who would benefit from such coverage, reducing taxpayer-funded disaster assistance when flooding strikes, and reducing flood damage through floodplain management and the enforcement of building standards. Over 20,000 communities across the United States and its territories participate in the NFIP by adopting and agreeing to enforce state and community floodplain management regulations to reduce future flood damage. In exchange, the NFIP makes federally backed flood insurance available to homeowners and other property owners in these communities. As of 2005, the program had over 4.9 million policyholders, representing about $875 billion in assets. Homeowners with mortgages from federally regulated lenders on property in communities identified to be in high flood risk areas are required to purchase flood insurance on their dwellings. Optional, lower cost coverage is also available under the NFIP to protect homes in areas of low to moderate risk. The mandated coverage protects homeowners’ dwellings only; to insure furniture and other personal property items against flood damage, homeowners must purchase separate NFIP personal property coverage. Prior to the 2005 hurricanes, NFIP had paid about $14.6 billion in flood insurance claims, primarily from policyholder premiums that otherwise would have been paid through taxpayer-funded disaster relief or borne by home and business owners themselves. 
According to FEMA, every $3 in flood insurance claims payments saves about $1 in disaster assistance payments, and the combination of floodplain management and mitigation efforts saves about $1 billion in flood damage each year. To make flood insurance available on “reasonable terms and conditions to persons who have need for such protection,” the NFIP strikes a balance between the scope of the coverage provided and the premium amounts required to provide that coverage. Policy coverage limits arise from statute and regulation, including FEMA’s standard flood insurance policy (SFIP), which is incorporated in regulation and issued to policyholders when they purchase flood insurance. As of 2006, FEMA estimated that 26 percent of its policies were subsidized, and 74 percent were charged “full-risk premium” rates. In 1981, FEMA set the operating goal of generating premiums at least sufficient to cover losses and expenses relative to the “historical average loss year.” However, the heavy losses from the 2005 hurricane season may increase the historical average loss year to a level beyond the expected long-term average. In light of this, FEMA is currently revisiting the use of the historical average loss year as a premium income target. The NFIP uses hydrologic models to estimate loss exposure in flood-prone areas, based on the method outlined in the 1966 HUD report, Insurance and Other Programs for Financial Assistance to Flood Victims. These techniques of analysis were first developed by hydrologists and hydraulic engineers to determine the feasibility of flood protection. The hydrologic method uses available data on the occurrence of floods and flood damages to establish both the frequency of flood recurrence and the damage associated with a flood of a given height. The NFIP augments available flood data with detailed engineering studies, simulations, and professional judgment to establish the scientific and actuarial basis for its risk assessment process and rates. 
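The hydrologic expected-loss arithmetic described above, combining flood frequencies with flood-height damage estimates, can be sketched in a few lines. This is a minimal illustration with hypothetical probabilities and depth-damage percentages, not actual NFIP figures:

```python
# Hypothetical flood scenarios for one property class:
# (annual probability of a flood of this depth, damage as a percent of value).
# Illustrative numbers only, not actual NFIP data.
flood_scenarios = [
    (0.04, 5.0),    # shallow flooding
    (0.01, 20.0),   # roughly a "100-year" flood
    (0.002, 45.0),  # severe, rarer flood
]

# Multiply each flood's annual probability by its expected damage, then sum:
# the result is the expected annual damage as a percent of property value.
expected_damage_pct = sum(prob * dmg for prob, dmg in flood_scenarios)

# A damage percentage converts directly to an expected loss per $100 of
# insured value (here, 0.49 percent of value equals $0.49 per $100).
expected_loss_per_100 = expected_damage_pct

print(f"Expected annual damage: {expected_damage_pct:.2f}% of value")
print(f"Expected loss per $100 of coverage: ${expected_loss_per_100:.2f}")
```

In practice, such a per-$100 expected loss is only the starting point for a rate; expense factors and deductibles are layered on afterward.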
Flood-elevation frequency data for specific communities are published in Flood Insurance Rate Maps, which differentiate areas based on their flood risk. These maps are the basis for setting insurance rates, establishing floodplain management ordinances, and identifying properties where flood insurance is mandatory. To estimate expected annual losses and determine the basis for rate setting, NFIP combines flood-elevation frequency data with depth-damage calculations to estimate a range of flood probabilities and associated damages. The annual probability of each possible flood is multiplied by the expected damage should such a flood occur, and these products are then summed. This sum yields the expected annual damage from flooding as a percentage of property value, which can then be converted to an expected loss per $100 of property value covered by insurance. This per annum expected loss provides the fundamental component of rate setting. Rates are also adjusted to incorporate additional expense factors, such as adjustment costs and deductibles. To the extent possible within the context of its broader purposes, the NFIP is expected to pay operating expenses and flood insurance claims with premiums collected on flood insurance policies rather than with tax dollars. However, as we have reported, the program is not actuarially sound by design because the Congress authorized subsidized insurance rates to be made available for policies covering certain structures to encourage communities to join the program. As a result, the program does not collect sufficient premium income to build reserves to meet the long-term future expected flood losses. FEMA has statutory authority to borrow funds from the Department of the Treasury to keep the NFIP solvent. Prior to the 2005 hurricane season, FEMA had exercised its borrowing authority four times, when losses exceeded available fund balances. 
For example, FEMA borrowed $300 million to pay an estimated $1.8 billion on flood insurance claims resulting from the 2004 hurricane season. Following hurricanes Katrina, Rita, and Wilma, FEMA estimates it will need to borrow nearly $21 billion to cover outstanding claims. Although FEMA has repaid borrowed funds with interest in the past, FEMA does not expect to be able to meet the $1 billion in annual interest payments for these borrowed funds. In general, farm income is determined on the basis of farm production and prices, both of which are subject to wide fluctuations due to external factors. Because a substantial part of farming depends on weather, farm production levels can vary substantially on an annual basis. Commodity prices are also subject to significant swings due to supply and demand on the domestic and international markets. The Congress created FCIC in 1938 to administer a federal crop insurance program on an experimental basis to temper the weather effects of the Dust Bowl and the economic effects of the Great Depression. The federal crop insurance program protects participating farmers against financial losses caused by droughts, floods, or other natural disasters. Until 1980, the federal crop insurance program was limited to major crops in the nation’s primary production areas. The Federal Crop Insurance Act of 1980 expanded crop insurance both in terms of crops and geographic areas covered. The expansion was designed to allow the disaster assistance payment program provided by the government under previous farm bills to be phased out. To encourage participation, the 1980 act required a 30 percent premium subsidy for producers who purchased coverage up to the 65 percent yield level. Despite the subsidies, program participation remained low, and the Congress authorized several ad hoc disaster payments between 1988 and 1993. 
Congressional dissatisfaction with the size and frequency of these payments prompted the Congress to pass the Federal Crop Insurance Reform Act of 1994, which mandated participation in the crop insurance program as a prerequisite for other benefits, including agriculture price support payments. The 1994 act also introduced catastrophic risk protection coverage, which compensated farmers for losses exceeding 50 percent of their average yield at 60 percent of the commodity price. Premiums for catastrophic risk protection coverage were completely subsidized, and subsidies for other coverage levels were also increased. As part of the 1996 Farm Bill, the Congress created the Office of Risk Management under the U.S. Department of Agriculture (USDA), and USDA established RMA to administer the FCIC insurance programs, among other things. The Congress also required the creation of a revenue insurance pilot project and repealed the mandatory participation provision of the 1994 Act. However, participation in the crop insurance program has not necessarily precluded the need for further disaster assistance. For example, due to low commodity prices in 1997 and multiple years of natural disasters, the Congress enacted an emergency farm financial assistance package totaling almost $6 billion in 1998, which included over $2 billion in crop disaster payments, and an $8.7 billion financial assistance package in 1999 that included $1.2 billion in crop disaster payments. 
In 2000, the Congress enacted the Agricultural Risk Protection Act, which further increased subsidies for insurance above the catastrophic risk protection coverage level; subsidized a portion of the cost of revenue insurance products; improved coverage for farmers affected by multiple years of natural disasters; required pilot insurance programs for livestock farmers; authorized pilot programs for growers of other commodities not currently covered; gave the private sector greater representation on the FCIC Board of Directors; reduced eligibility requirements for permanent disaster payment programs for noninsured farmers; and provided new tools for monitoring and controlling program abuses, among other provisions. These changes required $8.2 billion in additional spending from fiscal years 2001 through 2005. RMA has overall responsibility for supervising the federal crop insurance program, which it administers in partnership with private insurance companies. Insurance policies are sold and completely serviced through approved private insurance companies that have their losses reinsured by USDA. These companies share a percentage of the risk of loss or opportunity for gain associated with each insurance policy written. In addition, RMA pays companies a percentage of the premium on policies sold to cover the administrative costs of selling and servicing these policies. In turn, insurance companies use this money to pay commissions to their agents who sell the policies and fees to adjusters when claims are filed. RMA oversees the development of new insurance products and the expansion of existing insurance products to new areas to help farmers reduce the chance of financial loss. The USDA determines whether the federal crop insurance program will insure a commodity on a crop-by-crop and county-by-county basis, based on farmer demand for coverage and the level of risk associated with the crop in the region, among other factors. 
Over 100 crops are covered; major crops such as grains are covered in almost every county where they are grown, and specialty crops such as fruit are covered in some areas. For many commodities, producers may also purchase revenue insurance. Based on commodity market prices and the producer’s production history, producers are assigned a target revenue level. The producer receives a payment if actual revenue falls short of the target level, whether the shortfall is due to low yield or low prices. Premiums for revenue insurance are subsidized at the same level as traditional crop insurance policies. Farmers’ participation in the federal crop insurance program is voluntary, but the federal government encourages it by subsidizing the insurance premiums. Participating farmers are assigned a “normal” crop yield based on their past production history and a commodity price based on estimated market conditions. The producer selects both the percentage of yield to be covered and the percentage of the commodity price received as payment if the producer’s losses exceed the selected threshold. Premium prices increase as levels of yield and price coverage rise. However, all eligible producers can receive fully subsidized catastrophic risk protection coverage that pays producers for losses exceeding 50 percent of normal yield, at a level equal to 55 percent of the estimated market price, in exchange for a $100 administrative fee. Producers who purchase this coverage can buy additional insurance at partially subsidized rates up to 85 percent of their yield and 100 percent of the estimated market price. As an alternative, the Group Risk Plan provides coverage based on county yields rather than a producer’s actual production history. If the county yield falls below the producer’s threshold yield (a percentage of the historical county yield), then the producer receives a payment. 
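The catastrophic risk protection payout rule described above (losses beyond 50 percent of normal yield, paid at 55 percent of the estimated market price) reduces to a short calculation. The sketch below uses hypothetical yields and prices and ignores the administrative fee and unit-structure details:

```python
def cat_indemnity(normal_yield, actual_yield, est_price,
                  yield_coverage=0.50, price_pct=0.55):
    """Indemnity per acre under catastrophic risk protection coverage.

    Only the yield shortfall below the covered fraction of normal yield
    is compensated, and at a fraction of the estimated market price.
    Defaults reflect the 50%/55% catastrophic coverage level; all other
    inputs are hypothetical.
    """
    covered_yield = normal_yield * yield_coverage
    shortfall = max(0.0, covered_yield - actual_yield)
    return shortfall * est_price * price_pct

# A drought year: normal yield 150 bu/acre, actual 60 bu/acre, price $4/bu.
# Covered yield = 75 bu; shortfall = 15 bu; paid at 55% of $4 = $2.20/bu.
payment = cat_indemnity(normal_yield=150, actual_yield=60, est_price=4.00)
print(f"Indemnity per acre: ${payment:.2f}")
```

Buy-up coverage follows the same shape with a higher yield fraction (up to 0.85) and a price fraction up to 1.0, at partially subsidized premium rates.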
RMA’s risk assessment and rate-setting methodology is complex because the risk of growing a particular crop varies by county, farm, and farmer. Because of all the possible combinations involved, hundreds of thousands of rates are in place. Each year, RMA follows a multistep process to establish rates for each crop included in the program. The process involves establishing base rates for each county-crop combination and adjusting these base rates for a number of factors, such as coverage and production levels. In addition, rates are adjusted to account for the legislated limitations in price increases. For each crop, RMA extracts data on counties’ crop experience from its historical database. The data elements for each crop, crop year, and county include (1) the dollar amount of the insurance coverage sold, (2) the dollar amount of the claims paid, and (3) the average coverage level. The historical data are adjusted to the 65 percent coverage level (the most commonly purchased level of coverage) so that liability and claims data at different coverage levels can be combined to develop rates. Using the adjusted data, FCIC computes the loss-cost ratio for each crop in each county. The loss-cost ratio is calculated by dividing the total claim payments by the total insurance in force; the result is stated as a percentage. To reduce the impact a single year will have on the average loss-cost ratio of each county, RMA caps the adjusted average loss-cost ratio for any single year at 80 percent of all years. To establish the base rate for each county, the average for all the years since 1975 is calculated using the capped loss-cost ratios and a weighting process to minimize the differences in rates among counties. Rates are further adjusted by a disaster reserve factor, a surcharge for catastrophic coverage for each crop based on pooled losses at the state level, a prevented planting factor, farm divisions, crop type, and differences in both average yield and coverage levels. 
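The core of the base-rate steps described above can be sketched with hypothetical county data: compute each year's loss-cost ratio (claims paid divided by insurance in force, as a percentage), cap outlier years, and average. The cap level here is illustrative, and the coverage-level adjustment and cross-county weighting that RMA applies are omitted:

```python
# Hypothetical crop-year history for one county-crop combination:
# (total claims paid, total insurance in force). Illustrative only.
county_history = [
    (1_200_000, 40_000_000),
    (600_000, 42_000_000),
    (9_000_000, 45_000_000),  # a catastrophic year
    (800_000, 50_000_000),
]

# Loss-cost ratio per year: claims / insurance in force, as a percent.
loss_costs = [claims / liability * 100 for claims, liability in county_history]

# Cap any single year so one disaster does not dominate the average.
# The 10% cap is a hypothetical stand-in for RMA's actual capping rule.
cap = 10.0
capped = [min(lc, cap) for lc in loss_costs]

# Simple (unweighted) average of the capped ratios as the base rate.
base_rate = sum(capped) / len(capped)
print(f"Yearly loss-cost ratios (%): {[round(lc, 2) for lc in loss_costs]}")
print(f"Base rate (%): {base_rate:.2f}")
```

Without the cap, the single catastrophic year (a 20 percent loss cost) would more than double this county's base rate, which is the distortion the capping step is meant to limit.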
The crop insurance program is financed primarily through general fund appropriations and farmer-paid premiums. In addition to the premiums paid by producers, FCIC receives an annual appropriation to cover necessary costs for the program’s premium subsidies, excess losses, delivery expenses, and other authorized expenses. According to USDA budget documents, for fiscal year 2005, insurance premium and administrative fee revenue from farmers was approximately $2.1 billion, and gross claims equaled almost $3.3 billion. Total government operating costs in fiscal year 2005 were approximately $3 billion. RMA is required to set crop insurance premiums at actuarially sufficient rates, defined as a long-run loss ratio target of no more than 1.075. From its initial expansion in 1981 through 1994, the crop insurance program had an average loss ratio of 1.47 and paid roughly $3.2 billion in claims in excess of subsidized premium income during that period. From 1995 to 2005, the program had an average loss ratio of 0.91 and collected roughly $2.7 billion in subsidized premiums in excess of claims during that period. Excluding subsidies and measuring performance on the basis of producer premiums, from 1981 to 1994 the crop insurance program averaged a loss ratio of 1.93 and paid roughly $5.2 billion in claims in excess of producer premiums over that period; from 1995 to 2005, the program averaged a loss ratio of 2.15 and paid roughly $14.2 billion in claims in excess of producer premiums during that period. Generally, producers can purchase crop insurance to insure up to 85 percent of their normal harvest (yield), based on production history. In 2007, the USDA expects the FCIC to provide $48 billion in risk protection on 287 million acres nationwide, which represents approximately 80 percent of the nation’s acres planted to principal crops. The USDA estimates this level of coverage will cost the federal government $4.2 billion in 2007. 
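The loss-ratio measure used throughout this discussion is simply claims paid divided by premium income. As an arithmetic check, the 1981 through 1994 figures cited above (a 1.47 average ratio with about $3.2 billion in claims beyond subsidized premiums) imply premium income of roughly $6.8 billion; the premium and claims figures below are that inference, not numbers reported by USDA:

```python
def loss_ratio(claims_paid, premium_income):
    """Claims paid divided by premium income; above 1.0 means the
    program paid out more in claims than it collected in premiums."""
    return claims_paid / premium_income

# Roughly reconstructing the 1981-1994 period: about $10.0 billion in
# claims against about $6.8 billion in subsidized premiums would yield
# a ~1.47 ratio and ~$3.2 billion in excess claims. Inferred figures.
claims, premiums = 10.0e9, 6.8e9
ratio = loss_ratio(claims, premiums)
excess_claims = claims - premiums

print(f"Loss ratio: {ratio:.2f}")
print(f"Claims in excess of premiums: ${excess_claims / 1e9:.1f} billion")
```

Measured against producer premiums alone (excluding subsidies), the same claims divide by a smaller denominator, which is why the producer-premium loss ratios cited above are substantially higher.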
Munich Re, one of the world’s largest reinsurance companies, and the University of Colorado jointly convened an international workshop on climate change and disaster loss trends in May 2006 in Hohenkammer, Germany. The workshop brought together 32 experts in the fields of climatology and disaster research from 13 countries. White papers were prepared and circulated by 25 participants in advance of the workshop and formed the basis of the discussions. In the course of the event, participants developed a list of statements, each representing a consensus among participants on issues of research and policy as related to the workshop’s two central organizing questions: (1) What factors account for increasing costs of weather-related disasters in recent decades? and (2) What are the implications of these understandings for both research and policy?

Consensus (unanimous) statements of the workshop participants:

1. Climate change is real, and has a significant human component related to greenhouse gases.
2. Direct economic losses of global disasters have increased in recent decades, with particularly large increases since the 1980s.
3. The increases in disaster losses primarily result from weather-related events, in particular storms and floods.
4. Climate change and variability are factors which influence trends in disasters.
5. Although there are peer-reviewed papers indicating trends in storms and floods, there is still scientific debate over the attribution to anthropogenic climate change or natural climate variability. There is also concern over geophysical data quality.
6. IPCC (2001) did not achieve detection and attribution of trends in extreme events at the global level.
7. High-quality, long-term disaster loss records exist, some of which are suitable for research purposes, such as to identify the effects of climate and/or climate change on the loss records.
8. Analyses of long-term records of disaster losses indicate that societal change and economic development are the principal factors responsible for the documented increasing losses to date.
9. The vulnerability of communities to natural disasters is determined by their economic development and other social characteristics.
10. There is evidence that changing patterns of extreme events are drivers for recent increases in global losses.
11. Because of issues related to data quality, the stochastic nature of extreme event impacts, length of time series, and various societal factors present in the disaster loss record, it is still not possible to determine the portion of the increase in damages that might be attributed to climate change due to greenhouse gas emissions.
12. For future decades, the IPCC (2001) expects increases in the occurrence and/or intensity of some extreme events as a result of anthropogenic climate change. Such increases will further increase losses in the absence of disaster reduction measures.
13. In the near future, the quantitative link (attribution) of trends in storm and flood losses to climate changes related to greenhouse gas emissions is unlikely to be answered unequivocally.
14. Adaptation to extreme weather events should play a central role in reducing societal vulnerabilities to climate and climate change.
15. Mitigation of greenhouse gas emissions should also play a central role in response to anthropogenic climate change, though it does not have an effect on the hazard risk for several decades.
16. We recommend further research on different combinations of adaptation and mitigation policies.
17. We recommend the creation of an open-source disaster database according to agreed-upon standards.
18. In addition to fundamental research on climate, research priorities should consider needs of decision makers in areas related to both adaptation and mitigation.
19. For improved understanding of loss trends, there is a need to continue to collect and improve long-term and homogeneous data sets related to both climate parameters and disaster losses.
20. The community needs to agree upon peer-reviewed procedures for normalizing economic loss data.

The following are GAO’s comments on the U.S. Department of Agriculture’s letter dated February 23, 2007.

1. We agree that the loss experiences of NFIP, FCIC, and private insurers are distinct and sought to reflect these distinctions in our draft report. For example, we acknowledged on page 23 of the draft the specific distinction USDA highlights—that the main cause of catastrophic losses for FCIC is drought in the nation’s interior (see pages 24 and 25 of this document). Despite these and other differences, however, we believe the report’s findings and underlying message are still applicable to the NFIP, the FCIC, and private insurers.
2. Our analysis of insured losses does not attempt to attribute increases in past losses to changes in the severity of weather events in the data sets we reviewed, as implied by the comment. Moreover, we acknowledge that the increase in FCIC’s losses (indemnities) largely reflected the rapid growth of the crop insurance program. However, given the IPCC’s projections for potential increases in the frequency and severity of weather-related events—including those that affect crops—we believe that the program’s loss ratio—which captures only the program’s historical performance under past climatic and market conditions—is a potentially misleading metric upon which to base a prospective assessment of FCIC’s future weather-related risk.
3. We acknowledged these activities in the draft report. 
However, we believe that USDA’s actions are limited in scope, focusing almost exclusively on actuarial performance and not on the potential implications of climate change for FCIC’s operations (i.e., changes in the frequency and severity of weather-related events, weather variability, growing seasons, and pest infestations). Accordingly, we believe the program should do more to prospectively assess the implications of climate change.
4. We employed the IPCC’s definition of climate change, which includes statistically significant variations in climate, brought on by factors that are both internal and external to the earth’s climate system, and that persist over time—typically decades or longer. Under this definition, the Atlantic hurricane cycle, as with other significant variations that are understood to be internal to the earth’s climate system, can be considered a climatic change. Our use of the definition was corroborated by a senior NOAA scientist.
5. We updated our discussion of FCIC’s modeling activities (see page 36) to reflect this hurricane model. However, as stated on page 22, 75 percent of FCIC’s claims were associated with drought, excess moisture, and hail from 1980 to 2005, whereas hurricanes were associated with a much smaller portion of FCIC’s claims during this period. Accordingly, we believe that applying more sophisticated, prospective risk assessment techniques (such as those used in FCIC’s hurricane model) to drought, moisture, and hail events would allow for a far more useful assessment of the potential implications of climate change for FCIC’s operations.

The following are GAO’s comments on the Department of Commerce’s letter dated February 26, 2007.

1. We agree that a clear and accurate definition of climate change is a necessary prerequisite for any discussion of the issue. While a variety of definitions for the term are in use, we did not attempt to independently define the term. Rather, we relied upon the IPCC’s most current publicly available definition.
2. We revised the introductory statement referred to in Commerce’s comments for editorial purposes (see page 2). To the extent practicable, we also incorporated the Working Group I Summary for Policymakers of the IPCC’s Fourth Assessment Report into the detailed discussion of the potential changes in the frequency and severity of weather-related events identified in the 2001 Third Assessment Report (see pages 8 to 13).
3. We included an elaboration on page 14 of how altering the frequency and severity of weather-related events is linked to risk.
4. It was outside the scope of this report to conduct our own quantitative trend analysis of the relative roles of societal factors (such as development or agricultural prices) and climate change in shaping the increases in weather-related insured losses observed in the data. In response to the comment, however, we clarified which studies we reviewed that addressed this question, both for coastal hazards (such as hurricanes) and inland hazards (such as drought and excess moisture).

In addition to the individual named above, Steve Elstein, Assistant Director; Chase Huntley; Alison O’Neill; Michael Sagalow; and Lisa Van Arsdale made key contributions to this report. Charles Bausell, Jr.; Christine Bonham; Mark Braza; Lawrence Cluff; Arthur James, Jr.; Marisa London; Justin Monroe; and Greg Marchand also made important contributions to this report. We also wish to give special tribute to our dear friend and colleague, Curtis Groves, who died many years too soon after a long battle with multiple myeloma near the conclusion of our work.

National Flood Insurance Program: New Processes Aided Hurricane Katrina Claims Handling, but FEMA’s Oversight Should Be Improved. GAO-07-169. Washington, D.C.: December 15, 2006. 
Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. Crop Insurance: More Needs To Be Done to Reduce Program’s Vulnerability to Fraud, Waste, and Abuse. GAO-06-878T. Washington, D.C.: June 15, 2006. High-Risk Program. GAO-06-497T. Washington, D.C.: March 15, 2006. Federal Emergency Management Agency: Improvements Needed to Enhance Oversight and Management of the National Flood Insurance Program. GAO-06-119. Washington, D.C.: October 18, 2005. Crop Insurance: Actions Needed to Reduce Program’s Vulnerability to Fraud, Waste, and Abuse. GAO-05-528. Washington, D.C.: September 30, 2005. Catastrophe Risk: U.S. and European Approaches to Insure Natural Catastrophe and Terrorism Risks. GAO-05-199. Washington, D.C.: February 28, 2005. 21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 2005. Climate Change: Information on Three Air Pollutants’ Climate Effects and Emissions Trends. GAO-03-25. Washington, D.C.: April 28, 2003.
Weather-related events have cost the nation billions of dollars in damages over the past decade. Many of these losses are borne by private insurers and by two federal insurance programs--the National Flood Insurance Program (NFIP), which insures properties against flooding, and the Federal Crop Insurance Corporation (FCIC), which insures crops against drought or other weather disasters. GAO was asked to (1) describe how climate change may affect future weather-related losses, (2) determine past insured weather-related losses, and (3) determine what major private insurers and federal insurers are doing to prepare for potential increases in such losses. In response, among other things, GAO reviewed key scientific assessments; analyzed insured loss data; and contacted private insurers, NFIP, and FCIC. Key scientific assessments report that the effects of climate change on weather-related events and, subsequently, insured and uninsured losses, could be significant. The global average surface temperature has increased by 0.74 degrees Celsius over the past 100 years and climate models predict additional, perhaps accelerating, increases in temperature. The key assessments GAO reviewed generally found that rising temperatures are expected to increase the frequency and severity of damaging weather-related events, such as flooding or drought, although the timing and magnitude are as yet undetermined. Additional research on the effect of increasing temperatures on weather events is expected in the near future, including a highly anticipated assessment of the state of climate science this year. Taken together, private and federal insurers paid more than $320 billion in claims on weather-related losses from 1980 to 2005. Claims varied significantly from year to year--largely due to the effects of catastrophic weather events such as hurricanes and droughts--but have generally increased during this period. 
The growth in population in hazard-prone areas and resulting real estate development have generally increased liabilities for insurers, and have helped to explain the increase in losses. Due to these and other factors, federal insurers' exposure has grown substantially. Since 1980, NFIP's exposure quadrupled, nearing $1 trillion in 2005, and program expansion increased FCIC's exposure 26-fold to $44 billion. Major private and federal insurers are both exposed to the effects of climate change over coming decades, but are responding differently. Many large private insurers are incorporating climate change into their annual risk management practices, and some are addressing it strategically by assessing its potential long-term industry-wide impacts. The two major federal insurance programs, however, have done little to develop comparable information. GAO acknowledges that the federal insurance programs are not profit-oriented, like private insurers. Nonetheless, a strategic analysis of the potential implications of climate change for the major federal insurance programs would help the Congress manage an emerging high-risk area with significant implications for the nation's growing fiscal imbalance.
A component of DOJ, BOP has obligations to confine offenders in a controlled, safe, and humane prison environment, while providing a safe workplace where officers can perform their duties without fear of injury or assault. In fiscal year 2010, $6.2 billion was appropriated for BOP to carry out its mission. For all 116 of its institutions, BOP has dedicated an average of almost $17 million annually from fiscal year 2000 through 2010 to expenditures that include protective equipment for its officers. In fiscal year 2010, BOP oversaw more than 209,000 inmates, housing more than 170,000 of these inmates in its 116 institutions. In addition, BOP utilizes privately managed secure facilities; residential re-entry centers—also known as halfway houses; bed space secured through agreements with state and local entities; and home confinement to house inmates. In fiscal year 2010, more than 22,000 inmates—or about 11 percent of the 209,000 inmates overseen by BOP—were housed in privately managed facilities, while more than 14,000—or about 7 percent—were housed in residential re-entry centers, bed space secured through agreements with state or local entities, or home confinement. BOP’s 116 institutions generally have one of four security level designations: minimum, low, medium, and high. The designations depend on the level of security and staff supervision the institution is able to provide, such as the presence of security towers; perimeter barriers; the type of inmate housing, including dormitory, cubicle, or cell-type housing; and the staff-to-inmate ratio. Further, BOP designates some of its institutions as administrative institutions, which specifically serve inmates awaiting trial, or those with intensive medical or mental health conditions, regardless of the level of supervision these inmates require. 
To determine the institution in which an inmate is confined, BOP considers the level of security and supervision the inmate requires and that the institution is able to provide; the inmate’s rehabilitation needs; the level of overcrowding at the institution; and any recommendations from the court at the inmate’s sentencing. Table 1 depicts the percentage of inmates incarcerated in BOP institutions, by security level of the institution, in fiscal year 2010. Since fiscal year 2000, BOP’s inmate population has grown by 45 percent, as shown in figure 1. See appendix II for information on the characteristics of BOP’s inmate population. BOP tracks information related to inmate assaults on staff in two data systems: SENTRY and TRUINTEL. First created in 1974, BOP’s SENTRY system maintains most of BOP’s operational and management information, such as inmate data and property management data. According to the Acting Director of BOP’s Office of Research and Evaluation (ORE), SENTRY was updated in 1997 to capture reports of inmate incidents, including assaults on staff. Assaults on staff can include a variety of violent acts. For instance, BOP officials with whom we spoke provided examples of assaults, such as stabbing a staff member with a homemade weapon, punching or kicking staff, or throwing bodily fluids on a staff member. Assaults are classified as serious or less serious based upon the injury sustained or intended as a result of the assault. For instance, officials at one BOP institution reported that they would classify an assault in which an inmate threw food at an officer as a less serious assault, but an assault in which the officer was stabbed as a serious assault. To report an inmate assault on a BOP staff member in SENTRY, BOP instructs its personnel to follow the procedures for incident reporting and investigations described in BOP’s Program Statement on Inmate Discipline and Special Housing Units. Figure 2 depicts this process. 
In addition to the information captured in SENTRY, BOP’s TRUINTEL system—created in October 2009—provides BOP with a number of capabilities, including an intelligence-gathering function that provides real-time information on assaults on staff. Correctional Services Branch officials reported that, unlike SENTRY, TRUINTEL captures only data from the initial incident report and is not updated based on the subsequent investigation or hearings related to the assault. According to these officials, TRUINTEL allows managers at BOP institutions to see trends in incidents, including assaults, across BOP institutions. The Correctional Services Branch officials stated that if an assault on an officer occurs, an individual at the institution—generally the lieutenant on duty—completes a Form 583 Report of Incident (Form 583) in the TRUINTEL system, indicating that the incident was an assault on staff. The lieutenant also records information on the incident’s cause, such as alcohol or a disrespect issue; the inmate(s) involved in the assault; whether restraints were applied to the inmate; and whether any lethal or less-than-lethal weapons were used to resolve the incident. The officer involved in the assault may also submit a description of the incident, which is entered into the Form 583. After the lieutenant completes the Form 583, the institution’s captain generally reviews the report before it is reviewed and finalized by the institution’s warden. Once the warden finalizes the Form 583, managers across BOP institutions can view the information in the TRUINTEL system. Further, following any incident involving an officer’s use of force against an inmate, such as the use of a less-than-lethal weapon, BOP requires that a Form 586 After Action Review Report be completed in TRUINTEL. To complete this report, an after action review committee first meets to review the incident. 
The facility’s warden, the associate warden responsible for correctional services, the health services administrator, and the captain comprise this review committee; their purpose is to assess why the staff involved took the actions, and used the equipment, that they did. The committee also determines whether these actions, including the use of any equipment, were appropriate given BOP policy. Since BOP’s inmate population changes each year, BOP calculates the rate of inmate assaults—both of a serious and less serious nature—per 5,000 inmates incarcerated, based on the information submitted in its SENTRY system. For example, in fiscal year 2010, the total number of assaults on staff was almost 1,700, for a rate of about 49 serious and less serious assaults per 5,000 inmates. Figure 3 displays the serious and less serious assaults on BOP staff, as recorded in SENTRY, from fiscal year 2000 through 2010. As the trends illustrate, less serious assaults have followed a generally upward trend, while serious assaults have fluctuated less over time. According to BOP officials from the Correctional Services Branch, upward trends in assault data may be influenced by a number of factors, including the number of inmates affiliated with gangs, the staff-to-inmate ratio in the institutions experiencing assaults, or the opening of additional BOP institutions, because inmates incarcerated in these new institutions are not familiar with each other, which can lead to initial tension between the inmates. Correspondingly, the officials explained that a decrease in assaults may be a result of the inmate population at a new institution stabilizing and becoming less tense. 
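The per-5,000-inmate normalization described above is simple arithmetic; a minimal sketch follows. The counts below are illustrative round numbers drawn from the report's fiscal year 2010 figures ("almost 1,700" assaults; "more than 170,000" inmates housed in BOP institutions), not exact BOP data:

```python
# Sketch of BOP's assault-rate normalization: assaults per 5,000 inmates.
# The inputs are illustrative round numbers based on the report's fiscal
# year 2010 figures, not exact BOP data.

def assaults_per_5000(assaults: int, inmate_population: int) -> float:
    """Normalize an annual assault count to a rate per 5,000 inmates."""
    return assaults / inmate_population * 5_000

rate = assaults_per_5000(assaults=1_700, inmate_population=170_000)
print(f"about {rate:.0f} assaults per 5,000 inmates")  # prints "about 50 assaults per 5,000 inmates"
```

Normalizing by population in this way is what keeps the year-to-year comparisons in figure 3 meaningful as the inmate population grows.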
In addition, the officials reported that the downward trend in assaults from 2009 to 2010 may be related to BOP creating Special Management Units (SMU) to house inmates who present unique security and management concerns, such as those who participated or had a leadership role in gang activity, by removing them from other BOP facilities. While these data systems track inmate assaults on staff while staff are on duty, officers may also encounter former inmates or inmates’ families or associates while in the community, including while commuting to and from work. In part due to these potential threats to officers’ safety in their communities, the Law Enforcement Officers Safety Act of 2004 (LEOSA) was passed. LEOSA exempts qualified law enforcement officers and qualified retired law enforcement officers from state and local laws that prohibit carrying concealed firearms. BOP staff who have primary and secondary law enforcement status are “qualified law enforcement officers” as defined by statute and qualify to carry concealed firearms. However, with limited exceptions, BOP prohibits anyone, including officers, from storing on institution property personal firearms carried while commuting to and from work. In addition to BOP, other federal government and nongovernmental organizations also engage in activities that relate to officer safety. The National Institute of Justice (NIJ) is DOJ’s research, development, and evaluation component. In addition to awarding grants and cooperative agreements to research, develop, and evaluate criminal justice programs, NIJ coordinates various technical working groups composed of subject matter experts who work in the field of criminal justice to address a variety of law enforcement issues. Three of NIJ’s technical working groups relevant to officer safety in correctional settings are: Institutional Corrections, Personal Protective Equipment, and Less Lethal Technologies. 
Further, NIJ funds the National Law Enforcement and Corrections Technology Center (NLECTC), which assists state, local, tribal, and federal correctional agencies, as well as law enforcement and criminal justice agencies, in addressing technology needs and challenges, which can help address officer safety. In addition, BOP’s National Institute of Corrections (NIC) provides training, technical assistance, information services, and policy and program development assistance to federal, state, and local correctional agencies. The NIC also maintains an extensive library of research and evaluations related to corrections, including those related to officer safety. Further, the Office of Law Enforcement Standards within the National Institute of Standards and Technology (NIST), an agency of the Department of Commerce, helps criminal justice, public safety, emergency responder, and homeland security agencies make decisions, primarily by developing performance standards, measurement tools, operating procedures, and equipment guidelines. For instance, NIST has conducted research on the long-term durability of body armor, which is worn by correctional officers to ensure their safety. The American Correctional Association’s (ACA) Commission on Accreditation provides all accreditations for BOP institutions. The ACA’s standards provide guidance to all correctional organizations on correctional issues such as programming, officer staffing, and officer safety. In order for a correctional institution to be accredited by the ACA, the institution must show compliance in key areas, including officer safety. Additionally, the Council of Prison Locals (CPL) is the union that represents employees within BOP’s bargaining unit, which includes correctional officers. The CPL is a part of the American Federation of Government Employees (AFGE), a union that represents federal government employees. 
There are 105 local CPL branches nationwide that represent employees from BOP’s 116 facilities and advocate for the interests of their constituents, including on officer safety issues. In addition to BOP’s role in ensuring the safety of federal correctional officers, state departments of corrections work to ensure the safety of correctional officers working in state institutions. All 50 states have agencies that are responsible for housing the state’s inmate populations. See appendix III for the inmate populations and characteristics in these states as of December 31, 2009. BOP and the selected states with whom we spoke provide their officers with a variety of equipment to protect them. BOP generally requires officers working within the secure perimeter of its institutions to carry a radio, body alarm, and keys while on duty. BOP also provides officers with the option to carry flashlights and wear stab-resistant vests. This policy regarding the equipment worn or carried by officers is largely consistent across BOP facilities. Further, with limited exceptions, BOP prohibits anyone, including officers, from storing on facility property personal firearms that officers carried while commuting to and from work. States have discretion over the equipment they make available to their officers, and officials in the 14 states with whom we spoke provided examples of three types of equipment they allow their officers to carry while on duty that BOP generally does not, including pepper spray and batons. In addition, officials from 9 of the 14 states reported that they allow their officers to store personal firearms that they have carried when commuting to and from work on facility property, which BOP generally does not. 
However, BOP and states provide similar equipment and weapons—such as less-than-lethal launchers, shotguns, or rubber bullets—to protect their officers in an emergency situation, which can include responding to an inmate riot or attack, removing a noncompliant inmate from a cell, or capturing an escaping inmate. Most BOP officers and union officials with whom we spoke reported that carrying additional equipment while on duty and while commuting to and from work would better protect officers, while BOP management largely reported that officers did not need to carry additional equipment in order to better ensure their safety. BOP officers working within the secure perimeter of a BOP institution are generally required to carry a radio, body alarm, and keys while on duty. In addition, officers have the option to carry a flashlight, handcuffs, latex or leather gloves, or a stab-resistant vest. These policies are largely consistent across BOP institutions, although officers in certain posts carry additional equipment beyond what the typical officer carries. For instance, officers in armed posts carry a lethal weapon and have the option to wear a ballistic vest while on duty. Further, institutions can request waivers to permit their officers to carry or wear additional equipment. According to BOP officials in the Correctional Services Branch, such waivers are granted when the institution demonstrates that it has a unique need to deviate from BOP’s national policy. For example, BOP approved a waiver for officers working at BOP’s Administrative Maximum (ADX) institution in Florence, Colorado, which houses inmates requiring the tightest controls in BOP, to carry batons while on duty. Similarly, officers working with inmates in SMUs, which house inmates that present unique security and management concerns, such as those who participated or had a leadership role in gang activity, were also granted a waiver to carry batons while on duty. 
According to BOP, it has granted 5 institutions waivers related to officers carrying additional equipment. These waivers include permitting officers in the ADX and SMUs to carry batons inside the institutions. In addition, BOP granted waivers allowing officers patrolling the perimeter of 3 institutions located in downtown areas to carry smaller canisters of pepper spray than those in BOP’s inventory because the larger size was too cumbersome. Further, BOP reported that it has granted waivers to 25 institutions permitting them to store less-than-lethal munitions closer to, or in some cases inside, Special Housing Units (SHU) in order to provide officers more rapid access to the equipment. State departments of corrections determine the type of equipment their officers carry, and officials in the 14 states with whom we spoke provided examples of three types of equipment that they made available to their officers working within the secure perimeter of the institution to carry or wear while on duty that BOP generally does not. For example, officials from 10 states reported that their officers were permitted to carry pepper spray. In the case of pepper spray and other equipment, state officials told us that it may be carried or worn by all officers in the state; optional for officers; or dependent on the security level of the institution in which the officer works, the officer’s post, or the warden’s discretion. Table 2 displays the equipment that BOP routinely provides to the majority of its officers to carry or wear while on duty, and the number of officials from the 14 states reporting that their officers carry or wear this equipment. According to BOP officials with whom we spoke, officers carry limited equipment while on duty because BOP stresses the importance of officers communicating with inmates to ensure officer safety. 
For instance, management officials at one BOP institution explained that, regardless of the amount of equipment officers carry, inmates will always outnumber officers. Therefore, the officers’ ability to manage the inmates through effective communication, rather than the use of equipment, is essential to ensuring officer safety. BOP officials reported that carrying additional equipment would impede this communication. For example, according to officials from the Correctional Services Branch, if officers carried equipment in addition to what BOP currently provides, the officers may rely more on this equipment than on their communication with inmates to resolve a situation. Further, officials in 9 of the 14 states with whom we spoke reported that they allow their officers to store on facility property personal firearms that they have carried while commuting to and from work, while BOP, with limited exceptions, does not allow its officers to store such personal weapons. Specifically, BOP policy prohibits anyone, including officers, from bringing personal firearms into or onto the grounds of any BOP institution without the knowledge or consent of the warden, or storing personal firearms in any vehicle parked on BOP property. According to an official from the Correctional Services Branch, BOP does not permit officers to store personal weapons on BOP property because visitors or inmates working on the institution grounds may be able to gain access to the weapon, which would threaten the security of all individuals at the institution. See table 3 for the state departments of corrections’ policies pertaining to personal firearms storage on facility property. BOP’s policy prohibiting officers from storing personal firearms on BOP property is largely consistent across its institutions; however, there are limited exceptions to this policy. For instance, BOP policy permits wardens to allow officers to bring personal firearms onto BOP grounds. 
As such, in 1995, the warden at BOP’s Metropolitan Detention Center (MDC) in Guaynabo, Puerto Rico, issued a local policy permitting officers to store personal firearms in a personal weapons locker outside the facility’s secure perimeter while on duty. According to the policy, to store a personal firearm in the MDC’s gun locker, officers must first submit a request to the MDC’s security officer through the MDC’s captain. The request must contain the brand, caliber, and serial number of each weapon to be stored, as well as the number and expiration date of the officer’s permit to carry a firearm. Once the request is approved, the officer receives a key to a locked box within the personal weapons locker. To access the personal weapons locker, the officer must first be identified by staff in the MDC’s control room on a camera located outside the personal weapons locker. Once identified, the officer is granted access to the personal weapons locker and must log his or her entry in and out of the locker in a log book located inside the locker. Figure 4 depicts the MDC’s personal weapons locker and an open locker. According to officials at the Guaynabo MDC, the policy was enacted when the MDC was constructing an armory and requested approval to build the personal weapons locker attached to the armory; the policy is reviewed annually. The officials reported that officers at the MDC at the time were concerned for their safety due to criminal activity surrounding the institution. For instance, the officials reported that an associate warden at the institution was the victim of an attempted carjacking when leaving work. In addition, officers residing in housing located on BOP property—known as reservation housing—are prohibited from storing personal firearms in their housing, and are instead required to place personal firearms in the institution’s armory for safekeeping. 
According to BOP, as of January 2011, 32 of its 116 institutions have reservation housing available, and officers at 14 of these 32 institutions store personal firearms in the institution’s armory. The number of firearms stored in the armories at these 14 institutions ranges from 1 to 32, with an average of about 10. Moreover, BOP has leased parking space for its officers on non-BOP property at 5 of BOP’s institutions, to which BOP’s policy prohibiting the storage of personal weapons does not apply. Depending on the laws of the state in which the officers work, they may legally be able to store their personal firearms in their cars while on duty. In contrast to what officers carry on a routine basis, in cases of emergency, such as an inmate riot or attack, BOP provides officers with access to a variety of equipment that is largely consistent with what our selected state departments of corrections provide. This equipment includes less-than-lethal weapons, protective gear, and lethal weapons. The equipment is located in specific locations throughout the institutions, such as in secure control rooms, watchtowers in the institutions’ yards, or in the institutions’ armories outside the secure perimeter. Table 4 shows the type of equipment that BOP makes available to its officers in an emergency and the number of officials in the 14 states with whom we spoke who also reported making it available. The 68 officers, officials from six unions, and management officials from BOP’s Correctional Services Branch and the eight BOP institutions with whom we spoke had different opinions about whether additional equipment would better protect officers. As shown in figure 5, most officers and all the union officials with whom we spoke reported that additional equipment would enhance officer safety, while most management officials reported that additional equipment would not enhance officer safety. 
The officers and officials who said that carrying additional equipment would better ensure safety reported that officer safety would be enhanced if officers carried pepper spray (41 of 45 officers, all union officials, and management officials from one BOP institution); batons (15 of 45 officers); TASERs (4 of 45 officers); or a portable phone (1 officer). Moreover, the officers and officials cited a number of safety benefits to this additional equipment. For instance, 9 officers, officials from four unions, and management at one BOP facility reported that carrying additional equipment would allow officers to defend themselves in case of an attack by an inmate. Four officers reported that carrying additional equipment would help officers deter inmates from engaging in disruptive behavior. For example, 1 officer stated that if an inmate saw an officer carrying a baton, the inmate would be less likely to do something wrong. Further, 4 officers reported that carrying additional equipment could help officers to prevent injuries to inmates, as they could break up fights between inmates more quickly with the additional equipment on hand. However, 7 officers and officials from two unions expressed the need for officers to be trained on the additional equipment in order to enhance their safety. Five officers also reported that the need to carry additional equipment would depend on the situation. Specifically, 4 of the 5 noted that it could particularly aid officers whose posts included open recreational yards where inmates congregate and the potential for fighting or misconduct was greater. Eighteen officers and eight BOP management officials who reported that carrying additional equipment would not enhance officer safety cited concerns with the additional equipment. Specifically, officers most frequently cited concerns that the equipment could be taken from the officer and used against him or her by an inmate. 
BOP management officials most frequently reported that carrying additional equipment might hinder officers’ communication with inmates, either because the officer would be more likely to utilize the equipment to prevent an inmate from engaging in misconduct than to talk with the inmate, or because the inmate would perceive officers carrying additional equipment as more threatening and be less willing to engage in communication with officers. Similarly, the 68 officers, officials from six unions, and management officials from BOP’s Correctional Services Branch and the eight BOP institutions with whom we spoke had different opinions about whether safety is a concern for officers while they are commuting to and from work. As displayed in figure 6, all of the union officials with whom we spoke reported that safety is a concern for officers when commuting to and from work, most BOP management officials reported that it was not, and the officers with whom we spoke were evenly split regarding safety concerns while commuting to and from work. The officers and officials reporting safety concerns most frequently cited the presence of former inmates, inmates’ families, or associates of inmates in the communities in which officers work who may wish to harm the officers. For instance, one officer explained that he has confiscated contraband from inmates during visiting hours, then later saw the visitors in the community and felt concerned that the visitors might retaliate. In addition, 2 officers and officials from two unions reported that officers’ safety may be at risk when they are wearing their uniforms, because they are recognizable as BOP officers or as other law enforcement personnel. Further, 4 officers, officials from one union, and BOP management officials from one institution cited crime in the community or the lack of security in the employee parking lot as a safety concern for officers while commuting to and from work. 
The 33 officers who reported that safety while commuting to and from work was not a concern cited a number of reasons, including living in close proximity to the institution in which they work; working in an institution that is in a quiet, non-urban setting; the local community’s positive perception of officers; and officers’ good relationship with inmates. Management officials also reported that officers often change out of their uniforms when commuting to and from work, which mitigates safety concerns during the commute. Given the varying opinions regarding officer safety concerns while commuting to and from work, the officers, union officials, and BOP management officials with whom we spoke also reported different opinions about whether allowing officers to carry personal firearms to and from work and store them on BOP property would enhance officer safety. As shown in figure 7, most officers and all union officials reported that being permitted to store personal firearms on BOP property would enhance officer safety, while most BOP management officials reported that doing so would not enhance officer safety. Of the 50 officers reporting that allowing officers to store personal firearms on BOP property would enhance their safety, 7 told us that they would not take advantage of this policy if it were instituted, though they did not elaborate, and another 2 expressed the need for additional training on the firearms before the policy is implemented. Another 7 officers told us that allowing officers to store personal firearms on BOP property would enhance officer safety at institutions other than their own; these officers reported that the ability to carry a personal firearm to work and store it on BOP property was not necessary to ensure their safety at the institution at which they currently work, but stated that such a policy would better ensure the safety of other officers, such as those working at institutions in large cities. 
The 7 officers and six BOP management officials who told us that allowing officers to store personal firearms on BOP property would not enhance officer safety explained their reasons. These reasons included officers not needing to carry firearms during their commute because danger is minimal, if not nonexistent; officers having the potential to misuse firearms if not properly trained; and inmates potentially obtaining the firearms if stored in officers’ cars or carried into the facility. Further, 2 officers at one BOP institution and 2 officers and union officials at a second BOP institution cited additional safety measures that would enhance officer safety while officers are commuting to and from work that did not involve authorization to carry weapons while commuting. Three of these officers and the union officials reported that increased monitoring of the parking lot and checks on visitors’ cars would improve officer safety. One of these three officers and the union officials also stated that posting a guard at the entrance to an institution would enhance officer safety. Finally, one officer told us that staggering officers’ shifts with visiting hours would help improve safety because it would help ensure that visitors would not be able to identify the officers’ cars and then follow them while the officers are off duty. BOP and states provide a variety of equipment to their officers to ensure their safety; however, none of the BOP officials, state correctional officials, or correctional experts with whom we spoke reported that they were aware of or had conducted evaluations of the effectiveness of equipment in ensuring officer safety. 
Correctional equipment experts from the National Law Enforcement and Corrections Technology Center (NLECTC), the National Institute of Standards and Technology (NIST), and the National Institute of Justice (NIJ) reported to us that, if BOP were to acquire new equipment, it would need to consider factors such as training, replacement, and maintenance costs; potential liability issues; whether the equipment met technical performance standards; and the benefits and risks of using the equipment. BOP officials from the Correctional Services Branch and BOP’s Office of Security Technology—which is responsible for identifying and evaluating new security-related equipment—reported that their offices had not assessed whether the equipment BOP provides to its officers has improved the officers’ safety. Similarly, officials from NIJ, DOJ’s research, development, and evaluation agency, told us that NIJ has not conducted any evaluations of the effectiveness of the set of equipment that BOP uses in ensuring the safety of its officers. Moreover, BOP’s NIC, which provides technical assistance, training, and information to BOP and state and local correctional agencies, found no record of studies related to officer safety. In addition, officials from BOP’s Office of Research and Evaluation (ORE), which conducts research and evaluations on behalf of BOP, reported that ORE had not conducted such studies. According to BOP’s mission statement, BOP protects society by confining offenders in prisons that are, among other things, safe, cost-efficient, and appropriately secure. Further, BOP states in its vision statement that it will know that it has realized these goals when, among other things, the workplace is safe, staff perform their duties without fear of injury or assault, and BOP is a model of cost-efficient correctional operations. In addition, DOJ stresses the importance of evidence-based knowledge in achieving its mission. 
For instance, when soliciting federally funded research in crime and justice, DOJ’s Office of Justice Programs (OJP) states that it supports DOJ’s mission by sponsoring research to provide objective, independent, evidence-based knowledge to meet the challenges of crime and justice. According to OJP, practices are evidence based when their effectiveness has been demonstrated by causal evidence, generally obtained through outcome evaluations, which documents a relationship between an intervention—including technology—and its intended outcome, while ruling out, to the extent possible, alternative explanations for the outcome. Standards for Internal Control in the Federal Government state that managers need to compare actual performance to planned or expected results throughout the organization and analyze significant differences, and that program managers need both operational and financial data to determine whether they are meeting their agencies’ strategic and annual performance plans and meeting their goals for accountability for effective and efficient use of resources. Given that BOP’s SENTRY and TRUINTEL systems maintain data on inmates and related incidents, including assaults on officers and the equipment officers use in instances where they use force against an inmate, ORE officials reported that such data would allow them to assess the effectiveness of equipment in ensuring officer safety, even though they told us that this assessment may be time-intensive. Further, BOP officials from the Office of Security Technology reported that, while they do not assess the impact of equipment on officer safety, they obtain information about the equipment’s performance by obtaining feedback from those using the equipment at their facilities, such as during a pilot test, and by testing whether the equipment performs in accordance with the manufacturer’s intent. 
While information obtained from these methods helps inform the officials about staff perspectives on the usefulness of the equipment and the equipment’s performance, these methods do not provide information about the equipment’s impact on officer safety. Given BOP’s rising inmate population and the increasing number of inmates per BOP staff member, assessing the effectiveness of officer equipment in a range of scenarios and settings could help BOP better understand which of the equipment it currently provides—or could provide to officers—improves officer safety. For instance, such an assessment might indicate whether the use of a certain piece of equipment appears to prevent injuries, or whether one type of equipment appears to have a greater impact on reducing assaults on officers than another. Conducting such an assessment also could better position BOP to achieve its goal of operating in a cost-efficient manner by effectively targeting limited resources to those equipment investments that clearly demonstrate protective benefit. Officials from the NLECTC, NIST, and NIJ reported that if BOP acquired new equipment, it would need to consider the price of the new equipment as well as factors such as training, replacement, and maintenance costs; potential liability issues; and whether the equipment met technical performance standards. Additionally, these organizations suggested that any decision must first be based on a close examination of the benefits and risks of using certain types of equipment. Officials from the NLECTC emphasized the need to examine other costs related to equipment acquisition, such as new officer training related to the equipment, and costs related to the frequency of replacing equipment, such as canisters of pepper spray that must be replaced once used or other munitions with contents that must be refilled to maintain their potency. 
The NLECTC also explained that there are liability issues a facility or a state can incur if officers misuse equipment, are subsequently sued by inmates for their actions, and are compelled to pay associated legal expenses. Officials from NIST stated that it is important to ensure that any new equipment considered meets the technical performance standards, if any, associated with certain types of equipment. For example, officials noted that adherence to standards when purchasing bulletproof vests is critical to ensuring that the materials used in vests have been proven to stop bullets. In addition, experts from NIJ’s Institutional Corrections Technology Working Group suggested assessing where in the field of corrections less-than-lethal weapons have been used and whether the benefits of using certain less-than-lethal weapons outweigh the risks. Table 5 provides examples of what officials from BOP and the 14 state DOCs included in this review cited as benefits and risks associated with the use of specific types of less-than-lethal weapons. BOP officials from the Correctional Services Branch stated that they first establish whether new or additional equipment is needed through a variety of means. For example, officials said they obtain information from BOP’s Office of Security Technology about the performance of the equipment, such as through a pilot test; identify trends in institutions’ incident data; and review feedback from officers and other BOP staff on how well the current inventory of equipment is meeting their needs. Officials stated that the next steps involve reviewing factors such as equipment benefits, risks, and costs related to training and maintenance. Officials also noted that before they acquire new equipment, it must undergo a legal review by BOP’s Office of General Counsel. 
Equipment available to officers is one important part of officer safety; however, other factors—such as those related to the movement of inmates throughout the facility and the skills and training of prison personnel—affect both officers’ safety and the overall safety of the institution. BOP has conducted evaluations to measure the impact on officer safety, among other outcomes, of several efforts it has undertaken to address such institutional factors, and officials report using these evaluations to inform BOP operations. Throughout our audit work, we asked the BOP and state correctional officials with whom we spoke to identify institutional factors that impact officer safety, as well as efforts they have made to mitigate these factors’ consequences. We then analyzed their responses and found 14 common institutional factors that BOP and state officials identified. In order to determine which of the 14 factors have the greatest impact on officer safety, we surveyed 30 correctional accrediting experts at the ACA and asked them to rank which of the factors—if they existed in an institution—would pose the greatest threat to officer safety. We received responses from 21 experts, who also provided examples of efforts to address these factors that they believe to be cost effective—that is, efforts that strike a balance between their effectiveness in addressing the factor and their implementation costs. See appendix IV for a copy of our survey and appendix V for a full description of each of the 14 factors identified by BOP and state correctional officials. These experts most frequently reported that the existence of ineffective inmate management, insufficient officer training, inmate gangs, correctional officer understaffing, and inmate overcrowding in an institution would most affect officers’ safety.

At one BOP facility we visited, lines are painted on the floor on both sides of a corridor to distinguish the areas where inmates walk from the areas designated for staff. 
Ineffective Inmate Management: Inmate management refers to the various strategies employed to control and manage the inmate population within a facility. For example, if inmates are not managed effectively, there could be instances where groups of inmates are allowed to congregate, which could lead to increased tension and violence. At one BOP facility, a race riot between Aryan Brotherhood and African-American inmates broke out in the recreation yard on Adolf Hitler’s birthday in April 2008, resulting in injuries and two inmate deaths. The warden reported that after fences were put up to separate the recreation yard into sections, assaults decreased. Fifteen of the 21 correctional experts reported that ineffective inmate management is one of the most important factors that could jeopardize officer safety. Further, these experts identified examples of potential cost-effective efforts to manage inmates effectively. One expert reported that managers should assess the risk of housing certain inmates together; once management assesses its population, it can control inmates’ movement accordingly. Another suggested that institutions use video cameras and a “pass” system, which allows only those authorized to enter or exit (i.e., pass through) a certain area, to improve monitoring of inmates’ movement. Further, one expert stated that institutions should control inmate movement times and only allow inmates to move when authorized, such as at the top of the hour, while restricting all other movement unless an inmate is accompanied by an escort or otherwise authorized in advance. BOP and state officials reported making efforts to address inmate management. For instance, BOP employs a direct supervision strategy in which officers interact and communicate frequently with inmates, and officials from 3 of the 14 states with whom we spoke reported that they also employ direct supervision over inmates. 
Officials from another 2 of the 14 states reported that they employ an indirect supervision strategy that minimizes the interaction between officers and inmates by having the supervision take place in a centralized control center within the housing unit. A lieutenant at a facility in one of these states explained that because the facility houses a large number of violent inmates, the state has chosen to apply a less direct supervisory approach to minimize inmate and officer contact. See appendix III for characteristics of state inmates, including types of offenses. Insufficient Correctional Training: Insufficient correctional training refers to a level of training that does not adequately prepare officers to fulfill their duties at their assigned post or other collateral duties they may be asked to perform. For example, one officer we spoke with stated that he felt that officers did not receive enough self-defense training, which he indicated would have prevented some of the assaults on staff in his facility, since officers would not have had to depend on equipment or backup from other staff to protect themselves.

BOP’s Special Operations Response Teams (SORT) attend Crisis Management Training, an intensive, full-time, weeklong program that trains officers in certain specialized skills, such as escorting high-risk inmates, conducting hostage negotiations, and breaching prison doors and fences, among other skills.

Seven of the 21 correctional experts reported that inadequate officer training—if it exists within an institution—is one of the most important factors jeopardizing officer safety because it could result in officers not having the knowledge and skills to perform their duties safely and effectively. These experts identified some examples of potential cost-effective efforts to address insufficient correctional officer training when it exists in an institution. 
Two experts emphasized the need to leverage training provided by local law enforcement agencies, or training provided at no cost to the facility, such as curricula offered through NIC. Another expert recommended that institutions call upon the local law enforcement community for assistance or sharing of training. Both this expert and 2 others recommended the use of computer-based training to expand staff access to resources, make training available “anytime,” and combat officer complacency. Officials from BOP and the 14 state DOCs all agreed on the importance of training, and none of the officials identified their officer training programs as insufficient. However, 8 of the 68 officers we spoke with criticized the training they receive. To ensure that their officers receive adequate training, BOP and the 14 state DOCs included in our review require that officers complete some form of training prior to working with inmates in a facility. Such training is usually conducted through an academy that can last from 2 to 16 weeks, depending on the prison system. BOP’s training courses at the academy include self-defense, “use of force” policies, and gang control, in addition to any required firearms certification, and officers also receive training at the facility in which they will be working. In addition, in BOP and 9 of the 14 states with whom we spoke, officers benefit from on-the-job training programs, usually conducted through a shadowing program with a more experienced officer or supervisor. Officials from 2 states with whom we spoke reported that they have such a program and that it has helped them address staffing issues because officers in training provide additional support on a given shift. BOP officers are also required to complete some form of refresher training annually. 
Further, officers who are members of their institution’s Disturbance Control Team (DCT) or Special Operations Response Teams (SORT) receive additional training on a more frequent basis. Both BOP and state institutions have such teams of officers, which are responsible for various duties. Inmate Gangs: Inmate gangs are organized factions of inmates inside a prison, which can be based on an inmate’s race, religion, or geographic origin, and are commonly referred to in corrections as security threat groups (STG). Many STGs parallel existing street gangs, such as the Bloods and the Crips. These STGs exist primarily to offer protection to their members from other STGs and to transport and distribute drugs. For example, the warden at one BOP facility told us that gang participation often encourages inmates to be violent and defiant toward staff and other inmates in order to gain respect from other gang members. Seven of the 21 correctional experts reported that the presence of inmate gangs in prisons is one of the most important factors impacting officer safety, and they identified some examples of potential cost-effective efforts to address inmate gangs. According to one expert, institutions should employ phone systems that allow inmates to call a hotline to talk about gangs; track and manage gang activity and provide this information to hotline staff; and provide training to staff receiving this information or observing suspicious activities. Further, another expert suggested the use of computer-assisted tracking of whom gang leaders are calling and writing to. According to the expert, this electronic mapping of community linkages (prison to the streets) can assist prison staff and law enforcement in monitoring illegal activity and possibly disrupting it. Another expert stated that proper supervision and staff training are critical to controlling gangs, and that gangs cannot be tolerated. 
In addition, one expert reported that institutions must not allow any type of gang displays, and should transfer gang members to different institutions frequently in order to disrupt gang organization. Officials at two of the eight BOP institutions and 3 of the 14 states with whom we spoke described specific efforts to identify and manage STGs in order to enhance officer safety and prevent prison violence. For instance, both BOP and the California state prison system reported that they identify STG members when the members enter the system through clothing insignia, tattoos, or peer associations, and note whether an inmate is identified as a member. One California official reported that the state strives to segregate inmate STG members from other members of their own STGs or rival STGs, to the extent possible. Further, an official from one state reported that some of that state’s institutions have a housing unit program dedicated solely to STG members, where they offer assistance aimed at rehabilitating the inmates and drawing them away from STGs. Officials in another state prison system with whom we spoke reported that they manage their STG population by segregating the gang members. Correctional Officer Understaffing: Correctional officer understaffing is a level of officer staffing that is perceived to be inadequate to prevent violence and maintain a safe facility, usually measured by the inmate-to-staff ratio. Specifically, BOP’s ORE conducted a study in 2005, entitled “The Effects of Changing Crowding and Staffing Levels in Federal Prisons on Inmate Violence Rates,” which found that lower staffing levels (i.e., higher inmate-to-staff ratios) are correlated with increases in the level of inmate violence in BOP institutions. However, not all officers and officials we spoke with agreed that understaffing impacted officer safety at BOP institutions. 
For instance, the officers we spoke with most frequently reported understaffing as a factor impacting their safety (39 of 68 officers), with many citing concerns about staffing levels during the evening and night shifts, when there is no other support staff present in the unit, while management at two of the eight BOP institutions we visited reported that the current staffing levels at their institutions are adequate to maintain a safe facility.

Texas has evaluated the staffing levels and duty posts at each facility across its system, allowing it to look at how many staff each facility has and where the staff are located, and to ask whether the staff are posted where they are needed, given inmate movement during daily operations. As a result, Texas was able to identify some posts that it no longer needed and to add posts it needed but did not have.

Despite the potential variation in perceptions, 8 of the 21 correctional experts reported that officer understaffing is one of the most important factors jeopardizing officer safety and identified some examples of potential cost-effective efforts to address correctional officer understaffing. One expert commented that prisons need to embrace technologies such as cameras on walls, and to utilize better designs to eliminate blind spots. Another expert stated that in many facilities, correctional officers perform support functions, such as paperwork, that may be effectively done by other staff earning lower salaries. However, the expert commented that hiring too many support staff to perform these functions could affect the ability of a correctional organization to hire more officers. In addition, another expert stated that having officers work 12-hour shifts would increase the staff on each shift. Another expert opined that the most effective strategy is a careful analysis of the institutional officer posts that involves key stakeholders, such as management and officers, and establishes mandatory minimum post numbers, adding more posts only as staffing levels permit. 
Officials at two of the eight BOP institutions and 4 of the 14 states with whom we spoke reported employing efforts to address officer understaffing. For example, according to BOP management officials at one institution that has multiple facilities in one location, called a complex, management has implemented a staffing plan referred to as consolidation, which allows them to fill in staffing shortages in one facility with officers from another facility within a complex. BOP management at this institution cited consolidation as an economical strategy to fill critical need posts because they do not have to pay officers overtime. However, BOP union officials at two complexes we visited and 4 of the 68 officers we spoke with expressed unease over the consolidation policy, voicing concerns that they at times feel less safe when sent to work in facilities where they are not as familiar with the inmates. For example, at one complex we visited, an officer reported that he was transferred from a medium security facility to cover shifts at the high security facility. This officer shared concerns that because he does not work with high security inmates on a regular basis, he lacks the opportunity to become familiar with various inmates who pose a greater security threat. Inmate Overcrowding: Inmate overcrowding exists when the number of inmates housed in a facility exceeds the facility’s rated capacity. BOP defines rated capacity as the number of prisoners that the institution is built to house safely and securely and with adequate access to services providing necessities for daily living and programs designed to support prisoners’ crime-free return to the community. 
In testimony before the House of Representatives Subcommittee on Commerce, Justice, Science, and Related Agencies in 2009, the BOP Director stated that correctional administrators agree that overcrowding contributes to greater tension, frustration, and anger among the inmate population, which leads to conflict and violence as the inmates’ ability to access basic services is hindered. Further, as BOP described in its 2005 study, where overcrowded conditions exist, more inmates share cells and other living units, and are thus brought together for longer periods with more high-risk, violent inmates, creating more potential victims. According to this report, BOP found that an increase in the inmate population as a percentage of a facility’s rated capacity directly correlates with an increase in inmate violence. Seven of the 21 correctional experts reported that overcrowding is one of the most important factors jeopardizing officer safety. These experts identified some examples of potential cost-effective efforts to address inmate overcrowding. For instance, one expert recommended that inmate programs be carried out in shifts, from the early morning to the late evening, in order to divide the inmates between idle time and program time. To address overcrowding, officials from one of the BOP institutions and 3 of the 14 states with whom we spoke reported converting community space, such as television rooms, into inmate cells to accommodate a larger inmate population. This has resulted in trade-offs—to make room in existing housing units to accommodate growing inmate populations, the number of televisions inmates have available to watch has been reduced, which can increase tensions and threaten safety. Further, officials from three of the eight BOP institutions and 3 of the 14 states with whom we spoke stated that they have resorted to double or even triple bunking cells to accommodate the increasing inmate population. 
This occurs not only within units that house inmates from the general population, but also in the special housing units where inmates are sent for administrative detention or disciplinary segregation. According to BOP, the trade-off for accommodating a growing population by double and triple bunking cells is the increased level of stress and conflict among inmates that results from living in such close quarters with others. However, not all prison systems are experiencing overcrowding; in fact, some states, such as Michigan, are experiencing a reduction in their inmate populations. States have employed a variety of mechanisms to reduce their inmate populations in order to alleviate overcrowding, such as reviewing inmates who may be eligible for parole or considering sentence reductions. An official in Michigan with whom we spoke attributed the decline in inmate population in his state to the success of the state’s re-entry programs for inmates, which have reduced recidivism and the violations of parole or probation that often bring former inmates back to prison. BOP’s ORE has conducted evaluations to measure the impact of several efforts on officer safety, among other outcomes, and officials report using these evaluations to inform BOP operations. For instance, in 2001, ORE conducted a study empirically evaluating the effectiveness of BOP’s substance abuse treatment program in reducing prisoner misconduct, which is closely related to officer safety. The study found that treatment program graduates were 74 percent less likely to engage in misconduct between program graduation and release from prison than a comparison group. In addition, in a 2008 study of BOP’s pilot faith-based residential program, called Life Connections, BOP’s ORE found that Life Connections participants were less likely to engage in serious misconduct while in the program. 
Further, ORE has recently begun collecting data for a study to measure the impact of its special management units (SMU)—separate housing for inmates presenting unique security and management concerns, such as those who participated in or had a leadership role in gang activity—on misconduct rates at both the institutions from which the inmates were removed and the SMUs into which they were placed. According to the Deputy Assistant Director of BOP’s Information, Policy, and Public Affairs Division, ORE provides interim data and its final evaluations to the BOP Director and executive staff members, as well as to NIC—whose director is a member of BOP’s executive staff—and other DOJ components, such as OJP. Further, ORE requires its staff to publish their work to make it available publicly to the larger correctional community. This official reported that BOP’s Director and executive staff use information from ORE for a variety of purposes, including operational decision-making and budget formulation. For example, this official reported that ORE provides the BOP Director and the executive staff with interim information related to its ongoing SMU evaluation, which gives BOP management real-time information to guide its decisions related to the SMUs. In addition, in its 2011 Budget Justification, BOP cited findings from ORE’s study of the Life Connections Program, which demonstrated reductions in serious inmate misconduct, in providing its rationale for funding inmate programs. Further, the official reported that, when faced with budget constraints, BOP decided to eliminate its intensive confinement centers—or “boot camps”—after an ORE study found that BOP’s boot camps were not effective at reducing re-arrest. With an increasing inmate population in BOP institutions, officer safety is continuously at risk. 
To protect officers from a range of threats, BOP has taken steps such as providing officers with additional equipment to access in an emergency and routinely conducting officer training to enhance on-the-job responsiveness. Further, in limited cases, BOP has obtained information about the performance of equipment through pilot tests, officer surveys, and comparisons to manufacturer specifications. In addition, BOP has conducted studies looking at whether its efforts to address institutional factors have impacted inmate violence. However, it is difficult for BOP to determine the impact on officer safety of the equipment it provides because it has not used the data it already collects for this evaluative purpose. By conducting evidence-based evaluative research into which equipment effectively protects officers, BOP could be better positioned to dedicate resources to the equipment that has the greatest impact on safety. To capitalize on the data BOP already collects and to further DOJ’s evaluation efforts, we recommend that the Attorney General direct the Director of BOP to leverage existing BOP data systems, such as TRUINTEL and SENTRY, as well as the institutional expertise available through NIJ and NIC, as appropriate, to assess the impact of the equipment BOP has provided or could provide to its officers to better protect them in a range of scenarios and settings. We received written comments on a draft of this report from BOP, which are reproduced in full in appendix VI. BOP concurred with our recommendation and stated that, with the assistance of NIJ and/or NIC, it will conduct a study to evaluate the impact of protective equipment on officer safety. BOP and NIJ also provided technical comments on the report, which we incorporated as appropriate. We are sending copies of this report to the Attorney General and interested congressional committees. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. 
Should you or your staff have any questions concerning this report, please contact David Maurer at (202) 512-9627 or by email at [email protected]. Contact points from our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. In this report, we describe the equipment available to protect officers as well as other institutional factors, such as inmate overcrowding and staffing shortages, that affect officer safety. Specifically, this report addresses the following questions: What equipment do the Bureau of Prisons (BOP) and selected states provide to protect officers and what are the opinions of BOP officers and other correctional practitioners regarding this equipment? To what extent has BOP evaluated the effectiveness of its equipment in ensuring officer safety, and what do correctional equipment experts report as important factors when considering the purchase of new equipment? What institutional factors do correctional accrediting experts report as most impacting officer safety, and to what extent has BOP evaluated the effectiveness of the steps it has taken to address these factors? To address all of our objectives, we reviewed existing BOP policies and procedures, such as BOP Program Statements and institution-specific policies, to catalogue the equipment BOP provides to officers and the measures it has implemented to address institutional factors affecting officer safety system-wide. We also interviewed BOP central management, such as officials from the Correctional Services Branch, who help ensure that national policies and procedures are in place that provide a safe, secure institutional environment for inmates and staff, and the Office of Security Technology, who identify and evaluate new security-related equipment. 
In addition, we interviewed officials from the Office of Research and Evaluation, who produce reports and also research corrections-related topics. During these interviews, we discussed BOP’s existing officer safety practices; the institutional factors they report as affecting officer safety; their views on the effectiveness of the equipment BOP provides and of the measures it has implemented to address these institutional factors; and their mechanisms for evaluating this effectiveness. We compared BOP’s mechanisms for evaluating the effectiveness of its practices in ensuring officer safety to BOP’s and DOJ’s mission statements and to Standards for Internal Control in the Federal Government. Further, we visited a total of eight BOP institutions, at least one in each of BOP’s six regions. During these visits, we interviewed BOP institutional management officials and observed officer safety practices so that we could accurately reflect BOP management views on officer safety. To obtain the views of officers regarding their safety, we also conducted semistructured interviews with 68 officers who were on duty at the time of our visits. The officers were chosen at random, but were generally posted to the institutions’ housing units or yards. In selecting the institutions to visit, we considered factors such as their location, staff-to-inmate ratio, level of overcrowding, number of assaults on staff, and security level. These institutions included Atwater U.S. Penitentiary (USP) and Victorville Federal Correctional Complex (FCC) in California; Florence FCC in Colorado; Allenwood FCC in Pennsylvania; Guaynabo Metropolitan Detention Center (MDC) in Puerto Rico; Beaumont FCC and Houston Federal Detention Center (FDC) in Texas; and Lee USP in Virginia. 
Because we used a nonprobability sample, our results are not generalizable to all BOP institutions; however, our interviews provided us with insights into the perspectives of management officials and officers at BOP institutions regarding officer safety. In addition, we contacted the 15 state DOCs with the largest inmate populations and conducted semistructured interviews with 14 of these 15 DOCs. These 15 states were Alabama, Arizona, California, Florida, Georgia, Illinois, Louisiana, Michigan, Missouri, New York, North Carolina, Ohio, Pennsylvania, Texas, and Virginia. During these interviews, state DOC officials identified the equipment their officers use and their perceptions of the equipment’s effectiveness in protecting their officers. In connection with our BOP site visits, we also visited state institutions in 5 of these states: Corcoran State Prison in California, Central Florida Reception Center in Florida, Graterford State Correctional Institution in Pennsylvania, Darrington Unit in Texas, and Coffeewood Correctional Center in Virginia. Because of the large number of correctional organizations in the United States, we used nonprobability sampling, which limits the ability to extrapolate the findings in this report to all correctional organizations. However, this information provided useful insight into state correctional practices. We also interviewed union officials from the Council of Prison Locals, which represents BOP officers, including officials at the national union as well as local union officials at five of the eight BOP institutions we visited, in order to obtain their perspectives about the institutional factors they report as affecting officer safety, the measures in place to address these factors, and the equipment BOP uses to protect officers. 
In addition, we interviewed officials from correctional organizations to determine the institutional factors they report as affecting officer safety, as well as their perspectives on the equipment used to protect officers, the effectiveness of this equipment, and BOP and state officer safety practices. These organizations included the American Correctional Association (ACA), BOP’s National Institute of Corrections (NIC), and the Association of State Correctional Administrators (ASCA). We selected these organizations based on recommendations from the correctional officials with whom we spoke, including BOP and state officials. Because we selected a nonprobability sample of officials at correctional organizations, these opinions are not generalizable. However, they provided important insights into BOP and state correctional practices. We also conducted a literature search to identify and obtain evaluations of the effectiveness of BOP or state officer safety practices, such as those conducted by the states’ or DOJ’s inspectors general. To further address our second objective, we interviewed correctional equipment experts from DOJ’s National Institute of Justice (NIJ), NIJ’s National Law Enforcement and Corrections Technology Center (NLECTC), and the Department of Commerce’s National Institute of Standards and Technology (NIST). We chose officials from these organizations because of their expertise in correctional equipment. During these interviews, we obtained the officials’ perspectives on the factors BOP would need to consider if it acquired additional personal protective equipment for its officers. Because we selected a nonprobability sample of correctional equipment experts, these perspectives are not generalizable. However, they provided valuable insights into equipment considerations.
To further address our third objective, we identified 14 institutional factors that BOP, state DOCs, and correctional experts reported as most affecting officer safety. We then surveyed a panel of 30 correctional accrediting experts who serve as audit chairs for the ACA’s Commission on Accreditation concerning the list of 14 institutional factors that BOP and state DOC officials perceived as affecting officer safety. The ACA audit chairs ranked which of these factors most affect officer safety when the factors exist in a correctional institution. The ACA audit chairs also provided a list of cost-effective strategies that could be used to address these factors. We selected the ACA audit chairs based upon their expertise in advising the ACA Commission on Accreditation as to which correctional institutions in the United States, including BOP institutions, should be accredited. The e-mail-based survey was launched on December 10, 2010, and by the close of the survey on December 22, 2010, we had received 21 responses from the 30 experts, for a response rate of 70 percent. We sent one follow-up e-mail to the experts on December 16, 2010. Because our survey was not a sample survey, there are no sampling errors; however, the practical difficulties of conducting any survey may introduce nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages to minimize such nonsampling errors. In addition, we collaborated with a social science survey specialist to design the survey instrument, and we pretested the survey with a subject matter expert at ACA with over 30 years of experience in corrections. Based on this pretest, we made revisions as necessary. See appendix IV for a copy of our survey.
We conducted this work from June 2010 to April 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The figures below depict trends in the characteristics of the Bureau of Prisons’ (BOP) total inmate population, including inmates housed in privately managed or contracted facilities, in each fiscal year from fiscal year 2000 through 2010. As figure 8 illustrates, the average inmate age increased by more than 2 years from fiscal year 2000 through 2010. As shown in figures 9 and 10, the percentage of inmates by race, ethnicity, and gender remained relatively constant throughout this period. As depicted in figure 11, the types of offenses for which BOP inmates are incarcerated also remained relatively constant, with drug offenses comprising more than half the offenses in each fiscal year from 2000 through 2010. As figure 12 illustrates, the length of the sentences imposed on BOP inmates was generally stable, with a slight increase in longer sentences from fiscal year 2000 through 2010. As shown in figure 13, the percentage of inmates associated with a Security Threat Group fluctuated from fiscal year 2000 through 2010. Specifically, it was generally constant from fiscal year 2000 through 2002, declined slightly in fiscal year 2003, increased steadily through fiscal year 2008, and then declined in fiscal years 2009 and 2010. State departments of corrections are responsible for housing the states’ inmate populations. The table and figures that follow depict the characteristics of state inmates. Inmate populations vary in size across the 50 states.
Table 6 displays the inmate population in each state. Figure 14 presents DOJ Bureau of Justice Statistics estimates of sentenced prisoners under state jurisdiction by race and Hispanic origin. As figure 14 shows, the percentage of Hispanic inmates and inmates of “other” races—including American Indians, Alaska Natives, Native Hawaiians, other Pacific Islanders, and persons identifying as two or more races—under state jurisdiction increased from calendar year 2000 to 2009, while the percentage of black and white inmates decreased or stayed about the same. Figure 15 presents DOJ Bureau of Justice Statistics estimates of sentenced prisoners under state jurisdiction by gender from December 31, 2000, through December 31, 2009. As depicted in figure 15, the gender breakdown remained largely stable over this time period. Figure 16 shows DOJ Bureau of Justice Statistics estimates of the sentenced inmate population under state jurisdiction by the type of offense for which they were convicted, as of the end of 2008, the most recent data available. Based on responses from Bureau of Prisons (BOP) and state correctional officials with whom we spoke, we identified 14 common institutional factors that impact officer safety. To determine which of the 14 factors have the greatest impact on officer safety, we sent the survey below to 30 correctional accrediting experts at the American Correctional Association (ACA) and asked them to rank which of the factors—if they exist in an institution—would pose the greatest threat to officer safety. These experts are the audit chairs for the ACA’s Commission on Accreditation, who advise the commission as to which federal, state, and local correctional institutions should be accredited; we selected them based on this expertise. We received responses from 21 experts, who also provided examples of efforts to address these factors that they believed to be cost-effective. 1.
Which of the following corrections-related positions do you hold? Please check one answer.

2. In general, how much, if at all, does each of the following affect the safety of correctional officers or of other staff performing corrections duties? Please check one answer for each row.
a. Ineffective inmate management (e.g., lack of controlled inmate movement, insufficient supervision of inmates)
b. Insufficient information sharing among managers and staff within institutions
c. Inmate overcrowding
d. Corrections officer under-staffing
e. Insufficient inmate programming (e.g., prison industries, drug rehabilitation, education, recreation)
f. Corrections officer complacency
g. Insufficient corrections training
h. Insufficient discipline of inmates following a violation
i. Intoxicated inmates as a result of inmate-manufactured alcohol
j. Disruptive inmate behavior due to the sale and use of illegal drugs
k. Inmate possession and use of unauthorized communication devices, including cell phones
l. Inmate gangs
m. Inmates dissatisfied with food service
n. Population of inmates with characteristics that may lead to increased violent behavior (e.g., younger age, longer sentences, lack of parole opportunities)

3. If you would like to elaborate on any of the factors above, please do so in the box below. The box will expand as you type.

4. Which three of the following factors do you believe most affect corrections officer safety? Please check three and no more than three factors in the list below.
a. Ineffective inmate management (e.g., lack of controlled inmate movement, insufficient supervision of inmates)
b. Insufficient information sharing among managers and staff within institutions
c. Inmate overcrowding
d. Corrections officer under-staffing
e. Insufficient inmate programming (e.g., prison industries, drug rehabilitation, education, recreation)
f. Corrections officer complacency
g. Insufficient corrections training
h. Insufficient discipline of inmates following a violation
i. Intoxicated inmates as a result of inmate-manufactured alcohol
j. Disruptive inmate behavior due to the sale and use of illegal drugs
k. Inmate possession and use of unauthorized communication devices, including cell phones
l. Inmate gangs
m. Inmates dissatisfied with food service
n. Population of inmates with characteristics that may lead to increased violent behavior (e.g., younger age, longer sentences, lack of parole opportunities)

5. Besides the factors listed above, if there are any other significant factors affecting corrections officer safety, please describe them in the box below. The box will expand as you type.

6. The next questions ask you to provide example(s) of strategies to address each factor that you believe to be cost-effective. Please answer as many as you can.
a. What are example(s) of cost-effective strategies to address Ineffective inmate management (e.g., lack of controlled inmate movement, insufficient supervision of inmates)? The box will expand as you type.
b. What are example(s) of cost-effective strategies to address Insufficient information sharing among managers and staff within institutions? The box will expand as you type.
c. What are example(s) of cost-effective strategies to address Inmate overcrowding? The box will expand as you type.
d. What are example(s) of cost-effective strategies to address Corrections officer under-staffing? The box will expand as you type.
e. What are example(s) of cost-effective strategies to address Insufficient inmate programming (e.g., prison industries, drug rehabilitation, education, recreation)? The box will expand as you type.
f. What are example(s) of cost-effective strategies to address Corrections officer complacency? The box will expand as you type.
g. What are example(s) of cost-effective strategies to address Insufficient corrections training? The box will expand as you type.
h. What are example(s) of cost-effective strategies to address Insufficient discipline of inmates following a violation? The box will expand as you type.
i. What are example(s) of cost-effective strategies to address Intoxicated inmates as a result of inmate-manufactured alcohol? The box will expand as you type.
j. What are example(s) of cost-effective strategies to address Disruptive inmate behavior due to the sale and use of illegal drugs? The box will expand as you type.
k.
What are example(s) of cost-effective strategies to address Inmate possession and use of unauthorized communication devices, including cell phones? The box will expand as you type.
l. What are example(s) of cost-effective strategies to address Inmate gangs? The box will expand as you type.
m. What are example(s) of cost-effective strategies to address Inmates dissatisfied with food service? The box will expand as you type.
n. What are example(s) of cost-effective strategies to address Population of inmates with characteristics that may lead to increased violent behavior (e.g., younger age, longer sentences, lack of parole opportunities)? The box will expand as you type.

7. If you have any additional comments concerning correctional officer safety, please type them in the box below. The box will expand as you type.

Table 7 lists the institutional factors that the officers and officials with whom we spoke reported as affecting officer safety. It also provides examples of strategies to mitigate these factors that BOP or state officials reported using or that the correctional accrediting experts we surveyed suggested. In addition to the contact named above, key contributors to this report were Joy Gambino, Assistant Director; Jill Evancho, Analyst-in-Charge; and Christian Montz, Julia Becker Vieweg, and Miriam Rosenau. Michele Fejfar assisted with design and methodology; Willie Commons III provided legal support; Pedro Almoguera provided economic expertise; and Katherine Davis provided assistance in report preparation.
The Department of Justice's (DOJ) Federal Bureau of Prisons (BOP) manages more than 209,000 inmates, a population that grew 45 percent between fiscal years 2000 and 2010. As the prison population grows, so do concerns about correctional officer safety. As requested, GAO examined (1) the equipment that BOP and selected state departments of corrections (DOC) provide to protect officers, and the officers' and other correctional practitioners' opinions of this equipment; (2) the extent to which BOP has evaluated the effectiveness of this equipment, and the factors correctional equipment experts consider important to the acquisition of new equipment; and (3) the institutional factors correctional accrediting experts reported as impacting officer safety, and the extent to which BOP has evaluated the effectiveness of the steps it has taken in response. GAO reviewed BOP policies and procedures; interviewed BOP officials and officers within BOP's six regions, selected based on such factors as the level of facility overcrowding; interviewed officials at 14 of the 15 largest state DOCs; and surveyed 21 individuals selected for their expertise in corrections. The results of the interviews cannot be generalized, but they provide insight into issues affecting officer safety. BOP and the 14 state DOCs included in GAO's review provide a variety of protective equipment to officers, but BOP officers and management have different views on whether this equipment is sufficient. BOP generally provides officers with radios, body alarms, keys, flashlights, handcuffs, gloves, and stab-resistant vests while on duty, but prohibits them from storing personal firearms on BOP property, with limited exceptions. DOC officials in the 14 states GAO interviewed provided examples of equipment they allow officers to carry while on duty that BOP does not--such as pepper spray--and officials in 9 of the 14 states reported allowing officers to store personal firearms on state DOC property.
BOP and states provide similar equipment to protect officers in an emergency, such as an inmate riot or attack. Most BOP officers with whom GAO spoke reported that carrying additional equipment while on duty and commuting would better protect officers, while BOP management largely reported that officers did not need to carry additional equipment. BOP has not evaluated the effectiveness of the equipment it provides in ensuring officer safety, and correctional equipment experts report that BOP needs to consider a variety of factors in acquisition decisions. Neither the officials nor the experts with whom GAO spoke reported that they were aware of or had conducted evaluations of the effectiveness of equipment in ensuring officer safety, although BOP tracks the information necessary to do so in its data systems. By using information in these existing systems, BOP could analyze the effectiveness of the equipment it distributes in ensuring officer safety, thus helping it determine what additional actions, if any, would further officer safety and better target limited resources. All of the correctional equipment experts GAO spoke with reported that, if BOP acquired new equipment, it would need to consider factors such as training, replacement, maintenance, and liability, as well as whether the equipment met performance standards. These experts suggested that any decision must first be based upon a close examination of the benefits and risks of using certain types of equipment. For example, while state officials reported that pepper spray is inexpensive and effective, a majority of the BOP management officials GAO spoke with stated that it could be taken by inmates and used against officers. Correctional accrediting experts most frequently cited control over the inmate population, officer training, inmate gangs, correctional staffing, and inmate overcrowding as the institutional factors--beyond equipment--most impacting officer safety.
These experts suggested various strategies to address these factors, and BOP reported taking steps to do so, such as conducting annual training on BOP policies, identifying and separating gang members, and converting community space into inmate cells. BOP has assessed the effectiveness of steps it has taken in improving officer safety. For instance, a 2001 BOP study found that inmates who participated in BOP's substance abuse treatment program were less likely than a comparison group to engage in misconduct for the remainder of their sentence following program completion. BOP utilizes such studies to inform its decisions, such as eliminating programs found to be ineffective. GAO recommends that BOP's Director assess whether the equipment intended to improve officer safety has been effective. BOP concurred with this recommendation.
In order for students attending a school to receive Title IV funds, a school must be:
1. licensed or otherwise legally authorized to provide higher education in the state in which it is located,
2. accredited by an agency recognized for that purpose by the Secretary of Education, and
3. deemed eligible and certified to participate in federal student aid programs by Education.
Under the Higher Education Act, Education does not determine the quality of higher education institutions or their programs; rather, it relies on recognized accrediting agencies to do so. As part of its role in the administration of federal student aid programs, Education determines which institutions of higher education are eligible to participate in Title IV programs. Education is responsible for overseeing school compliance with Title IV laws and regulations and ensuring that only eligible students receive federal student aid. As part of its compliance monitoring, Education relies on department employees and independent auditors of schools to conduct program reviews and audits of schools. Institutions that participate in Title IV programs must comply with a range of requirements, including consumer disclosure requirements, which include information schools must make available to third parties, as well as reporting requirements, which include information schools must provide to Education. Congress and the President enact the statutes that create federal programs; these statutes may also authorize or direct a federal agency to develop and issue regulations to implement them. Both the authorizing statute and the implementing regulations may contain requirements that recipients must comply with in order to receive federal funds. The statute itself may impose specific requirements; alternatively, it may set general parameters and the implementing agency may then issue regulations further clarifying the requirements.
Federal agencies may evaluate and modify their regulatory requirements, but they lack the authority to modify requirements imposed by statute. In addition, when issuing rules related to programs authorized under Title IV, Education is generally required by the HEA to use negotiated rulemaking, a process that directly involves stakeholders in drafting proposed regulations. Once the department determines that a rulemaking is necessary, it publishes a notice in the Federal Register, announcing its intent to form a negotiated rulemaking committee, and holds public hearings to seek input on the issues to be negotiated. Stakeholders, who are nominated by the public and selected by Education to serve as negotiators, may include schools and their professional associations, as well as student representatives and other interested parties. A representative from Education and stakeholders work together on a committee that attempts to reach consensus, which Education defines as unanimous agreement on the entire proposed regulatory language. If consensus is reached, Education will generally publish the agreed-upon language as its proposed rule. If consensus is not reached, Education is not bound by the results of the negotiating committee when drafting the proposed rule. According to proponents, the negotiated rulemaking process increases the flow of information between the department and those who must implement requirements. Once a proposed rule is published, Education continues the rulemaking process by providing the public an opportunity to comment before issuing the final rule. The Paperwork Reduction Act (PRA) requires federal agencies to assess and seek public comment on certain kinds of burden, in accordance with its purpose of minimizing the paperwork burden and maximizing the utility of information collected by the federal government. 
Under the PRA, agencies are generally required to seek public comment and obtain Office of Management and Budget (OMB) approval before collecting information from the public, including schools. Agencies seek OMB approval by submitting information collection requests (ICR), which include, among other things, a description of the planned collection efforts, as well as estimates of the burden in terms of time, effort, or financial resources that respondents will expend to gather and submit the information. Agencies are also required to solicit public comment on proposed information collections by publishing notices in the Federal Register. If a proposed information collection is part of a proposed rulemaking, the agency may include the PRA notice for the information collection in the Notice of Proposed Rulemaking for that rule. The PRA authorizes OMB to approve information collections for up to 3 years. Agencies seeking an extension of OMB approval must re-submit an ICR using similar procedures, including soliciting public comment on the continued need for and burden imposed by the information collection. Over the last two decades, there have been several efforts to examine the federal regulatory burden faced by schools (see table 1). While these efforts were intended to make regulations more efficient and less burdensome, several of them also acknowledged that regulation provides benefits to the government and the public at large. The specific results of these initiatives varied, as described below. For example, Executive Order 13563, which was issued in 2011, requires agencies to, among other things, develop plans to periodically review their existing significant regulations and determine whether these regulations should be modified, streamlined, expanded, or repealed to make the agencies’ regulatory programs more effective or less burdensome.
Consistent with the order’s emphasis on public participation in the rulemaking process, OMB guidance encourages agencies to obtain public input on their plans. Although the 18 experts we interviewed offered varied opinions on which Title IV requirements are the most burdensome, 16 said that federal requirements impose burden on postsecondary schools. While no single requirement was cited as most burdensome by a majority of experts, 11 cited various consumer disclosures schools must provide or make available to the public, students, and staff (see table 2). Among other things, these disclosure requirements include providing certain information about schools, such as student enrollment, graduation rates, and cost of attendance. The most frequently mentioned consumer disclosure requirement—cited by 5 experts as burdensome—was the “Clery Act” campus security and crime statistics disclosure requirement. Two experts noted the burden associated with reporting security data, some of which may overlap with federal, state, and local law enforcement agencies. Beyond consumer disclosures, 4 experts stated that schools are burdened by requirements related to the return of unearned Title IV funds to the federal government when a student receiving financial aid withdraws from school. According to 2 experts, schools find it particularly difficult both to calculate the precise amount of funds that should be returned and to determine the date on which a student withdrew. Finally, 6 experts we interviewed stated that, in their view, it is the accumulation of burden imposed by multiple requirements—rather than burden derived from a single requirement—that accounts for the burden felt by postsecondary schools. Three stated that requirements are incrementally added, resulting in increased burden over time. Experts also described some of the benefits associated with Title IV requirements.
For example, one expert stated that requiring schools to disclose information to students to help them understand that they have a responsibility to repay their loans could be beneficial. Another expert noted that consumer disclosures allow students to identify programs relevant to their interests and that they can afford. School officials who participated in our discussion groups told us that Title IV requirements impose burden in a number of ways, as shown in table 3. Participants in all eight groups discussed various requirements that they believe create burden for schools because they are, among other things, too costly and complicated. For example, participants in four groups said the requirement that schools receiving Title IV funds post a net price calculator on their websites—an application that provides consumers with estimates of the costs of attending a school—has proven costly or complicated, noting challenges such as those associated with the web application, obtaining the necessary data, or providing information that may not fit the schools’ circumstances. School officials from six discussion groups also noted that complying with requirements related to the Return of Title IV Funds can be costly because of the time required to calculate how much money should be returned to the federal government (see Appendix III for information on selected comments on specific federal requirements school officials described as burdensome). Participants in six of eight discussion groups said that consumer disclosures were complicated, and participants in seven groups said that Return of Title IV Funds requirements were complicated. For example, participants in one discussion group stated that consumer disclosures are complicated because reporting periods can vary for different types of information. Another explained that the complexity of consumer disclosures is a burden to staff because the information can be difficult to explain to current or prospective students. 
Also, participants in two groups stated that the complexity of consumer disclosures makes it difficult for schools to ensure compliance with the requirements. Likewise, participants noted that calculating the amount of Title IV funds that should be returned can be complicated because of the difficulty of determining the number of days a student attended class as well as the correct number of days in the payment period or period of enrollment for courses that do not span the entire period. Participants in three discussion groups found the complexity of Return of Title IV requirements made it difficult to complete returns within the required time frame. In addition, participants from four groups noted the complexity increases the risk of audit findings, which puts pressure on staff. Discussion group participants identified other types of concerns that apply primarily to consumer disclosures. For example, participants in two groups said that it is burdensome for schools to make public some disclosures, such as graduates’ job placement data, because they cannot easily be compared across schools, thereby defeating the purpose of the information. Like six of the experts we interviewed, participants in six discussion groups noted that burden results from the accumulation of many requirements rather than a few difficult requirements. Two participants said that when new requirements are added, generally, none are taken away. Similarly, two other participants commented that the amount of information schools are required to report grows over time. Another commented that it is difficult to get multiple departments within a school to coordinate in order to comply with the range of requirements to which schools are subject under Title IV. Other federal requirements, in addition to those related to Title IV, may also apply to postsecondary schools (see Appendix IV for selected examples). School officials also described some benefits of Title IV requirements. 
Participants in three discussion groups pointed out that some consumer information can be used to help applicants choose the right school. Other participants commented that consumer disclosures encourage transparency. For example, participants in two groups said the information schools are required to disclose regarding textbooks helps students compare prices and consider the total cost of books. Regarding Return of Title IV Funds, participants in three discussion groups stated that the process helps restore funds to the federal government that can be redirected to other students. Education seeks feedback on burden through formal channels such as publishing notices seeking comments on its burden estimates for proposed information collections, its retrospective analysis plan, and negotiated rulemaking. As shown in table 4, the department publishes notices in the Federal Register, on its website, and through a listserv to make the public aware of opportunities to provide feedback on burden. Department officials also said they receive some feedback from school officials through informal channels such as training sessions and open forums at conferences. Although Education has published notices seeking feedback on burden, officials said the department has received few comments in response to its solicitations. For example, Education said it received no comments in response to its request for public comment on burden estimates included in its 2010 “Program Integrity” Notices of Proposed Rulemaking, which proposed multiple regulatory changes with increased burden estimates. In addition, Education officials said some of the comments they receive about burden estimates are too general to make modifications in response to them. We focused on ICRs submitted by two Education offices that manage postsecondary issues: the Office of Federal Student Aid and the Office of Postsecondary Education.
We selected the time period because it coincides with the 2006 launch of the OMB and General Services Administration web portal, reginfo.gov, used by agencies to electronically post comments and other documents related to information collections; includes the enactment of the Higher Education Opportunity Act in 2008, which resulted in regulatory changes; and includes recently submitted ICRs. See Appendix I for additional information on the types of ICRs included in our review. Our analysis shows that fewer than one-fourth of these ICRs (65 of 353) received public comments, of which 25 included comments that addressed burden faced by schools (see figure 1). For example, 2 ICRs received input on the difficulties of providing data requested by the department. We identified 40 ICRs that received comments, but none addressing burden faced by schools; several ICRs, for example, received input on simplifying the language of student loan–related forms. Further, in a review of the 30 comments received by the department in response to its proposed retrospective analysis plan, we identified 11 comments related to higher education, of which 9 mentioned regulatory burden. For example, one commenter described difficulties that smaller schools may have meeting reporting requirements. Negotiated rulemaking presents another opportunity for schools and others to provide feedback on burden. Six experts and participants in six discussion groups thought aspects of negotiated rulemaking are beneficial overall. However, some experts and discussion group participants said certain aspects of the process may limit the impact of feedback on burden. Specifically, four experts and participants in six of our discussion groups expressed concern that when the negotiated rulemaking process does not achieve consensus, the department may draft regulations without incorporating negotiators’ input, including input that might have addressed burden.
According to those we spoke with, consensus may not be achieved, for example, if Education includes controversial topics over which there is likely to be disagreement or declines to agree with other negotiators. Education officials responded that their goal during negotiated rulemakings is to draft the best language for the regulation. Further, department officials said that negotiators can collectively agree to make changes to the agenda, that unanimous consensus provides negotiators with an incentive to work together, and that the department cannot avoid negotiated rulemaking on controversial topics. Education officials said that when consensus is not achieved, the department rarely deviates from any language agreed upon by negotiators. Notwithstanding the benefits of Title IV requirements, school officials believe that the burden created by federal requirements diverts time and resources from their primary mission of educating students. Our findings—as well as those of previous studies—indicate that the burden reported by school officials and experts stems not only from one or a few individual requirements but also from the accumulation of many requirements. While Education has solicited feedback on the burdens associated with federal requirements, our findings show that stakeholders do not always provide this feedback. As a result, stakeholders may be missing an opportunity to help reduce the burden of federal requirements on schools. We provided a draft of this report to Education for comment. Education’s written comments are reproduced in Appendix II. Education sought a clearer distinction in the report between statutory and regulatory requirements as well as Education’s authority to address statutory requirements. We have added information accordingly. Education also recommended the report distinguish between reporting and disclosure requirements, and we have provided definitions in the background in response. 
Education expressed concern that the report did not sufficiently consider the benefits of federal requirements. We agree that federal requirements generally have a purpose and associated benefits—such as benefits associated with program oversight and consumer awareness—which we acknowledge in our report. Analyzing the costs and benefits associated with individual requirements was beyond the scope of this report, as our primary objective was to obtain stakeholder views on burdens. Education also suggested we report more on its efforts to balance burden and benefits when designing information collections. We acknowledged these efforts in our report and incorporated additional information that Education subsequently provided. Education also provided technical comments that were incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in Appendix V. To identify which, if any, federal requirements experts say create burden for postsecondary schools, we interviewed a range of experts. 
We chose these experts based on factors such as familiarity or experience with Title IV requirements, recognition in the professional community, relevance of their published work to our topic, and recommendations from others. We conducted interviews with representatives of nine higher education associations that represent public, private nonprofit, and private for-profit schools, including associations representing research universities, community colleges, and minority-serving institutions. We also conducted interviews with nine other postsecondary experts, including researchers and officials from individual schools with knowledge of Title IV requirements. Because our review focused on the burden and benefits experts say requirements create, we did not evaluate consumers’ perspectives on information schools provide. To determine the types of burdens and benefits that schools say federal requirements create, we conducted eight discussion groups at two national conferences with a nongeneralizable sample of officials from 51 schools. Discussions were guided by a moderator who used a standardized list of questions to encourage participants to share their thoughts and experiences. To optimize time during each session, we focused part of the discussion on the perceived benefits and burdens associated with one of the two sets of requirements most often cited as burdensome during the interviews we conducted with experts: consumer disclosures and Return of Title IV Funds. Specifically, four groups focused primarily on the burdens and benefits associated with consumer disclosures and four groups focused primarily on Return of Title IV Funds. In addition, each group was provided the opportunity to discuss other requirements that officials found to be burdensome, as well as how, if at all, officials communicate feedback on burden to Education. 
Discussion groups are not an appropriate means to gather generalizable information about school officials’ awareness of feedback opportunities because participants were self-selected and may be more aware of federal requirements and feedback opportunities than others in the population. Methodologically, group discussions are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, they are intended to generate in-depth information about the reasons for the discussion group participants’ attitudes on specific topics and to offer insights into their concerns about and support for an issue. In addition, the discussion groups may be limited because participants represented only those schools that had representatives at the specific conferences we attended and because participants were self-selected volunteers. To determine how Education solicits feedback from stakeholders on burden, we conducted interviews with Education officials and reviewed documentation, such as agency web pages and listserv postings used by Education to inform schools and other interested parties about negotiated rulemaking and information collections. We also solicited the views of experts during interviews, and asked school officials in discussion groups about how, if at all, they communicate feedback on burden to Education. Because participants were self-selected, they may be more likely to be aware of federal requirements and feedback opportunities than the general population. We reviewed Education’s ICRs related to postsecondary education submitted to OMB from August 1, 2006, to October 31, 2012, to determine how many received public comments. 
We also reviewed the ICRs that received comments to determine how many received comments related to burden. To do so, we used OMB’s reginfo.gov website, and took steps to verify the reliability of the database. We interviewed agency officials, tested the reliability of a data field, and reviewed documentation. We found the database to be reliable for our purposes. In our review of ICRs, we included new information collections along with revisions, reinstatements, and extensions of existing information collections without changes. We excluded ICRs that agencies are not required to obtain public comment on, such as those seeking approval of nonsubstantive changes. We also excluded ICRs for which the associated documents did not allow us to interpret the comments. To determine how many ICRs received comments that discussed burden faced by schools, one analyst reviewed comments for each ICR and classified them as being related or not related to the burden faced by schools. Another analyst verified these categorizations and counts. We also reviewed the number and nature of comments on Education’s preliminary plan for retrospective analysis by downloading comments from regulations.gov. We verified with Education the total number of comments received. To determine whether comments discussed burdens faced by schools, one analyst reviewed each comment and classified it as being related or not related to higher education regulations and whether it referenced burden faced by schools. Another analyst verified these categorizations and counts. We did not review comments submitted to Education in response to proposed rules. Education has received thousands of comments in response to proposed regulations in recent years, and the site does not contain a search feature that would have allowed us to distinguish comments regarding burden estimates from other topics. For all objectives, we reviewed relevant federal laws and regulations. 
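The dual-analyst verification step described above can be sketched in code; the function, labels, and data layout below are hypothetical illustrations, not GAO's actual review tooling.

```python
def reconcile(primary, verifier):
    """Flag items where the verifying analyst's classification differs
    from the first analyst's (labels and structure are illustrative).

    primary, verifier: dicts mapping an ICR identifier to a label such
    as "burden" or "not burden".
    Returns {icr_id: (primary_label, verifier_label)} for disagreements.
    """
    return {
        icr: (label, verifier.get(icr))
        for icr, label in primary.items()
        if verifier.get(icr) != label
    }
```

Any item flagged by such a check would go back for adjudication before the counts of burden-related comments were finalized.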
We conducted this performance audit from April 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The table below lists some of the specific concerns expressed by school officials we spoke to in discussion groups in response to questions about burdensome federal requirements. GAO identified statutory or regulatory provisions that relate to the burdens described by school officials and compiled these summaries to better illustrate the underlying requirements about which we received comments. These are only examples, not a list of every requirement specifically reported to us as burdensome. The summaries provided below are not intended to be complete descriptions of each requirement, and additional statutory or regulatory provisions related to these comments may also apply. In some cases a provision may have multiple sources, such as where statutory requirements are further interpreted in a regulation or guidance document.

Discussion Group Participant Concern: Consumer Disclosures. This category encompasses a number of different federal requirements to collect information on various topics and make that information available to specified groups or entities. Students, prospective students, and others can use this information to be better informed. The information can help people make decisions such as whether or not to attend or seek employment at a school. 
Summary of Related Federal Provisions: The statute and regulations require eligible institutions to collect certain information on campus crime statistics and security policies and prepare, publish, and distribute an annual security report to all current students and employees (and to any prospective student or employee upon request). The report must contain, among other information, statistics on certain crimes reported to campus security authorities or local police agencies. 20 U.S.C. § 1092(f)(1)(F), 34 C.F.R. §§ 668.41(e), 668.46. The regulations require that an institution “make a reasonable, good faith effort to obtain the required statistics” and may rely on information supplied by a local or state police agency. “If the institution makes such a reasonable, good faith effort, it is not responsible for the failure of the local or State police agency to supply the required statistics.” 34 C.F.R. § 668.46(c)(9).

Discussion Group Participant Concern: Placement rates. Placement rate calculations are different for different schools or within schools and confusing to students, requiring school staff to provide additional explanation of some data.

Summary of Related Federal Provisions: The statute requires that institutions produce and make readily available upon request—through appropriate publications, mailings, and electronic media—to an enrolled student and to any prospective student the placement in employment of, and types of employment obtained by, graduates of the institution’s degree or certificate programs, gathered from such sources as alumni surveys, student satisfaction surveys, the National Survey of Student Engagement, the Community College Survey of Student Engagement, State data systems, or other relevant sources. 20 U.S.C. § 1092(a)(1)(R). 
According to the regulations, information concerning the placement of, and types of employment obtained by, graduates of the institution’s degree or certificate programs may be gathered from: (1) the institution’s placement rate for any program, if it calculates such a rate; (2) state data systems; (3) alumni or student satisfaction surveys; or (4) other relevant sources. The institution must identify the source of the information provided, as well as any time frames and methodology associated with it. In addition, the institution must disclose any placement rates it calculates. 34 C.F.R. § 668.41(d)(5). Return of Title IV Funds: In general, if a recipient of Title IV grant or loan assistance withdraws from an institution, the statute and regulations establish a procedure for calculating and returning unearned funds. Returning these funds can protect the interests of the federal government and the borrower. The statute provides that, for institutions required to take attendance, the day of withdrawal is determined by the institution from such attendance records. 20 U.S.C. § 1091b(c)(1)(B). The regulations prescribe in further detail which institutions are required to take attendance and how to determine the withdrawal date: For a student who ceases attendance at an institution that is required to take attendance, including a student who does not return from an approved leave of absence, or a student who takes a leave of absence that does not meet the regulatory requirements, the student’s withdrawal date is the last date of academic attendance as determined by the institution from its attendance records. 34 C.F.R. § 668.22(b). “Institutions that are required to take attendance are expected to have a procedure in place for routinely monitoring attendance records to determine in a timely manner when a student withdraws. 
Except in unusual instances, the date of the institution’s determination that the student withdrew should be no later than 14 days (less if the school has a policy requiring determination in fewer than 14 days) after the student’s last date of attendance as determined by the institution from its attendance records.” Federal Student Aid Handbook, June 2012, and Education “Dear Colleague Letters” GEN-04-03 Revised, Nov. 2004, and DCL GEN-11-14, July 20, 2011.

Summary of Related Federal Provisions: An institution is required to return any unearned Title IV funds it is responsible for returning within 45 days of the date the school determined the student withdrew. 20 U.S.C. § 1091b(b)(1), 34 C.F.R. §§ 668.22(j)(1), 668.173(b). For a student who withdraws from a school that is not required to take attendance without providing notification, the school must determine the withdrawal date no later than 30 days after the end of the earlier of (1) the payment period or the period of enrollment (as applicable), (2) the academic year, or (3) the student’s educational program. 34 C.F.R. § 668.22(j)(2). “If a student who began attendance and has not officially withdrawn fails to earn a passing grade in at least one course over an entire period, the institution must assume, for Title IV purposes, that the student has unofficially withdrawn, unless the institution can document that the student completed the period. “In some cases, a school may use its policy for awarding or reporting final grades to determine whether a student who failed to earn a passing grade in any of his or her classes completed the period. For example, a school might have an official grading policy that provides instructors with the ability to differentiate between those students who complete the course but failed to achieve the course objectives and those students who did not complete the course. 
If so, the institution may use its academic policy for awarding final grades to determine that a student who did not receive at least one passing grade nevertheless completed the period. Another school might require instructors to report, for all students awarded a non-passing grade, the student’s last day of attendance (LDA). The school may use this information to determine whether a student who received all “F” grades withdrew. If one instructor reports that the student attended through the end of the period, then the student is not a withdrawal. In the absence of evidence of a last day of attendance at an academically related activity, a school must consider a student who failed to earn a passing grade in all classes to be an unofficial withdrawal.” Federal Student Aid Handbook, June 2012, and Education “Dear Colleague Letter” GEN-04-03 Revised, Nov. 2004.

All references to “statute” or “regulations” are references to the Higher Education Act of 1965 (HEA), as amended, and Education’s implementing regulations. All references to “eligible institutions” refer to eligible institutions participating in Title IV programs, as defined by the HEA, as amended.

Postsecondary schools may be subject to numerous federal requirements in addition to those related to Title IV of the Higher Education Act of 1965, as amended, which may be established by various other statutes or regulations promulgated by different agencies. The specific requirements to which an individual school is subject may depend on a variety of factors, such as whether it conducts certain kinds of research or is tax-exempt (see the following examples). This is not intended to be a comprehensive list; rather the examples were selected to represent the variety of types of requirements to which schools may be subject. 
Nuclear Research: Schools licensed to conduct medical research using nuclear byproduct material must follow Nuclear Regulatory Commission requirements on safety and security, or compatible requirements issued by a state that has entered into an agreement with the Nuclear Regulatory Commission. Schools that house nuclear reactors for research purposes are also subject to additional regulations, including those on emergency management.

Research Misconduct: To receive federal funding under the Public Health Service Act for biomedical or behavioral research, institutions (including colleges and universities) must have written policies and procedures for addressing research misconduct and must submit an annual compliance report to the federal government. The Public Health Service has issued regulations detailing institutions’ responsibilities in complying with these requirements.

Research on Animals: Applicants for funding for biomedical or behavioral research under the Public Health Service Act must provide an assurance to the National Institutes of Health that the research entity complies with the Animal Welfare Act and the Public Health Service Policy on Humane Care and Use of Laboratory Animals, and that it has appointed an appropriate oversight committee (an Institutional Animal Care and Use Committee). The oversight committee must review the care and treatment of animals in all animal study areas and facilities of the research entity at least semi-annually to ensure compliance with the Policy.

Employment Discrimination: Title VII of the Civil Rights Act of 1964, as amended, prohibits employment practices that discriminate based on race, color, religion, sex and national origin. These requirements apply to schools that qualify as employers as defined by Title VII, generally including private and state or local employers that employ 15 or more employees.

Disabilities: The Americans with Disabilities Act of 1990 prohibits discrimination against individuals with disabilities in several areas, including employment, state and local government activities, and public accommodations. 42 U.S.C. §§ 12101–12213. Different agencies administer different aspects of the Americans with Disabilities Act, including the Equal Employment Opportunity Commission and the Department of Justice. In addition, section 504 of the Rehabilitation Act of 1973, as amended, prohibits discrimination on the basis of disability under any program or activity that receives federal financial assistance. Colleges, universities, other postsecondary institutions, and public institutions of higher education are subject to these requirements.

Sex Discrimination: Title IX of the Education Amendments of 1972 prohibits discrimination on the basis of sex in any federally funded education program or activity. Title IX applies, with a few specific exceptions, to all aspects of education programs or activities that receive federal financial assistance, including athletics.

Byrd Amendment: Educational institutions that receive federal funds must hold an annual educational program on the U.S. Constitution.

Internal Revenue Service Form 990: Schools that have tax-exempt status generally must annually file IRS Form 990. The form requires a range of information on the organization’s exempt and other activities, finances, governance, compliance with certain federal tax requirements, and compensation paid to certain persons.

In addition to the contact named above, Bryon Gordon (Assistant Director), Debra Prescott (Assistant Director), Anna Bonelli, Joy Myers, and Daren Sweeney made key contributions to this report. Additionally, Deborah Bland, Kate Blumenreich, Tim Bober, Sarah Cornetto, Holly Dye, Kathleen van Gelder, and Elizabeth Wood aided in this assignment.
Postsecondary schools must comply with a variety of federal requirements to participate in student financial aid programs authorized under Title IV. While these requirements offer potential benefits to schools, students, and taxpayers, questions have been raised as to whether they may also distract schools from their primary mission of educating students. GAO examined (1) which requirements, if any, experts say create burden, (2) the types of burdens and benefits schools say requirements create, and (3) how Education solicits feedback from stakeholders on regulatory burden. GAO reviewed relevant federal regulatory and statutory requirements, and past and ongoing efforts examining postsecondary regulatory burden; interviewed Education officials and 18 experts, including officials from associations that represent postsecondary schools; and conducted eight discussion groups at two national conferences with a nongeneralizable sample of 51 school officials from public, nonprofit, and for-profit sectors. GAO also reviewed documentation associated with Education's requests for public comment on burden for proposed postsecondary information collections and its retrospective analysis of regulations. Experts GAO interviewed offered varied opinions on which student financial aid requirements under Title IV of the Higher Education Act of 1965, as amended, are the most burdensome. While no single requirement was cited as burdensome by a majority of the 18 experts, 11 cited various consumer disclosure requirements--such as those pertaining to campus safety--primarily due to the time and difficulty needed to gather the information. Beyond consumer disclosures, 4 experts cited "Return of Title IV Funds"--which requires schools to calculate and return unearned financial aid to the federal government when a recipient withdraws from school--as burdensome because schools find it difficult to calculate the precise amount of funds that should be returned. 
More broadly, 6 experts said that the cumulative burden of multiple requirements is a substantial challenge. Experts also noted some benefits. For example, an expert said required loan disclosures help students understand their repayment responsibilities. School officials who participated in each of the eight discussion groups GAO conducted expressed similar views about the types of burdens and benefits associated with Title IV requirements. Participants in all groups said requirements for consumer disclosures and Return of Title IV Funds are costly and complicated. Regarding consumer disclosures, participants questioned the value of disclosing data that cannot be readily compared across schools, like data on graduates' employment, which may be calculated using different methodologies. Participants in four groups found Return of Title IV Funds requirements difficult to complete within the required time frame. Participants also cited some benefits, such as how consumer disclosures can help applicants choose the right school and unearned Title IV funds can be redirected to other students. Education seeks feedback from schools on regulatory burden mainly through formal channels, such as announcements posted in the Federal Register, on its website, and on a department listserv. However, Education officials said they have received a limited number of comments about burden in response to these announcements. GAO reviewed Education's notices soliciting public comments on burden estimates for its postsecondary information collections--which require the public, including schools, to submit or publish specified data--and found that 65 of 353 notices (18 percent) received comments, of which 25 received comments related to burden. For example, 2 notices received input on the difficulties of providing data requested by the department. GAO makes no recommendations in this report. 
In its comments, Education sought clarification regarding types of federal requirements and additional information on its efforts to balance burden and benefits. We provided clarifications and additional information, as appropriate.
Pension plans defer compensation from working years to retirement years. There are two major types of pension plans. A defined benefit plan specifies a formula for computing benefits payable at retirement based on age, length of plan participation, and earnings history. A defined contribution plan provides a framework within which the employer and/or employees contribute to individual worker accounts. The balance in this account at retirement, reflecting contributions plus investment income, constitutes the source of retirement benefits from a defined contribution plan. Put simply, a defined benefit plan specifies benefits, and a defined contribution plan specifies contributions. The Employee Retirement Income Security Act (ERISA) of 1974 requires annual financial and actuarial reporting by most private pension plans. Public Law 95-595, 31 U.S.C. 9501-9504, enacted on November 4, 1978, extended financial and actuarial reporting requirements to federal government pension plans. The Comptroller General and the Office of Management and Budget (OMB) jointly prescribe the form and content of the annual pension plan reports under Public Law 95-595. The reports are due 210 days after the last day of each plan’s fiscal year and are to be sent to the Congress and the General Accounting Office. Public Law 95-595 defines the term “government pension plan” to mean a pension, annuity, retirement, or similar plan established or maintained by an agency for any of its officers or employees, regardless of the number of participants. The plans subject to Public Law 95-595 fall into three general categories: agency plans, nonappropriated fund activity plans, and federal reserve and farm credit plans. Agency plans cover employees of executive, legislative, and judicial organizations that are generally recognized as agencies and are generally funded by annual appropriations. 
Nonappropriated fund activity plans cover employees of organizations, such as post exchanges and commissaries, that provide morale, welfare, and recreation services to military components. In large part, these organizations are designed to be self-sufficient and operate with revenues generated from their activities. Finally, the federal reserve and farm credit plans cover employees of federal reserve and farm credit system entities, which also operate with revenues generated from their activities. The agencies, nonappropriated fund activities, and federal reserve and farm credit entities offer 34 defined benefit pension plans, which cover more than 10 million current employees, separated employees entitled to benefits, and retirees. Fifteen of the 34 defined benefit plans are agency plans. Nonappropriated fund activities and federal reserve and farm credit entities operate the other 19. Specifically, nonappropriated fund activities have 8 defined benefit plans for civilian employees who provide services to the Army, Navy, Air Force, Marines, and Coast Guard; the Federal Reserve System has a defined benefit plan for employees of the federal reserve banks and the Board of Governors; and the Farm Credit System has 10 defined benefit plans for employees of the various district banks. Approximately 5.8 million active employees participate in the 34 federal government defined benefit plans. In addition, the plans provide benefits to 4.1 million annuitants, and another 119,000 separated employees are entitled to deferred retirement benefits. The defined benefit plans range in number of participants from the Civil Service Retirement and Disability Fund (CSRDF), with 5.2 million participants, to several plans which have fewer than 25 participants. Participants in the two largest plans, CSRDF and the Military Retirement System, constitute 97 percent of participants in the 34 federal defined benefit plans. 
CSRDF consists of the Civil Service Retirement System (CSRS) and the Federal Employees’ Retirement System (FERS). The Congress closed CSRS to new participants at the end of 1983, and employees hired since 1983 generally are covered by FERS. Appendix III, table 1, lists the number of participants for each of the defined benefit plans. The federal government also offers defined contribution plans, generally to supplement the deferred compensation employees earn under defined benefit plans. According to the most recent pension plan filings, 2.2 million individuals participate in 17 defined contribution plans sponsored by agencies, nonappropriated fund activities, and federal reserve and farm credit entities. The largest federal defined contribution plan is the Thrift Savings Plan, which has 2.1 million participants, or 97 percent of the participants enrolled in federal government defined contribution plans. Appendix III, table 5, lists the number of participants for each of the defined contribution plans. The vast majority of defined benefit plans sponsored by the federal government offer retirement, survivor, and disability benefits to their participants. As of the most recent plan filings, participants in the 34 defined benefit plans had accumulated more than $1.2 trillion in total retirement benefits, the vast majority in the 15 agency plans. The various federal government plans provide significantly different retirement benefits to their members, depending on factors such as age and salary at retirement, years of service, election of survivor annuities, and cost-of-living adjustments. In addition, some defined benefit plans are supplemented with defined contribution plans and Social Security benefits, and others are not. Certain defined benefit plans provide different levels of benefits for different employee groups. A general description of basic retirement benefits for each of the federal defined benefit plans is provided in the plan profiles in appendix I. 
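The distinction described earlier, a defined benefit plan specifying benefits and a defined contribution plan specifying contributions, can be illustrated with a minimal sketch; the benefit formula, multiplier, and rate of return below are hypothetical and do not represent any particular federal plan.

```python
def defined_benefit_annuity(years_of_service, average_salary, multiplier=0.015):
    """Defined benefit: the annual benefit is set by a formula based on
    length of plan participation and earnings history (hypothetical multiplier)."""
    return multiplier * years_of_service * average_salary

def defined_contribution_balance(annual_contributions, annual_return):
    """Defined contribution: the benefit is whatever the account holds at
    retirement, i.e., contributions plus accumulated investment income."""
    balance = 0.0
    for contribution in annual_contributions:
        balance = (balance + contribution) * (1 + annual_return)
    return balance
```

Under this sketch, 30 years of service at an $80,000 average salary would yield an annual annuity of 0.015 × 30 × $80,000 = $36,000, while the defined contribution benefit depends entirely on what was contributed and how the investments performed.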
These descriptions provide general information only and do not include details for determining or comparing actual benefit amounts. Actual benefit amounts for individual defined benefit plan participants may differ significantly as a result of early retirement, disability benefits, survivor benefit elections, and other factors applicable to specific circumstances. The majority of federal government defined contribution plans provide employer matches to employee contributions. Contribution percentages for each of the defined contribution plans are listed in the plan profiles in appendix II and are summarized in appendix III, table 5. Differences exist in the funding of federal government defined benefit plans. Of these 34 plans, 28 use trust funds, while 6 of the agency plans are referred to as pay-as-you-go plans. Trust funds are separate accounting entities established to account for government and employee contributions, investments, and benefits paid. The pay-as-you-go plans do not have trust funds to accumulate assets to pay plan benefits. For these six plans, benefits are paid to annuitants from appropriations in the year in which the benefits are due. Trust funds for agency defined benefit plans, with the exception of the Tennessee Valley Authority (TVA), invest in special issue Treasury securities, which are nonmarketable. The primary purpose of the trust funds is not to provide a source of cash for the government to pay benefits, but to provide budget authority to allow the Treasury to disburse monthly annuity checks without annual appropriations. Because these securities represent assets of the trust funds and offsetting liabilities of the Treasury, under accounting procedures, the trust fund assets are eliminated in the governmentwide financial statements. 
Accordingly, these trust fund assets are not included in the governmentwide financial statements, which include the federal government’s $1.2 trillion liability for the benefit obligations of the 15 agency plans. The defined benefit plans of the nonappropriated fund activities and federal reserve and farm credit entities, as well as TVA, use trust funds to set aside money or marketable assets during employees’ working years for the accruing cost of their retirement benefits. A defined benefit pension plan’s status as fully funded or underfunded is determined by comparing its net assets to the actuarial present value of its benefit obligations. The agency defined benefit plans generally are underfunded—that is, the present value of benefit obligations exceeds plan assets. As discussed more fully in the retirement system financing section below, statutory provisions are in place for the future elimination of the unfunded benefit obligations of CSRS and the Military Retirement System. A principal effect of not fully funding most agency pension plans is that agencies’ budgets have not included the full cost of the pensions. Because contributions to most agency plans, under applicable requirements, have covered less than the full accruing cost of retirement benefits to covered employees, the agencies’ budgets have not reflected the full cost of government programs. The defined benefit plans of the nonappropriated fund activities and the federal reserve and farm credit entities, as well as TVA, have generally contributed amounts sufficient to set aside money or marketable assets in trust funds to fully fund their estimated accumulated benefit obligations. One measure of a defined benefit pension plan’s obligation for benefits is represented by the present value of accrued benefits. This actuarial measure is referred to as the Accumulated Benefit Obligation. 
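The fully funded/underfunded determination described here reduces to a simple comparison of net assets with the actuarial present value of benefit obligations. The following sketch is illustrative only; the function name and dollar figures are hypothetical and are not drawn from any plan filing:

```python
def funded_status(net_assets, pv_obligations):
    """Compare a plan's net assets to the actuarial present value of
    its benefit obligations. A plan is fully funded when net assets
    equal or exceed the obligation; otherwise it is underfunded."""
    surplus = net_assets - pv_obligations
    return surplus, surplus >= 0

# Hypothetical plan: $80 billion in net assets against $100 billion
# in benefit obligations -> underfunded by $20 billion.
surplus, fully_funded = funded_status(80e9, 100e9)
```

The same comparison applies under either obligation measure; only the obligation figure being compared changes.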
It applies to private sector defined benefit plans in accordance with Statement of Financial Accounting Standards No. 35 and also was used in the federal government’s prototype governmentwide financial statements for fiscal year 1993. It reflects all accrued benefits due under the plan as if the entity ceased as a going concern. The Accumulated Benefit Obligation is a “static” measure because it does not consider anticipated pay increases, cost-of-living adjustments, or future contributions. Another measure of benefit obligations is represented by the present value of future benefits, net of the present value of future normal cost contributions. This actuarial measure is referred to as the Actuarial Accrued Liability. It is a “dynamic” measure because it considers estimated future service and salary changes, as well as the present value of future normal cost contributions. The Federal Accounting Standards Advisory Board issued an exposure draft entitled Accounting for Liabilities of the Federal Government (November 7, 1994). Its provisions would require that the Actuarial Accrued Liability of federal government defined benefit plans be reflected in federal government financial statements. For the most recent plan filings, 21 of the 34 federal government defined benefit plans—3 of the 15 agency plans and 18 of the 19 plans sponsored by the nonappropriated fund activities and federal reserve and farm credit entities—were fully funded under the static Accumulated Benefit Obligation measure. Those filings also indicate that 15 of the 34 plans—5 agency plans and 10 others—were fully funded under the dynamic Actuarial Accrued Liability measure. 
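The distinction between the static and dynamic measures can be made concrete with a simplified final-pay benefit formula. Everything in this sketch is hypothetical: the 1.5 percent accrual rate, the salary, and the salary-growth assumption are illustrative and are not drawn from any plan described in this report:

```python
def accrued_benefit(salary, years_of_service, accrual_rate=0.015):
    """Annual pension accrued to date under a simple final-pay formula
    (hypothetical: accrual rate x years of service x salary)."""
    return accrual_rate * years_of_service * salary

# Static measure (Accumulated Benefit Obligation style): use the
# employee's pay to date, with no projected raises.
static_benefit = accrued_benefit(salary=60_000, years_of_service=20)

# Dynamic measure (Actuarial Accrued Liability style): recognize
# anticipated salary growth to retirement, here an assumed 3 percent
# per year over 10 remaining years of service.
projected_salary = 60_000 * 1.03 ** 10
dynamic_benefit = accrued_benefit(projected_salary, years_of_service=20)
```

Because the dynamic measure values today's accrued service at a projected future salary, it exceeds the static measure for the same employee.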
Of the largest federal pension programs, FERS is fully funded under the Accumulated Benefit Obligation measure and nearly fully funded under the Actuarial Accrued Liability measure, and statutory provisions for the future elimination of the unfunded benefit obligations of CSRS and the Military Retirement System have already been enacted. Under current law, the government will amortize its unfunded actuarial accrued liabilities by increasing the amount of special issue government securities issued by the Treasury to the trust funds. (See footnote 5.) The special issue Treasury securities represent that portion of estimated future retirement benefit obligations of the agency defined benefit plans that the government has recognized on paper by providing budget authority to cover future benefit payments. The unfunded obligation of an agency plan is that portion of estimated future benefit obligations that has no paper backing in the form of special issue Treasury securities. Therefore, because special issue Treasury securities are used, whether the obligation is funded or unfunded has no effect on current budget outlays. Also, the obligation is not a measure of the government’s ability to pay retirement benefits in the future. The Treasury must obtain the necessary money through tax receipts or borrowing to pay plan benefits to annuitants when those benefits are due for plans having trust funds invested in special issue Treasury securities and for pay-as-you-go plans. This financing approach enables the federal government to defer obtaining the money until it is needed to pay the benefits. Appendix III, table 3, lists the Accumulated Benefit Obligation for each defined benefit plan. Appendix III, table 4, lists the Actuarial Accrued Liability, and the plan profiles in appendix I describe the applicable provisions for eliminating unfunded benefit obligations. 
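Statutory amortization of an unfunded liability, as described above for CSRS and the Military Retirement System, is typically computed as a series of level payments credited to the trust fund. The sketch below applies the standard level-payment annuity formula; the liability amount, interest rate, and amortization period are illustrative assumptions, not figures from this report:

```python
def level_amortization_payment(unfunded_liability, interest_rate, years):
    """Level annual payment that retires an unfunded liability over a
    fixed period at the plan's assumed interest rate (ordinary
    annuity: L * i / (1 - (1 + i)^-n))."""
    return unfunded_liability * interest_rate / (
        1 - (1 + interest_rate) ** -years)

# Hypothetical: a $500 billion unfunded liability amortized over
# 30 years at an assumed 6.25 percent interest rate.
annual_payment = level_amortization_payment(500e9, 0.0625, 30)
```

Under the financing approach described above, each such payment takes the form of additional special issue Treasury securities credited to the trust fund rather than a current cash outlay.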
By definition, because defined contribution plans do not specify the retirement benefits an individual will receive, their obligation to pay benefits is limited to the contributions made by or on behalf of each individual and any earnings on those contributions. The 15 federal agency defined benefit plans have a total of $464 billion in investments, the vast majority of which are required by law to be invested in U.S. government obligations. These investments consist primarily of nonmarketable special issue U.S. Treasury securities, as described in the preceding sections. The defined benefit plans of the nonappropriated fund activities and federal reserve and farm credit entities reported that they are not restricted to investments in government obligations. The plans of these 19 entities have a combined investment portfolio of $7 billion, of which 88 percent is invested in assets other than U.S. government obligations. The investments consist primarily of corporate stocks and bonds. Appendix III, table 2, lists the investments of each defined benefit plan. Assets in the 17 federal government defined contribution plans are primarily invested in various marketable stock, bond, and government security funds. Investments in these 17 plans totaled more than $28 billion as of the latest plan filings, of which $26 billion was held by the Thrift Savings Plan. The Thrift Savings Plan invests employee designated contributions to the plan’s government securities fund in special issue Treasury securities. Under this financing approach, which is used for the agency defined benefit plans as described in the preceding section, the Treasury must obtain the necessary money through tax receipts or borrowing to pay plan benefits when those benefits are due. However, for the Thrift Savings Plan, budget outlays are recorded for employer and employee contributions as they are made each pay period. Outlays are recorded because the Thrift Savings Plan is not included in the U.S. 
budget, unlike agency defined benefit plans. Appendix III, table 5, lists the total investment balance for each federal government defined contribution plan. To summarize information on pension plans of the federal government, we reviewed the most recent filings by 51 federal pension plans under Public Law 95-595 received as of the end of our fieldwork, July 21, 1995, including several small plans that had not previously filed. In addition, we contacted the plan administrators to obtain additional plan data pertaining to Social Security coverage, investment restrictions, and financial statement audits. Under Public Law 95-595, the plan filings are not due until 210 days after the plan’s fiscal year-end. Therefore, the most recent plan filing available for most plans was for the plan year ending in 1993. Accordingly, legislative initiatives or other subsequent events not included in the information received through July 21, 1995, are not reflected in this report or accompanying appendixes. Also, we consulted with OMB to verify that it had not received filings for additional federal government pension plans. Finally, certain federal benefit programs are excluded from this report because they are not subject to the disclosure requirements of Public Law 95-595. The Central Intelligence Agency pension plan and Social Security and Railroad Retirement benefits are excluded from the requirements of Public Law 95-595. In addition, the monetary allowance provided to former Presidents is not covered. The Department of Veterans Affairs does not file reports under Public Law 95-595 for its Veterans Compensation and Pension Programs. Veterans and their dependents receive compensation benefits for service-connected disabilities or death and pension benefits for nonservice-connected disabilities or death. Neither the entitlement to nor the amount of compensation and pension benefits is based on age and length of service. 
Rather, compensation and pension benefits are based on the occurrence of specified events. In addition, benefits for nonservice-connected disabilities or death are subject to specific income limitations. Thus, the Compensation and Pension Programs differ significantly from the defined benefit plans for which reports are filed under Public Law 95-595. Several limitations exist in the summary information presented in this report. The summary of pension plan data is a compilation we prepared from the plan reports and additional plan data provided by the plan administrators. We did not independently verify the information in the plan reports and additional data that the plan administrators provided to us, and we do not assure their accuracy on matters of fact or law. Except where indicated in the pension plan profiles in appendixes I and II, the plan financial reports were not audited by independent auditors. In addition, where plans had multiple retirement provisions for certain specialized employees, we listed the retirement benefits provided to the majority of the plan participants. The projections in the plan profiles and tables are highly dependent on the actuarial cost method and the actuarial assumptions used. Several acceptable actuarial cost methods exist. The actuarial assumptions vary by plan because they are based on each plan actuary’s best estimate of anticipated experience under the plan. These assumptions can have a significant impact on estimates of future costs. In addition, we found that the information provided by plan administrators often varied in the extent of detail presented. For example, some plan filings described all significant economic assumptions used in the actuarial valuations, but others provided fewer details about the assumptions. In some cases, because some plans provided more extensive information than others, it might appear that such information was not applicable to the plans that provided less information. 
That is not always the case. For example, the provisions of the Military Retirement System are substantially the same as those of the separate Coast Guard Military Retirement System, Public Health Service Commissioned Corps Retirement System, and the National Oceanic and Atmospheric Administration Corps Retirement System. Because the administrators of each plan described some provisions of the plan differently, the fact that the provisions are actually the same may not always be apparent. Similarly, the extent of financial and cost information provided by the plans varied. For example, the Retirement Annuity Plan for Employees of the Army and Air Force Exchange (Exchange Service plan) provides substantially the same benefits as the Civil Service Retirement System, except that Exchange Service plan benefits are reduced by a “Social Security offset.” The normal cost reported for CSRS is 25.14 percent of salary, whereas the Exchange Service plan reported normal cost of 9.81 percent of salary. Part of the difference in normal cost may be caused by differing economic assumptions used by the plans’ actuaries, but the primary reason for the difference is that employees in the Exchange Service plan are also covered by Social Security while CSRS employees are not. Thus, the offset provision cuts plan benefits and reduces plan costs accordingly. However, in order to compare the total costs of all benefits provided to participants covered by these two programs, additional details would be needed. For example, an erroneous conclusion might result unless the costs of providing Social Security benefits to Exchange Service employees were added to the reported plan costs; similarly, detailed information about the Social Security benefits to Exchange Service employees would be required in order to compare the benefits under these programs. The scope of this report did not include analyzing and comparing the provisions of the various plans. 
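The comparison problem just described can be illustrated numerically. The CSRS and Exchange Service normal cost percentages below are the figures reported in the filings; the employer Social Security cost rate, however, is a hypothetical placeholder included only to show why such a cost must be added before the two programs' total costs can be compared:

```python
CSRS_NORMAL_COST = 0.2514       # reported: 25.14 percent of salary
EXCHANGE_NORMAL_COST = 0.0981   # reported: 9.81 percent of salary

# Hypothetical employer cost of Social Security coverage, expressed
# as a fraction of salary; NOT a figure from this report.
ASSUMED_SOCIAL_SECURITY_COST = 0.062

# Comparing the reported normal costs alone overstates the gap,
# because Exchange Service employees also earn Social Security
# benefits that CSRS employees do not.
naive_gap = CSRS_NORMAL_COST - EXCHANGE_NORMAL_COST
adjusted_gap = CSRS_NORMAL_COST - (
    EXCHANGE_NORMAL_COST + ASSUMED_SOCIAL_SECURITY_COST)
```

Even with such an adjustment, comparing benefit levels (rather than employer costs) would require the additional detail on Social Security benefits noted above.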
However, we have been asked by the Chairman of the Senate Committee on Governmental Affairs to compare, in detail, the provisions of retirement programs for federal personnel. We will provide each of you a copy of the report on the results of that work when it is completed. We conducted our review from April 1995 through August 1995 in accordance with generally accepted government auditing standards. We requested comments on drafts of each plan profile from the applicable plan officials. We incorporated those comments in the plan profiles as appropriate. As agreed with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Director of the Office of Management and Budget and interested congressional committees. Copies will be made available to others on request. If you or your staffs have any questions concerning this report, please contact me at (202) 512-9406 or H. Kent Bowden, Assistant Director, at (202) 512-5270. Major contributors to this report are listed in appendix IV.

Civil Service Retirement and Disability Fund (CSRS and FERS)
Coast Guard Military Retirement System
Foreign Service Retirement and Disability Fund
Public Health Service Commissioned Corps Retirement System
National Oceanic and Atmospheric Administration Corps Retirement System
Comptrollers’ General Retirement Plan
Court of Federal Claims Judges’ Retirement System
U.S. Court of Veterans Appeals Judges’ Retirement Plan
Judicial Officers’ Retirement Fund
Judicial Survivors’ Annuities System
United States Tax Court Retirement Plan
United States Tax Court Survivors’ Annuity Plan
Tennessee Valley Authority Retirement System
Retirement Annuity Plan for Employees of Army and Air Force Exchange Service
Supplemental Deferred Compensation Plan for Members of the Executive Management Program (Army and Air Force Exchange Service)
U.S.A.F. Nonappropriated Fund Retirement Plan for Civilian Employees
United States Army Nonappropriated Fund Retirement Plan
Retirement Plan for Civilian Employees of United States Marine Corps Morale, Welfare, and Recreation Activities and Miscellaneous Nonappropriated Fund Instrumentalities
Navy Exchange Service Command Retirement Plan
U.S. Navy Nonappropriated Fund Retirement Plan for Employees of Civilian Morale, Welfare, and Recreation Activities
Norfolk Naval Shipyard Pension Plan
Federal Reserve Employees’ Benefits System
Western Farm Credit District Employees’ Retirement Plan
Ninth Farm Credit District Pension Plan
Farm Credit District of Springfield Group Retirement Plan
Farm Credit District of Baltimore Retirement Plan
Seventh Farm Credit District Retirement Plan
First South Production Credit Association Retirement Plan
Farm Credit District of Columbia, SC Retirement Plan
Farm Credit District of Texas Pension Plan
Twelfth Farm Credit District Retirement Plan
National Bank for Cooperatives Retirement Plan

This appendix lists the principal financial, actuarial, and general terms for each of the 34 federal government defined benefit plans. The actuarial data include two presentations of the benefit obligation and the related funding status for each federal government defined benefit plan. (1) Accumulated Benefit Obligation—Statement of Financial Accounting Standards (SFAS) No. 35, Accounting and Reporting by Defined Benefit Plans, prescribes the measure by which private sector defined benefit pension plans calculate the net present value of future benefit payments. Under this actuarial measure, the obligation for future benefits is primarily based on employees’ history of pay and service up to the date that the obligation information is reported. The benefit obligation determined in accordance with SFAS 35 is referred to as the Accumulated Benefit Obligation. 
Comparing the assets available for plan benefits to the Accumulated Benefit Obligation (that is, computing the actuarial present value of accumulated benefits less the assets available for benefits) yields one measure of plan funding. In appendix I, a zero or negative total (that is, net assets available for benefits equal or exceed the Accumulated Benefit Obligation) indicates that the plan is fully funded, and, as such, the assets of the plan would satisfy the actuarial present value of accumulated plan benefits if the entity were to cease operations. (2) Actuarial Accrued Liability—Because it is assumed that the federal government will not cease as a going concern, a second actuarial measure of the obligation for future benefits is presented. It is referred to as the Actuarial Accrued Liability. The Federal Accounting Standards Advisory Board (FASAB) issued an exposure draft, Accounting for Liabilities of the Federal Government, which would require the use of the Actuarial Accrued Liability for those federal government pension plans subject to FASAB standards. The Actuarial Accrued Liability represents the present value of benefits expected to be paid in the future to current employees and annuitants, net of the present value of future normal cost contributions expected to be made for and by current employees. It includes the projected future salary increases that reflect an estimate of the compensation levels of the individual employees involved (including future changes attributable to general price level, seniority, promotion, and other factors). Comparing the plan assets to the Actuarial Accrued Liability yields a second measure of plan funding. 
In appendix I, a zero or negative total (that is, plan assets equal or exceed the Actuarial Accrued Liability) indicates that the plan is fully funded, and, as such, assets in the fund plus the present value of future normal cost contributions would satisfy the actuarial present value of projected benefits to current employees and annuitants, including estimated future salary increases. For both measures described above, plan asset amounts generally are based on fair value. For the Accumulated Benefit Obligation measure, the plan assets generally are valued at the amount that the plan could reasonably expect to receive in a current exchange for those assets. Most plans used the same asset amount for the Actuarial Accrued Liability measure. However, a few plans determined the actuarial value of their assets in a different manner for the Actuarial Accrued Liability measure. For example, for its Actuarial Accrued Liability measure, the Military Retirement System stated the actuarial value of its assets at amortized cost (book value).

Thrift Savings Plan (CSRS and FERS)

An actuarial cost method in which future service benefits are funded as they accrue. Thus, normal cost is the present value of the units of future benefits credited to employees for service in that year. Prior service cost is the present value at the valuation date of the units of future benefits credited to employees for service prior to the valuation date. Annual normal cost for an individual for an equal unit of benefits each year increases because the period to the employee’s retirement continually shortens and the probability of reaching retirement increases. For a mature employee group, the normal cost would tend to be the same each year as older employees are replaced by younger ones.

The actuarial present value of pension benefits attributed by the pension benefit formula to employee service rendered before a specified date and based on service and compensation prior to that date.

Benefits that are attributable under the provisions of a pension plan to employees’ service rendered up to the benefit information date.

The portion of the present value (as of the benefit information date) of a pension plan’s projected future benefit costs and administrative expenses that exceeds the present value of future normal cost contributions.

Estimates of future conditions affecting pension cost; for example, mortality rate, employee turnover, compensation levels, and investment earnings.

A recognized technique used in establishing the amount of annual contributions or accounting charges for pension cost under a pension plan.

The current worth of amounts payable or receivable in the future. If payment or receipt is certain, the present value is determined by discounting the future amount or amounts at a predetermined rate of interest. If payment or receipt is contingent on future events (for example, survival), further discounting is necessary for the probability that payment or receipt will occur.

The process by which an actuary estimates the present value of benefits to be paid under a pension plan and calculates the amounts of employer contributions or accounting charges for pension cost.

An actuarial cost method in which the entire unfunded cost of future pension benefits (including benefits to be paid to employees who have retired as of the date of the valuation) is spread over the average future service lives of employees who are active as of the date of valuation. In most cases this is done by the use of a percentage of payroll. Past service cost is included in normal cost.

The date as of which the actuarial present value of accumulated plan benefits is presented.

A pension plan under which participants bear part of the cost.

Moving average of the Social Security wage base computed when a member attains normal retirement age. Some plans offer additional retirement benefits to highly compensated employees who exceed the average Social Security wage base.

Assumptions as to rates of plan participants’ withdrawal from the plan, retirement, disability, and death used in making actuarial projections.

A pension plan that specifies a determinable pension benefit, usually based on factors such as age, years of service, and salary.

A pension plan that specifies the amount of contribution to be made to the plan for each employee. Benefits at retirement are those contributions plus whatever has been earned on them.

An actuary enrolled under 29 U.S.C. 1242 by a Joint Board for the Enrollment of Actuaries established by the Secretaries of Labor and the Treasury.

An actuarial cost method which assigns a “level normal cost” to each year of service for each participant. The assumption is made under this method that every employee entered the plan (entry age) at the time of initial employment or at the earliest eligibility date, if the plan had been in existence, and that contributions have been made from the entry age to the date of the actuarial valuation.

A variation of the entry-age normal actuarial cost method which maintains the initial unfunded liability rather than recomputing it each year, adjusting it only for plan amendments or changes in actuarial assumptions.

An estimate of the total benefits payable at retirement, including benefits anticipated to accrue in the future as well as those accruing before the benefit information date. Future benefits may depend on total length of service but with pay averaged over only a limited number of years (often the final 3 years of service).

An actuarial cost method which assigns the cost of each employee’s pension in level annual amounts, or as a level percentage of the employee’s compensation, over the period from the inception date of a plan (or the date of his entry into the plan, if later) to his retirement date. Thus, past service cost is included in normal cost.

The difference between a plan’s assets and its liabilities. For purposes of this definition, a plan’s liabilities do not include participants’ accumulated plan benefits.

A pension plan under which participants do not make contributions.

The annual cost assigned, under the actuarial cost method in use, to years subsequent to the inception of a pension plan.

Member of a pension plan, including active employees covered by the plan, separated employees entitled to benefits, and retiree and survivor annuitants.

A method of paying pension benefits to retired employees as they come due out of appropriations.

Calendar, policy, or fiscal year chosen by the plan on which the records of the plan are kept.

The actuarial present value as of a date of all benefits attributed by the pension benefit formula to employee service rendered prior to that date, including recognition of changes in future compensation levels if appropriate.

In the case of a pension plan established or maintained by a single employer, the employer; in the case of a plan established or maintained jointly by two or more employers, an association, committee, joint board of trustees, or other group of representatives of the parties who have established or who maintain the pension plan.

A contract with an insurance company under which related payments to the insurance company are accumulated in an unallocated fund to be used to meet benefit payments, either directly or through the purchase of annuities, when employees retire. Funds in an unallocated contract may also be withdrawn and otherwise invested.

The amount by which the present value of future benefits exceeds the amount in the pension fund and the present value of future normal cost contributions.
Pursuant to a congressional request, GAO reviewed the status of public pension plan funding, focusing on federally sponsored defined benefit and contribution plans subject to reporting requirements legislation. GAO found that: (1) 34 federal government defined benefit plans have over 10 million participants and 17 defined contribution plans have 2.2 million participants; (2) the 34 defined benefit plans vary considerably and have benefits of over $1.2 trillion; (3) most of the defined benefit plans are administered as trust funds which almost exclusively invest in nonmarketable, special issue Treasury securities, while 6 plans pay benefits from their current year appropriations; (4) most agency plans, except for the Federal Employees Retirement System, are underfunded; (5) the 19 nonappropriated fund activity, federal reserve, and farm credit defined benefit plans are fully funded; (6) the use of trust funds has no effect on current budget outlays and is not a measure of the government's ability to pay future retirement benefits out of tax and other receipts; (7) agencies' budgets have not reflected the full cost of their pension plan programs; (8) the 17 defined contribution plans have more than $28 billion invested in stocks, bonds, and government securities, with the Thrift Savings Plan having about $26 billion in Treasury securities; and (9) defined contribution plan obligations are limited to the employee and employer contributions made and any earnings on them.
Ten states concentrated in the western, midwestern, and southeastern United States—all areas where the housing market had experienced strong growth in the prior decade—experienced 10 or more bank failures between 2008 and 2011 (see fig. 1). Together, failures in these 10 states accounted for 72 percent (298) of the 414 bank failures across all states during this time period. Within these 10 states, 86 percent (257) of the failed banks were small institutions with assets of less than $1 billion at the time of failure, and 52 percent (155) had assets of less than $250 million. Twelve percent (36) were medium-size banks with more than $1 billion but less than $10 billion in assets, and 2 percent (5) were large banks with assets of more than $10 billion at the time of failure. In the 10 states with 10 or more failures between 2008 and 2011, failures of small and medium-size banks were largely associated with high concentrations of commercial real estate (CRE) loans, in particular the subset of acquisition, development, and construction (ADC) loans, and with inadequate management of the risks associated with these high concentrations. Our analysis of call report data found that CRE (including ADC) lending increased significantly in the years prior to the housing market downturn at the 258 small banks that failed between 2008 and 2011. This rapid growth of failed banks’ CRE portfolios resulted in concentrations—that is, the ratio of total CRE loans to total risk-based capital—that exceeded regulatory thresholds for heightened scrutiny established in 2006 and increased the banks’ exposure to the sustained downturn that began in 2007. Specifically, we found CRE concentrations grew from 333 percent in December 2001 to 535 percent in June 2008. At the same time, ADC concentrations grew from 104 percent to 259 percent. The trends for the 36 failed medium-size banks were similar over this time period. 
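The concentration measure used in this analysis, the ratio of total CRE (or ADC) loans to total risk-based capital, can be sketched as follows. The bank balances are hypothetical, and the screening thresholds shown (300 percent of capital for total CRE, 100 percent for ADC) are our reading of the 2006 interagency guidance, included here as an assumption:

```python
def concentration_ratio(loans, total_risk_based_capital):
    """Loan concentration expressed as a percentage of total
    risk-based capital."""
    return 100.0 * loans / total_risk_based_capital

# Hypothetical bank: $400 million in CRE loans (of which $150 million
# is ADC) against $100 million in total risk-based capital.
cre_pct = concentration_ratio(400e6, 100e6)
adc_pct = concentration_ratio(150e6, 100e6)

# Assumed supervisory screening thresholds from the 2006 interagency
# guidance: 300 percent of capital for total CRE, 100 percent for ADC.
flags = {"cre": cre_pct >= 300, "adc": adc_pct >= 100}
```

On these hypothetical figures the bank would exceed both screening levels, warranting the heightened scrutiny the guidance describes.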
In contrast, small and medium-size banks that did not fail exhibited substantially lower levels and markedly slower growth rates of CRE loans and, as a result, had significantly lower concentrations of them, reducing the banks’ exposure. With the onset of the financial crisis, the level of nonperforming loans began to rise, as did the level of subsequent charge-offs, leading to a decline in net interest income and regulatory capital. The rising level of nonperforming loans, particularly ADC loans, appears to have been the key factor in the failures of small and medium-size banks in the 10 states between 2008 and 2011. For example, in December 2001, 2 percent of ADC loans at the small failed banks were classified as nonperforming. With the onset of the financial crisis, the level of nonperforming ADC loans increased quickly to 11 percent by June 2008 and 46 percent by June 2011. As banks began to designate nonperforming loans or portions of these loans as uncollectible, the level of net charge-offs also began to rise. In December 2001, net charge-offs of ADC loans at small failed banks were less than 1 percent. By June 2008, they had risen to 2 percent and by June 2011 to 12 percent. CRE and especially ADC concentrations in small and medium-size failed banks in the 10 states were often correlated with poor risk management and risky funding sources. Our analysis showed that small failed banks in the 10 states had often pursued aggressive growth strategies using nontraditional and riskier funding sources such as brokered deposits. The inspector general (IG) reviews noted that in the majority of failures, management exercised poor oversight of the risks associated with high CRE and ADC concentrations and engaged in weak underwriting and credit administration practices. Further, 28 percent (84) of the failed banks had been chartered for less than 10 years at the time of failure and, according to FDIC, appeared in many cases to have deviated from their approved business plans. 
Large bank failures in the 10 states were associated with some of the same factors as small banks—high-risk growth strategies, weak underwriting and risk controls, and excessive concentrations that increased these banks’ exposure to the real estate market downturn. The primary difference was that the large banks’ strategies generally relied on risky nontraditional residential mortgage products as opposed to commercial real estate. To further investigate factors associated with bank failures across the United States, we analyzed data on FDIC-insured commercial banks and state-chartered savings banks from 2006 to 2011. Our econometric analysis suggests that across the country, riskier lending and funding sources were associated with an increased likelihood of bank failure. Specifically, we found that banks with high concentrations of ADC loans and an increased use of brokered deposits were more likely to fail from 2008 to 2011, while banks with better asset quality and greater capital adequacy were less likely to fail. An FDIC IG study issued in October 2012 found that some banks with high ADC concentrations were able to weather the recent financial crisis without experiencing a corresponding decline in their overall financial condition. Among other things, the IG found that these banks exhibited strong management, sound credit administration and underwriting practices, and adequate capital. We found that losses related to bank assets and liabilities that were subject to fair value accounting contributed little to bank failures overall, largely because most banks’ assets and liabilities were not recorded at fair value. Based on our analysis, fair value losses related to certain types of mortgage-related investment securities contributed to some bank failures. But in general, fair value-related losses contributed little to the decline in net interest income and regulatory capital that failed banks experienced overall once the financial crisis began. 
We analyzed the assets and liabilities on the balance sheets of failed banks nationwide that were subject to fair value accounting between 2007 and 2011. We found that generally over two-thirds of the assets of all failed commercial banks (small, medium-size, and large) were classified as held-for-investment (HFI) loans, which were not subject to fair value accounting. For example, small failed commercial banks held an average of 77 percent of their assets as HFI loans in 2008. At the same time, small surviving (open) commercial banks held an average of 69 percent in such loans. Failed and open small thrifts, as well as medium-size and large commercial banks, had similar percentages. Some assets and liabilities, such as securities designated for trading, are measured at fair value on a recurring basis (at each reporting period), where unrealized gains or losses flow through the bank’s earnings in the income statement and affect regulatory capital. However, for certain other assets and liabilities that are measured at fair value on a recurring basis, such as available-for-sale (AFS) securities, unrealized fair value gains and losses generally do not impact earnings and thus generally are not included in regulatory capital calculations. Instead, these gains or losses are recorded through other comprehensive income, unless the institution determines that a decline in fair value below amortized cost constitutes an other-than-temporary impairment, in which case the instrument is written down to its fair value, with credit losses reflected in earnings. Although some assets and liabilities are measured at fair value and impact regulatory capital, together these categories did not account for a significant percentage of total assets at either failed or open commercial banks or thrifts. For example, in 2008, trading assets, nontrading assets such as nontrading derivative contracts, and trading liabilities at small failed banks ranged from 0.00 to 0.03 percent of total assets. 
As discussed earlier, declines in regulatory capital at failed banks were driven by rising levels of credit losses related to nonperforming loans and charge-offs of these loans. For failed commercial banks and thrifts of all sizes nationwide, credit losses, which resulted from nonperforming HFI loans, were the largest contributors to the institutions’ overall losses when compared to any other asset class. These losses had a greater negative impact on institutions’ earnings and regulatory capital levels than those recorded at fair value. During the course of our work, several state regulators and community banking association officials told us that at some small failed banks, declining collateral values of impaired collateral-dependent loans—particularly CRE and ADC loans in those areas where real estate asset prices declined severely—drove both credit losses and charge-offs and resulted in reductions to regulatory capital. Data are not publicly available to analyze the extent to which credit losses or charge-offs at the failed banks were driven by declines in the collateral values of impaired collateral-dependent CRE or ADC loans. However, state banking associations said that the magnitude of the losses was exacerbated by federal bank examiners’ classification of collateral-dependent loans and evaluation of appraisals used by banks to support impairment analysis of these loans. Federal banking regulators noted that regulatory guidance in 2009 directed examiners not to require banks to write down loans to an amount less than the loan balance solely because the value of the underlying collateral had declined and that examiners were generally not expected to challenge the appraisals obtained by banks unless they found that any underlying facts or assumptions about the appraisal were inappropriate or could support alternative assumptions. A loan loss provision is the money a bank sets aside to cover potential credit losses on loans. 
The Department of the Treasury (Treasury) and the Financial Stability Forum’s Working Group on Loss Provisioning (Working Group) observed that the current accounting model for estimating credit losses is based on historical loss rates, which were low in the years before the financial crisis. Under GAAP, the accounting model for estimating credit losses is commonly referred to as an “incurred loss model” because the timing and measurement of losses are based on estimates of losses incurred as of the balance sheet date. In a 2009 speech, the Comptroller of the Currency, who was a co-chair of the Working Group, noted that in a long period of benign economic conditions, such as the years prior to the most recent downturn, historical loan loss rates would typically be low. As a result, justifying significant loan loss provisioning to increase the loan loss allowance can be difficult under the incurred loss model. GAO previously reported that banking associations from across the country had identified multiple concerns with examiner treatment of CRE loans and related issues (GAO, Banking Regulation: Enhanced Guidance on Commercial Real Estate Risks Needed, GAO-11-489 (Washington, D.C.: May 19, 2011)). A more forward-looking provisioning model, which the accounting standard setters have been developing, would allow banks to recognize losses earlier on the loans they underwrite and could incentivize prudent risk management practices. Moreover, it is designed to help address the cycle of losses and failures that emerged in the recent crisis as banks were forced to increase loan loss allowances and raise capital when they were least able to do so (procyclicality). We plan to continue to monitor the progress of the ongoing activities of the standard setters to address concerns with the loan loss provisioning model. FDIC is required to resolve a bank failure in a manner that results in the least cost to the Deposit Insurance Fund (DIF). FDIC’s preferred resolution method is to sell the failed bank to another, healthier bank. 
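The provisioning dynamic described above can be made concrete with toy numbers (all figures are hypothetical, not drawn from the testimony): an allowance calibrated to benign-period loss rates is far smaller than one based on downturn expectations, leaving a shortfall that must be covered just when capital is hardest to raise:

```python
# Hypothetical illustration of incurred-loss vs. forward-looking provisioning.
portfolio = 100_000_000   # loan portfolio ($); hypothetical

# Loss rates in basis points (1 bp = 0.01%); hypothetical values.
benign_bps = 20           # historical annual loss rate observed in good years
downturn_bps = 450        # loss rate once the downturn hits

# Incurred loss model: allowance anchored to the low historical rate.
incurred_allowance = portfolio * benign_bps // 10_000     # $200,000

# Forward-looking view: allowance sized to downturn expectations.
forward_allowance = portfolio * downturn_bps // 10_000    # $4,500,000

# The gap is capital the bank must raise mid-crisis (procyclicality).
shortfall = forward_allowance - incurred_allowance        # $4,300,000
```

The specific rates are invented for illustration; the structural point, that the incurred loss model delays recognition until losses are already mounting, is the one the Working Group raised.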
During the most recent financial crisis, FDIC facilitated these sales by including a loss share agreement, under which FDIC absorbed a portion of the loss on specified assets purchased by the acquiring bank. From January 2008 through December 31, 2011, FDIC was appointed as receiver for 414 failed banks, with $662 billion in book value of failed bank assets. FDIC used purchase and assumption agreements (the direct sale of a failed bank to another, healthier bank) to resolve 394 failed institutions with approximately $652 billion in assets. As such, during the period 2008 through 2011, FDIC sold 98 percent of failed bank assets using purchase and assumption agreements. However, FDIC was able to resolve so many of these banks with purchase and assumption agreements only because it offered to share in the losses incurred by the acquiring institution. According to FDIC officials, at the height of the financial crisis in 2008, FDIC sought bids for whole bank purchase and assumption agreements (where the acquiring bank assumes essentially all of the failed bank’s assets and liabilities) with little success. Potential acquiring banks we interviewed told us that they did not have sufficient capital to take on the additional risks that the failed institutions’ assets represented. Acquiring bank officials that we spoke to said that, because of uncertainties in the market and the value of the assets, they would not have purchased the failed banks without FDIC’s shared loss agreements. Because shared loss agreements had worked well during the savings and loan crisis of the 1980s and early 1990s, FDIC decided to offer the option of having such agreements as part of the purchase and assumption of the failed bank. Shared loss agreements provide potential buyers with some protection on the purchase of failed bank assets, reduce immediate cash needs, keep assets in the private sector, and minimize disruptions to banking customers. 
Under the agreements, FDIC generally agrees to pay 80 percent of covered losses, and the acquiring bank covers the remaining 20 percent. From 2008 to the end of 2011, FDIC resolved 281 of the 414 failures (68 percent) by providing a shared loss agreement as part of the purchase and assumption. The need to offer shared loss agreements diminished as the market improved. For example, in 2012 FDIC was able to resolve more than half of all failed institutions without having to offer to share in the losses. Specifically, between January and September 30, 2012, FDIC agreed to share losses on 18 of 43 bank failures (42 percent). Additionally, some potential bidders were willing to accept shared loss agreements with lower than 80 percent coverage. As of December 31, 2011, DIF receiverships had made shared loss payments totaling $16.2 billion. In addition, future payments under DIF receiverships are estimated at an additional $26.6 billion over the duration of the shared loss agreements, resulting in total estimated lifetime losses of $42.8 billion (see fig. 2). By comparing the estimated cost of the shared loss agreements with the estimated cost of directly liquidating the failed banks’ assets, FDIC has estimated that using shared loss agreements has saved the DIF over $40 billion. However, while the total estimated lifetime losses of the shared loss agreements may not change, the timing of the losses may, and payments from shared loss agreements may increase as the terms of the agreements mature. FDIC officials stated that the acquiring banks were being monitored for compliance with the terms and conditions of the shared loss agreements. FDIC is in the process of issuing guidance to the acquiring banks reminding them of these terms to prevent increased shared loss payments as these agreements approach maturity. 
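The loss-sharing terms above reduce to simple arithmetic. A sketch using the 80/20 split and the lifetime-cost totals from the testimony; the individual covered-loss amount is hypothetical:

```python
# Sketch of the 80/20 shared loss split and the lifetime-cost totals above.

def split_loss(covered_loss, fdic_rate=0.80):
    """Return (FDIC payment, acquiring-bank share) for a covered loss."""
    fdic_part = covered_loss * fdic_rate
    return fdic_part, covered_loss - fdic_part

# Hypothetical $10 million covered loss on acquired failed-bank assets.
fdic_part, acquirer_part = split_loss(10_000_000)   # FDIC pays $8M, bank $2M

# Lifetime cost figures from the testimony ($ billions, as of Dec. 31, 2011):
paid_through_2011 = 16.2    # shared loss payments already made
estimated_future = 26.6     # estimated remaining payments
lifetime = paid_through_2011 + estimated_future     # $42.8 billion
```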
The acquisitions of failed banks by healthy banks appear to have mitigated the potentially negative effects of bank failures on communities, although the focus of local lending and philanthropy may have shifted. First, while bank failures and failed bank acquisitions can have an impact on market concentration—an indicator of the extent to which banks in the market can exercise market power, such as raising prices or reducing the availability of some products and services—we found that a limited number of metropolitan areas and rural counties were likely to have become significantly more concentrated. We analyzed the impact of bank failures and failed bank acquisitions on local credit markets using data for the period from June 2007 to June 2012. We calculated the Herfindahl-Hirschman Index (HHI), a key statistical measure used to assess market concentration and the potential for firms to exercise their ability to influence market prices. The HHI is measured on a scale of 0 to 10,000, with values over 1,500 considered indicative of concentration. Our results suggest that a small number of the markets affected by bank failures and failed bank acquisitions were likely to have become significantly more concentrated. For example, 8 of the 188 metropolitan areas affected by bank failures and failed bank acquisitions between June 30, 2009, and June 29, 2010, met the criteria for raising significant competitive concerns. Similarly, 5 of the 68 rural counties affected by bank failures during the same time period met the criteria. The relatively limited number of areas where concentration increased was generally the result of acquisitions by institutions that were not already established in the locales that the failed banks served. However, the effects could be significant for those limited areas that were serviced by one bank or where few banks remain. 
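The HHI described above is the sum of squared market shares expressed in percent. A minimal sketch, with hypothetical deposit shares for a single local market:

```python
# Sketch of the Herfindahl-Hirschman Index (HHI) used in the analysis above.

def hhi(shares_pct):
    """Sum of squared percent market shares; ranges from near 0 to 10,000."""
    return sum(s * s for s in shares_pct)

# Hypothetical deposit shares (percent) before and after a failed bank
# with a 25% share is acquired by the largest incumbent.
before = [30, 25, 20, 15, 10]
after = [55, 20, 15, 10]

single_bank = hhi([100])            # 10,000: one-bank market, maximum value
concentrated = hhi(after) > 1_500   # threshold noted in the testimony
```

Because the acquirer was already established in this hypothetical market, the post-acquisition HHI rises sharply; when the acquirer has no prior local presence, the index is largely unchanged, which matches the finding that few affected markets became significantly more concentrated.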
Second, our econometric analysis of call report data from 2006 through 2011 found that failing small banks extended progressively less net credit as they approached failure, but that acquiring banks generally increased net credit after the acquisition, albeit more slowly. Acquiring and peer banks we interviewed in Georgia, Michigan, and Nevada agreed, although they noted that credit conditions were generally tighter in the period following the financial crisis. For example, several noted that in the wake of the bank failures, underwriting standards had tightened, making it harder for some borrowers who might have been able to obtain loans prior to the bank failures to obtain them afterward. Several bank officials we interviewed also said that new lending for certain types of loans could be restricted in certain areas. For example, they noted that the CRE market, and in particular the ADC market, had contracted and that new lending in this area had declined significantly. Officials from regulators, banking associations, and banks we spoke with also said that involvement in local philanthropy declined as small banks approached failure but generally increased after acquisition. State banking regulators and national and state community banking associations we interviewed told us that community banks tended to be highly involved in local philanthropic activities before the recession—for example, by designating portions of their earnings for community development or other charitable activities. However, these philanthropic activities decreased as the banks approached failure and struggled to conserve capital. Acquiring bank officials we interviewed told us that they had generally increased philanthropic activities compared with the failed community banks during the economic downturn and in the months before failure. However, acquiring banks may or may not focus on the same philanthropic activities as the failed banks. 
For example, one large acquiring bank official told us that it made major charitable contributions to large national or statewide philanthropic organizations and causes and focused less on the local community charities to which the failed bank had contributed. Finally, we econometrically analyzed the relationships among bank failures, income, unemployment, and real estate prices for all states and the District of Columbia (states) for 1994 through 2011. Our analysis showed that bank failures in a state were more likely to affect its real estate sector than its labor market or broader economy. In particular, this analysis did not suggest that bank failures in a state—as measured by failed banks’ share of deposits—were associated with a decline in personal income in that state. To the extent that there is a relationship between the unemployment rate and bank failures, the unemployment rate appears to have more bearing on failed banks’ share of deposits than vice versa. In contrast, our analysis found that failed banks’ share of deposits and the house price index in a state appear to be significantly related to each other. Altogether, these results suggest that the impact of bank failures on a state’s economy is most likely to appear in the real estate sector and less likely to appear in the overall labor market or in the broader economy. However, we note that these results could be different at the city or county level. Chairman Capito, Ranking Member Meeks, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact Lawrance Evans, Jr. at (202) 512-4802 or [email protected]. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this report. 
GAO staff who made key contributions to this testimony include Karen Tremba, Assistant Director; William Cordrey, Assistant Director; Gary Chupka, Assistant Director; William Chatlos; Emily Chalmers; Robert Dacey; Rachel DeMarcus; M’Baye Diagne; Courtney LaFountain; Marc Molino; Patricia Moye; Lauren Nunnally; Angela Pun; Stefanie Jonkman; Akiko Ohnuma; Michael Osman; and Jay Thomas. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Between January 2008 and December 2011--a period of economic downturn in the United States--414 insured U.S. banks failed. Of these, 85 percent (353) had less than $1 billion in assets. These small banks often specialize in small business lending and are associated with local community development and philanthropy. These small bank failures have raised questions about the contributing factors, including the possible role of local market conditions and the application of fair value accounting under U.S. accounting standards. This statement is based on findings from GAO's 2013 report on recent bank failures (GAO-13-71). This testimony discusses (1) the factors that contributed to the bank failures in states with the most failed institutions between 2008 and 2011 and what role, if any, fair value accounting played in these failures; (2) the use of shared loss agreements in resolving troubled banks; and (3) the effect of recent bank failures on local communities. To do this work, GAO relied on its issued report GAO-13-71 and updated data where appropriate. GAO did not make recommendations in the report. Ten states concentrated in the western, midwestern, and southeastern United States--all areas where the housing market had experienced strong growth in the prior decade--experienced 10 or more commercial bank or thrift (bank) failures between 2008 and 2011. The failures of the smaller banks (those with less than $1 billion in assets) in these states were largely driven by credit losses on commercial real estate (CRE) loans. The failed banks also had often pursued aggressive growth strategies using nontraditional, riskier funding sources and exhibited weak underwriting and credit administration practices. 
Fair value accounting also has been cited as a potential contributor to bank failures, but between 2007 and 2011 fair value accounting losses in general did not appear to be a major contributor, as over two-thirds of small failed banks' assets were not subject to fair value accounting. During the course of our work, some state banking associations said that the magnitude of the credit losses was exacerbated by federal bank examiners' classification of collateral-dependent loans and evaluation of appraisals used by banks to support impairment analysis of these loans. Federal banking regulators noted that regulatory guidance on CRE workouts issued in October 2009 directed examiners not to require banks to write down loans to an amount less than the loan balance solely because the value of the underlying collateral had declined, and that examiners were generally not expected to challenge the appraisals obtained by banks unless they found that underlying facts or assumptions about the appraisals were inappropriate or could support alternative assumptions. The Federal Deposit Insurance Corporation (FDIC) used shared loss agreements to help resolve failed banks at the least cost during the recent financial crisis. Under a shared loss agreement, FDIC absorbs a portion of the loss on specified assets of a failed bank that are purchased by an acquiring bank. FDIC officials, state bank regulators, community banking associations, and acquiring banks of failed institutions GAO interviewed said that shared loss agreements helped to attract potential bidders for failed banks during the financial crisis. During 2008-2011, FDIC resolved 281 of 414 failures using shared loss agreements on assets purchased by the acquiring bank. As of December 31, 2011, Deposit Insurance Fund (DIF) receiverships are estimated to pay $42.8 billion over the duration of the shared loss agreements. 
The acquisitions of failed banks by healthy banks appear to have mitigated the potentially negative effects of bank failures on communities, although the focus of local lending and philanthropy may have shifted. For example, GAO's analysis found limited rural and metropolitan areas where failures resulted in significant increases in market concentration. GAO's econometric analysis of call report data from 2006 through 2011 found that failing small banks extended progressively less net credit as they approached failure, and that acquiring banks generally increased net credit after the acquisition. However, acquiring bank and existing peer bank officials GAO interviewed noted that in the wake of the bank failures, underwriting standards had tightened and thus credit was generally more available for small business owners who had good credit histories and strong financials than for those who did not. Moreover, the effects of bank failures could be significant for those limited areas that were serviced by one bank or where few banks remain.
The Coast Guard is a multimission, maritime military service within DHS. The Coast Guard’s responsibilities fall into two general categories—those related to homeland security missions, such as port security and vessel escorts, and those related to the Coast Guard’s traditional missions, such as search and rescue and polar ice operations. To carry out these responsibilities, the Coast Guard operates a number of vessels and aircraft and, through its Deepwater Program, is currently modernizing or replacing a number of those assets. Since 2001, we have reviewed the Deepwater Program and have informed Congress, DHS, and the Coast Guard of the risks and uncertainties inherent in the acquisition. In June 2008, we reported on our assessment of the preliminary steps the Coast Guard had taken to revise its acquisition approach. For example, we found that the Coast Guard had increased accountability by bringing Deepwater under a restructured acquisition function and investing its government project managers with management and oversight responsibilities formerly held by ICGS. In addition, the Coast Guard had begun to manage Deepwater under an asset-based approach, resulting in increased government control and visibility over acquisitions. We concluded that while these steps were beneficial, continued oversight and improvement were necessary to further mitigate risks and made several recommendations, which the Coast Guard and DHS have taken actions to address. At the start of the Deepwater Program in the late 1990s, the Coast Guard chose to use a system-of-systems acquisition strategy. A system-of-systems is defined as the set or arrangement of assets that results when independent assets are integrated into a larger system that delivers unique capabilities. As the systems integrator, ICGS was responsible for designing, constructing, deploying, supporting, and integrating the Deepwater assets into a system-of-systems. 
Under this approach, the Coast Guard provided the contractor with broad, overall performance specifications—such as the ability to interdict illegal immigrants—and ICGS determined the assets needed and their specifications. According to Coast Guard officials, the ICGS proposal was submitted and priced as a package; that is, the Coast Guard bought the entire solution and could not reject any individual component. In November 2006, the Coast Guard submitted a cost, schedule, and performance baseline to DHS that established the total acquisition cost of the ICGS solution at $24.2 billion and projected that the acquisition would be completed in 2027. In May 2007, shortly after the Coast Guard had announced its intention to take over the role of systems integrator, DHS approved the baseline. From fiscal year 2002 to fiscal year 2009, over $6 billion has been appropriated for the Deepwater Program, about 25 percent of the total anticipated costs of $24.2 billion. Figure 1 depicts a breakdown of how these appropriations have been allocated as of fiscal year 2009, including for integration and oversight functions; ongoing Deepwater assets; and assets that the Coast Guard has cancelled or restructured. Regarding the breakout of appropriations through fiscal year 2009, the $876 million appropriated for integration and oversight has been allocated for activities such as planning for Deepwater logistics, obsolescence prevention, government program management, and systems engineering and integration. Of the $288 million allocated for cancelled or restructured assets, the Coast Guard allocated about $134 million to ICGS for two projects that were subsequently cancelled: an estimated $95 million to extend the Coast Guard’s 110-foot patrol boats by an additional 13 feet (known as the 123-foot patrol boat conversions) and approximately $39 million for the initial design of the Fast Response Cutter (known as FRC-A). 
The Coast Guard terminated the design efforts for the FRC-A in February 2008. In addition, three projects received significant funding before being restructured or redesigned. The Coast Guard allocated approximately $119 million to ICGS for the Vertical Unmanned Aerial Vehicle before stopping work on the design in 2007 due to developmental and cost concerns. Over $27 million was allocated for the Offshore Patrol Cutter (OPC) before design work was stopped in 2006, and over $8 million was allocated for cutter small boats before a decision was made in 2008 to take a different acquisition approach for those assets. The Coast Guard is now considering alternative designs for all three of these assets. Table 1 describes in more detail the assets the Coast Guard is planning to procure or upgrade under the Deepwater Program according to approved acquisition baselines. The Coast Guard has assumed the role of systems integrator for Deepwater, concurrently downsizing the scope of systems engineering and integration work under contract with ICGS. In conjunction with its role as systems integrator, the Coast Guard has undertaken a fundamental reassessment of the capabilities and mix of assets it needs to meet its Deepwater missions. In addition, DHS and the Coast Guard have made improvements in oversight and management of Deepwater; for example, the Coast Guard has made progress in applying the Major Systems Acquisition Manual (MSAM) process to individual Deepwater assets and made improvements to the process as a whole. However, the Coast Guard did not meet its goal of having all assets fully compliant with the MSAM by the end of March 2009. Hence, acquisition decisions for certain assets are being made without having completed some key acquisition documentation in light of what the Coast Guard views as pressing operational needs. 
The role of systems integrator involves determining the mix of assets needed to fulfill mission needs, as well as designing, procuring, and integrating those assets into a system-of-systems capability greater than the sum of the individual parts. ICGS’s role as systems integrator for the Deepwater Program included managing requirements, determining how assets would be acquired, defining how assets would be employed by Coast Guard users in an operational setting, and exercising technical authority over all asset design and configuration. In 2008, the Coast Guard acknowledged that in order to assume the role of systems integrator, it needed to define systems integrator functions and assign them to Coast Guard stakeholders. As a result, the Coast Guard has established new relationships among its directorates to assume control of key systems integrator roles previously carried out by the contractor. Through a series of policy changes and memoranda, the Coast Guard formally designated certain directorates as technical authorities responsible for establishing, monitoring, and approving technical standards for Deepwater assets related to design, construction, maintenance, logistics, C4ISR, and life-cycle staffing and training. Furthermore, the Coast Guard’s capabilities directorate is now responsible for determining operational requirements and the asset mix to satisfy those requirements. This directorate is expected to collaborate with the technical authorities to ensure that the Coast Guard’s technical standards are incorporated during the requirements development process. Finally, the acquisition directorate’s program and project managers are responsible for procuring the assets and are to be held accountable for ensuring that they fulfill the operational requirements and the technical authority standards established by the other directorates. The collaborative relationships among the Coast Guard directorates discussed above are depicted in figure 2. 
When it contracted with ICGS in 2002, the Coast Guard lacked insight into how the contractor’s proposed solution for Deepwater would meet overall mission needs. This situation limited the Coast Guard’s ability to make informed decisions about possible trade-offs between cost and capability. As a way of improving its insight, the capabilities directorate has initiated a fundamental reassessment of the capabilities and mix of assets the Coast Guard needs to fulfill its Deepwater missions. The goals of this fleet mix analysis include validating mission performance requirements and revisiting the number and mix of all assets that are part of the Deepwater Program. A specific part of the study will also analyze alternatives and quantities for the OPC, which currently accounts for a projected $8 billion—about 33 percent—of total Deepwater costs. Coast Guard leadership intends to base future procurement decisions on the results of this analysis, which is expected to be completed in the summer of 2009. According to a senior official in the capabilities directorate, the directorate has recommended that this type of analysis be repeated every 4 years, or once during each commandant’s tenure. In conjunction with assuming the role of systems integrator, the Coast Guard has reduced the scope and volume of ICGS’s systems engineering and integration functions. For example, the most recent systems engineering and integration task order, issued to ICGS in March 2009, is limited to support services such as data management and quality assurance for the assets currently on contract with ICGS, such as the Maritime Patrol Aircraft (MPA), the National Security Cutter (NSC), and C4ISR. 
By contrast, under the prior systems engineering and integration task order, ICGS was responsible for systems integrator functions such as developing the mix of assets to meet Coast Guard missions, developing operational concepts, managing requirements, managing test and evaluation, and performing a number of other program management and system-of-systems-level functions. While the Coast Guard does not intend to cancel ongoing orders with ICGS for services or assets, it does not plan to acquire future assets from ICGS. A step in this direction was the September 2008 competitive award of the Fast Response Cutter to Bollinger Shipyards, Inc. Further, while ICGS will continue to be responsible for the construction and delivery of the first three NSCs, the Coast Guard intends to award contracts for construction and long-lead-time materials for future NSCs directly to ICGS subcontractor Northrop Grumman Shipbuilding. The Coast Guard’s decision was formalized in a March 2009 contract modification with ICGS stating that it will not award future work to ICGS after the current award term ends in January 2011. Table 2 shows that, as of May 2009, the Coast Guard had about $2.3 billion of ongoing work under contract with ICGS. The table does not include the total potential value of options and modifications that could be exercised before the current award term expires. Since our June 2008 report on the Deepwater Program, and taking into account our recommendations, the Coast Guard and DHS have taken steps to improve management and oversight of Deepwater. We reported, for example, that the Coast Guard had transitioned from a system-of-systems acquisition approach to an asset-based approach that reflects the disciplined and formalized process outlined in its MSAM. 
While the introduction of this process was a significant improvement, we found that the absence of a key milestone decision point before low-rate initial production begins was problematic and put program outcomes at risk. In response to our recommendation, the Coast Guard revised its MSAM to require a formal design review, termed “acquisition decision event 2B,” to ensure that risks are appropriately addressed before low-rate initial production is authorized. The MSAM phases and acquisition decision events are shown in figure 3. The Coast Guard has made other improvements to its MSAM process. For example, the MSAM now includes standardized cost-estimating procedures to provide an accounting of all resources required to develop, produce, deploy, and sustain a program. Before, there was minimal guidance in the manual about the cost-estimating process; it now includes a full description of the process and a cost-estimating template for project managers. The MSAM process was also revised to require acquisition planning and an early affordability assessment prior to acquisition decision event 1 (the “analyze/select” phase), to help inform the budget and planning processes. DHS has also improved its oversight and management of the Deepwater Program by reviewing the program under its own acquisition processes. In June 2008, we reported that DHS approval of Deepwater acquisition decisions at key points in the program was not required, as the department had deferred decisions on specific assets to the Coast Guard in 2003. We recommended that DHS rescind the delegation of Deepwater acquisition authority, and, in September 2008, the Under Secretary did so. As a result, DHS officials are now formally involved in reviewing and approving acquisition decisions for Deepwater assets at key points in the program’s life cycle. 
In November 2008, DHS issued a new interim management directive that, if implemented as intended, should help ensure that the department’s largest acquisitions, including Deepwater, are more effectively overseen and managed. Because the Coast Guard had previously exempted Deepwater from its MSAM process, assets were procured without following a disciplined program management approach. Recognizing the importance of ensuring that each acquisition project is managed through a sustainable and repeatable process and wanting to adhere to proven acquisition procedures, in July 2008 the Coast Guard set a goal of completing the MSAM acquisition management activities for all Deepwater assets by the end of March 2009. However, of the 13 Deepwater assets, 9 were behind schedule in terms of MSAM compliance as of May 2009, as not all required documents and processes had been completed. Not complying with the MSAM process puts the Coast Guard at risk of buying assets that do not fully meet its needs and that may experience cost growth and schedule slips. By contrast, assets that are early in the development cycle, such as the OPC and the Unmanned Aerial System, are at present compliant with the MSAM process. For example, the MSAM directs the capabilities directorate to charter an integrated product team to develop operational requirements for Coast Guard assets. This approach is currently being applied to the OPC, which is in the “analyze/select” phase of the MSAM process. In accordance with MSAM guidelines, the OPC requirements team includes representatives from the Coast Guard’s technical authorities, acquisition project managers, test and evaluation officials, and research and development officials. The goal of this process is to develop operational requirements that are specific, testable, prioritized, and defendable in order to adequately support the acquisition process and satisfy users’ needs. 
The Coast Guard plans to continue to follow the MSAM process, under which the operational requirements document and other key acquisition documents will be approved by the Coast Guard and DHS prior to the OPC entering the “obtain” phase, when capabilities are developed and demonstrated. For the Unmanned Aerial System, currently in the “need” phase of the acquisition process, the Coast Guard’s Office of Research, Development, Test and Evaluation is currently conducting preacquisition studies and tests to identify alternative approaches to fulfilling mission requirements for maritime surveillance and inform early cost estimates. Through these activities, the Coast Guard intends to mitigate risks by identifying approaches with high levels of technical and production maturity and leveraging development efforts underway by the Department of Defense and DHS. For assets well into production, such as the MPA and the NSC, the Coast Guard has made some progress in the past year in retroactively developing acquisition documentation with the intent of providing the traceability from mission needs to operational performance that was previously lacking. For example, the Coast Guard approved an operational requirements document for the MPA in October 2008, to establish a formal performance baseline and identify attributes for testing. Through this process, the Coast Guard discovered that ICGS’s requirement for operational availability (the amount of time that an aircraft is available to perform missions) was excessive compared to the Coast Guard’s own standards. According to a senior Coast Guard official responsible for managing aviation assets, the ICGS requirement would have needlessly increased costs to maintain and operate the aircraft. In addition to revisiting its requirements for the MPA, the Coast Guard is also revising its plans to test and procure the asset. 
In February 2009, the Coast Guard submitted an MPA test plan to DHS with the intent of obtaining approval for full-rate production based on the results of a November 2008 operational assessment conducted by the U.S. Navy’s Commander Operational Test and Evaluation Force (COMOPTEVFOR). In April 2009, the DHS Director, Operational Test and Evaluation, approved the plan for testing leading up to initial operational test and evaluation, but required the Coast Guard to update and resubmit the plan before operational testing begins. DHS and Coast Guard policy require operational testing to be conducted before full-rate production is approved. According to the senior official responsible for managing aviation assets, the Coast Guard now plans to obtain DHS approval to order further low-rate initial production aircraft at the next MPA acquisition decision event, scheduled for the end of fiscal year 2009. With 11 of 36 MPAs already delivered or on contract, the Coast Guard has made a significant investment in this program before completing the testing that would demonstrate that what it is buying meets its needs. DHS also required the Coast Guard to obtain concurrence with the test plan from an operational test authority before proceeding with operational testing of the MPA. According to DHS and Coast Guard policy, operational testing should be conducted with the approval and under the oversight of an independent operational test authority to ensure that tests are clearly linked to requirements and mission needs. However, the MSAM appears to be inconsistent with DHS policy regarding who this test authority should be. The DHS Acquisition Guidebook states that an operational test authority should be independent of both the acquirer and user, which allows the test authority to present objective and unbiased conclusions about an asset’s operational effectiveness and suitability. 
Further, a DHS directive on test and evaluation issued in May 2009 distinguishes between the “sponsor” (or user of the system), who is responsible for defining the system’s operational requirements, and the operational test agent, who plans, conducts, and reports independent operational test and evaluation results. The MSAM, on the other hand, assigns responsibility for planning and conducting operational testing to the sponsor—the Coast Guard’s capabilities directorate—which represents the end user. While the Coast Guard has a memorandum of agreement with COMOPTEVFOR to leverage the Navy’s experience and expertise in conducting operational testing for the MPA, the Coast Guard’s position is that its capabilities directorate can function as the operational test authority, as it is independent of the acquisition program office. The Director, DHS Test & Evaluation and Standards, said that, particularly given the recent change to the department’s test and evaluation directive, the MSAM does not appear to be consistent with DHS policy regarding the operational test authority. The Coast Guard has also made a significant investment in the NSC program before completing operational testing to demonstrate that the capabilities it is buying meet Coast Guard needs. While some testing of the NSC has already taken place, the tests conducted to date do not substitute for the complete scope of operational testing that should be the basis for further investment. For example, COMOPTEVFOR completed an operational assessment of the NSC in 2007 to identify risks to the program’s successful completion of operational testing. Before the first NSC was delivered, it also underwent acceptance trials, conducted by the U.S. Navy Board of Inspection and Survey, to determine compliance with contract requirements and to test system capabilities. Since delivery of the first NSC, the Coast Guard has also conducted flight deck and combat system certifications with the assistance of the Navy. 
While these demonstrations and certifications provide evidence that the first NSC functions as intended, they do not fully demonstrate the suitability and effectiveness of the ship for Coast Guard operations. According to officials, a test plan to demonstrate these capabilities is expected to be approved in July 2009, and COMOPTEVFOR may begin operational testing in March 2010. However, by the time full operational testing is scheduled to be completed in 2011, the Coast Guard plans to have six of eight NSCs either built or under contract. Based on its determination that the need for the capabilities to be provided by the Fast Response Cutter and C4ISR is pressing, the Coast Guard has contracted for these capabilities without having in place all acquisition documentation required by the MSAM. This situation puts the Coast Guard at risk of cost overruns and schedule slips if it turns out that what it is buying does not meet its requirements. For example, in September 2008, after conducting a full and open competition, the Coast Guard awarded an $88.2 million contract to Bollinger Shipyards, Inc. for the design and construction of a lead Fast Response Cutter. Prior to the award, however, the Coast Guard did not have an approved operational requirements document or test plan for this asset as required by the MSAM process. Recognizing the risks inherent in this approach, the Coast Guard developed a basic requirements document and an acquisition strategy based on procuring a proven design. These documents were reviewed and approved by the Coast Guard’s capabilities directorate, the engineering and logistics directorate, and the chief of staff before the procurement began. The Coast Guard’s next acquisition decision event is scheduled for the first quarter of fiscal year 2010 to obtain DHS approval for low-rate initial production. 
According to officials, the Coast Guard intends to submit an operational requirements document and test plan to DHS for this acquisition decision event. With plans to exercise contract options for hulls 2 through 8 in fiscal year 2010, the Coast Guard’s aggressive schedule leaves little room for unforeseen problems. Program risks are compounded by the fact that the Coast Guard plans to have at least 12 cutters either delivered or under contract prior to the scheduled completion of operational testing in fiscal year 2012, before it has certainty that what it is buying meets Coast Guard needs. The Coast Guard has also continued its procurement of C4ISR capabilities without an approved operational requirements document as required by the MSAM. C4ISR encompasses the connections between surface, aircraft, and shore-based assets and is intended to provide operationally relevant information to Coast Guard field commanders. Design and development costs for the first increment of C4ISR have increased significantly, from $55.5 million to $141.3 million. According to Coast Guard officials, this increase was due in part to the structure of the ICGS contract, under which the Coast Guard lacked visibility into the contractor’s software development processes and requirements. In addition, the ICGS C4ISR solution developed under the first increment contained Lockheed Martin-proprietary software, making the Coast Guard reliant on the contractor for maintenance and support. In February 2009, the Coast Guard issued a task order to ICGS, with a total potential value of $77.7 million, for a second increment of C4ISR design and development. It was not until May 2009, however, that the capabilities directorate reviewed and concurred with the capabilities identified in the acquisition plan. 
Coast Guard officials stated that the Coast Guard’s technical authority for C4ISR reviewed the acquisition plan and statement of work to ensure conformance with Coast Guard technical standards, but the officials said there is no operational requirements document for this increment. The lack of operational requirements may put the program at continued risk of cost increases if the Coast Guard determines that what it is buying does not meet its needs. Through the award of the second C4ISR increment, the Coast Guard has acquired some of the data rights to the proprietary software developed under the first increment. The Coast Guard’s goal is to gain greater visibility into the software in order to compete future increments. According to officials, future decisions about the C4ISR acquisition rest on the Coast Guard’s ability to affordably maintain and support the C4ISR software and ensure interoperability between Deepwater assets and the Coast Guard as a whole; however, the Coast Guard has not yet determined how it will do so. According to officials, acquisition of the third C4ISR increment will adhere to the MSAM process, and documents critical to determining and testing requirements and capabilities will be completed and approved by DHS before the Coast Guard proceeds with a contract award in about 2 years. Due in part to the Coast Guard’s increased insight into what it is buying, the anticipated cost, schedules, and capabilities of many of the Deepwater assets have changed since the establishment of the $24.2 billion baseline in 2007. Coast Guard officials have stated that this baseline reflected not a traditional cost estimate, but rather the anticipated contract costs as determined by ICGS. As the Coast Guard has developed its own cost baselines, it has become apparent that some of the assets will likely cost more than anticipated. Information to date shows that the total cost of the program will likely grow by at least $2.7 billion. 
This represents growth of approximately 39 percent for those assets with revised cost estimates. Furthermore, assets may be ready for operational use later than anticipated in the 2007 baseline and, at least initially, lack some of the capabilities envisioned. As the Coast Guard develops more baselines, further cost and schedule growth is likely to become apparent. While the Coast Guard plans to update its annual budget requests with this new information, the current structure of its budget submission to Congress does not include details at the asset level, such as estimates of total costs and total numbers to be procured. The $24.2 billion baseline for the Deepwater Program established cost, schedule, and operational requirements for the Deepwater system as a whole; these were then allocated to the major assets. Coast Guard officials have stated that this baseline reflected not a traditional cost estimate but ICGS’s anticipated contract costs. Furthermore, the Coast Guard lacked insight into how ICGS arrived at some of the costs for Deepwater assets. As the Coast Guard has assumed greater responsibility for management of the Deepwater Program, it has begun to improve its understanding of costs by establishing new baselines for individual assets based on its own cost estimates. These baselines begin at the asset level and are developed by Coast Guard project managers, validated by a separate office conducting independent cost estimates within the acquisition branch and, in most cases, are reviewed and approved by DHS. The estimates use common cost-estimating procedures and assumptions and account for costs not previously captured. As of June 2009, the Coast Guard had prepared 10 revised asset baselines. Two were approved by the Coast Guard (for the sustainment projects for the medium endurance cutter and the patrol boats) and 8 had been submitted to DHS, which had approved 5 of them. 
These new baselines are formulated using various sources of information, depending on the acquisition phase of the asset. For example, the baseline for the NSC was updated using the actual costs of material, labor, and other considerations already in effect at the shipyards. The baselines for other assets, like the MPA, were updated using independent cost estimates. As the Coast Guard approaches major milestones on Deepwater assets, such as the decision to enter low-rate initial production or to begin system development, officials have stated that the cost estimates for all assets will be reassessed and revalidated. In developing its own asset baselines, the Coast Guard has found that some of the assets will likely cost more than anticipated. As of June 2009, with 7 of the 10 baselines approved, the total cost of the program will likely exceed $24.2 billion, with potential cost growth of approximately $2.7 billion. For the assets with revised cost estimates, this represents cost growth of approximately 39 percent. As baselines for the additional assets are approved, further cost growth will likely become apparent. Table 3 provides the revised estimates of asset costs available as of June 2009. It does not reflect the roughly $3.6 billion in other Deepwater costs, such as program management, that the Coast Guard states do not require a new baseline. The Coast Guard’s new baselines provide not only a better understanding of the costs of Deepwater assets, but also insight into the drivers of any cost growth. For example, the new NSC baseline attributes a $1.3 billion rise in cost to a range of factors, from the additional costs to correct fatigue issues on the first three cutters—estimated by the Coast Guard at $86 million—to changes in economic factors such as labor and commodity prices that add $434 million to the cost of the first four ships. 
The $517 million rise in cost for the MPA is attributed primarily to items that were not previously accounted for, including $36 million for a training simulator, $30.6 million in facility improvements, and $124 million for sufficient spare parts. An additional $115.9 million is attributable to cost growth for the aircraft and engineering changes. The Coast Guard has structured some of the new baselines to indicate how cost growth could be controlled by making trade-offs in asset quantities and/or capabilities. For example, the new MPA baseline includes cost increments that show the acquisition may be able to remain within the $1.7 billion estimate established in the 2007 baseline if 8 fewer aircraft than the planned 36 are acquired. Coast Guard officials have stated that other baselines currently under review by DHS present similar cost increments. This information, if combined with data from the fleet mix study to show the effect of quantity or capability reductions on the system-of-systems as a whole, offers the Coast Guard an opportunity for serious discussion of cost and capability trade-offs. Given the approximately 39 percent cost growth for the Deepwater assets that have revised cost estimates, the trade-off assessment is critical—particularly with regard to the OPC, which currently represents a substantial portion of the planned Deepwater investment. The Coast Guard’s reevaluation of baselines has also improved insight into the schedules for when assets will first be available for operations and when final assets will be delivered. For example, the initial operating capability of the first NSC has been delayed by a year as compared to the schedule in the 2007 baseline, and the MPA has been delayed by 21 months. Table 4 provides more information on initial operational capability and final asset delivery schedules for Deepwater assets that have had revised baselines approved. Baselines for other assets are either awaiting DHS approval or still in development. 
Since many Deepwater assets are intended to replace older Coast Guard assets, delays in their introduction and final deliveries could have an effect beyond the Deepwater Program. For example, the NSC—together with the OPC—is intended to replace older High Endurance and Medium Endurance Cutters, some of which have been in service for over 40 years. According to Coast Guard officials, the longer these older cutters remain in service—due to a delay in the introduction of the NSC or the OPC to the fleet or delays in delivering all of the assets—the more funding will be required for maintenance of assets that are being replaced. According to a senior official in the Coast Guard’s acquisition directorate, additional, unplanned funding will be required for a sustainment project to keep the High Endurance Cutters in service longer than anticipated. An acquisition strategy to achieve this project is currently in development. The Coast Guard’s reevaluation of baselines has also changed its understanding of the capabilities of Deepwater assets. For example, Coast Guard officials stated that the restructuring of the unmanned aircraft and small boat projects has delayed the deployment of these assets with the first NSC and reduced the ship’s anticipated capabilities in the near term. We plan to report this summer on the operational effect of these delays on the NSC. The Coast Guard’s budget submission, as currently structured, limits Congress’s understanding of details at the asset level insofar as it does not include key information such as assets’ total acquisition costs or, for the majority of assets, the total quantities planned. 
For example, while the justification of the NSC request includes a detailed description of expected capabilities and how these capabilities link to the Coast Guard’s missions and activities funded by past appropriations, it does not include estimates of total program cost, future award or delivery dates of remaining assets, or even the total number of assets to be procured. Our past work has emphasized that one key to a successful capital acquisition, such as the multibillion-dollar ships and aircraft the Coast Guard is procuring, is budget submissions that clearly communicate needs. An important part of this communication is to provide decision makers with information about cost estimates, risks, and the scope of a planned project before substantial resources are committed. Good budgeting also requires that the full costs of a project be considered upfront when decisions are made. Other federal agencies that acquire systems similar to those of the Coast Guard, such as the Department of Defense, capture these elements in justifications of their budget requests. To illustrate, table 5 provides a comparison of the information found in the NSC budget justification with the information the Navy is required to provide under Department of Defense regulations for its shipbuilding programs. While the Coast Guard’s asset-level Quarterly Acquisition Reports to Congress and the annual Deepwater Program Expenditure Report include some information on total costs and quantities, these documents are provided only to the appropriations committees, and they contain selected information that is restricted as acquisition sensitive. The budget justification prepared by the Coast Guard is a tool that Congress uses in its budget and appropriations deliberations. Presentation of information on the full costs and quantities of Deepwater assets in the Coast Guard’s budget submission can give Congress greater insight as it fulfills its roles of providing funding and conducting oversight. 
The Coast Guard sought a systems integrator at the outset of the Deepwater Program in part because its workforce lacked the experience and depth to manage the acquisition internally. The Coast Guard acknowledges that it still faces challenges in hiring and retaining qualified acquisition personnel and that this situation poses a risk to the successful execution of its acquisition programs. According to human capital officials in the acquisition directorate, as of April 2009 the acquisition branch had funding for 855 military and civilian personnel and had filled 717 of these positions—leaving 16 percent unfilled. The Coast Guard has identified some of these unfilled positions as core to the acquisition workforce, such as contracting officers and specialists, program management support staff, and engineering and technical specialists. Even as it attempts to fill its current vacancies, the Coast Guard plans to increase the size of its acquisition workforce significantly by the end of fiscal year 2011. For example, the Coast Guard’s fiscal year 2010 budget request includes funding for 100 new acquisition workforce positions, and the Coast Guard anticipates requesting funding for additional positions in future budget requests. To supplement and enhance its internal expertise, the Coast Guard has increased its use of third-party, independent experts from outside both the Coast Guard and existing Deepwater contractors. For example, a number of organizations within the Navy have provided views and expertise on a wide range of issues, including testing and safety. In addition, the Coast Guard plans to use the American Bureau of Shipping, an organization that establishes and applies standards for the design and construction of ships and other marine equipment, as an advisor and independent reviewer on the design and construction of the Fast Response Cutter. 
The Coast Guard has also begun a relationship with a university-affiliated research center to supplement its expertise as it executes its fleet-mix analysis. In addition to third-party experts, the Coast Guard has been increasing its use of support contractors. As of fiscal year 2009, approximately 170 contractor employees supported the acquisition directorate, a number that has steadily increased in recent years. These contractors are performing a variety of services—some of which support functions the Coast Guard has identified as core to the government acquisition workforce—including project management support, engineering, contract administration, and business analysis and management. While support contractors can provide a variety of essential services, their use must be carefully overseen to ensure that they do not perform inherently governmental roles. The Coast Guard, acknowledging this risk, is monitoring its use of support contractors to properly identify the functions they perform and has developed a policy to define what is and what is not inherently governmental. While the Coast Guard may be hard-pressed to fill the government acquisition positions it has identified both now and in the future, it has made progress in identifying the broader challenges it faces and is working to mitigate them. The Coast Guard has updated two documents key to this effort, the Blueprint for Acquisition Reform, now in its third iteration, and the Acquisition Human Capital Strategic Plan, which is in its second iteration. Each document identifies challenges the Coast Guard faces in developing and managing its acquisition workforce and outlines initiatives and policies to meet these challenges. For example, the Acquisition Human Capital Strategic Plan sets forth three overall challenges and outlines over a dozen strategies for addressing them in building and maintaining an acquisition workforce. 
The discussion of strategies includes status indicators and milestones for monitoring progress, as well as supporting actions such as forming partnerships with the Defense Acquisition University and continually monitoring turnover in critical occupations. The Blueprint for Acquisition Reform supports many of these initiatives and provides deadlines for their completion. In fact, the Coast Guard has already completed a number of initiatives, including achieving and maintaining Level III program manager certifications, adopting a model to assess future workforce needs, incorporating requests for additional staff into the budget cycle, initiating tracking of workforce trends and metrics, expanding use of merit-based rewards and recognitions, and initiating training on interactions and relationships with contractors. In assuming the role of systems integrator, the Coast Guard has made a major change in its management of the Deepwater Program, one that has increased its insight into the capabilities needed to fulfill Coast Guard missions, the costs and capabilities of what it is currently procuring, and what resources are needed to complete the acquisition. The continued application and improvement of the disciplined management processes inherent in the MSAM are also beneficial in helping to ensure that Deepwater assets are designed and delivered to meet mission needs. While these changes, as well as the additional oversight gained by DHS’s participation in acquisition decisions, do not eliminate the risks associated with this multibillion-dollar acquisition, they do help ensure that program risks are more fully considered. However, the Coast Guard has not applied the disciplined acquisition process to the Fast Response Cutter and the second increment of C4ISR, recent contract actions that will involve additional investments of taxpayer dollars over time. 
Further, as operational testing proceeds for Deepwater assets, the MSAM appears to be inconsistent with DHS policy and the recent directive on test and evaluation, which require operational test authorities to be independent of the system’s user. Finally, in light of the sheer size and scope of the Deepwater Program and Congress’s role in providing funds, the Coast Guard’s budget submissions do not provide a complete picture of the planned costs of Deepwater assets that would help inform the decision-making process. We recommend that the Commandant of the Coast Guard take the following three actions:
o Do not exercise further options under the Fast Response Cutter contract and under the task order for the second increment of C4ISR until these projects are brought into full compliance with the MSAM and DHS acquisition directives.
o Consult with the DHS Office of Test & Evaluation and Standards to determine whether the MSAM conflicts with DHS’s directive regarding the entity named as the independent operational test authority and, if so, take steps to reconcile the inconsistency.
o As the Coast Guard prepares future budget submissions for Deepwater, include the total acquisition costs for the assets and total quantities planned.
In written comments on a draft of this report, the Coast Guard concurred with our findings. The agency also stated that it concurred with our recommendation to not exercise further contract options on the Fast Response Cutter and the second increment of C4ISR until these projects are brought into full compliance with the MSAM and DHS acquisition directives, as well as our recommendation to consult with DHS on policies regarding the independent operational test authority. DHS intends to take our final recommendation, to provide total acquisition costs and quantities in future budget submissions, under advisement. 
DHS noted that the proposed changes could result in the Coast Guard failing to comply with DHS budget submission guidelines and that Congress currently receives long-term acquisition project information through the Quarterly Acquisition Report to Congress. While we agree that this report includes some information on total costs and quantities, as we state in the report, it is provided only to the appropriations committees and contains restricted information, which limits its distribution, and therefore its utility, to decision makers. Presentation of information on the full costs and quantities of Deepwater assets in the Coast Guard’s budget submission would provide the information to a wider audience and better assist Congress in providing funding and conducting oversight. The comments from DHS are included in their entirety in appendix II. Technical comments were also provided and incorporated into the report as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Homeland Security, and the Commandant of the Coast Guard. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix III. Overall, in conducting this review, we relied in part on the information and analysis in our April 2009 testimony, Update on Deepwater Program Management, Cost, and Acquisition Workforce, and our June 2008 report, Change in Course Improves Deepwater Management and Oversight, but Outcome Still Uncertain. Additional scope and methodology information on each objective of this report follows. 
To assess the Coast Guard’s efforts to manage the Deepwater Program at the overall system-of-systems level, we reviewed the Coast Guard’s July 2008 Blueprint for Acquisition Reform, work group charters, and plans and actions the Coast Guard has taken to assume the role of systems integrator. To understand how the Coast Guard defined and assigned systems integrator roles and responsibilities, we reviewed the Coast Guard’s Major Systems Acquisition Manual (MSAM) and technical authority instructions. We also interviewed senior acquisition directorate officials, representatives of the Coast Guard’s capabilities directorate, and representatives of the Coast Guard’s technical authorities. To analyze the scope and volume of work currently under contract with Integrated Coast Guard Systems (ICGS) and the Coast Guard’s plans to end its contractual relationship with ICGS, we reviewed task orders, contract statements of work, and acquisition plans and interviewed senior acquisition directorate officials and contracting officials. To assess the Coast Guard’s implementation of a disciplined project management process for Deepwater acquisitions, we reviewed the most recent update to the Coast Guard’s MSAM and the Department of Homeland Security’s (DHS) November 2008 Interim Acquisition Directive 102-01, as well as how individual assets were complying with both sets of guidance. We compared these policies with best practices reflected in previous GAO work on major acquisitions. We also interviewed acquisition directorate officials and program and project managers to discuss ongoing efforts to transition the acquisition of Deepwater assets to the MSAM process and spoke with DHS officials about the department’s major acquisition review process and reporting requirements. We also interviewed Coast Guard officials and analyzed documentation for the fleet-mix analysis currently being conducted by the capabilities directorate. 
We conducted case studies of selected assets, representing some that are in production as well as some with recent contract awards. This analysis included reviews of acquisition program baselines, operational requirements documents, test plans, and other key acquisition documentation and interviews with program and project managers and independent test authority officials. In addition, we met with contractor and Coast Guard officials at Lockheed Martin’s facilities in Moorestown, New Jersey, and ICGS’s offices in Arlington, Virginia, to discuss the transition of systems integrator functions and current work on C4ISR capabilities. We also met with Coast Guard officials at the Aviation Logistics Center in Elizabeth City, North Carolina, to discuss their role in upgrading and maintaining Deepwater assets, and with the U.S. Navy’s Commander Operational Test and Evaluation Force in Norfolk, Virginia, to discuss their role in conducting operational testing. Finally, we met with Coast Guard officials and toured facilities and ships, including the National Security Cutter Bertholf in Alameda, California. To assess how costs, schedules, and capabilities have changed from the 2007 Deepwater Acquisition Program Baseline approved by DHS, we reviewed that baseline and compared it to the revised baselines for individual assets that have been approved to date. We also interviewed senior acquisition directorate officials and program and project managers to discuss how the Coast Guard is developing new acquisition program baselines for individual assets and how the process used differs from that in the 2007 baseline, such as the basis for cost estimates. We reviewed the Coast Guard’s guidance and policy on cost estimating in the MSAM and compared it to GAO best practices, including our Cost Assessment Guide: Best Practices for Estimating and Managing Program Costs. 
We also reviewed operational requirements documents and project reports for selected assets in various stages of the development and production processes to understand the major drivers of cost growth, schedule delays, and capability changes. We interviewed acquisition directorate officials and program and project managers to discuss options for controlling cost growth by making trade-offs in asset quantities and/or capabilities, as well as some of the potential implications of unplanned schedule delays. To assess how well costs are communicated to Congress, we reviewed the Office of Management and Budget’s guidance on budget justifications, the Coast Guard’s 2009 and 2010 budget justifications, the Coast Guard’s 2008 Deepwater Expenditure Plan, and the Coast Guard’s Quarterly Acquisition Reports to Congress. We compared the Coast Guard’s budget submissions to those prepared by the Navy. To assess the Coast Guard’s efforts to manage and build its acquisition workforce, we reviewed Coast Guard organization charts for aviation, surface, and C4ISR components showing government, contractor, and vacant positions. We supplemented this analysis with interviews of acquisition directorate officials, including contracting and Office of Acquisition Workforce Management officials and program and project managers, to discuss current vacancy rates—especially for key acquisition positions such as contracting officials and systems engineers—and the Coast Guard’s plans to increase the size of the acquisition workforce. We also reviewed documentation and interviewed senior acquisition directorate officials about the Coast Guard’s use of third parties and independent experts outside of the Coast Guard, such as the U.S. Navy and the American Bureau of Shipping, as well as its increased use of support contractors and oversight to prevent contractors from performing inherently governmental functions. 
We reviewed documentation such as the July 2008 Blueprint for Acquisition Reform and the updated Acquisition Human Capital Strategic Plan and discussed workforce initiatives, challenges, and obstacles to building an acquisition workforce, including recruitment and difficulty in filling key positions. We conducted this performance audit from September 2008 to July 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. For further information about this report, please contact John P. Hutton, Director, Acquisition and Sourcing Management, at (202) 512-4841 or [email protected]. Other individuals making key contributions to this report include Michele Mackin, Assistant Director; Greg Campbell; Carolynn Cavanaugh; J. Kristopher Keener; Angie Nichols-Friedman; and Sylvia Schatz. Coast Guard: Update on Deepwater Program Management, Cost, and Acquisition Workforce. GAO-09-620T. Washington, D.C.: April 22, 2009. Coast Guard: Change in Course Improves Deepwater Management and Oversight, but Outcome Still Uncertain. GAO-08-745. Washington, D.C.: June 24, 2008. Coast Guard: Observations on Changes to Management and Oversight of the Deepwater Program. GAO-09-462T. Washington, D.C.: March 24, 2009. Status of Selected Assets of the Coast Guard’s Deepwater Program. GAO-08-270R. Washington, D.C.: March 11, 2008. Coast Guard: Deepwater Program Management Initiatives and Key Homeland Security Missions. GAO-08-531T. Washington, D.C.: March 5, 2008. Coast Guard: Status of Efforts to Improve Deepwater Program Management and Address Operational Challenges. GAO-07-575T. Washington, D.C.: March 8, 2007. 
Coast Guard: Status of Deepwater Fast Response Cutter Design Efforts. GAO-06-764. Washington, D.C.: June 23, 2006. Coast Guard: Changes to Deepwater Plan Appear Sound, and Program Management Has Improved, but Continued Monitoring Is Warranted. GAO-06-546. Washington, D.C.: April 28, 2006. Coast Guard: Progress Being Made on Addressing Deepwater Legacy Asset Condition Issues and Program Management, but Acquisition Challenges Remain. GAO-05-757. Washington, D.C.: July 22, 2005. Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges. GAO-05-651T. Washington, D.C.: June 21, 2005. Coast Guard: Deepwater Program Acquisition Schedule Update Needed. GAO-04-695. Washington, D.C.: June 14, 2004. Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight. GAO-04-380. Washington, D.C.: March 9, 2004. Coast Guard: Actions Needed to Mitigate Deepwater Project Risks. GAO-01-659T. Washington, D.C.: May 3, 2001.
The Deepwater Program includes efforts to build or modernize ships and aircraft and to procure other capabilities. In 2002, the Coast Guard contracted with Integrated Coast Guard Systems (ICGS) to manage the acquisition as systems integrator. After a series of project failures, the Coast Guard announced in April 2007 that it would take over the lead role, with future work on individual assets to be bid competitively; a program baseline of $24.2 billion was set. In June 2008, GAO reported on the Coast Guard's progress and made several recommendations, which the Coast Guard and the Department of Homeland Security (DHS) have addressed. In response to a Senate report accompanying the DHS Appropriations Bill, 2009, GAO addressed (1) efforts to manage Deepwater, (2) changes in cost and schedule of the assets, and (3) efforts to build an acquisition workforce. GAO reviewed Coast Guard and DHS documents and interviewed officials. The Coast Guard has assumed the role of systems integrator for the overall Deepwater Program by reducing the scope of the work on contract with ICGS and assigning these functions to Coast Guard stakeholders. As part of its systems integration responsibilities, the Coast Guard has undertaken a fundamental reassessment of the capabilities, number, and mix of assets it needs and expects to complete this analysis by the summer of 2009. At the individual Deepwater asset level, the Coast Guard has improved and begun to apply the disciplined management process contained in its Major Systems Acquisition Manual (MSAM), but it did not meet its goal of complete adherence to this process for all Deepwater assets by the end of March 2009. For example, key acquisition management documents--such as operational requirements documents and test plans--are not in place for assets with contracts or orders recently awarded (such as the Fast Response Cutter and C4ISR) or in production, placing the Coast Guard at risk of cost growth or schedule slips. 
In addition, the MSAM does not appear to be consistent with recent DHS policy that requires entities responsible for operational testing to be independent of the system's users. Due in part to the Coast Guard's increased insight into what it is buying, the anticipated costs, schedules, and capabilities of many Deepwater assets have changed since the $24.2 billion baseline was established in 2007. Coast Guard officials have stated that this baseline reflected not a traditional cost estimate, but rather the anticipated contract costs as determined by ICGS. As the Coast Guard has developed its own cost baselines for some assets, it has become apparent that some of the assets it is procuring will likely cost more than anticipated--up to $2.7 billion more based on information to date. This represents approximately 39 percent cost growth for the assets with revised cost estimates. As more cost baselines are developed and approved, further cost growth is likely. Updated baselines also indicate that schedules have slipped for several of the assets. In addition, the current structure of the Coast Guard's budget submission to Congress does not include details at the asset level, such as estimates of total costs and total numbers to be procured, as do the submissions of the Department of Defense, which acquires similar systems. One reason the Coast Guard hired a contractor as systems integrator was that it recognized it lacked the experience and depth of workforce to manage the acquisition internally. The Coast Guard acknowledges that it still faces challenges in hiring and retaining qualified acquisition personnel and that this situation poses a risk to the successful execution of its acquisition programs. According to human capital officials in the acquisition directorate, as of April 2009, the acquisition branch had 16 percent of positions unfilled, including key jobs such as contracting officers and systems engineers. 
Even as it attempts to fill its current vacancies, the Coast Guard plans to increase the size of its acquisition workforce significantly; the fiscal year 2010 budget request includes funding for 100 new acquisition workforce positions. In the meantime, the Coast Guard has been increasing its use of support contractors.
USPS faces a dire financial situation and does not have sufficient revenues to cover its expenses, putting its mission of providing prompt, reliable, and efficient universal services to the public at risk. USPS continues to incur operating deficits that are unsustainable, has not made required payments of $11.1 billion to prefund retiree health benefit liabilities, and has reached its $15 billion borrowing limit. Moreover, USPS lacks liquidity to maintain its financial solvency or finance needed capital investment. As presented in table 1, since fiscal year 2006, USPS has achieved about $15 billion in savings and reduced its workforce by about 168,000, while also experiencing a 25 percent decline in total mail volume and net losses totaling $40 billion. As a result of significant declines in volume and revenue, USPS reported that it took unprecedented actions to reduce its costs by $6.1 billion in fiscal year 2009. Also in 2009, a cash shortfall necessitated congressional action to reduce USPS’s mandated payment to prefund retiree health benefits from $5.4 billion to $1.4 billion. In 2011, USPS’s $5.5 billion required retiree health benefit payment was delayed until August 1, 2012. USPS missed that payment as well as the $5.6 billion payment that was due by September 30, 2012. USPS continues to face significant decreases in mail volume and revenues as online communication and e-commerce expand. While First-Class Mail and Standard Mail remain among USPS’s most profitable products, both volumes have declined in recent years, as illustrated in figure 1. First-Class Mail—which is highly profitable and generates the majority of the revenues used to cover overhead costs—declined 33 percent since it peaked in fiscal year 2001, and USPS projects a continued decline through fiscal year 2020. Standard Mail (primarily advertising) has declined 23 percent since it peaked in fiscal year 2007, and USPS projects that it will remain roughly flat through fiscal year 2020. 
Standard Mail is profitable overall, but it takes about three pieces of Standard Mail, on average, to equal the profit from the average piece of First-Class Mail. First-Class Mail and Standard Mail also face competition from electronic alternatives, as many businesses and consumers have moved to electronic payments over the past decade in lieu of using the mail to pay bills. For the first time, in 2010, fewer than 50 percent of all bills were paid by mail. In addition to lost mail volume and revenue, USPS also has incurred financial liabilities, totaling $96 billion at the end of fiscal year 2012, that include unfunded pension and retiree health benefit liabilities. Table 2 shows the amounts of these liabilities over the last 6 fiscal years. One of these liabilities, USPS’s debt to the U.S. Treasury, increased over this period from $4 billion to its statutory limit of $15 billion. Thus, USPS can no longer borrow to maintain its financial solvency or finance needed capital investment. USPS continues to incur unsustainable operating deficits; in this regard, the USPS Board of Governors recently directed postal management to accelerate restructuring efforts to achieve greater savings. These selected USPS liabilities increased from 83 percent of revenues in fiscal year 2007 to 147 percent of revenues in fiscal year 2012, as illustrated in figure 2. This trend demonstrates how USPS liabilities have become a large and growing financial burden. USPS’s dire financial condition makes paying for these liabilities highly challenging. In addition to reaching its limit in borrowing authority in fiscal year 2012, USPS did not make required prefunding payments of $11.1 billion for fiscal year 2011 and 2012 retiree health benefits. At the end of fiscal year 2012, USPS had $48 billion in unfunded retiree health benefit liabilities. Looking forward, USPS has warned that it suffers from a severe lack of liquidity. 
As USPS has reported: “Even with some regulatory and legislative changes, our ability to generate sufficient cash flows from current and future management actions to increase efficiency, reduce costs, and generate revenue may not be sufficient to meet all of our financial obligations.” For this reason, USPS has stated that it continues to lack the financial resources to make its annual retiree health benefit prefunding payment. USPS has also reported that in the short term, should circumstances leave it with insufficient liquidity, it may need to prioritize payments to its employees and suppliers ahead of those to the federal government. For example, near the end of fiscal year 2011, in order to maintain its liquidity, USPS temporarily halted its regular contributions for the Federal Employees Retirement System (FERS) that are supposed to cover the cost of benefits being earned by current employees. However, USPS has since made up those missed FERS payments. USPS’s statements about its liquidity raise the issue of whether USPS will need additional financial help to remain solvent while it restructures and, more fundamentally, whether it can remain financially self-sustainable in the long term. USPS has also raised the concern that its ability to negotiate labor contracts is essential to maintaining financial stability and that failure to do so could have significant adverse consequences on its ability to meet its financial obligations. Most USPS employees are covered by collective bargaining agreements with four major labor unions, which have established salary increases, cost-of-living adjustments, and the share of health insurance premiums paid by employees and USPS. When USPS and its unions are unable to agree, binding arbitration by a third-party panel is used to establish agreement. There is no statutory requirement for USPS’s financial condition to be considered in arbitration. 
In 2010, we reported that the time has come to reexamine USPS’s 40-year-old structure for collective bargaining, noting that wages and benefits comprise 80 percent of its costs at a time of escalating losses and a dramatically changed competitive environment. Congress should consider revising the statutory framework for collective bargaining to ensure that USPS’s financial condition is considered in binding arbitration. USPS has several initiatives to reduce costs and increase its revenues to curtail future net losses. In February 2012, USPS announced a 5-year business plan with the goal of achieving $22.5 billion in annual cost savings by the end of fiscal year 2016. This plan included savings from a change in the delivery schedule; however, USPS has now put all changes in delivery service on hold, which will reduce its ability to achieve the full 5-year business plan savings. USPS has begun implementing other parts of the plan, which includes initiatives to save:
$9 billion in mail processing, retail, and delivery operations, including consolidation of the mail processing network and restructuring of retail and delivery operations;
$5 billion in compensation and benefits and non-personnel costs; and
$8.5 billion through proposed legislative changes, such as eliminating the obligation to prefund USPS’s retiree health benefits.
o $2.7 billion of this $8.5 billion was estimated savings from moving to a 5-day delivery schedule for all types of mail.
o USPS subsequently proposed a modified reduction in its delivery schedule, maintaining package delivery on Saturday, with estimated annual savings of $2 billion, but as noted, USPS has now put even this proposed change in service delivery on hold.
Simultaneously, USPS’s 5-year plan would further reduce the overall size of the postal workforce by roughly 155,000 career employees, with many of those reductions expected to result from attrition. 
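The components of the 5-year plan’s savings goal add up as reported; the following minimal check is illustrative only (figures in billions of dollars, taken from the plan as described above) and is not part of USPS’s plan:

```python
# Consistency check on the reported components of USPS's 5-year plan.
# All figures are in $ billions, as described in the plan above.
processing_retail_delivery = 9.0   # mail processing, retail, and delivery
compensation_and_other = 5.0       # compensation, benefits, and non-personnel
legislative_changes = 8.5          # proposed legislative changes

total = processing_retail_delivery + compensation_and_other + legislative_changes
print(total)  # 22.5 -- matches the plan's $22.5 billion annual savings goal
```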
According to the plan, half of USPS’s career employees are currently eligible for full or early retirement. Reducing its workforce is vital because, as noted, compensation and benefits costs continue to generate about 80 percent of USPS’s expenses. Compensation alone (primarily wages) exceeded $36 billion in fiscal year 2012, or close to half of its costs. Compensation costs decreased by $542 million in fiscal year 2012 as USPS offered separation incentives to postmasters and mail handlers to encourage more attrition. This fiscal year, separation incentives were offered to employees represented by the American Postal Workers Union (e.g., mail processing and retail clerks) to encourage further attrition as processing and retail operations are redesigned and consolidated to more closely correspond with workload. Another key area of potential savings included in the 5-year plan focused on reducing compensation and benefit costs. USPS’s largest benefit payments in fiscal year 2012 included:
$7.8 billion in current-year health insurance premiums for employees, retirees, and their survivors (USPS’s health benefit payments would have been $13.4 billion if USPS had paid the required $5.6 billion retiree health prefunding payment);
$3.0 billion in FERS pension funding contributions;
$1.8 billion in social security contributions;
$1.4 billion in workers’ compensation payments; and
$1.0 billion in Thrift Savings Plan contributions.
USPS has proposed administering its own health care plan for its employees and retirees and withdrawing from the Federal Employees Health Benefits (FEHB) program so that it can better manage its costs and achieve significant savings, which USPS has estimated could be over $7 billion annually. About $5.5 billion of the estimated savings would come from eliminating the retiree health benefit prefunding payment, and another $1.5 billion would come from reducing health care costs. 
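The health benefit figures cited above are internally consistent; as a rough illustrative check (figures in billions, as reported; the calculation itself is ours, not USPS’s):

```python
# Consistency checks on reported USPS health benefit figures ($ billions).
premiums_paid = 7.8        # FY2012 health insurance premiums actually paid
prefunding_required = 5.6  # required retiree health prefunding payment (missed)
print(round(premiums_paid + prefunding_required, 1))  # 13.4, the reported
                                                      # would-have-been total

savings_from_prefunding = 5.5  # from eliminating the annual prefunding payment
savings_from_costs = 1.5       # from reducing health care costs
print(savings_from_prefunding + savings_from_costs)  # 7.0 -- consistent with
                                                     # the "over $7 billion"
```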
We are currently reviewing USPS’s proposal, including its potential financial effects on participants and USPS. To increase revenue, USPS is working to increase use of shipping and package services. With the continued increase in e-commerce, USPS projects that shipping and package volume will grow by 7 percent in fiscal year 2013, after increasing 7.5 percent in fiscal year 2012. Revenue from these two product categories represented about 18 percent of USPS’s fiscal year 2012 operating revenue. However, USPS does not expect that continued growth in shipping and package services will fully offset the continued decline of revenue from First-Class Mail and other products. We recently reported that USPS is pursuing 55 initiatives to generate revenue. Forty-eight initiatives are extensions of existing lines of postal products and services, such as offering Post Office Box customers a suite of service enhancements (e.g., expanded lobby hours and earlier pickup times) at selected locations and increasing public awareness of the availability of postal services at retail stores. The other seven initiatives included four involving experimental postal products, such as prepaid postage on the sale of greeting cards, and three that were extensions of nonpostal services that are not directly related to mail delivery. USPS offers 12 nonpostal services, including Passport Photo Services and the sale of advertising to support change-of-address processing; these services generated a net income of $141 million in fiscal year 2011. Another area of potential revenue generation is USPS’s increased use of negotiated service agreements that offer competitively priced contracts, as well as promotions with temporary rate reductions that are targeted to retain mail volume. We are currently reviewing USPS’s use of negotiated service agreements. As USPS attempts to reduce costs and increase revenue, its mission to provide universal service continues. 
USPS’s network serves more than 152 million residential and business delivery points. In May 2011, we reported that many of USPS’s delivery vehicles were reaching the end of their expected 24-year operational life and that USPS’s financial challenges pose a significant barrier to replacing or refurbishing its fleet. As a result, USPS’s approach has been to maintain the delivery fleet until USPS determines how to address longer-term needs, but USPS has been increasingly incurring costs for unscheduled maintenance because of breakdowns. The eventual replacement of its delivery vehicle fleet represents yet another financial challenge facing USPS. We are currently reviewing USPS’s investments in capital assets. We have issued a number of reports on strategies and options for USPS to improve its financial situation by optimizing its network and restructuring the funding of its pension and retiree health benefit liabilities. To assist Congress in addressing issues related to reducing USPS’s expenses, we have issued several reports analyzing USPS’s initiatives to optimize its mail processing, delivery, and retail networks. In April 2012, we issued a report related to USPS’s excess capacity in its network of 461 mail processing facilities. We found that USPS’s mail processing network exceeds what is needed for declining mail volume. USPS proposed consolidating its mail processing network, a plan based on proposed changes to overnight delivery service standards for First-Class Mail and Periodicals. Such a change would have enabled USPS to reduce an excess of 35,000 positions and 3,000 pieces of mail equipment, among other things. We found, however, that stakeholder issues and other challenges could prevent USPS from implementing its plan for consolidating its mail processing network. 
Although some business mailers and Members of Congress expressed support for consolidating mail processing facilities, other mailers, Members of Congress, affected communities, and employee organizations raised concerns. Key issues raised by business mailers were that closing facilities could increase their transportation costs and decrease service. Employee associations were concerned that reducing service could result in a greater loss of mail volume and revenue that could worsen USPS’s financial condition. We reported that if Congress preferred to retain the current delivery service standards and associated network, decisions would need to be made about how USPS’s costs for providing these services will be paid. Over the past several years, USPS has proposed transitioning to a new delivery schedule. Most recently, in February of this year, USPS proposed limiting its delivery of mail on Saturdays to packages—a growing area for USPS—and to Express Mail, Priority Mail, and mail addressed to Post Office Boxes. Preserving Saturday delivery for packages would address concerns previously raised by some stakeholders, such as delivery of needed medications. USPS estimated that this reduced Saturday delivery would produce $2 billion in annual savings after full implementation, which would take about two years to achieve, and result in a mail volume decline of less than one percent. Based on our 2011 work and recent information from USPS on its February 2013 estimate, we note that the previous and current estimates are primarily based on eliminating city and rural carrier work hours on Saturdays. In our prior work, stakeholders raised a variety of concerns about these estimates, several of which are still relevant. For example, USPS’s estimate assumed that most of the Saturday workload transferred to weekdays would be absorbed through more efficient delivery. USPS estimated that its current excess capacity should allow it to absorb the Saturday workload on Monday. 
If that is not the case, some of the projected savings may not be realized. Another concern stakeholders raised was that USPS may have underestimated the size of the potential volume loss from eliminating Saturday delivery because of the methodology used to develop its estimates. Since mail volume has declined from the prior estimate, the accuracy of the estimated additional impact of eliminating Saturday delivery is unclear. The extent to which USPS would be able to achieve its most recent estimate of $2 billion in annual savings depends on how well and how quickly it can realign its workforce and delivery operations. Nevertheless, we agree that such a change in USPS’s delivery schedule would likely result in substantial savings. A change to 5-day service would be similar to changes USPS has made in the past. USPS is required by law to provide prompt, reliable, and efficient services, as nearly as practicable. The Postal Regulatory Commission (PRC) has reported that delivery frequency is a key element of universal postal service. The Postal Service’s universal service obligation is broadly outlined in multiple statutes and encompasses multiple dimensions, including delivery frequency. Other key dimensions include geographic scope, range of products, access to services and facilities, affordable and uniform pricing, service quality, and security of the mail. The frequency of USPS mail delivery has evolved over time to account for changes in communication, technology, transportation, and postal finances. Until 1950, residential deliveries were made twice a day in most cities. Currently, while most customers receive 6-day delivery, some customers receive 5-day or even 3-day-a-week delivery, including businesses that are not open 6 days a week; resort or seasonal areas not open year-round; and areas not easily accessible, some of which require the use of boats, airplanes, or trucks. 
Following USPS’s most recent proposed change in delivery in February 2013, we issued a legal opinion concerning the proposal in response to a congressional request. As requested, we addressed whether a requirement contained in USPS’s annual appropriations acts for the past three decades and in its fiscal year 2012 appropriations act—that it continue 6-day delivery of mail “at not less than the 1983 level”—was still in effect under the partial-year Continuing Appropriations Resolution. We concluded that the Continuing Resolution carried forward this requirement, explaining that absent specific legislative language, a continuing resolution maintains the status quo regarding government funding and operations. Although the 6-day delivery proviso is an operational directive, not an appropriation, we saw no language in the Continuing Resolution to indicate that Congress did not expect it to continue to apply. The full-year Continuing Resolution that Congress enacted on March 21, 2013, shortly after we issued our opinion, provided funding through the end of fiscal year 2013 and likewise continued the effectiveness of the 6-day proviso. On April 10, 2013, the USPS Board of Governors announced that based on the language of the March 21, 2013, Continuing Resolution, it would delay implementation of USPS’s proposed delivery schedule until legislation is passed that provides it with the authority “to implement a financially appropriate and responsible delivery schedule.” By statute, the Board directs the exercise of the power of the Postal Service, directs and controls the Postal Service’s expenditures, and reviews its policies and practices. Thus, the Board, which has the lead responsibility for taking actions within the scope of the Postal Service’s existing statutory authority to maintain its financial solvency, has determined that full 6-day service will continue for the present time. 
In April 2012, we reported that USPS has taken several actions to restructure its retail network—which included almost 32,000 postal-managed facilities in fiscal year 2012—by reducing its workforce and its footprint while expanding retail alternatives. We also reported on concerns customers and other stakeholders have expressed regarding the impact of post office closures on communities, the adequacy of retail alternatives, and access to postal services, among others. We discussed challenges USPS faces, such as legal restrictions and resistance from some Members of Congress and the public, that have limited USPS’s ability to change its retail network by moving postal services to more nonpostal-operated locations (such as grocery stores), similar to what other nations have done. The report concluded that USPS cannot support its current level of services and operations from its current revenues. We noted that policy issues remain unresolved related to what level of retail services USPS should provide, how the cost of these services should be paid, and how USPS should optimize its retail network. In November 2011, we reported that USPS had expanded access to its services through alternatives to post offices in support of its goals to improve service and financial performance. We recommended that USPS develop and implement a plan, with a timeline, to guide efforts to modernize its retail network and to address both traditional post offices and retail alternatives. We added that the plan should also include: (1) criteria for ensuring the retail network continues to provide adequate access for customers as it is restructured; (2) procedures for obtaining reliable retail revenue and cost data to measure progress and inform future decision making; and (3) a method to assess whether USPS’s communications strategy is effectively reaching customers, particularly those in areas where post offices may close. 
In November 2012, we reported that although contract postal units (CPUs)—independent businesses compensated by USPS to sell most of the same products and services as post offices at the same price—have declined in number, they have supplemented post offices by providing additional locations and hours of service. More than 60 percent of CPUs are in urban areas where they can provide customers nearby alternatives when they face long lines at post offices. In fiscal year 2011, after compensating CPUs, USPS retained 87 cents of every dollar of CPU revenue. We found that limited interest from potential partners, competing demands on USPS staff resources, and changes to USPS's retail network posed potential challenges to USPS's use of CPUs. To assist Congress in addressing issues related to funding USPS’s liabilities, we have also issued several reports that address USPS’s liabilities, including its retiree health benefits, pension, and workers’ compensation. In December 2012, we reported that USPS’s deteriorating financial outlook will make it difficult to continue the current schedule for prefunding postal retiree health benefits in the short term, and possibly to fully fund the remaining $48 billion unfunded liability over the remaining decades of the statutorily required actuarial funding schedule. However, we also reported that deferring funding could increase costs for future ratepayers and increase the possibility that USPS may not be able to pay for some or all of its liability. We stated that failure to prefund these benefits is a potential concern. Making affordable prefunding payments would protect the viability of USPS by not saddling it with bills later on, when employees are already retired and no longer helping it generate revenue; it can also make the promised benefits more secure. Thus, as we have previously reported, we continue to believe that it is important for USPS to prefund these benefits to the maximum extent that its finances permit. 
We also recognize that without congressional or further USPS actions to align revenue and costs, USPS will not have the finances needed to make annual payments and reduce its long-term retiree health unfunded liability. No funding approach will be viable unless USPS can make the required payments. We reported on options with regard to the FERS surplus, noting the degree of uncertainty inherent in this estimate and reporting on the implications of alternative approaches to accessing this surplus. The estimated FERS surplus decreased from 2011 to 2012; at the end of fiscal year 2012, USPS had an estimated FERS surplus of $3.0 billion and an estimated CSRS deficit of $18.7 billion. In 2012, we reported on workers’ compensation benefits paid to both postal and nonpostal beneficiaries under the Federal Employees’ Compensation Act (FECA). USPS has large FECA program costs; 43 percent of FECA beneficiaries in 2010 were employed by USPS at the time of their injury. FECA provides benefits to federal workers who sustained injuries or illnesses while performing federal duties; these benefits are not taxed or subject to age restrictions. Various proposals to modify FECA benefit levels have been advanced. At the request of Congress, we have provided information to assist in making decisions about the FECA program. In summary, to improve its financial situation, USPS needs to reduce its expenses to close the gap between revenue and expenses, repay its outstanding debt, continue funding its retirement obligations, and increase capital for investment, such as replacing its aging vehicle fleet. In addition, as noted in prior reports, congressional action is needed to (1) modify USPS’s retiree health benefit payments in a fiscally responsible manner; (2) facilitate USPS’s ability to align costs with revenues based on changing workload and mail use; and (3) require that any binding arbitration resulting from collective bargaining take USPS’s financial condition into account. 
As we have continued to underscore, Congress and USPS need to reach agreement on a comprehensive package of actions to improve USPS’s financial viability. In previous reports, we have provided strategies and options, to both reduce costs and enhance revenues, that Congress could consider to better align USPS costs with revenues and address constraints and legal restrictions that limit USPS’s ability to reduce costs and improve efficiency; we have also reported on implications for addressing USPS’s benefit liabilities. If Congress does not act soon, USPS could be forced to take more drastic actions that could have disruptive, negative effects on its employees, customers, and the availability of reliable and affordable postal services. Chairman Issa, Ranking Member Cummings, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. For further information about this statement, please contact Lorelei St. James, Director, Physical Infrastructure, at (202) 512-2834 or [email protected]. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. In addition to the contact named above, Frank Todisco, Chief Actuary; Samer Abbas, Teresa Anderson, Barbara Bovbjerg, Kyle Browning, Colin Fallon, Imoni Hampton, Kenneth John, Hannah Laufe, Kim McGatlin, Amelia Shachoy, Andrew Sherrill, and Crystal Wesco made important contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
USPS is in a serious financial crisis as its declining mail volume has not generated sufficient revenue to cover its expenses and financial obligations. First-Class Mail--which is highly profitable and generates the majority of the revenues used to cover overhead costs--declined 33 percent since it peaked in fiscal year 2001, and USPS projects a continued decline through fiscal year 2020. Mail volume decline is putting USPS's mission of providing prompt, reliable, and efficient universal services to the public at risk. This testimony discusses (1) USPS's financial condition, (2) initiatives to reduce costs and increase revenues, and (3) actions needed to improve USPS's financial situation. The testimony is based primarily on GAO's past and ongoing work, its analysis of USPS's recent financial results, and recent information on USPS's proposal for a change in delivery service. In previous reports, GAO has provided strategies and options that USPS and Congress could consider to better align USPS costs with revenues and address constraints and legal restrictions that limit USPS's ability to reduce costs and improve efficiency. GAO has also stated that Congress and USPS need to reach agreement on a comprehensive package of actions to improve USPS's financial viability. The U.S. Postal Service (USPS) continues to incur unsustainable operating deficits, has not made required payments of $11.1 billion to prefund retiree health benefits, and has reached its $15 billion borrowing limit. Thus far, USPS has been able to operate within these constraints, but now faces a critical shortage of liquidity that threatens its financial solvency and ability to finance needed capital investment. USPS had an almost 25 percent decline in total mail volume and net losses totaling $40 billion since fiscal year 2006. 
While USPS achieved about $15 billion in savings and reduced its workforce by about 168,000 over this period, its debt and unfunded benefit liabilities grew to $96 billion by the end of fiscal year 2012. USPS expects mail volume and revenue to continue decreasing as online bill communication and e-commerce expand. USPS has several initiatives to reduce costs and increase its revenues. To reduce costs, USPS announced a 5-year business plan in February 2012 with the goal of achieving $22.5 billion in annual cost savings by the end of fiscal year 2016, which included a proposed change in the delivery schedule. USPS has now put all changes in delivery service on hold, which will reduce its ability to achieve the full 5-year business plan savings. USPS has begun implementing other parts of the plan, which includes needed changes to its network. To achieve greater savings, USPS's Board of Governors recently directed postal management to accelerate these efforts. To increase revenue, USPS is pursuing 55 initiatives. While USPS expects shipping and package services to continue to grow, such growth is not expected to fully offset declining mail volume. USPS needs to reduce its expenses to avoid even greater financial losses, repay its outstanding debt, continue funding its retirement obligations, and increase capital for investment, including replacing its aging vehicle fleet. Also, Congress needs to act to (1) modify USPS's retiree health benefit payments in a fiscally responsible manner; (2) facilitate USPS's ability to align costs with revenues based on changing workload and mail use; and (3) require that any binding arbitration resulting from collective bargaining takes USPS's financial condition into account. No one action in itself will address USPS's financial condition; GAO has previously recommended a comprehensive package of actions. 
If Congress does not act soon, USPS could be forced to take more drastic actions that could have disruptive, negative effects on its employees, customers, and the availability of postal services. USPS also reported that it may need to prioritize payments to employees and suppliers ahead of those to the federal government.
Servicemembers are entitled to Social Security benefits, just like the vast majority of U.S. workers. We reported that Social Security covers about 96 percent of all U.S. employees, and that about three-fourths of federal, state, and local government employees pay Social Security taxes on their earnings. Social Security’s primary source of revenue is the Old Age, Survivors, and Disability Insurance portion of the payroll tax paid by employers and employees. That payroll tax amounts to 6.2 percent of earnings for both employers and employees, up to an established maximum. Regardless of whether the death occurred in the line of duty, survivors of deceased servicemembers, covered civilian government employees, or their estates are eligible for a lump sum payment of $255. Moreover, eligible survivors are also entitled to recurring Social Security benefit payments. Eligibility for the $255 payment and for recurring payments is determined by whether a deceased employee was currently insured through Social Security. The amount of the recurring payment is based on the deceased employee’s earnings in covered employment. Deceased servicemembers’ survivors are entitled to a wide range of benefits. In our September 2002 report, we noted that a survivor might be entitled to a death gratuity payment, a life insurance settlement, burial benefits, monthly payments, and various other benefits that include the use of commissaries and exchanges. Determining whether the deceased servicemember died in the line of duty is seldom a consideration when awarding survivor benefits because an active duty servicemember is considered to be on duty 24 hours a day and 7 days a week. Determination of eligibility for some benefits provided to the survivors of deceased federal and, in most cases, state and local civilian government employees is based on a more restrictive definition of line of duty. 
For example, survivor benefits provided through workers’ compensation require that the civilian government employee die in the line of duty. The definition of line of duty for federal civilian employees includes any action that an employee is obligated or authorized to perform by rules, regulations, law, or condition of employment, according to the employee’s agency. The effect of an eligibility determination based on line of duty can be illustrated using the example of an employee who has a heart attack while eating lunch at a restaurant. The servicemember is probably covered, whereas the civilian government employee is typically not covered. Survivor benefits for some civilian government employees are also contingent on the employee’s occupation, in addition to whether the employee’s death occurred in the line of duty. Law enforcement officers, firefighters, and employees in some other occupations at the federal, state, and city levels may receive a supplemental survivor benefit provided through the Public Safety Officers’ Benefits Act, administered by the Department of Justice’s Bureau of Justice Assistance. State and city governments may provide other supplemental benefits to the survivors of deceased employees who work in high-risk occupations. The military and civilian government entities offer similar types of cash and noncash survivor benefits, but they provide different amounts. In general, the military and civilian government entities provide cash benefits—either as a lump sum, recurring payments, or both—and noncash benefits, such as continued health insurance or education benefits. Survivors of servicemembers almost always receive higher lump sum payments. For three of the four hypothetical situations, the recurring payments for deceased servicemembers’ survivors exceed the recurring payments that at least one-half of the states provide. 
In contrast, the recurring payments for deceased servicemembers’ survivors in the same three situations are lower than those that at least one-half of the cities provide. The military provides more types of noncash benefits to survivors of deceased servicemembers than civilian government entities provide to the survivors of deceased general government employees. Survivors of deceased servicemembers and most deceased general government employees receive lump sum payments through comparable sources—Social Security, a death gratuity, burial expenses, and life insurance; the federal government, 16 states, and the District of Columbia provide additional lump sum payments through their respective retirement plans (see table 1 for a summary and appendix II for descriptions of how the payments are calculated for each entity). Social Security provides $255 upon the death of a servicemember or covered civilian government employee. The death gratuity provided to survivors is $12,000 (tax-exempt) for deceased servicemembers, up to $10,000 for deceased federal government employees, and between $25,000 and $262,405 for deceased employees of the 5 states and 1 city that provide this benefit. The military’s death gratuity ranks above that paid by 55 of the 61 civilian government entities. The payment for burial expenses provided to survivors is up to $6,900 (tax-exempt) for deceased servicemembers, up to $800 (tax-exempt) for deceased federal government employees, and between $2,000 and $15,000 for deceased employees of all states and cities. The military’s payment for burial expenses ranks above that paid by 49 of the 61 civilian government entities. Life insurance is another common source of benefits for the survivors of many deceased servicemembers and civilian government employees. For example, approximately 98 percent of servicemembers and 91 percent of federal employees participate in government-sponsored life insurance. 
Servicemembers automatically are insured for $250,000 (tax-exempt) unless they elect less or no coverage. Although the government does not contribute to the Servicemembers’ Group Life Insurance, we elected to include the information in this report because the program plays a large role in the benefits provided to survivors and nearly all servicemembers participate in the program. Fifty-one of the 61 civilian government entities pay a portion of the life insurance premiums for their employees and reported that they provide this benefit to at least 80 percent of their employees. A federal employee is automatically enrolled for a payout (tax-exempt) equal to the employee’s rate of basic pay, rounded to the next higher $1,000, plus $2,000. The federal government contributes one-third of the total cost (i.e., 15 cents per month for each $1,000) of the basic coverage premium. For example, the government’s contribution for a federal employee who has $37,000 of basic life insurance coverage is $5.55 per month. The amount of coverage state and city governments provide varies and is determined as either a flat amount or a percentage of the employee’s salary. The military and the 9 cities do not provide a lump sum survivor benefit as part of their retirement plans. In contrast, the federal government, 16 states and the District of Columbia include a survivor benefit in their retirement plans. Similar to the funding of life insurance, these 18 civilian government entities contribute a portion of the benefit. These payments are generally based on the deceased employee’s annual salary, employer contributions to the retirement plan, or a flat amount. 
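The federal life insurance arithmetic described above can be sketched as a short calculation. This is our own simplified illustration of the rules as stated in this report (the function names are ours, and the rounding rule assumes "next higher $1,000" behaves as a ceiling):

```python
import math

def basic_coverage(annual_salary: float) -> int:
    """Basic federal payout: salary rounded up to the next higher $1,000,
    plus $2,000 (simplified; assumes ceiling rounding)."""
    return math.ceil(annual_salary / 1000) * 1000 + 2000

def government_share(coverage: int) -> float:
    """Government pays one-third of the basic premium: 15 cents per month
    for each $1,000 of coverage."""
    return round(coverage / 1000 * 0.15, 2)

# Example figures from this report: a salary of $34,376 yields $37,000 of
# basic coverage, which costs the government $5.55 per month.
print(basic_coverage(34_376))    # 37000
print(government_share(37_000))  # 5.55
```

Under these assumptions, the $5.55 monthly contribution cited in the report for $37,000 of coverage follows directly from the 15-cents-per-$1,000 rate.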
Although survivors of deceased military and civilian government employees are eligible for recurring Social Security payments, other types of recurring payments are specific to either servicemembers or civilian government employees (see table 2 for a summary and appendix II for descriptions of how the recurring payments are computed for each entity). As previously mentioned, the survivors of deceased servicemembers and survivors of three-fourths of the civilian government employees may be eligible to receive recurring Social Security payments based on the deceased employees’ earnings in covered employment. These recurring payments will be equal if the deceased servicemember and the deceased civilian government employee had identical earnings in covered employment. Survivors of deceased servicemembers would also receive payments through the Survivor Benefit Plan (SBP), tax-exempt Dependency and Indemnity Compensation (DIC), or both. The SBP payment is calculated as 55 percent of the member’s maximum monthly retirement pay, and DIC provides $967 per month for a spouse, plus $241 per month for each child. If the spouse is the designated beneficiary for SBP, the SBP payment is reduced by the DIC payment. Additionally, if the DIC payment is greater than the SBP payment, there is no SBP payment. However, under the most recent changes to SBP, SBP benefits can be paid to the children and the DIC payment can be paid to the spouse without causing any reduction in the SBP payment, thus providing a substantial increase in monthly payments during the years when children are still at home or in school. Similar to the military, survivors of deceased civilian government employees may receive recurring payments from multiple sources: a retirement plan, workers’ compensation if the death occurred in the line of duty, or both. 
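The SBP/DIC offset described above can be made concrete with a small sketch. This is a simplified, hypothetical illustration using the monthly DIC rates cited in this report; it covers only the spouse-as-SBP-beneficiary case and omits the child-beneficiary arrangement discussed above:

```python
def dic_payment(num_children: int) -> int:
    """DIC at the rates cited in this report: $967/month for a spouse
    plus $241/month per child."""
    return 967 + 241 * num_children

def total_recurring(max_monthly_retired_pay: float, num_children: int) -> float:
    """SBP is 55 percent of the member's maximum monthly retired pay. When the
    spouse is the designated SBP beneficiary, SBP is offset by DIC; if DIC
    exceeds SBP, no SBP is paid."""
    sbp = 0.55 * max_monthly_retired_pay
    dic = dic_payment(num_children)
    return max(sbp - dic, 0) + dic

# With $2,000 maximum retired pay and two children, DIC ($1,449) exceeds
# SBP ($1,100), so the survivor receives only the DIC amount.
print(total_recurring(2_000, 2))  # 1449
```

Note that when SBP exceeds DIC, the offset makes the combined payment equal the SBP amount, which is why the offset matters so much to survivors in the spouse-beneficiary case.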
Survivors of deceased federal government employees receive the higher of two options: (1) 50 percent of an employee’s monthly retirement pay, if the employee had at least 10 years of creditable service, plus a lump sum payment or (2) up to 75 percent of the employee’s pay rate under the Federal Employees Compensation Act. The rules for determining the recurring payments for survivors of deceased state and city employees vary widely but are summarized in table 2. For the four hypothetical situations, the lump sum payments—excluding Social Security—for survivors of deceased servicemembers are almost always higher than those for the survivors of deceased civilian government employees in general. For hypothetical situations 2, 3, and 4, the recurring payments for deceased servicemembers’ survivors exceed the recurring payments that at least one-half of the states provide. In contrast, the recurring payments for deceased servicemembers’ survivors are lower than those that at least one-half of the cities provide (see table 3 for a summary and appendix III for the specific amounts provided by each entity for each type of payment). Hypothetical situation 2 is used as an example to explain the findings shown in table 3. It describes the situation of a servicemember or civilian government employee who had accrued 3 years of creditable service, an income of $34,376 (what an E-3 might be paid in the military), and two dependents. The benefits—excluding Social Security—provided to such a person’s survivors are outlined below. Servicemember’s survivors: The survivors would receive $268,900 in a lump sum payment from a death gratuity, life insurance, and burial expenses, as well as $2,390 in recurring payments from DIC and SBP (assuming the child is the designated beneficiary). 
Federal government employee’s survivors: The survivors would receive $121,000 in a lump sum payment from a death gratuity, which includes burial costs, and life insurance, and $1,718 in recurring payments from workers’ compensation. That is, the survivors would receive nearly $148,000 less in a lump sum payment and almost $700 less per month in recurring payments than would a servicemember’s survivors. State or city government employee’s survivors: Interpretation of the state and city amounts is more problematic because the lump sum and recurring payments shown in the same row of table 3 may represent amounts paid by different state or city governments. For hypothetical situation 2, the median (middle) lump sum payment was $55,000 for states and $40,000 for cities. The lump sum payments range from $3,500 to $311,005 for the 50 states and the District of Columbia, while the recurring payments range from $1,146 to $5,059. The lump sum payments range from $5,000 to $110,000 for the 9 cities, while the recurring payments range from $2,149 to $5,014. In most instances, it would take years of inflation-adjusted recurring payments for the survivors of those general state and city government employees to reach the total lump sum and recurring payment benefits provided to the survivors of the servicemembers. Also, some states or cities limit the duration (e.g., workers’ compensation benefits in Indiana and Maine are limited to 500 weeks) or total value (e.g., workers’ compensation benefits in Maryland are limited to $45,000) of some types of their recurring payments. These limits further lessen the likelihood that some survivors of deceased state and city government employees will receive lifetime benefits at least equal to those provided to deceased servicemembers’ survivors. 
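The "years of recurring payments to catch up" point above can be illustrated with a rough back-of-envelope calculation. This is our own sketch, not a method from the report: the city figures below are assumed for illustration, and the calculation ignores inflation adjustments, taxes, benefit duration limits, and discounting:

```python
def breakeven_years(lump_a: float, monthly_a: float,
                    lump_b: float, monthly_b: float) -> float:
    """Years before package B (smaller lump sum but larger recurring payment)
    catches up with package A in cumulative dollars received."""
    if monthly_b <= monthly_a:
        return float("inf")  # B never catches up
    return (lump_a - lump_b) / ((monthly_b - monthly_a) * 12)

# Hypothetical situation 2: the servicemember's survivors receive $268,900
# plus $2,390/month; suppose a city's survivors receive $40,000 plus
# $3,000/month. The city package takes roughly 31 years to catch up.
print(round(breakeven_years(268_900, 2_390, 40_000, 3_000), 1))
```

A calculation of this kind shows why the report concludes that higher city recurring payments rarely offset the military's much larger lump sum within a realistic horizon, especially where duration or total-value caps apply.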
The military provides more noncash survivor benefits than do the federal, state, and city governments, with some benefits being comparable in type and others differing among the entities (see table 4 for examples of the most common benefits). For example, the military, federal government, 17 states, and 7 cities provide continued health insurance that is wholly or partially subsidized. Additionally, the military and 5 of the state governments provide some education benefits. Eligible survivors of servicemembers who die while on active duty also obtain benefits such as rent-free government housing or a tax-free housing allowance for up to 180 days, relocation assistance, and lifetime access to commissaries and exchanges that are not available to other government survivors. The survivors of civilian government employees in selected high-risk occupations may receive supplemental benefits beyond those that the entities provide to government employees in general (see table 5 for a summary and appendix IV for the descriptions of how the payments are calculated for each entity). Employees in selected high-risk occupations in the 61 civilian government entities may receive an additional cash benefit through the Public Safety Officers’ Benefits (PSOB) Program. Using a case-by-case determination process, the Department of Justice’s Bureau of Justice Assistance provides a lump sum payment of $267,494 (for fiscal year 2004) to the eligible survivors of public safety officers whose deaths are the direct and proximate result of traumatic injury sustained in the line of duty. According to agency officials, the Bureau of Justice Assistance approved 659 death claims in fiscal year 2002 with 417 cases related to World Trade Center deaths, and 194 death claims for fiscal year 2003. Thirty-four states and 5 cities also supplement cash benefits for employees in high-risk occupations. 
For example, some states, such as Texas, Florida, and Arkansas, provide an additional death gratuity to survivors of government employees in high-risk occupations. Other states, such as Iowa, New Mexico, and Nevada, provide insurance benefits that are higher than those provided to general government employees. Still other states, such as Alaska, New Jersey, and Montana, provide survivor benefits through their retirement plans that are higher than those provided to general government employees. When these supplemental cash benefits are added to the benefits for general government employees, the total cash benefits that the entities provide to the survivors of deceased civilian government employees in high-risk occupations may be higher than those provided to deceased servicemembers’ survivors. For example, the very limited number of survivors who receive the $267,494 from the PSOB Program would likely have total survivor benefits higher than those provided to servicemembers’ survivors. In addition to the supplemental cash benefits, some of the states and cities provide supplemental noncash benefits for survivors of deceased employees in high-risk occupations. Eleven states provide survivors of employees in high-risk occupations with education benefits that are not provided to survivors of general government employees. Additionally, two states and two cities provide continued health insurance to survivors of employees in high-risk occupations that is not provided to survivors of general government employees. DOD reviewed a draft of this report and provided technical comments, which were incorporated as appropriate. We are sending copies of this report to the Secretary of Defense. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-5559 ([email protected]) or Jack E. Edwards at (202) 512-8246 ([email protected]). 
Other staff members who made key contributions to this report are listed in appendix V. To assess the extent that survivor benefits provided to servicemembers’ survivors differ from those provided to federal, state, and city general government employees’ survivors, we gathered benefits information that covered the active duty military and the largest group of employees for each of 61 civilian government entities: the federal government, 50 states and the District of Columbia, and the 9 cities with a population of at least 1 million. While limiting the scope of our work to the 9 cities with at least 1 million people restricted the generalizability of our city findings to only those 9 cities, it allowed us to discuss with certainty (i.e., without sampling error) findings for the largest cities in the United States. Except for the Servicemembers’ Group Life Insurance, all benefits addressed in this report included government contributions. We elected to include military life insurance in this report because the program plays a large role in the benefits provided to survivors; nearly all servicemembers participate in the program; and during times of war, there may be government contributions. Life insurance information was included for a civilian government entity only if at least 80 percent of the employees received the benefit. We gathered data from the military and the federal agencies shown in table 6 through personal interviews. We developed a structured telephone interview to collect data, including general descriptions of the benefits and the way the benefits are determined, from state and city agencies. The initial content for developing the interview questions came from reports issued by us and other agencies as well as from consultations with benefits personnel and staff with expertise on specific military or civilian personnel government survivor benefit programs, such as Social Security. 
We pretested the structured telephone interview to minimize the occurrence of nonsampling errors, which led to modification of the data gathering instrument to clarify questions and address the ordering of items and other concerns that could affect data reliability. To further ensure data reliability, we requested and reviewed survivor benefits information, including statutes and plan documents, from each entity. For some civilian government entities, especially at the state and city levels, interviews were conducted with multiple offices because the responsibility for administering the different types of survivor benefits resided in different offices. All 62 entities provided information, but 1 state elected not to provide information on its retirement benefit. Similarly, we developed and obtained feedback on an e-mail-administered survey that described four hypothetical situations and assessed cash benefits. The hypothetical situations were developed to correspond to personnel at various stages of a military or government career, describing the servicemember’s or civilian government employee’s years of service, income, and number of dependents. The survey was sent to the military and all general civilian government entities to obtain information on the payments that would be provided in each hypothetical situation. When our interpretations of the benefits differed from the information supplied by the military or civilian government entities, we contacted the entities and resolved the differences. The responses to the survey reflect current values and do not account for lifetime payments, which may include cost-of-living adjustments and other assumptions. 
To assess the extent that federal, state, and city governments supplement their general survivor benefits for employees in high-risk occupations, we gathered benefits information, except for the hypothetical situations, that covered law enforcement officers and firefighters in the same manner as for government employees in general. We selected law enforcement officers and firefighters because we considered those two occupations to have higher levels of personal risk than those found for government employees in general. As with the government employees in general, we limited the scope to include the 61 civilian government entities. All 61 entities provided information, but 1 state and 1 city elected not to provide requested information regarding a benefit for high-risk employees, retirement and life insurance, respectively. For both civilian government employees in general and civilian government employees in high-risk occupations, the concept of line of duty was an important consideration in the scope of this work because the granting of some survivor benefits is contingent on whether the employee dies in the line of duty. While active duty servicemembers are considered to be on duty 24 hours a day and 7 days a week, the definition for line of duty for civilian federal employees is more restrictive. The federal government defines line of duty as any action that an employee is obligated or authorized by rule, regulations, law, or condition of employment to perform by the agency served. Similar definitions were present for the administration of survivor benefits in some states and cities. Although the civilian government entities typically provide benefits to survivors of those who die while not in the line of duty, those benefits are not separately identified from the line-of-duty benefits in this report. We conducted our review from October 2003 through May 2004 in accordance with generally accepted government auditing standards. 
This appendix describes the cash benefits available to eligible survivors of active duty servicemembers and civilian government employees who die in the line of duty. We obtained information on the survivor benefits for the active duty military and the largest general employee group in each of 61 civilian government entities: the federal government, 50 states and the District of Columbia, and the 9 U.S. cities with a population of at least 1 million. Types of cash benefits are listed along with descriptions of how lump sum payments, recurring payments, or both are computed for each entity. We obtained the information through structured interviews with benefits personnel for the 62 entities and verified the reliability of that data through a review of statutes, benefits plans, and other information that the benefits personnel supplied. The information presented in this appendix is summarized in tables 1 and 2 in the report. This appendix identifies the amount of cash benefits available to eligible survivors of active duty servicemembers and civilian government employees who die in the line of duty. To facilitate the comparison of cash benefits available to survivors, we constructed four hypothetical situations that each described servicemembers or civilian government employees who had identical years of creditable service, an equal amount of regular military compensation or civilian government salary, and the same number of dependents at the time of their deaths. The four hypothetical situations for military and civilian government personnel are indicative of circumstances for servicemembers at a junior enlisted level (E-3) with and without dependents, at a senior enlisted level (E-7), and at a mid-grade officer level (O-3). 
We gathered data from benefits personnel who completed an e-mail survey that described the four hypothetical situations and asked for the amount of cash payments (in current-month values, without cost-of-living adjustments) that survivors would receive from each source of lump sum or recurring payments. (The methods for computing the amounts were described earlier in appendix II.) We obtained such information on the survivor benefits plans for the active duty military and the largest general employee group in each of 61 civilian government entities: the federal government, 50 states and the District of Columbia, and the 9 U.S. cities with a population of at least 1 million. Types of cash benefits are listed along with lump sum payments, recurring payments, or both, for each entity. The information in this appendix is summarized in table 3 in the report. To facilitate the comparison of military findings to those for the civilian government entities, we rank ordered the total lump sum and total recurring payments for each of the 62 entities on each hypothetical situation. The ranks appear in parentheses, with “1” indicating the highest lump sum or recurring payment for the situation and “62” indicating the lowest amount. This appendix describes the cash benefits available to eligible survivors of civilian government employees who die in the line of duty while performing in the high-risk occupations of law enforcement or firefighting. We obtained information on the survivor benefits plans for these occupations from the federal government, 50 states and the District of Columbia, and the 9 U.S. cities with a population of at least 1 million. Types of cash benefits are listed along with descriptions of how the lump sum payments, recurring payments, or both are computed for each entity if these benefits are above those provided to the survivors of general government employees. 
We obtained the information through structured interviews with benefits personnel for the 61 civilian government entities and verified the reliability of that data through a review of statutes, benefits plans, and other information that the benefits personnel supplied. The information presented in this appendix is summarized in table 5 in the report. In addition to the individual named above, Mark B. Dowling, Joel I. Grossman, Barbara L. Joyce, Marie A. Mak, Hilary L. Murrish, Cheryl A. Weissman, and Greg H. Wilmoth made key contributions to this report. The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. To have GAO e-mail you a list of newly posted products every afternoon, go to www.gao.gov and select “Subscribe to Updates.”
The National Defense Authorization Act for Fiscal Year 2004 noted that it was the sense of the Congress that "the sacrifices made by the members of the Armed Forces are significant and are worthy of meaningful expressions of gratitude by the United States, especially in cases of sacrifice through loss of life." In addition to offering expressions of gratitude, the government offers a variety of benefits, including Social Security benefits, to survivors of servicemembers who die while on active duty. GAO was asked to address two questions: (1) To what extent are the survivor benefits provided to servicemembers different from those provided to federal, state, and city government employees in general, and (2) to what extent do federal, state, and city governments supplement their general survivor benefits for employees in high-risk occupations? The military provides survivor benefits that are comparable in type but not in amount to those provided by 61 civilian government entities (federal government, 50 states and the District of Columbia, and 9 cities with populations of at least 1 million) when employees die in the line of duty. Social Security payments, a death gratuity, burial expenses, and life insurance are four types of lump sum survivor benefits provided by the military and at least some civilian government entities; the federal government and some states additionally provide a lump sum payment through their retirement plans. Recurring payments are also provided by Social Security to the survivors of deceased servicemembers and most deceased government employees in the 61 civilian government entities GAO studied. Other types of recurring payments are specific to the military or civilian government entities. GAO identified two programs with recurring payments for the military and two other types of programs for the civilian government entities. 
For the four hypothetical situations GAO used to examine the amount of cash payments provided to survivors, survivors of deceased servicemembers almost always obtain higher lump sums than do the survivors of the deceased employees from the 61 civilian government entities. The amount of recurring payments to deceased servicemembers' survivors in three of the four situations exceeds those provided by the federal government, typically exceeds those provided by at least one-half of the states, but is typically less than those provided by over one-half the cities. The military also provides more types of noncash survivor benefits than do civilian government entities, with some benefits being comparable in type and others differing among the entities. The survivors of civilian government employees in some high-risk occupations may receive supplemental benefits--a death gratuity, higher life insurance, higher benefits from the retirement plan, or a combination of the three--beyond those that the entities provide to civilian government employees in general. For example, survivors of federal, state, and city government law enforcement officers and firefighters who die in the line of duty may be entitled to a lump sum payment of more than $267,000 under the Public Safety Officers' Benefits Act. Further, 34 states and 5 cities provide survivors of employees in high-risk occupations with additional cash benefits that are not available to survivors of state and city employees in general. The addition of these supplemental cash benefits to those provided to the survivors of deceased general government employees can result in lump sum and recurring payments being generally higher for survivors of government employees in high-risk occupations than for servicemembers' survivors.
A transfer price is the price charged by one company for a product or service supplied to a related company, such as the price a parent corporation charges its wholly-owned subsidiary. Any company that has a related company with which it transacts business establishes transfer prices for those intercompany transactions. Although often associated with the pricing of tangible goods, transfer pricing occurs whenever income and expenses are allocated among interrelated companies. For example, the payment of royalties, interest payments for debts, leasing expenses, and fees for other services between interrelated companies are transactions requiring transfer prices. Pricing of intercompany transactions affects the distribution of profits and, therefore, taxable income among the related companies and, sometimes, across tax jurisdictions. Abusive transfer pricing occurs when income and expenses are improperly allocated among interrelated companies for the purpose of reducing taxable income in a high-tax jurisdiction. Underpayment of U.S. income taxes can result from inappropriate transfer pricing between interrelated companies with operations in both the United States and in a country with a lower tax burden. Even when the U.S. corporate tax rate is lower than that of some other country, transfer pricing abuses can occur by shifting income through another related company that operates in a tax haven, that is, a country with low or no taxes. The following is an example of abusive cross-border transfer pricing. A foreign parent corporation with a subsidiary operating in the United States charges the subsidiary excessive prices for goods and services rendered (for example, $1,000 instead of the going rate of $600). This raises the subsidiary’s expenses (by $400), lowers its profits (by $400), and effectively shifts that income ($400) outside of the United States. At a 35-percent U.S. corporate income tax rate, the subsidiary will pay $140 less in U.S. 
taxes than it would if the $400 in profits were attributed to it. Section 482 of the Internal Revenue Code provides IRS authority to allocate income among related parties if IRS determines that the transfer prices used by the taxpayer are inappropriate. To evaluate transfer pricing, the IRS examiner considers what the price would have been if the parties had not been related to each other. Such a price between unrelated parties is called the “arm’s length” price. Finding a section 482 violation, that is, a difference between the price a related party charged and the arm’s length price, the IRS examiner can propose an adjustment to the taxpayer’s income. If the taxpayer does not agree with the proposed adjustment, it can appeal the dispute through IRS’ appeals process or take the case to court. In July 1994, IRS issued new regulations under section 482 that differed significantly from previous section 482 regulations. Under the new regulations, taxpayers have great latitude in establishing transfer prices. However, under 1993 legislation, taxpayers are subject to new requirements for documenting their transfer prices, and they face stiff penalties for substantially misstating them. IRS’ international examiners continued to propose substantial adjustments to the taxable income of FCCs and USCCs in fiscal years 1993 and 1994, although the total dollar value of the adjustments in 1993 was $1.3 billion less than in 1994. In examinations finished in fiscal year 1993, international examiners proposed adjustments to taxable income of $2.2 billion for 369 corporations—$900 million for 247 FCCs and $1.3 billion for 122 USCCs. For 1994, IRS data showed $3.5 billion in proposed adjustments. Most of the dollar value of these proposed section 482 adjustments was for large cases, those that had total proposed adjustments of $20 million or more. 
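The arithmetic in the abusive transfer pricing example above can be made concrete with a short sketch. This is illustrative only: the function and variable names are invented for this sketch, while the dollar figures and the 35-percent rate come from the report's example.

```python
# Illustrative sketch of the report's transfer pricing example.
# The 35-percent U.S. corporate income tax rate is the rate cited in the report.
US_CORPORATE_TAX_RATE = 0.35

def tax_reduction_from_transfer_pricing(charged_price, arms_length_price,
                                        tax_rate=US_CORPORATE_TAX_RATE):
    """Return (income shifted out of the U.S., resulting U.S. tax reduction).

    Charging the U.S. subsidiary more than the arm's length price raises its
    expenses, lowers its U.S. taxable income by the same amount, and reduces
    its U.S. tax bill by that amount times the tax rate.
    """
    income_shifted = charged_price - arms_length_price
    return income_shifted, income_shifted * tax_rate

# Report's example: the foreign parent charges $1,000 for goods worth $600.
shifted, tax_saved = tax_reduction_from_transfer_pricing(1000, 600)
print(shifted)    # income effectively moved outside the United States ($400)
print(tax_saved)  # U.S. tax the subsidiary avoids at a 35-percent rate ($140)
```

The same mechanics apply whether the overpriced transaction involves goods, royalties, interest, or fees: each dollar of inflated expense shifts a dollar of taxable income out of the U.S. jurisdiction.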
For example, in 1993 examiners proposed $1.8 billion in adjustments for 51 of these large cases—$700 million for 18 FCCs and $1.1 billion for 33 USCCs. Although the 18 large FCCs subject to proposed income adjustments in 1993 represented an increase in number over the previous 4 years, when 11 to 13 FCCs were subject to proposed income adjustments for transfer pricing, they also represented a decrease in dollars. The number of USCCs subject to income adjustments for transfer pricing in 1993, in contrast, was about the same as in previous years, although this group also involved fewer dollars. According to IRS, the 1993 adjustments for transfer pricing might be understated due to data that were lost in implementing a new management information system. Transfer pricing issues for which IRS examiners proposed income adjustments in 1993 occurred about equally in four categories, while pricing of tangible goods represented the largest share of the adjustment dollars proposed. As shown in figure 1, the categories with the most frequent section 482 issues were interest, royalties, pricing of tangible goods, and a nonspecific category of income allocations and deductions, each accounting for between 11 and 14 percent of occurrences found by IRS examiners. Yet, adjustments for pricing of goods accounted for 49 percent of all the section 482 adjustment dollars proposed. IRS’ recent experience with section 482 issues in the appeals process—the IRS administrative process that attempts to negotiate disputes—and with the Chief Counsel’s office, which is also involved in dispute resolution, was in some ways similar to, but also different from, the experience about which we testified in March 1993. The section 482 issues pending resolution in Appeals or in litigation with Counsel involved about the same dollar value, $14 billion in both June 1994 and September 1992, the dates for which information was available for our current report and for our March 1993 testimony. 
However, fewer taxpayers were involved at the more recent date—114 as opposed to 180 in the earlier time. Foreign-controlled taxpayers accounted for just under 20 percent of the total both times. The sustention rate for section 482 issues, which is the ratio of the final income adjustment after the appeals and legal processes ran their courses to the income adjustments proposed by the examiners, peaked at 52 percent in 1990, then in succeeding years fell back to prior levels of less than 30 percent. While the data IRS maintains on examination and appeals staff time do not have sufficient detail to link them to specific tax issues, we determined that IRS examiners, economists, and appeals staff spent about 186 staff years on cases closed in fiscal year 1993 that contained transfer pricing issues, compared to about 227 staff years on cases closed in fiscal year 1992. The 1993 cases, which involved section 482 issues among other IRS findings, represented 148 international examiner years, 13 economist years, and 25 appeals years. According to IRS officials, the number of years associated with closed cases can fluctuate over time because large cases incur time charges over a period of years and close at different times within a year. In calendar year 1993 and the first part of calendar year 1994, IRS was more successful in litigating section 482 issues than it had been in calendar years 1990 through 1992. While IRS did not fully prevail in any of the four major court cases with section 482 issues that we analyzed, it lost only one and was a partial winner in the other three. IRS has a number of procedural tools—that is, discretionary measures, including designated summonses and formal document requests, and additional penalties specifically for section 482—that can be invoked to obtain needed documents, to encourage a recalcitrant taxpayer’s cooperation, or to otherwise facilitate examinations. 
IRS, however, actually used these tools infrequently because, according to IRS officials, they had their desired effect as deterrents inducing a positive change in taxpayer behavior, and recordkeeping requirements were also helping. Arbitration, a tool available only if both parties agree, was used only once for section 482 issues—with a favorable outcome for IRS. Simultaneous examinations of related parties by IRS and foreign tax enforcement agencies promote tax compliance and exchange of documents, but, for a number of reasons, including questions of the timing of examinations, IRS and the foreign governments have only agreed to examine five to eight each year. Advance pricing agreements (APA), which allow the taxpayer to detail and IRS to approve in advance the methodology to be used in setting transfer prices, require an upfront investment of resources to negotiate in order to save IRS examination time in future years. APAs increased from the 9 we cited in our 1993 testimony to 26 completed as of January 1995. IRS expected the number of APAs to grow significantly in the immediate future, which will require additional upfront staff time. Issued in July 1994, the final transfer pricing regulations are too recent for their impact to be known. The new regulations replace the strict priority of pricing allocation methods contained in the prior regulations with the best method rule, which recognizes that the most reliable measure of an arm’s length result will vary depending upon the facts and circumstances of the transaction under review. As a result, taxpayers will have flexibility in selecting and justifying transfer prices, and IRS will also be using considerable judgment in applying the arm’s length standard. As it was, taxpayers were already using transfer pricing methods other than the three formerly most well-defined methods for a large proportion of their transfer pricing activity reviewed by IRS, including that in APAs. 
The new regulations also recognize a range of acceptable transfer prices, providing a taxpayer protection for a reasonable and economically justified pricing methodology. The Omnibus Budget Reconciliation Act of 1993 required contemporaneous documentation of the pricing method used in setting transfer prices and established more widely applicable penalties for transfer pricing abuses than existed earlier. The extent to which the new documentation requirements and IRS’ use of judgment in applying the arm’s length standard will offset the risks of controversy brought on by taxpayer flexibility remains to be seen. In each of the 5 years studied, FCCs were less likely than USCCs to pay U.S. income tax, as shown in figure 2 and appendix V. In 1991, 73 percent of FCCs paid no U.S. income tax compared with 62 percent of USCCs. The relative flatness of the top two lines in the figure demonstrates how little has changed overall during this 5-year period. Yet, the bottom two lines show that between 1987 and 1991 an increasing percentage of large FCCs and USCCs (those with assets of $100 million or more) did not pay U.S. income tax. The nontaxpaying corporations, both FCCs and USCCs, accounted for the majority of all returns filed in 1991, but for much smaller proportions of total corporate assets and receipts. Although 73 percent of FCCs paid no U.S. income tax in 1991, they accounted for only 37 percent of the assets and 31 percent of the gross receipts of all FCCs that year, as shown in figure 3. This means the 27 percent of FCCs that paid U.S. income tax had 63 percent of FCC assets and 69 percent of receipts. Similarly, 62 percent of USCCs paid no U.S. income tax in 1991, but these nontaxpaying corporations accounted for only 20 percent of the assets and 19 percent of receipts of all USCCs. So, the 38 percent of USCCs that paid U.S. income tax in 1991 held 80 percent of the assets and generated 81 percent of all gross receipts. 
The largest nontaxpaying FCCs and USCCs were relatively few in number but had disproportionately large shares of total assets and gross receipts relative to all FCCs and all USCCs. In 1991, 715 nontaxpaying FCCs and 3,713 nontaxpaying USCCs had assets of $100 million or more. The large nontaxpaying FCCs accounted for only 1.5 percent of all FCCs that filed U.S. corporate income tax returns in 1991 but 31 percent of all FCCs’ assets and 22 percent of their total receipts. Similarly, the largest nontaxpaying USCCs represented one-fifth of 1 percent (0.2 percent) of returns but 16 percent of assets and less than 7 percent of receipts generated by all U.S.-controlled corporations in 1991. Conversely, smaller nontaxpaying FCCs (those with assets under $100 million) accounted for 71 percent of the 48,246 FCCs in 1991 but only 6 percent of FCC assets and 9 percent of FCC receipts. For USCCs, the smaller nontaxpaying corporations accounted for 62 percent of all USCC returns but only 4 percent of assets and 12 percent of the gross receipts. As already discussed, large FCCs were more likely than large USCCs to pay no U.S. income tax. Furthermore, large FCCs that did pay income tax, on average, paid less in taxes relative to receipts than their taxpaying U.S. counterparts in 1991. Abusive transfer pricing—that is, inflating prices of intercompany transactions to shift income outside the United States and reduce tax liability—is logically suspect as a possible cause of these observations, yet this suspicion is not easily confirmed by analyzing the tax data. For example, costs of goods sold and purchases accounted for larger proportions of receipts for large nontaxpaying FCCs than for USCCs, as would be expected if FCCs were inflating the price of goods more than USCCs were. Yet, it is not true that interest paid as a percentage of receipts is higher for the FCCs than for the USCCs, as would be expected if FCCs were shifting income by inflating interest paid to a related party. 
So, while abuse of transfer pricing may be occurring and may be a reason why FCCs are more likely to pay no or less tax than USCCs, other factors, such as differences between FCCs’ and USCCs’ industries, may be at work. Analysis of industry representation found that large nontaxpaying FCCs were more likely to be in manufacturing and wholesale trade, and less likely to be in finance, insurance, and real estate, than the large nontaxpaying USCCs—differences that may explain some of the tax disparities. As mentioned earlier, our objectives for this report were to provide information and analysis to update our 1993 work on (1) IRS’ recent experience in dealing with transfer pricing issues through its examinations, appeals, and litigation functions; (2) IRS’ use of available regulatory and procedural tools; and (3) the extent to which USCCs and FCCs did not pay U.S. income taxes. In some cases, the information we were updating dated back to tax year 1987. To meet the objectives relating to IRS’ recent experience and use of its tools, we obtained and analyzed the most recent data available from IRS. To update our analyses of nonpayment of U.S. income taxes by FCCs and USCCs, we obtained and analyzed data on U.S. corporate income tax returns for the 1990 and 1991 tax years, again the most recent IRS data available at the time we did our work. See appendix I for a full discussion of our objectives, scope, and methodology. We discussed a draft of this report with responsible Treasury and IRS officials on January 27 and 31, 1995. These officials included members of the Office of the Assistant Secretary of the Treasury for Tax Policy, the Deputy Assistant Commissioner (International), Appeals and other representatives of IRS’ Office of Chief Counsel, and an employee of IRS’ Statistics of Income Division. While generally agreeing with the report’s contents, the officials brought to our attention corrected, updated, and clarifying information. 
We modified the report where appropriate. IRS officials also raised other points that merit special mention. First, they pointed out various initiatives related to transfer pricing that were beyond the scope of our work. These initiatives included recent technological improvements given to IRS international examiners. Second, an IRS official from the Statistics of Income Division noted some limitations of our statistical analysis of the corporations that did and did not pay taxes. By focusing on corporations that paid no tax, we gave little attention to those that paid minimal taxes. By defining large corporations in terms of asset size rather than receipts, we included, for example, a larger number of finance companies such as banks and fewer trade companies than we would have otherwise. Finally, by comparing all FCCs to all USCCs, we did not consider industry differences. Although these limitations are inherent to some degree in our selected methodology, our analyses provide data on corporations that paid minimal taxes (see table V.4) and on industry differences (see table V.6). Further analyses would provide additional insights into the differences between FCCs and USCCs. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Commissioner of the Internal Revenue Service, the Secretary of the Treasury, and other interested parties. The major contributors to this report are listed in appendix VI. If you have any questions, please call me at (202) 512-9044. 
Our objectives directly related to section 482 were to update our 1993 work by providing information and analyzing (1) how much money was involved in transfer pricing cases, and what issues were at hand; (2) what allocation methods were used or proposed; (3) how many resources IRS used in transfer pricing cases; and (4) what regulatory and procedural tools IRS used in transfer pricing cases. Another objective was to update our analyses of FCCs and USCCs that paid no U.S. income taxes by including 1990 and 1991, the latest years for which tax data were available during our review. In some instances, the information we were updating dated back to tax year 1987. To determine how much money and what issues were involved in transfer pricing cases, we used an examination database maintained by IRS’ Assistant Commissioner (International) (AC(I)). At the time we used it, this database contained relevant transfer pricing information extracted from examination reports completed in fiscal year 1993 and the first half of fiscal year 1994. We obtained from IRS’ Office of Appeals information for a similar period generated from its case database that generally had large issues and issues that met certain tax deficiency or other criteria. Finally, we researched 1993 and early 1994 court cases involving transfer pricing issues. To determine the allocation methods used or proposed in transfer pricing cases, we summarized information from a database that IRS compiled in early 1993 covering fiscal years 1990 through 1992 to help it share examination findings across the country. We also gathered data on the allocation methods appearing in advance pricing agreements as of mid-1994. To determine how many resources IRS used in transfer pricing cases, we again used the AC(I) examination database. From this database, we determined the amount of time international examiners and economists spent on cases that involved section 482 issues and cases that did not. 
Similarly, the Office of Appeals provided us with the number of hours spent by Appeals personnel on cases containing section 482 issues as well as other issues. To report on the tools used by IRS personnel in transfer pricing cases, we obtained data covering fiscal year 1993 (and 1994 when available) from AC(I) and interviewed IRS and outside officials. We did not audit any of IRS’ management information systems from which we obtained section 482 data. An IRS internal audit report pointed out problems with some of the international management systems, which IRS is improving. For instance, due to a programming error, IRS officials told us they could not be sure that the 1993 and 1994 international examination information they were giving us was comprehensive or completely accurate, but they were continuing to refine it. According to IRS officials, problems with the data included the loss of significant information categorizing audit adjustments by specific issue, such as section 482. The management information we used was the best available at the time we did our work. To determine the percentage of foreign-controlled and U.S.-controlled corporations that paid no tax, we obtained from IRS’ Statistics of Income (SOI) Division data for 1990 and 1991. As we did in 1993, we broke these statistics out by returns, receipts, and assets. The statistics were based on tax returns as filed and did not reflect IRS audits or net operating loss carrybacks that would result from any losses in future years. The SOI statistics in this report, other than those for corporations with assets of $100 million or more, are based on SOI’s probability sample of taxpayer returns and thus are subject to some imprecision due to sampling variability. In this report, we measure the imprecision with the 95-percent confidence intervals that surround estimates of numbers of taxpayers and the assets and receipts for those taxpayers. 
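The confidence-interval arithmetic behind these estimates is straightforward: the bounds are the point estimate plus or minus the sampling-error half-width. A minimal sketch in Python, using the report's 1991 figure for nontaxpaying FCCs (the function name is ours, introduced only for illustration):

```python
# Sketch: bounds of a 95-percent confidence interval from a point
# estimate and its sampling-error half-width.
def confidence_interval(estimate, half_width):
    """Return the (lower, upper) bounds of the interval."""
    return estimate - half_width, estimate + half_width

# The report's 1991 estimate: 35,138 nontaxpaying FCCs, plus or minus 3,996.
lower, upper = confidence_interval(35_138, 3_996)
print(lower, upper)  # 31142 39134
```

The "chances are 19 out of 20" phrasing used in the report is simply the standard reading of a 95-percent interval (19/20 = 0.95).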
For example, our finding that 35,138 foreign-controlled corporations did not pay tax in 1991 is surrounded by a 95-percent confidence interval of plus or minus 3,996. This means that the chances are 19 out of 20 that the interval from 31,142 to 39,134 includes the actual number of such corporations. Table I.1 shows this and other confidence intervals for 1991. We worked mainly with IRS’ National Office in Washington, D.C. We did our review from April through December 1994 in accordance with generally accepted government auditing standards. In fiscal years 1993 and 1994, IRS examiners continued to propose substantial section 482 adjustments to income. However, IRS appeals officers, who are charged with resolving tax controversies without litigation, continued in 1993 and 1994 to substantially reduce adjustments proposed by examiners in earlier years. Further, IRS had mixed success in litigating important section 482 cases in the courts, although this was an improvement over its record in earlier years. As shown in table II.1, in fiscal years 1993 and 1994, for cases with total proposed adjustments of $20 million or more, IRS proposed section 482 income adjustments for 51 and 64 taxpayers, respectively, of $1.8 billion and $3.5 billion. The number of proposed adjustments was similar to or larger than the numbers in previous years, although the associated dollar amount for 1993 was significantly lower than in several of those years. The dollar amount can fluctuate, however, on the basis of a few large adjustments. Also, proposed adjustments to income may or may not result in increased tax collections, depending on such things as whether a company has offsetting adjustments, offsetting corporate net operating losses carried over from other years, or success in challenging the proposed adjustment. 
Table II.1: Proposed Section 482 Income Adjustments of Foreign- and U.S.-Controlled Corporations With $20 Million or More of Total Proposed Adjustments Note 1: A few large adjustments significantly affect comparisons of adjustments for foreign- and U.S.-controlled corporations because they comprise large percentages of the totals. Note 2: We generally used IRS’ determinations of whether particular corporations were foreign controlled, but if we were aware that an IRS determination was incorrect, we used our own. While the cases with proposed adjustments of $20 million or more accounted for the bulk of all the dollars in proposed section 482 adjustments, they did not account for the majority of the number of proposed adjustments. For instance, as shown in table II.2, for fiscal year 1993, IRS proposed $2.2 billion in section 482 adjustments to income for 247 FCCs and 122 USCCs. Only about 12 percent of the section 482 issues for any of these taxpayers were for tax years after 1990, the first recent year in which transfer pricing received intensive congressional scrutiny. Tax years ranged from 1975 through 1992. Because we received the 1994 information at the end of our review, we could not similarly analyze it. According to IRS, both 1993 and 1994 adjustments for transfer pricing might be understated due to data that were lost in implementing a new management information system. The nature of the section 482 issues IRS disputed varied from company to company. As shown in table II.3, several different types of issues accounted for at least 7 percent each of the section 482 issues IRS raised in fiscal year 1993. These relatively frequently occurring issues were interest, royalties, intercompany pricing of tangible goods, general allocation of income and deductions, and service charges and fees, which together accounted for 55 percent of all section 482 challenges by IRS and 71 percent of section 482 dollars involved. 
As shown in figure II.1, for cases closed in fiscal year 1993 or the first half of fiscal year 1994, IRS spent about a third of its total international examiner time, and a much higher percentage of its economist time, on those cases that had a section 482 issue, among other issues. We could not break out the amount of time spent only on section 482 because AC(I) did not have a system for capturing staff time on individual Internal Revenue Code sections. However, using a formula provided by IRS, we estimated that for cases closed in fiscal year 1993, IRS spent about 148 international examiner staff years and about 13 economist staff years on examinations that included a section 482 adjustment. The corresponding numbers for fiscal year 1992 were 164 and 19. According to IRS officials, the time spent on cases closed is expected to fluctuate from year to year because large cases may close that incurred time charges over several years. In addition, most of the time reflected on cases that close early in a fiscal year will likely have been incurred on the cases in previous years. When adjustments to income are proposed in examinations, the taxpayer can take the dispute to Appeals, which is the administrative body within IRS authorized to settle tax controversies. Also, when litigation is involved, so is IRS’ Chief Counsel. In March 1993, we testified that IRS’ experience with section 482 issues in Appeals had not improved from the experience we described in our 1992 report. Since that time, while IRS examiners again identified billions of dollars in proposed adjustments to income for section 482 issues, Appeals again substantially reduced previous years’ proposals. 
As table II.4 shows, as of June 30, 1994, IRS had 114 large taxpayers with total section 482 proposed adjustments to income of $14.4 billion pending administrative resolution in Appeals or litigation with Counsel, as opposed to the 180 taxpayers with proposed section 482 adjustments of $14.4 billion as of September 30, 1992. Thus, although the dollar value of open proposed adjustments stayed the same, the number of taxpayers involved declined. As of June 30, 1994, IRS Appeals officials had spent almost 96,000 hours on the open work units that contained section 482 issues as well as other issues. None of the tax returns involved were for tax periods after 1990. Of the $14.4 billion in proposed adjustments as of June 30, 1994, $1.8 billion was for foreign-controlled issues. “Open” issues are issues referred beyond examination but not yet settled. In March 1993, we testified that from 1987 through 1989, 29 percent of section 482 proposed adjustments to income were sustained by IRS. As shown in table II.5, this number rose to 52 percent in 1990 but declined in the years afterwards to the 21 to 28 percent range. A very large part of the section 482 proposed adjustments was closed by Appeals rather than by Counsel. Sustention rates for foreign-controlled issues fluctuated more than did those for U.S.-controlled issues. We received the 1994 information late in our review and did not have a chance to ascertain the breakout between foreign- and U.S.-controlled issues. “Closed” issues are issues where settlement has been reached. In June 1994, we reported on recommended tax, credit, and penalty adjustments for large dollar issues. During fiscal year 1993, Appeals and Counsel closed 82 transfer pricing issues (involving 51 taxpayers), and the vast majority of those applied to tax years before 1988. 
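The sustention rates cited in this section reduce to a simple ratio: dollars ultimately sustained by Appeals or Counsel as a share of the dollars examiners originally proposed. A minimal sketch, using rounded fiscal year 1994 figures of the kind reported here, about $208 million sustained of roughly $1 billion proposed (the function name is ours):

```python
# Sketch: sustention rate = dollars sustained / dollars originally
# proposed, expressed as a percentage. Figures are in millions of
# dollars, rounded as in the report.
def sustention_rate(sustained, proposed):
    """Sustained share of originally proposed adjustments, in percent."""
    return 100 * sustained / proposed

rate = sustention_rate(208, 1_000)
print(round(rate))  # 21
```

The same arithmetic underlies the 29-percent and 52-percent rates quoted for earlier years; only the sustained and proposed totals change.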
During fiscal year 1994, Appeals and Counsel closed 61 transfer pricing issues and sustained about $208 million of the slightly less than $1 billion proposed by IRS, for a sustention rate of 21 percent. Using a formula provided by IRS, we estimated that Appeals personnel spent about 25 staff years on work units that included at least one section 482 issue, as well as possibly other issues, that were closed during fiscal year 1993. Of these years, 7 percent, or about 2 years, were spent on foreign-controlled issues, and 93 percent, or about 23 years, on U.S.-controlled issues. For cases closed during fiscal year 1992, Appeals personnel spent about 44 staff years. In recent years, IRS began tracking the reasons why it reached a particular settlement in particular cases. As table II.6 shows, the most prevalent reason why IRS reached a settlement of transfer pricing issues in cases closed during fiscal year 1993 was concern about whether a court would apply the same judgment to the evidence as IRS had. This reason appeared in 29 of the 82 issues closed, accounting for 63 percent, or $426 million, of the $677 million reduction in the original proposed adjustments to income. Two other reasons appeared in 14 issues each. These were (1) uncertainty about how the court would interpret or apply the law in a particular case and (2) the need to deal with new facts or evidence obtained and evaluated by Appeals or Counsel. As shown in table II.7, IRS’ recent experience in the courts with section 482 has been mixed. However, this is an improvement over what we reported in our 1993 testimony—that is, that IRS lost a significant section 482 issue for each of the five cases we examined. Table II.7 summarizes the outcomes as follows:

- Mixed—The Tax Court increased the taxpayer’s income under section 482 by $40.6 million to bring the pricing relationships closer to what would have occurred at arm’s length. According to the Court, neither party presented evidence that would satisfy any of the prescribed methods of transfer pricing under the section 482 regulations. The Court, however, adopted the IRS expert’s analysis, with certain modifications, as the “least unacceptable methodology” presented.

- Technology, Inc.: Mixed—The Tax Court held that IRS’ reallocation of income under section 482 was arbitrary and capricious but that the manufacturer failed to prove that the transactions were arm’s length. The Court determined the appropriate transfer prices on the basis of its own “best estimate.”

- Mixed—The Tax Court noted that in this case, as in other significant section 482 cases, each party spent most of its time attacking the other party’s allocation formula rather than establishing the soundness of its own formula. Thus, the Court was left to find a formula without the benefit of sufficient help from the parties. Income allocations to the taxpayer were less than IRS had called for.

- Taxpayer—The Tax Court rejected IRS’ income reallocation, stating that restrictions imposed by the Saudi Arabian government made a higher transfer price legally unfeasible.

In this report, we identified four major section 482 cases that had been decided in court between January 1, 1993, and May 20, 1994. These cases took an average of 15 years from the earliest tax year audited until resolution in the courts. As we reported in our 1993 testimony, cases like these illustrate how disputes over section 482 issues can become extremely expensive for taxpayers and the government by requiring the employment of outside experts, resulting in long, drawn-out litigation, and keeping corporate tax liabilities in an uncertain status for years. In three of the cases, the Tax Court expressed its displeasure with IRS and found its reallocation of income to be arbitrary, capricious, or unreasonable. 
However, an IRS official noted that, under the applicable standard of review, the Tax Court must find IRS “arbitrary and capricious” before it can depart from the adjustments IRS proposed, and the Court then tries to find a middle ground between the company and IRS. Thus, even though IRS was found arbitrary, capricious, and unreasonable in the National Semiconductor case, the Tax Court still increased the taxpayer’s income by $40.6 million. IRS also noted that in the one case in which the taxpayer was the complete winner—the Aramco Advantage case—the Tax Court limited IRS’ authority to propose adjustments under section 482. IRS used certain new and existing procedural tools—designated summonses, formal document requests, additional penalties, simultaneous examinations, and arbitration—sparingly. However, advance pricing agreements (APA) were increasingly used. Gauging the ultimate impact of procedural tools is difficult because, while their direct use is limited, their real effectiveness may be as deterrents that alter behavior and as remedies to uncertainty. Also, while IRS can use some of the tools unilaterally, it must have a partner for others. As shown in table III.1, very few designated summonses for transfer pricing cases have been used since they were sanctioned by Congress in 1990. Designated summonses are summonses issued by IRS that can suspend the running of the statute of limitations governing the time IRS has for assessing additional taxes against a taxpayer. According to IRS officials, increased taxpayer cooperation has made the use of designated summonses unnecessary, and just the threat that a designated summons could be issued is enough to prompt taxpayers to act. A similar reason was given for the drop in formal document requests issued by IRS, also shown in table III.1, from 28 in 1991 to 1 in the first half of fiscal year 1994. 
Formal document requests are issued when relevant taxpayer documents are outside the United States. According to one IRS official, the need for formal document requests has been overtaken by the success of section 6038A of the Internal Revenue Code, which outlines recordkeeping requirements for certain foreign-owned corporations. IRS provided us many examples of taxpayers who became more compliant in the face of section 6038A. Since our 1993 testimony, IRS has continued to make progress with its APAs, which are agreements under a program begun in fiscal year 1991 in which IRS approves ahead of time the methodology a taxpayer volunteering for the program will use in setting transfer prices. According to IRS’ APA program director, as of January 1995, 26 APAs were complete, up from the 9 APAs we cited in our 1993 testimony. The director also anticipated substantial additional growth in APAs and growth in APA staff resources in the immediate future as the APA office was entertaining about 100 active matters. He expected the extra resources to result in reduced audit demands and enhanced voluntary compliance. When we asked, IRS did not have information on the number of staff hours it had devoted to APAs. Section 6662 of the Internal Revenue Code, as modified in 1990 and 1993, imposes substantial penalties on tax underpayments attributable to certain transfer prices that were substantially misstated. According to an IRS official, only one penalty has been proposed since new provisions took effect in 1990 because substantiating penalties was originally difficult on account of the broad exceptions allowed. The difficulty was eased, according to the official, starting in April 1993, when tougher documentation standards went into effect. To head off concerns that penalties might be applied inconsistently or unfairly, future penalties must be reviewed by an IRS Penalty Oversight Committee. 
Since the latest version of the penalties was enacted for tax years beginning after December 31, 1993, the full force and impact of the penalty cannot be measured at this time. However, IRS officials believe that taxpayers are acting differently than they did before because of the existence of the penalties. In simultaneous examinations, the United States and another country examine related parties under their jurisdictions at the same time in an effort to promote international tax compliance and information exchange. As shown in figure III.1, the number of simultaneous examinations proposed in fiscal years 1993 and 1994—18—was substantially lower than the 33 proposed in 1991 and 1992, which in turn was a much higher number than in previous years. However, from 1991 through 1994, only a moderate number of the proposed simultaneous examinations had been accepted for follow-through at the time of our review, although a few were still pending. According to an IRS official, the relatively high number of proposals in 1991 and 1992 was due to a misunderstanding between IRS and foreign countries. As we testified in 1993, for years, at least since our 1981 report on transfer pricing, IRS has emphasized that its simultaneous examination program is important in protecting U.S. interests in international tax enforcement. According to an IRS official, however, questions of the timing of U.S. and foreign examinations, examination proposal format, and resources have presented obstacles to doing more simultaneous examinations. However, IRS has developed model procedures to encourage the carrying out of more of these examinations. Only one section 482 case has been resolved through arbitration, even though both IRS and the taxpayer considered that arbitration a success. Under Tax Court Rule 124, any time a factual case is at issue and before trial, the parties to the case may move to resolve it through voluntary binding arbitration. 
In the one case, the arbitration panel ruled in favor of IRS after the agency had substantially modified the position it took at examination. According to the taxpayer’s attorneys, arbitration reduced the taxpayer’s expenses, improved its settlement opportunities, and produced a binding decision. Although recommendations have been made to improve the arbitration process, a second case has not been forthcoming. Before arbitration can occur in a specific instance, both IRS and the taxpayer have to voluntarily agree to it, which is unlikely if either party is unsure of the strength or desirability of its arbitration position. However, IRS officials indicated their interest in pursuing arbitration and other means of alternative dispute resolution. IRS was beginning a program that would allow certain cases to be subject to mediation. Although IRS has issued new regulations on section 482, how successful they will ultimately be in resolving transfer pricing issues is unclear. Subjectivity will still be involved, and thus controversy may still arise. The flexibility that the regulations allow taxpayers for tax planning must be weighed against the flexibility allowed IRS and the more stringent documentation and penalty provisions that have recently been enacted. As we testified in 1993, the arm’s length standard required that the price charged on a transaction between related corporations be the price that would have been charged if the corporations had been unrelated. To enforce this standard, IRS had to analyze comparable transactions between unrelated corporations to identify the arm’s length price that the related corporations should have charged. If IRS found a difference between an arm’s length price and the price that the related corporations charged, it could propose an adjustment to the related corporations’ income. IRS and taxpayers have used direct and indirect methods for identifying arm’s length prices. 
The comparable uncontrolled price method was based on a direct comparison of the prices charged on readily identifiable, comparable transactions between unrelated parties. More indirect methods, such as the resale price, cost plus, and other appropriate methods, based prices on comparisons with unrelated corporations performing similar functions. These methods may require IRS examiners to use considerable judgment and to develop and analyze a great deal of data. A major obstacle in enforcing the arm’s length standard has been the difficulty that IRS examiners have had in finding readily identifiable, comparable transactions. The data requirements and the subjective nature of the pricing methods imposed a significant administrative burden on both corporate taxpayers and IRS, and also led to uncertainties for corporations about their ultimate tax liabilities. As shown in figure IV.1, indirect methods have been used for most section 482 cases. For transfer pricing issues completed in fiscal years 1990 through 1992, IRS international examiners reported that the three then most well-defined methods—comparable uncontrolled price, cost plus, and resale price—had been used only about half the time, with other methodologies being used the other half. Similarly, these three methods accounted for only 38 percent of the 75 methods used in APAs that had been started and/or finished without being withdrawn as of July 7, 1994, and that had decided-upon methodologies. Another 36 percent used other profit measures or formulary apportionment, and the remaining 27 percent used miscellaneous other allocation methods. IRS officials indicated that formulary apportionment has been used only in difficult cases and only after obtaining the agreement of affected treaty partners. Because the APA information is more recent than the examination information we have, it is more likely to show recent trends in methodologies. 
Figure IV.1: Allocation Methods Reported by IRS Examiners and Those Used in Advance Pricing Agreements (APA data as of July 7, 1994) Note 1: Percentages do not add to 100 due to rounding. Note 2: Eighteen of the 430 transfer pricing issues reported by IRS examiners arose in APAs. The difficulty of finding comparables is further illustrated by concerns expressed by the IRS Commissioner’s Advisory Group Task Force on Third Party Transfer Price Information. These concerns centered on the confidentiality of price and cost information and the reliability of aggregated data on comparables. The problem of access to information on comparable transactions, and the difficulty of finding such transactions for many intangible properties, make transfer pricing difficult for both taxpayers and IRS. On July 1, 1994, the Department of the Treasury issued new final regulations on intercompany transfer pricing, replacing temporary and proposed regulations issued on January 21, 1993. According to Treasury, the new regulations emphasize comparability and flexibility. The “best method rule” under the regulations provides that the pricing method the taxpayer chooses must be the one that gives the most reliable measure of the arm’s length result under the facts and circumstances. Thus, both taxpayers and IRS will have to use considerable judgment in applying the arm’s length standard. To help taxpayers and IRS exercise this judgment, the new regulations have a greatly expanded discussion of the factors to consider in applying the best method rule. As we said in our 1993 testimony, because the best method approach is still based on the facts and circumstances of each case, the task of selecting and justifying transfer prices will remain complex and open to interpretation. 
Some commentators on the new regulations have echoed this point, saying that there is room for aggressive tax maneuvering by taxpayers and warning that examination and litigation controversy between taxpayers and IRS will increase while certainty decreases. According to Treasury officials, however, the general opinion is that the regulations are relatively workable. Further complicating the task of dealing with the regulations are the data problems described earlier. According to the Commissioner’s Advisory Group Task Force, the data needed to apply the section 482 regulations are extensive, but concerns about confidentiality and the reliability of aggregated data inhibit the creation of a single database of comparables. According to an IRS official, this lack of good data on comparables led IRS to develop profit-oriented methods that do not rely so heavily on information on comparable transactions. Still, the new regulations prefer the use of direct evidence of arm’s length prices over profit-related methods. The lack of good data also explains IRS’ practice of asking individual companies other than the one being audited to voluntarily submit information on comparables. Without this information, IRS believes that it runs the risk of inappropriately raising or abandoning section 482 issues, settling for far less than the arm’s length amount, or failing to sustain litigation. The increased documentation required by new penalty legislation may ease the regulatory risks and IRS’ enforcement burden, despite the continuing difficulty in identifying comparables. The requirement for contemporaneous documented support provides IRS with information about the methods used by taxpayers when setting their transfer prices. According to IRS officials, this information provided by taxpayers should enable examiners to determine whether transfer prices are appropriate without, in each case, having to develop alternative methods and data. 
These IRS officials believed that the documentation requirements combined with the substantial misstatement penalties also provided in the legislation will reduce transfer pricing abuse. The documentation requirements may add to the compliance burden of taxpayers to the extent that transactions will be documented whether or not they are at issue with IRS. However, contemporaneous documentation will be useful to taxpayers in justifying their prices and in avoiding the penalties. Furthermore, taxpayers should benefit from the new section 482 regulations’ additional guidance in determining comparable transactions and from the recognition that the arm’s length price may belong to a range of prices. Taxpayers with prices within the arm’s length range will be protected to some degree from small changes in transfer prices by IRS that result in large increases in tax liabilities. A higher percentage of FCCs than USCCs paid no U.S. income tax in each year from 1987 to 1991. Larger corporations (both foreign- and U.S.-controlled) were more likely than smaller corporations to pay U.S. income tax. Overall, nontaxpaying corporations accounted for a smaller proportion of total assets and receipts in 1991 than taxpaying corporations did, which means that the relatively smaller proportion of taxpaying corporations generated the majority of receipts. But the very small number of large nontaxpaying corporations accounted for a disproportionately large share of total corporate assets and receipts. Finally, large taxpaying FCCs, on average, paid less U.S. income tax relative to receipts than large taxpaying USCCs. Analyzing foreign- and U.S.-controlled corporations’ costs of goods sold, purchases, and other tax data as a percentage of their gross receipts indicated differences but provided no clear evidence of transfer pricing abuses. Comparing the types of industries represented by the largest foreign- and U.S.-controlled corporations may explain some of the differences between them. 
FCCs were less likely than USCCs to pay U.S. income tax. In 1991, 73 percent of FCCs versus 62 percent of USCCs paid no U.S. income tax. The trend over 5 years, as shown in table V.1, was relatively constant for FCCs. The change in the percentage of FCCs that did not pay U.S. income tax—from 71 to 73 percent—is small relative to the sampling error in the 1991 estimate of about plus or minus 4 percent. Although the USCC data are also based on statistical samples, the difference for the USCCs that did not pay income tax—from 57 percent in 1987 to 62 percent in 1991—is statistically significant. This increase should be interpreted with care. The absolute number of USCCs that did not pay income tax in 1991 was the lowest of the 5 years—1,265,272, as opposed to 1,333,470 in 1987. The explanation lies in the changing number of USCCs: the total number of USCCs decreased faster than the number of nontaxpaying USCCs did. As table V.2 shows, large FCCs and USCCs were more likely than their smaller corporate counterparts to pay U.S. income tax. In each year from 1987 through 1991, a higher percentage of FCCs and USCCs with assets of $100 million or more paid income tax than did FCCs and USCCs with assets of less than $100 million. While the largest companies were more likely than smaller companies to pay U.S. income tax, an increasing number of the largest companies, both foreign- and U.S.-controlled, paid no tax in these 5 years. The number of large foreign-controlled corporations that did not pay U.S. income tax more than doubled in this period, from 297 in 1987 to 715 in 1991. During the same period, the number of large U.S.-controlled corporations that did not pay U.S. income tax also increased, although not as dramatically, from 2,483 in 1987 to 3,713 in 1991. 
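The caution above, that a rising nontaxpaying share can coexist with a falling nontaxpaying count, is a matter of arithmetic: the denominator shrank faster. The sketch below backs out the implied total USCC populations from the report's counts and percentages (this is our illustrative calculation, not an SOI tabulation; the function name is ours):

```python
# Sketch: total population implied by a count and the share of the
# total it represents. Counts and shares are the report's USCC figures.
def implied_total(count, share):
    """Total implied when `count` makes up `share` (a fraction) of it."""
    return count / share

total_1987 = implied_total(1_333_470, 0.57)  # roughly 2.3 million USCCs
total_1991 = implied_total(1_265_272, 0.62)  # roughly 2.0 million USCCs

# The total fell faster than the nontaxpaying count, so the share rose.
print(total_1987 > total_1991)  # True
```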
This trend among the largest corporations is not based on samples and therefore is not subject to sampling error. Nontaxpaying corporations, both foreign- and U.S.-controlled, accounted for the majority of all returns filed, but for much smaller proportions of total assets and receipts. This indicates that many of the nontaxpaying corporations, both foreign- and U.S.-controlled, were smaller in terms of assets and generated fewer receipts than their taxpaying counterparts. While 73 percent of FCCs did not pay U.S. income tax in 1991, these 35,138 nontaxpaying corporations accounted for only 37 percent of the assets and 31 percent of the gross receipts of all FCCs that year, as shown in table V.3. This means the 13,108 taxpaying FCCs accounted for only 27 percent of the returns but 69 percent of receipts. Note 1: “Large” refers to those USCCs and FCCs that had $100 million or more in assets. Note 2: All percentages are of total FCCs or USCCs, including both those that paid income tax and those that did not. This observation is even more striking for the nontaxpaying USCCs. They numbered 1,265,272 or 62 percent of all U.S.-controlled corporate returns filed, but accounted for only 20 percent of the assets and 19 percent of receipts. So, the 38 percent of USCCs that paid income tax in 1991 had 81 percent of all gross receipts generated by USCCs. The largest nontaxpaying FCCs and USCCs were relatively few in number but had a disproportionately large share of the total assets and receipts of all FCCs and all USCCs. In 1991, 715 nontaxpaying FCCs had assets of $100 million or more. These 715 corporations represented only 1.5 percent of all FCCs that filed U.S. corporate tax returns in 1991 but, as shown in table V.3, 31 percent of all FCCs’ assets and 22 percent of their total receipts. 
In contrast, the 34,423 nontaxpaying FCCs with assets of less than $100 million were 71 percent of all FCCs but accounted for only 9 percent of total FCC receipts and 6 percent of total FCC assets. The observation also holds for USCCs. The 3,713 nontaxpaying USCCs with assets of $100 million or more accounted for only 0.2 percent of returns but 16 percent of assets and 7 percent of receipts generated by all USCCs in 1991. Thus, the 1,261,559 nontaxpaying USCCs with assets under $100 million represented 62 percent of all USCC returns but only 4 percent of assets and 12 percent of the gross receipts. As shown in table V.4, the large FCCs that did pay U.S. income tax paid less tax relative to total gross receipts in 1991 than the large USCCs that paid tax. Also, at any given level of U.S. income taxes paid, large FCCs paid, on average, less tax and, except for those paying $1 million or more in taxes, had higher gross receipts and assets than their taxpaying U.S. counterparts. Finally, large FCCs that paid no U.S. income tax in 1991, on average, had larger assets and more than double the receipts of the large USCCs that paid no tax. (Table V.4: Profile of Large Foreign- and U.S.-Controlled Corporations, by Amount of Income Taxes Paid, 1991; columns show assets and receipts in millions of dollars and income taxes paid in thousands of dollars.) Ratios of cost of goods sold and other expense deductions to receipts reveal interesting differences between large nontaxpaying FCCs and USCCs but no clear evidence of transfer pricing abuse. We calculated various components of cost of goods sold—including purchases, cost of labor, and inventory—and other items—including interest paid and taxes paid—as percentages of total gross receipts, and then compared the results for the large FCCs and USCCs. These are factors that we considered and reported on in our earlier reports and testimony. 
Higher cost of goods sold and other items in relation to receipts may indicate, but do not necessarily prove, transfer pricing abuses. Table V.5 presents the factors that differed most significantly. For the large nontaxpaying FCCs, cost of goods sold accounted for 65.7 percent of receipts, while purchases accounted for 47.7 percent of receipts. In contrast, cost of goods sold and purchases for large nontaxpaying USCCs accounted for smaller proportions of receipts—43.0 and 25.7 percent, respectively. This may indicate, though it is not clear evidence, that large nontaxpaying FCCs abused transfer pricing—that is, shifted taxable income by inflating the prices paid to related parties outside the United States for goods and services. Since one potential means of shifting income is inflating the interest paid to a related party, we might also expect interest paid as a percentage of receipts to be higher for the nontaxpaying FCCs than for the USCCs. But the opposite appears to be true: interest paid as a percentage of receipts was higher (11.4 percent) for large nontaxpaying USCCs than for their FCC counterparts (9.5 percent). A recent study showed various factors at work in accounting for the depressed earnings of foreign firms in the United States. It attributed these earnings to the firms having bought underperforming U.S. firms at top dollar, borrowed heavily, and spent freely on investment and marketing. The study also found some evidence that would point to profit shifting as a contributor to the low earnings. The type of business may also help explain the differences observed between FCCs and USCCs in their relative cost of goods sold, purchases, interest paid, and income taxes paid. Specifically, as shown in table V.6, while more than three-quarters of the large nontaxpaying USCCs were in finance, insurance, and real estate, little more than one-third of the FCCs were in these businesses. 
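The comparison behind table V.5 is a simple ratio analysis: each expense component is expressed as a percentage of total gross receipts. The sketch below illustrates the computation; the dollar amounts are hypothetical, chosen only so the resulting ratios reproduce the FCC figures cited above (65.7, 47.7, and 9.5 percent), which come from the report itself.

```python
def pct_of_receipts(component: float, receipts: float) -> float:
    """Return an expense component as a percentage of gross receipts, to 1 decimal."""
    return round(100.0 * component / receipts, 1)

# Hypothetical aggregates for one group of corporations (in $ millions).
receipts = 10_000.0
components = {"cost_of_goods_sold": 6_570.0,
              "purchases": 4_770.0,
              "interest_paid": 950.0}

ratios = {name: pct_of_receipts(amount, receipts)
          for name, amount in components.items()}

# With these inputs the ratios match the reported FCC percentages.
assert ratios == {"cost_of_goods_sold": 65.7, "purchases": 47.7,
                  "interest_paid": 9.5}
```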
In contrast, almost one-half of the FCCs were in manufacturing or wholesale trade, compared with less than 15 percent of the USCCs, which may explain the large nontaxpaying FCCs’ relatively higher cost of goods sold and purchases relative to receipts.
Note 1: Because we defined large corporations in terms of asset size rather than receipts, this industry breakdown is skewed in favor of finance companies, which have large assets, and against trade companies, which have large receipts.
Note 2: The detail may not sum to the total because of rounding differences.
Carolyn B. Alessi, Evaluator
Pursuant to a congressional request, GAO provided information regarding transfer pricing issues and foreign-controlled corporations' (FCC) and U.S.-controlled corporations' (USCC) tax compliance, focusing on: (1) the Internal Revenue Service's (IRS) handling of transfer pricing issues through its examinations, appeals, and litigation functions; and (2) IRS use of available regulatory and procedural tools. GAO found that: (1) recent IRS experiences with transfer pricing cases have been mixed; (2) although there were as many regulatory violations in 1993 and 1994 as in previous years, the value of the 1994 adjustments increased $1.3 billion over 1993 adjustments; (3) a large number of the 1993 and 1994 cases involved pricing methods other than the three methods specifically described in earlier IRS regulations; (4) the outcomes of IRS appeals and legal processes for the 2 years were similar to those in 1987 and 1988, with a sustention rate of about 30 percent of the proposed adjustments' value; (5) IRS has used certain procedural tools, such as simultaneous examinations and arbitration, as effective deterrents to abusive transfer pricing practices; (6) IRS expects to increase its use of advanced pricing agreements; (7) the success of the new transfer pricing regulations remains to be seen; (8) about 75 percent of FCC and 60 percent of USCC paid no U.S. income tax between 1987 and 1991; (9) the corporations that paid U.S. taxes in 1991 held 80 percent of FCC and USCC assets and generated 81 percent of their receipts; (10) the largest nontaxpaying corporations accounted for most FCC and USCC assets and receipts; and (11) factors other than transfer pricing abuse may contribute to the differences in tax amounts paid by FCC and USCC.
The next Congress and new administration will confront a set of pressing issues that will demand urgent attention and continuing oversight to ensure the nation’s security and well-being. The goal of our transition planning is to look across the work we have done and across the scope and breadth of the federal government’s responsibilities to offer insights into areas needing immediate attention. A few examples follow: Oversight of financial institutions and markets: As events over the past few days have underscored, oversight of the U.S. housing and financial markets will certainly be among the priority matters commanding the attention of the new administration and the 111th Congress. These sectors of our economy have been going through a period of significant instability and turmoil. Congress has taken a number of steps to address some of the immediate effects of the market turmoil, including enactment of the Federal Housing Finance Regulatory Reform Act of 2008, which, among other things, strengthens regulation of the housing government-sponsored enterprises (GSE) and provides authority to the Treasury to purchase any amount of Fannie Mae and Freddie Mac securities. We are closely monitoring a range of implications of the current market turmoil, including the financial condition of GSEs and the implications of the Treasury exercising this new authority to stabilize GSEs. In addition, recent bank failures and growing numbers of banks on the “Watchlist” raise questions about the impact on the banking system and future federal exposures as well as on the bank insurance fund. We have a larger body of work that involves auditing the Federal Deposit Insurance Corporation, the newly created Federal Housing Finance Agency, and the consolidated financial statements of the U.S. government, as well as evaluating ongoing developments in the housing and financial markets. 
We will draw on this work to provide observations and advice, as appropriate, on how best to ensure the stability of our nation’s financial system. While these serious disruptions require immediate attention and careful monitoring, ongoing turmoil in the housing and financial markets has renewed concerns about whether the current system for overseeing and regulating financial institutions and markets is best suited to meet the nation’s evolving needs and 21st century challenges. Later this year we plan to issue a report describing the evolution of the current regulatory structure and how market developments and changes have introduced challenges for the current system. We believe this reassessment is needed to ensure that these types of serious disruptions can be minimized in the future. As part of this work, we are also developing a framework to assist Congress in evaluating alternative regulatory reform proposals. U.S. efforts in Iraq and Afghanistan: Policy and implementation issues will remain on the horizon for these and other international challenges. Hundreds of billions of dollars have been provided to the Department of Defense (DOD) for military operations in Iraq and Afghanistan, as well as for U.S. efforts to support security, stabilization and reconstruction, and capacity building in these countries. These efforts include developing security forces, rebuilding critical infrastructure, and enhancing the countries’ capacity to govern. Since 2003, we have issued more than 175 reports on military operations and various aspects of U.S. efforts to achieve the goals in Iraq and Afghanistan. Our transition work will highlight the major implementation issues that need to be addressed to ensure accountability and assess progress regardless of what policies are pursued. 
DOD’s readiness and capabilities: Extended operations in Iraq, Afghanistan, and elsewhere have had significant consequences for military readiness, particularly with regard to the Army and Marine Corps. Current operations have required the military to operate at a persistently high tempo with the added stress of lengthy and repeated deployments. In addition, because of the significant wear and tear on equipment, refocusing of training on counterinsurgency operations, and other factors, rebuilding readiness of U.S. forces is a major challenge for DOD. At the same time, DOD faces competing demands for resources given broad-based initiatives to grow, modernize, and transform its forces. We will offer our perspective on the competing demands DOD faces and the need to develop sound plans to guide investment decisions, as it reassesses the condition, size, composition, and organization of its total force, including contractor support, to protect the country from current, emerging, and future conventional and unconventional security threats. Protection at home: DHS must remain prepared and vigilant with respect to securing the homeland, particularly during the transition period, when the nation may be especially vulnerable. In doing so, it is important that the new administration address key issues that, as we reported, have impacted and will continue to impact the nation’s security and preparedness, including better securing our borders, enforcing immigration laws, and serving those applying for immigration benefits; defining key preparedness and response capabilities and building and maintaining those capabilities through effective governmental and external partnerships; and further strengthening the security and resiliency of critical infrastructure to acts of terrorism. 
In achieving its critical mission, we found that DHS needs to more fully integrate and strengthen its management functions, including acquisition and human capital management; more fully adopt risk-based principles in allocating resources to the areas of greatest need; and enhance the effectiveness of information sharing among federal agencies and with state and local governments and the private sector. The decennial census: The results of the 2010 census are central to apportionment, redistricting, and the distribution of hundreds of billions of dollars in federal aid. Soon after taking office, the new administration will face decisions that will shape the outcome of this central effort. Next spring the first nationwide field operation of the 2010 decennial census will begin. During address canvassing, the Census Bureau will rely, for the first time, on hand-held computers to verify address and map information. Earlier this year, we designated the decennial census as a high-risk area, in part, because of ongoing challenges in managing information technology—including hand-held computers—and uncertainty over the total cost of the decennial census and the Bureau’s plans for rehearsing its field operations. The Bureau has taken some important steps to get the census back on track but did not rehearse its largest and most costly field operation—non-response follow-up—and has little time for further course correction as it prepares to carry out the national head count. While facing pressing issues, the next Congress and new administration also inherit the federal government’s serious long-term fiscal challenge—driven on the spending side by rising health care costs and changing demographics. This challenge is complicated by the need to respond in a timely way to developments such as the recent economic pressures and troubles in the housing and financial markets. 
Ultimately, however, the new administration and Congress will need to develop a strategy to address the federal government’s long-term unsustainable fiscal path. Planning for the transition will necessarily need to address the fact that achieving meaningful national results in many policy and program areas requires some combination of coordinated efforts among various actors across federal agencies, often with other governments (for example, internationally and at state and local levels), non-government organizations (NGO), for-profit and not-for-profit contractors, and the private sector. In recognition of this fact, recent years have seen the adoption of a range of national plans and strategies to bring together decision makers and stakeholders from different locations, types of organizations, and levels of government. For example, the National Response Plan is intended to be an all-discipline, all-hazards plan that establishes a single, comprehensive framework for managing domestic incidents where involvement is necessary among many levels of government, the private sector, and nonprofit organizations. The response and recovery efforts after 9/11 and natural disasters, the nation’s preparations for a possible pandemic influenza, and the need to address global food insecurity are some of the many public issues that vividly underscore the critical importance of employing broad governance perspectives to meet global and national needs. Our transition work will highlight challenges the new Congress and next administration face in devising integrated solutions to such multi-dimensional problems. Some examples follow: Care for servicemembers: Over the last several years, more than 30,000 servicemembers have been wounded in action, many with multiple serious injuries such as amputations, traumatic brain injury, and post-traumatic stress disorder. 
We have identified substantial weaknesses in the health care these wounded warriors are receiving, as well as in the complex and cumbersome DOD and VA disability systems they must navigate. While improvement efforts have started, addressing the critical continuity-of-care issues will require sustained attention, systematic oversight by DOD and VA, and sufficient resources. Health care in an increasingly global market and environment: The spread of severe acute respiratory syndrome (SARS) from China in 2002, recent natural disasters, and the persistent threat of an influenza pandemic all highlight the need to plan for a coordinated response to large-scale public health emergencies. Federal agencies must work with one another and with state and local governments, private organizations, and international partners to identify and assess the magnitude of a threat, develop effective countermeasures (such as vaccines), and marshal the resources required for an effective public health response. Our transition work on these topics—including work related to such emergencies as SARS, Hurricane Katrina, pandemic influenza, bioterrorism, and TB—will highlight that federal agencies still face challenges such as coordinating response efforts and developing the capacity for a medical surge in mass casualty events. Food safety: The fragmented nature of the federal food oversight system undermines the government’s ability to plan more strategically to inspect food production processes, identify and react more quickly to outbreaks of foodborne illnesses, and focus on promoting the safety and integrity of the nation’s food supply. Fifteen federal agencies collectively administer at least 30 laws related to food safety. We have recommended, among other things, that the executive branch reconvene the President’s Council on Food Safety to facilitate interagency coordination on food safety regulation and programs. 
Surface transportation: The nation’s transportation infrastructure—its aviation, highway, transit, and rail systems—is critical to the nation’s economy and affects the daily lives of most Americans. Despite large increases in federal spending on America’s vital surface transportation system, this investment has not commensurately improved the performance of the system. Growing congestion has created, by one estimate, a $78 billion annual drain on the economy, and population growth, technological change, and the increased globalization of the economy will further strain the system. We have designated transportation finance a high-risk area and have called for a fundamental reexamination of, and restructured approach to, our surface transportation policies, which experts have suggested need to recognize emerging national and global imperatives, such as reducing the nation’s dependence on foreign fuel sources and minimizing the impact of the transportation system on global climate change. Disaster response: Hurricane Katrina demonstrated the critical importance of the capability to implement an effective and coordinated response to catastrophes that leverages needed resources from across the nation, including all levels of government as well as nongovernmental entities. While the federal government has made progress since Katrina, as shown in the recent response to Hurricane Gustav, we have reported that the administration still does not have a comprehensive inventory of the nation’s response capabilities or a systematic, comprehensive process to assess capabilities at the local, state, and federal levels based on commonly understood and accepted metrics for measuring those capabilities. 
We have work under way to identify the actions that DHS and the Federal Emergency Management Agency (FEMA) have taken to implement the provisions of the Post-Katrina Emergency Management Reform Act, which charged FEMA with the responsibility for leading and supporting the nation in a comprehensive risk-based emergency management system—a complex task that requires clear strategic vision, leadership, and the development of effective partnerships among governmental and nongovernmental entities. Cyber critical infrastructures: Cyber critical infrastructures are systems and assets incorporating information technology—such as the electric power grid and chemical plants—that are so vital to the nation that their incapacitation or destruction would have a debilitating impact on national security, our economy, and public health and safety. We have made numerous recommendations aimed at protecting these essential assets and addressing the many challenges that the federal government faces in working with both the private sector and state and local governments to do so—such as improving threat and vulnerability assessments, enhancing cyber analysis and warning capabilities, securing key systems, and developing recovery plans. Until these and other areas are effectively addressed, our nation’s cyber critical infrastructure is at risk from the increasing threats posed by terrorists, foreign intelligence services, and others. Also, more broadly, the Government Performance and Results Act of 1993 (GPRA) calls for a governmentwide performance plan to help Congress and the executive branch address critical federal performance and management issues, including redundancy and other inefficiencies. Unfortunately, the promise of this important provision has not been realized. The agency-by-agency focus of the budget does not provide the needed strategic, longer-range, and integrated perspective of government performance. 
A broader performance plan would provide the President with an opportunity to assess and communicate the relationship between individual agency goals and outcomes that transcend federal agencies. Our transition work will identify opportunities to limit costs and reduce waste across a broad spectrum of programs and agencies. While these opportunities will not eliminate the need to address the more fundamental long-term fiscal challenges the federal government faces, concerted attention by the new administration could conserve resources for other priorities and improve the government’s image. Examples of areas we will highlight and for which we will suggest needed action follow: Improper payments: For fiscal year 2007, agencies reported improper payment estimates of about $55 billion—spanning programs such as Medicaid, Food Stamps, Unemployment Insurance, and Medicare. The governmentwide estimate has steadily increased over the past several years; yet even the current estimate does not reflect the full scope of improper payments. Further, major management challenges and internal control weaknesses continue to plague agency operations and programs susceptible to significant improper payments. Addressing these challenges and internal control weaknesses will better ensure the integrity of payments and minimize the waste of taxpayers’ dollars. DOD cost overruns: Total acquisition cost growth on the 95 major defense programs in DOD’s fiscal year 2007 portfolio is now estimated at $295 billion, and of the weapon programs we assessed this year, none had proceeded through development meeting the best practice standards for mature technologies, stable design, and mature production processes—all prerequisites for achieving planned cost and schedule outcomes. DOD expects to invest about $900 billion (fiscal year 2008 dollars) over the next 5 years in development and procurement, with more than $335 billion, or 37 percent, going specifically for new major weapon systems. 
Yet, much of this investment will be used to address cost overruns rooted in poor planning, execution, and oversight. By adopting best practices on individual programs and strengthening oversight and accountability for better outcomes, as we have consistently recommended, cost and schedule growth could be significantly reduced. DOD secondary inventory: DOD expends considerable resources to provide logistics support for military forces, and the availability of spare parts and other critical items provided through DOD’s supply chains affects military readiness and capabilities. DOD officials have estimated that the level of investment in DOD’s supply chains is more than $150 billion a year, and the value of its supply inventories has grown by tens of billions of dollars since fiscal year 2001. However, as we have reported over the years, DOD continues to have substantial amounts of secondary inventory (spare parts) that are in excess of requirements. Most recently, in 2007, we reported that more than half of the Air Force’s secondary inventory, worth an average of $31.4 billion, was not needed to support required inventory levels from fiscal years 2002 through 2005, although increased demand due to ongoing military operations contributed to slight reductions in the percentage of inventory on hand and the number of years of supply it represents. In ongoing reviews of the Navy’s and the Army’s secondary inventory, we are finding that these services also continue to have significant amounts of inventory that exceeds current requirements. To reduce its investment in spare parts that are in excess of requirements, DOD will need to strengthen the accountability and management of its secondary inventory. 
Oil and gas royalties: In fiscal year 2007, the Department of the Interior’s Minerals Management Service collected over $9 billion in oil and gas royalties, but our work on the collection of federal royalties has found numerous problems with policies, procedures, and internal controls that raise serious doubts about the accuracy of these collections. We also found that past implementation of royalty relief offered to some oil and gas companies during years of low oil and gas prices did not include provisions to remove the royalty relief in the event that oil and gas prices rose as they have, and this failure to include such provisions will likely cost the federal government tens of billions of dollars over the working lives of the affected leases. Finally, we have found that the federal government ranks lowest among the nations in terms of the percentage of total oil and gas revenue accruing to the government. We have ongoing reviews of Interior’s oil and gas leasing and royalty policies and procedures, and reports based on this work should be publicly released within the next few months. The tax gap: The tax gap—the difference between taxes legally owed and taxes paid on time—is a long-standing problem in spite of many efforts by Congress and the Internal Revenue Service (IRS) to reduce it. Recently, IRS estimated a net tax gap for tax year 2001 of about $290 billion. We have identified the need to take multiple approaches to reduce the tax gap, and specifically have recommended ways for IRS to improve its administration of the tax laws in many areas, including payroll taxes, rental real estate income, the tax preparation industry, income sent offshore, collecting tax debts, and the usefulness of third-party information reporting. 
Ultimately, long-term fiscal pressures and other emerging forces will test the capacity of the policy process to reexamine and update priorities and portfolios of federal entitlement programs, policies, commitments, and revenue approaches. In that regard, the “base” of government—spending and revenue—also must be reassessed so that emerging needs can be addressed while outdated and unsustainable efforts can be either reformed or eliminated. Tax expenditures should be part of that reassessment. Spending channeled through the tax code results in forgone federal revenue that summed to an estimated $844 billion in 2007 and has approximated the size of total discretionary spending in some years. Yet, little is known about the performance of credits, deductions, and other tax preferences, statutorily defined as tax expenditures, which are often aimed at policy goals similar to those of federal spending programs. Because tax expenditures represent a significant investment of resources, and in some program areas are the main tool used to accomplish federal goals, this is a significant gap in the information available to decision makers. While some progress has been made in recent years, agencies still all too often lack the basic management capabilities needed to address current and emerging demands. As a result, any new administration will face challenges in implementing its policy and program agendas because of shortcomings in agencies’ management capabilities. Accordingly, our transition effort will synthesize our wide range of work and identify the key management challenges unique to individual departments and major agencies. Additionally, our transition work will emphasize five key themes common to virtually every government agency. 
Select a senior leadership team that has the experience needed to run large, complex organizations: It is vitally important that leadership skills, abilities, and experience be among the key criteria the new President uses to select his leadership teams in the agencies. The Senate’s interest in leveraging its role in confirmation hearings—as evidenced by Senator Voinovich’s request to us to suggest management-related confirmation questions—and your interest in hearings such as this one will send a strong message that nominees should have the requisite skills to deal effectively with the broad array of complex management challenges they will face. It is also critical that they work effectively with career executives and agency staff. Given that management improvements and transformations can take years to achieve, steps are needed to ensure a continuous focus on those efforts. Agencies need to develop executive succession and transition-planning strategies that seek to sustain commitment as individual leaders depart and new ones arrive. For example, in creating a Chief Management Officer (CMO) position for DHS, Congress has required the DHS CMO to develop a transition and succession plan to guide the transition of management functions to a new administration. More broadly, the creation of a chief operating officer (COO)/CMO position in selected federal agencies can help elevate, integrate, and institutionalize responsibility for key management functions and transformation efforts and provide continuity of leadership over the long term. For example, because of its long-standing management weaknesses and high-risk operations, we have long advocated the need for a COO/CMO for DOD to advance management integration and business transformation in the department. In the fiscal year 2008 National Defense Authorization Act, Congress designated the Deputy Secretary of Defense as the department’s CMO. 
Strengthen the capacity to manage contractors and recognize related risks and challenges: Enhancing acquisition and contracting capability will be a critical challenge for many agencies in the next administration in part because many agencies (for example, DOD, DHS, the Department of Energy, and the Centers for Disease Control and Prevention) are increasingly reliant on contractors to carry out their basic operations. In fiscal year 2007, federal agencies spent $436 billion on contracts for products and services. At the same time, our high-risk list areas include acquisition and contract management issues that collectively expose hundreds of billions of taxpayer dollars to potential waste and misuse. To improve acquisition outcomes, we have stated that agencies need a concentrated effort to address existing problems while facilitating a reexamination of the rules and regulations that govern the government-contractor relationship in an increasingly blended workforce. For example, since agencies have turned to contractor support to augment their capabilities, they need to ensure that contractors are playing appropriate roles and that the agencies have retained sufficient in-house workforce capacity to monitor contractor cost, quality, and performance. Better manage information technology (IT) to achieve benefits and control costs: A major challenge for the federal government is managing its massive investment in IT—currently more than $70 billion annually. Our reports have repeatedly shown that agencies and the government as a whole face challenges in prudently managing major modernization efforts, ensuring that executives are accountable for IT investments, instituting key controls to help manage such projects, and ensuring that computer systems and information have adequate security and privacy protections. 
The Office of Management and Budget (OMB) identifies major projects that are poorly planned by placing them on a Management Watch List and requires agencies to identify high-risk projects that are performing poorly. OMB and federal agencies have identified approximately 413 IT projects—totaling at least $25.2 billion in expenditures for fiscal year 2008—as being poorly planned, poorly performing, or both. OMB has taken steps to improve the identification of Management Watch List and high-risk projects since GAO testified last September, including publicly disclosing reasons for placement on the Management Watch List and clarifying high-risk project criteria. However, more needs to be done by both OMB and the agencies to address recommendations GAO has previously made to improve the planning, management, and oversight of poorly planned and performing projects so that potentially billions in taxpayer dollars are not wasted. 

Address human capital challenges: Governmentwide, about one-third of federal employees on board at the end of fiscal year 2007 will become eligible to retire on the new administration's watch. Certain occupations—air traffic controllers and customs and border protection personnel among them—are projected to have particularly high rates of retirement eligibility by 2012. As experienced employees retire, they leave behind critical gaps in leadership and institutional knowledge, which could adversely affect the government's ability to carry out its diverse responsibilities. Agencies must recruit and retain employees able to create, sustain, and thrive in organizations that are flatter, results-oriented, and externally focused, and who can collaborate with other governmental entities as well as with the private and nonprofit sectors to achieve desired outcomes. 
The Office of Personnel Management needs to continue to ensure that its own workforce has the skills needed to successfully guide agency human capital improvements, and agencies must make appropriate use of available authorities to acquire, develop, motivate, and retain talent. 

Build on the progress of the statutory management framework: Over the last 2 decades, Congress has put in place a legislative framework for federal management that includes results-based management, information technology, and financial management reforms. As a result of this framework and the efforts of Congress and the Bush and Clinton administrations, there has been substantial progress in establishing the basic infrastructure needed to create high-performing organizations across the federal government. However, work still remains, and sustained attention by Congress and the incoming administration will be a critical factor in ensuring the continuing and effective implementation of the statutory management reforms. Initiated in 1990, GAO's high-risk program has brought a much greater focus to areas in need of broad-based transformation and those vulnerable to waste, fraud, abuse, and mismanagement. It also has provided the impetus for the creation of several statutory management reforms. GAO's current high-risk list covers 28 areas. Our updates to the list, issued every 2 years at the start of each new Congress, have helped set congressional oversight agendas. The support of this Subcommittee and others in Congress has been especially important to the success of this program. Further, administrations have consistently turned to the high-risk list in framing their management improvement initiatives. The current administration in particular, working with this Subcommittee, has provided a valuable and focused effort in requiring agencies to develop meaningful corrective action plans for each area that we have designated as high risk. 
As a consequence of efforts by Congress, the agencies, OMB, and others, much progress has been made in many high-risk areas, but key issues need continuing attention. Sustained efforts in these areas by the next Congress and administration will help improve service to the American public, strengthen public confidence in the government's performance and accountability, potentially save billions of dollars, and ensure the ability of government to deliver on its promises. The world has obviously changed a great deal since the Presidential Transition Act was enacted in 1963. And while there have been periodic amendments to the Act, neither the Act nor the transition process itself has been subject to a comprehensive or systematic assessment of whether the Act is setting transitions up to be as effective as they might be. We will be monitoring the transition and reaching out to the new administration, Congress, and outside experts to identify lessons learned and any needed improvements in the Act's provisions for future transitions. In summary, our goal will continue to be to provide congressional and executive branch policy makers with a comprehensive snapshot of how things are working across government and to emphasize the need to update some federal activities to better align them with 21st century realities and bring about government transformation. In keeping with our role, we will be providing Congress and the executive branch with clear facts and constructive options and suggestions that our elected officials can use to make policy choices in this pivotal transition year. The nation's new and returning leaders will be able to use such information to help address both the nation's urgent issues and long-term challenges so that our nation stays strong and secure now and for the next generation. Chairman Akaka, Senator Voinovich, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions you may have. 
Housing Government-Sponsored Enterprises: A Single Regulator Will Better Ensure Safety and Soundness and Mission Achievement (GAO-08-563T, Mar. 6, 2008). Financial Regulation: Industry Trends Continue to Challenge the Federal Regulatory Structure (GAO-08-32, Oct. 12, 2007). Securing, Stabilizing, and Reconstructing Afghanistan: Key Issues for Congressional Oversight (GAO-07-801SP, May 24, 2007). Securing, Stabilizing and Rebuilding Iraq: Progress Report: Some Gains Made, Updated Strategy Needed (GAO-08-837, June 23, 2008). Military Readiness: Impact of Current Operations and Actions Needed to Rebuild Readiness of U.S. Ground Forces (GAO-08-497T, Feb. 14, 2008). Force Structure: Restructuring and Rebuilding the Army Will Cost Billions of Dollars for Equipment but the Total Cost Is Uncertain (GAO-08-669T, Apr. 10, 2008). Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions (GAO-07-454, Aug. 17, 2007). Department of Homeland Security: Progress Made in Implementation of Management Functions, but More Work Remains (GAO-08-646T, Apr. 9, 2008). 2010 Census: Census Bureau's Decision to Continue with Handheld Computers for Address Canvassing Makes Planning and Testing Critical (GAO-08-936, July 31, 2008). Information Technology: Significant Problems of Critical Automation Program Contribute to Risks Facing 2010 Census (GAO-08-550T, Mar. 5, 2008). The Nation's Long-Term Fiscal Outlook: April 2008 Update (GAO-08-783R, May 16, 2008). Budget Issues: Accrual Budgeting Useful in Certain Areas but Does Not Provide Sufficient Information for Reporting on Our Nation's Longer-Term Fiscal Challenge (GAO-08-206, Dec. 20, 2007). Fiscal Exposures: Improving the Budgetary Focus on Long-Term Costs and Uncertainties (GAO-03-213, Jan. 24, 2003). Long-Term Fiscal Outlook: Long-Term Federal Fiscal Challenge Driven Primarily by Health Care (GAO-08-912T, June 17, 2008). 
DOD and VA: Preliminary Observations on Efforts to Improve Care Management and Disability Evaluations for Servicemembers (GAO-08-514T, Feb. 27, 2008). DOD and VA: Preliminary Observations on Efforts to Improve Health Care and Disability Evaluations for Returning Servicemembers (GAO-07-1256T, Sept. 26, 2007). Emergency Preparedness: States Are Planning for Medical Surge, but Could Benefit from Shared Guidance for Allocating Scarce Medical Resources (GAO-08-668, June 13, 2008). Influenza Pandemic: Efforts Under Way to Address Constraints on Using Antivirals and Vaccines to Forestall a Pandemic (GAO-08-92, Dec. 21, 2007). Federal Oversight of Food Safety: High-Risk Designation Can Bring Needed Attention to Fragmented System (GAO-07-449T, Feb. 8, 2007). Federal Oversight of Food Safety: FDA's Food Protection Plan Proposes Positive First Steps, but Capacity to Carry Them Out Is Critical (GAO-08-435T, Jan. 29, 2008). Surface Transportation Programs: Proposals Highlight Key Issues and Challenges in Restructuring the Programs (GAO-08-843R, July 29, 2008). Surface Transportation: Restructured Federal Approach Needed for More Focused, Performance-Based, and Sustainable Programs (GAO-08-400, Mar. 6, 2008). Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System (GAO-06-618, Sept. 6, 2006). Emergency Management: Observations on DHS's Preparedness for Catastrophic Disasters (GAO-08-868T, June 11, 2008). Critical Infrastructure Protection: Sector-Specific Plans’ Coverage of Key Cyber Security Elements Varies (GAO-08-113, Oct. 31, 2007). Critical Infrastructure Protection: Multiple Efforts to Secure Control Systems Are Under Way, but Challenges Remain (GAO-07-1036, Sept. 10, 2007). Improper Payments: Status of Agencies' Efforts to Address Improper Payment and Recovery Auditing Requirements (GAO-08-438T, Jan. 31, 2008). Fiscal Year 2007 U.S. 
Government Financial Statements: Sustained Improvement in Financial Management Is Crucial to Improving Accountability and Addressing the Long-Term Fiscal Challenges (GAO-08-847T, June 5, 2008). Defense Acquisitions: Better Weapon Program Outcomes Require Discipline, Accountability, and Fundamental Changes in the Acquisition Environment (GAO-08-782T, June 3, 2008). Defense Acquisitions: Assessments of Selected Weapon Programs (GAO-08-467SP, Mar. 31, 2008). DOD's High-Risk Areas: Efforts to Improve Supply Chain Can Be Enhanced by Linkage to Outcomes, Progress in Transforming Business Operations, and Reexamination of Logistics Governance and Strategy (GAO-07-1064T, July 10, 2007). Defense Inventory: Opportunities Exist to Save Billions by Reducing Air Force’s Unneeded Spare Parts Inventory (GAO-07-232, Apr. 27, 2007). Oil and Gas Royalties: A Comparison of the Share of Revenue Received from Oil and Gas Production by the Federal Government and Other Resource Owners (GAO-07-676R, May 1, 2007). Oil and Gas Royalties: Litigation over Royalty Relief Could Cost the Federal Government Billions of Dollars (GAO-08-792R, June 5, 2008). Highlights of the Joint Forum on Tax Compliance: Options for Improvement and Their Budgetary Potential (GAO-08-703SP, June 2008). Tax Compliance: Multiple Approaches Are Needed to Reduce the Tax Gap (GAO-07-488T, Feb. 16, 2007). Government Performance and Accountability: Tax Expenditures Represent a Substantial Federal Commitment and Need to Be Reexamined (GAO-05-690, Sept. 23, 2005). Higher Education: Multiple Higher Education Tax Incentives Create Opportunities for Taxpayers to Make Costly Mistakes (GAO-08-717T, May 1, 2008). Organizational Transformation: Implementing Chief Operating Officer/Chief Management Officer Positions in Federal Agencies (GAO-08-34, Nov. 1, 2007). Defense Management: DOD Needs to Reexamine Its Extensive Reliance on Contractors and Continue to Improve Management and Oversight (GAO-08-572T, Mar. 11, 2008). 
Federal Acquisitions and Contracting: Systemic Challenges Need Attention (GAO-07-1098T, July 17, 2007). Information Technology: OMB and Agencies Need to Improve Planning, Management, and Oversight of Projects Totaling Billions of Dollars (GAO-08-1051T, July 31, 2008). Information Security: Progress Reported, but Weaknesses at Federal Agencies Persist (GAO-08-571T, Mar. 12, 2008). Office of Personnel Management: Opportunities Exist to Build on Recent Progress in Internal Human Capital Capacity (GAO-08-11, Oct. 31, 2007). Human Capital: Transforming Federal Recruiting and Hiring Efforts (GAO-08-762T, May 8, 2008). High-Risk Series: An Update (GAO-07-310, Jan. 31, 2007). Suggested Areas for Oversight for the 110th Congress (GAO-07-235R, Nov. 17, 2006). This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The upcoming 2009 transition will be a unique and critical period for the U.S. government. It marks the first wartime presidential transition in 40 years. It will also be the first change of administration for the relatively new Department of Homeland Security, operating in the post-9/11 environment. The next administration will fill thousands of positions across government; there will be a number of new faces in Congress as well. Making these transitions as seamless as possible is pivotal to accomplishing the federal government's many essential missions effectively and efficiently. While the Government Accountability Office (GAO), as a legislative branch agency, has extensive experience helping each new Congress, the Presidential Transition Act points to GAO as a resource to incoming administrations as well. The Act specifically identifies GAO as a source of briefings and other materials to help presidential appointees make the leap from campaigning to governing by informing them of the major management issues, risks, and challenges they will face. GAO has traditionally played an important role as a resource for new Congresses and administrations, providing insight into the issues where GAO has done work. This testimony provides an overview of GAO's objectives for assisting the 111th Congress and the next administration in their all-important transition efforts. GAO will highlight issues that the new President, his appointees, and the Congress will confront from day one. These include immediate challenges ranging from national and homeland security to oversight of financial institutions and markets to a range of public health and safety issues. GAO will synthesize the hundreds of reports and testimonies it issues every year so that new policy makers can quickly zero in on critical issues during the first days of the new administration and Congress. 
GAO's analysis, incorporating its institutional memory across numerous administrations, will be ready by the time the election results are in and transition teams begin to move out. GAO will provide congressional and executive branch policy makers with a comprehensive snapshot of how things are working across government and emphasize the need to update some federal activities to better align them with 21st century realities and bring about government transformation. In keeping with its mission, GAO will be providing Congress and the executive branch with clear facts and constructive options and suggestions that elected officials can use to make policy choices in this pivotal transition year. GAO believes the nation's new and returning leaders will be able to use such information to help address both the nation's urgent issues and long-term challenges so that our nation stays strong and secure now and for the next generation. GAO's transition work also will highlight the need to modernize the machinery of government through better application of information technology, financial management, human capital, and contracting practices. GAO also will underscore the need to develop strategies for addressing the government's serious long-term fiscal sustainability challenges, driven on the spending side primarily by escalating health care costs and changing demographics.
The stated purposes of the Recovery Act are to: preserve and create jobs and promote economic recovery; assist those most impacted by the recession; provide investments needed to increase economic efficiency by spurring technological advances in science and health; invest in transportation, environmental protection, and other infrastructure that will provide long-term economic benefits; and stabilize state and local government budgets, in order to minimize and avoid reductions in essential services and counterproductive state and local tax increases. While many Recovery Act projects focused on immediately jumpstarting the economy, some projects—such as those involving investments in technology, infrastructure, and the environment—are expected to contribute to economic growth for many years. The Recovery Act established the Recovery Accountability and Transparency Board (Recovery Board) to provide additional monitoring and oversight. The board was originally scheduled to terminate operations by September 30, 2013, but its mission has been extended until September 30, 2015, to provide oversight and monitoring of assistance provided in response to Hurricane Sandy, which hit the northeast in October 2012. Figure 1 displays selected events related to the Recovery Act and its requirements. The Congressional Budget Office (CBO) initially estimated the cost of the Recovery Act to be approximately $787 billion; however, CBO’s most recent estimate projects that the Recovery Act will cost approximately $830 billion over the 2009-2019 time period. As of October 31, 2013, the federal government had provided a total of approximately $812 billion related to Recovery Act activities. This includes funding to 28 federal agencies that was distributed to states, localities, and other entities; funding to individuals through a combination of tax benefits and cuts; entitlements; and loans, contracts, and grants. See figure 2 for an overview of Recovery Act spending by category and program. 
Although Medicaid was the single largest Recovery Act grant program, we did not include it in our review because it is primarily an entitlement program and subject to specific rules that are not typical of program grants. Accordingly, we included the Recovery Act funds directed to Medicaid in the entitlement category, rather than the grant category, in figure 2. Emphasizing the importance of spending Recovery Act funds quickly, the President established a goal that by September 30, 2010, 70 percent of Recovery Act funding should be spent (that is, both obligated and outlayed). Therefore, agencies had approximately 19 months to spend almost three-quarters of their Recovery funds. Grants have played a key role in providing Recovery Act funds to recipients, with approximately $219 billion being awarded for use in states and localities through a wide variety of federal grant programs. With the intent of disbursing funds quickly to create and retain jobs and stabilize state and local budgets, a large majority of Recovery Act grant funding went to states and localities within 3 years of the law’s enactment. Recipients reported receiving approximately 88 percent of their grant awards by the end of the 2nd quarter of calendar year 2013. State and local spending was as follows: 

Fiscal year 2009: spending totaled approximately $53 billion in actual outlays. 

Fiscal year 2010: spending was at its highest level, with approximately $112 billion in actual outlays. 

Fiscal year 2011: spending decreased from its peak, with approximately $69 billion in actual outlays. 

The 28 federal agencies that received Recovery funds developed specific plans for spending the money. The agencies then awarded grants and contracts to state governments or, in some cases, directly to schools, hospitals, or other entities. OMB guidance directed these federal agencies to file weekly financial reports detailing how the money was being distributed. 
Recipients of the funds, in turn, were required by the Recovery Act to file quarterly reports on how they were spending the Recovery Act funds that they received. Recovery Act grants provided to states and localities covered a broad range of areas such as transportation, energy, and housing. Education programs were the largest recipients of Recovery Act grant awards. Of the education programs funded in the Recovery Act, the largest in terms of funding was the newly created State Fiscal Stabilization Fund (SFSF) program, which provided assistance to state governments to stabilize their budgets by minimizing budgetary cuts in education and other essential government services, such as public safety. The Recovery Act appropriated $53.6 billion for the SFSF program. As figure 2 (above) shows, grants represent over one-quarter of Recovery Act funding. Out of that category, funding received in the program areas of education, transportation, and energy and environment amounts to approximately $137 billion, or 70 percent, of Recovery Act grant spending to date. The Recovery Act called for a large amount of federal funds to be spent (that is, obligated and outlayed) in a short period of time—approximately 19 months—by September 30, 2010. To assure the public that their tax dollars were being spent efficiently and effectively, the Recovery Act placed increased emphasis on accountability and transparency through enhanced reporting, auditing, and evaluation requirements for users of Recovery Act funds. The Recovery Act assigned some of these increased accountability and transparency responsibilities to existing organizations and entities as well as to newly created ones. See table 1 for details regarding the primary accountability and oversight responsibilities of key organizations involved in implementing the Recovery Act. 
Under the Recovery Act, accountability for timely and effective implementation of the law was a shared responsibility that included agencies involved in directly implementing the law as well as the external oversight community. On the operational side, among the practices that facilitated accountability were (1) strong support by top leaders, (2) centrally-situated collaborative governance structures, and (3) the regular and systematic use of data to support management reviews. We have previously reported on the importance of having the active support of top leadership when undertaking large and complex activities. This was the case in the implementation of the Recovery Act where, at the federal level, the President and Vice President made clear that effective Recovery Act implementation was a high priority for them. The President assigned overall management responsibility for the Recovery Act to the Vice President and appointed a former OMB deputy director to head the newly-created Recovery Implementation Office with direct reporting responsibilities to both him and the Vice President. The former head of the Recovery Implementation Office told us that his position gave him access to top leadership in the administration. This official said he participated in daily morning staff meetings with the White House senior staff, briefing them on any issues related to the Recovery Act. He briefed the President directly approximately once a month. In addition, he typically met with the Vice President’s staff on a daily basis after the President’s staff meeting. He also met with the Vice President directly every 1 to 2 weeks. Finally, he frequently interacted with the head of OMB and sometimes also sat in on his staff meetings. In each of these roles he had direct access to, and support from, the highest levels of government. 
The former head of the Recovery Implementation Office stated this was key to his ability to ensure cooperation and coordination with other federal departments during the Recovery Act. For example, he told us that senior government leaders knew that his office had the authority of the President and Vice President behind it, and if they did not do what was requested, they would have to explain their reasoning to senior White House officials. This awareness of the Recovery Implementation Office’s line of authority helped to ensure that federal officials coordinated and cooperated with the office. In turn, the involvement and engagement of top leaders at individual federal agencies was facilitated by OMB guidance that required each agency to identify a senior accountable official—generally at the deputy secretary or subcabinet level—to be responsible for Recovery planning, implementation, and performance activities within the agency. Senior agency leaders were regularly involved with overseeing and reporting on Recovery Act efforts. At the state level, several governors demonstrated top leadership support by establishing specific positions, offices, or both that were responsible for state Recovery efforts. For example, the Governor of Massachusetts created the Massachusetts Recovery and Reinvestment Office as a temporary program management office for the specific task of overseeing Recovery activities. The former director of the office stated that he reported directly to, and drew his authority from, the Governor. The Governor also elevated the office to the rank of a senior level office. This action increased the office’s visibility and gave it a seat at the Governor’s weekly cabinet meetings, where its director would regularly report on the status of Recovery Act projects. In addition, no state Recovery Act program could be approved without the director’s consent. 
The former director told us that the success of the office was attributable to the direct line of authority it had with the Governor of Massachusetts. In fiscal year 2012, Massachusetts’ Office of Commonwealth Performance, Accountability, and Transparency was created, in part, as a direct result of the Recovery Act (Mass. Gen. Laws ch. 7, § 4A(e)). According to Massachusetts state officials, this office is the state’s attempt to take lessons from the state’s experience with the Massachusetts Recovery and Reinvestment Office and apply them post Recovery Act. Management of the Recovery Act was decentralized in that individual federal agencies were left to implement their grant programs and to run competitions in a manner consistent with their individual statutes, regulations, and agency practices. On the other hand, there was also centralization of oversight, as demonstrated by the direct involvement of high-level officials such as the Vice President, cabinet secretaries, and senior accountable officials in federal agencies receiving Recovery Act funding, as well as centrally-placed policy and oversight organizations such as OMB and the Recovery Board. This combination of a centralized and decentralized approach to managing the implementation of the Recovery Act represented a new method of managing grant oversight, one which simultaneously recognized the importance of collaboration while increasing the role of the center. Officials in the Recovery Implementation Office employed a collaborative, facilitative approach, while also leveraging the authority of the Vice President to facilitate the participation of stakeholders. The office functioned as a convener and problem-solver that engaged with a wide range of federal, state, and local partners. This approach was embodied in the objectives identified by the Vice President when the office was established. These objectives included the expectation that office staff respond to requests and questions within 24 hours, cut across bureaucratic silos by reaching out to a variety of partners, and always be accessible. 
Toward this end, the office adopted the role of an “outcome broker,” working closely with partners across organizational silos at all levels of government in order to foster implementation of the Recovery Act and achieve results. (For more information on the concept of an “outcome broker,” see Frank DiGiammarino, Can Government Work Like Open Table? Innovation in the Collaborative Era (2012), accessed January 22, 2014, http://www.scribd.com/doc/115361546/Can-Government-Work-Like-OpenTable.) Another role of the Recovery Implementation Office was to closely monitor Recovery Act spending. One way it did so was to monitor grants to ensure that they were consistent with the objectives identified by the Vice President. A second way the office monitored spending was to review weekly financial reports on agency obligations and expenditures for programs receiving Recovery Act funds and to meet with the agencies on a regular basis. OMB sought to facilitate effective implementation of the Recovery Act by working to establish and strengthen relationships with state and local governments that would ultimately implement the programs on the ground. This was done in two ways: (1) by soliciting feedback from state and local partners when formulating and revising rules and policies governing the implementation of Recovery Act programs and (2) by developing its capacity to respond to questions from the many states and localities that would be implementing those rules and policies. A senior OMB official directly involved in this work told us the office had to move out of its traditional role as mainly a policy-making organization to adopt a more interactive and service-oriented approach. Under this approach, key activities involved engaging with and obtaining feedback from states and localities as well as providing technical support to these groups so that they could meet the Recovery Act’s numerous reporting requirements. 
For example, to obtain feedback from state and local partners when developing key Recovery Act policies, OMB became actively involved in weekly conference calls that included a diverse group of federal, state, and local organizations. Starting in the spring of 2009, regular participants in these calls included OMB; GAO; the National Association of State Auditors, Comptrollers and Treasurers; the National Governors’ Association; the National Association of State Budget Officers; the Recovery Board; the National Association of Counties; the National Association of State Chief Information Officers; and the National Association of State Purchasing Officers. These weekly calls were scheduled after several of these organizations wrote to OMB and GAO to express their strong interest in coordinating on reporting and compliance aspects of the Recovery Act. An important outcome of this regular information exchange was to make OMB aware of the need to clarify certain reporting requirements. The Recovery Act required federal agencies to make information publicly available on the estimate of the number of jobs created and number of jobs retained as a result of activities funded by the act. Our previous Recovery Act work in the states raised the issue that some local officials needed clarification regarding definitions when reporting on job data. The local partners participating in these calls were able to corroborate what we reported and provide OMB with specific information about what additional guidance was needed. To obtain information to further guide refinements to the Recovery implementation process, at the end of 2009, OMB officials said they (1) interviewed and surveyed numerous stakeholders including governors and state and local recipients, and (2) worked with GAO to identify best practices. Based on these efforts, OMB subsequently revised its guidance, which focused on lessons learned around enhancing recipient reporting and compliance. 
To improve technical support provided to state and local governments implementing the Recovery Act, OMB worked with the Recovery Board to establish an assistance center based on an “incident command” model. One OMB official likened this approach to an extension of a traditional response model used during natural disasters, where the country’s economic condition during the Great Recession was the “incident” and the Recovery Act was the intervention to be rolled out through many partners. To help implement this approach, OMB worked with officials from the Department of Agriculture, who offered the services of one of their national emergency management teams to help set up and coordinate this effort. Given the large number of state and local governments that needed to be supported, OMB requested that each agency with grant programs receiving Recovery Act funds contribute personnel to support the center. According to OMB officials, from September to mid-December of 2009, the center responded to approximately 35,000 questions from states and localities. Under the Recovery Act, some agencies used new data-driven approaches to inform how they managed programs, and some of those new approaches became institutionalized at the agencies post-Recovery. While the Government Performance and Results Act (GPRA) Modernization Act of 2010 (GPRAMA) laid out requirements for data-driven quarterly performance reviews, several Recovery Act efforts aided agencies in implementing those requirements. For example, in February 2013 we found that the Department of Energy (DOE) built on its Recovery Act-related performance reviews and established quarterly performance reviews, called business quarterly reviews, in 2011. Another control DOE implemented for large dollar projects was a “Stage-Gate” process, which did not allow the funds to be disbursed all at one time. It required the recipient to meet certain metrics before receiving additional funding at certain levels. 
DOE Office of Inspector General (OIG) officials believed this Stage-Gate approach was an effective internal control tool. Post-Recovery, DOE has institutionalized both the business quarterly reviews and Stage-Gate processes. As part of the Department of Housing and Urban Development’s (HUD) implementation of the Recovery Act, the agency piloted a new approach to data management and accountability called HUDStat. HUD’s Recovery Act team collected data about the status of projects and progress towards financial goals. Armed with this information, HUD leaders could identify and neutralize spending delays across the agency’s 80 field and regional offices. In some cases, a senior HUD official would make a phone call to a mayor or a governor to stress the need to spend funds quickly. In other cases, staff would refocus on regions where progress was slow and would work with grantees to move more quickly to promote economic growth. After the Recovery Act, and in accordance with GPRAMA requirements, HUD continued to use HUDStat to share data and resources across the agency. The Recovery Act contained increased accountability requirements in the areas of reporting, audits, and evaluations to help ensure that tax dollars were being spent efficiently and effectively. At the same time, the act provided aggressive timelines—approximately 19 months—for the distribution of funds. The combination of these two factors placed high expectations on federal, state, and local governments and led to increased coordination both vertically across levels of government and horizontally within the same level of government to share information and work towards common goals. Organizations involved in overseeing and implementing grants funded by the Recovery Act made use of both new and established networks to share information.
Shortly after the Recovery Act was signed into law, our then Acting Comptroller General and the Chair of the Council of the Inspectors General on Integrity and Efficiency hosted a coordination meeting with the OIGs or their representatives from 17 federal agencies to discuss an approach to coordination and information sharing going forward. We also worked with state and local auditors and their associations to facilitate regular conference calls to discuss Recovery Act issues with a broad community of interested parties. Participants included the Association of Government Accountants; the Association of Local Government Auditors; the National Association of State Auditors, Comptrollers, and Treasurers; the Recovery Board; and federal OIGs. Another active venue for information sharing was the National Intergovernmental Audit Forum (NIAF). The NIAF, led during this period by our then Acting Comptroller General, is an association that has existed for over three decades as a means for federal, state, and local audit executives to discuss issues of common interest and enhance accountability. NIAF’s May 2009 meeting brought together these executives and others, including OMB, to update them on the Recovery Act and provide another opportunity to discuss emerging issues and challenges. In addition, several Intergovernmental Audit Forum meetings were scheduled at the regional level across the country and sought to do the same. This regional coordination and information sharing directly contributed to our Recovery Act work in the states. For example, our western regional director made a presentation at the Pacific Northwest Audit Forum regarding our efforts to coordinate with state and local officials in conducting Recovery Act oversight. In conjunction with that forum and at other related forums, she regularly met with the principals of state and local audit entities to coordinate oversight of Recovery Act spending.
Officials from New York City also played a role in creating networks to share information. Believing that large cities were probably facing similar issues and challenges, Recovery officials in New York City established the American Recovery and Reinvestment Act Big City Network (BCN) to serve as a peer exchange group and facilitate information sharing among large municipalities across the country. The group was composed of over 20 large cities with geographical diversity, such as Los Angeles, Philadelphia, Phoenix, and Seattle, that received a significant amount of federal stimulus funding. The former head of the BCN told us that the organization held frequent teleconferences and used this collaboration to elevate issues unique to large cities with OMB, the White House’s Recovery Implementation Office, and the Recovery Board. For example, BCN informally surveyed its members in January 2010 concerning each grant and associated funds they received. From this survey, BCN officials assembled a list of cross-jurisdictional issues reflecting the perspectives and experiences of large cities and shared them with the White House, OMB, and the Recovery Board. Likewise, OMB, the Recovery Implementation Office, and the Recovery Board used BCN as a vehicle for getting information out to their partners on the ground. Similarly, at the state level, a network was established where state Recovery Act coordinators shared information and lessons learned on a weekly basis. This state-level network also discussed ongoing Recovery Act policy and operational issues with the White House, OMB, and the Recovery Board to ensure successful implementation. Federal officials joined the state calls on a regular basis. Both BCN and the state network proved to be especially helpful in fostering intergovernmental communications.
For example, the former head of the BCN stated that in response to a Senate Committee request in 2012, New York City leveraged both BCN and the state Recovery Act coordinators’ network to inform the current discussion on the Digital Accountability and Transparency Act, proposed legislation that seeks to improve grant transparency through increased reporting. Cities and states mobilized quickly and came together on key consensus principles for Congress’ consideration. Under the tight time frames set for implementation of the Recovery Act, federal agencies needed to work together to accomplish their goals. For example, HUD and DOE shared a goal of weatherizing low-income households through long-term energy efficiency improvements. To get the projects under way as quickly as possible, they worked together to ensure that homeowners met income standards. Before Recovery Act implementation, both DOE and HUD conducted their own independent income verifications. In May 2009, DOE and HUD entered into a memorandum of understanding that eliminated the need for separate DOE income verification for people whose incomes had already been verified by HUD. According to DOE officials, this collaboration helped projects move faster, reduced the cost and administrative burden of duplicative verifications, and helped DOE weatherize numerous homes under the Recovery Act through 2013. DOE officials reported that between fiscal years 2010 and 2013, the joint effort helped weatherize approximately 1.7 million housing units, the majority of which were low-income. This policy of sharing low-income verifications for weatherizing homes has continued post-Recovery Act. At the state level, Massachusetts is an example where officials developed new ways of working together to achieve Recovery Act goals.
For example, Massachusetts state officials established the Stimulus Oversight and Prevention (STOP) Fraud Task Force in 2009 to fulfill the Recovery Act’s goal of preventing fraud, waste, and abuse of Recovery Act funds. This task force included the state OIG’s office, the Attorney General’s office, and the State Auditor. Over the next 2 years, the group met bimonthly to discuss fraud prevention and collaborated with several federal agencies including the Department of Justice, the Federal Bureau of Investigation, and HUD. The group also brought in federal OIGs including DOE and Education, the state Comptroller’s office, and the Massachusetts Recovery and Reinvestment Office to discuss our report findings and OMB guidance. According to officials from the Massachusetts Attorney General’s office, the task force improved communication and furthered efforts to avoid overlap. Faced with the short time frames and accelerated rollout of Recovery Act funds, both the oversight community and agencies adjusted their oversight approach and innovated to foster accountability for Recovery Act funds at the federal and state agency levels. These organizations became more engaged in up-front analysis and monitoring of programs under the Recovery Act and their reviews were often issued before money was spent. These practices included (1) assessing and planning for risks up front; (2) reviewing programs before and while they were being funded rather than waiting until after programs were implemented; (3) communicating findings quickly through informal processes as opposed to regular full reports; and (4) using advanced data analytics. At the federal level, several agency OIGs conducted up-front risk planning to proactively prepare for the influx of Recovery Act funds. For example, the Department of Transportation’s (DOT) OIG instituted a three-phase risk assessment process for DOT programs that received Recovery Act funds.
The OIG first identified existing program risks based on past reports; it next assessed what the department was doing to address those risks; and it then conducted the audit work. DOT’s OIG is continuing to use this three-phase scan approach for its work on Hurricane Sandy. At the Department of Education, when the OIG realized that Education’s discretionary grant budget would increase from a typical allotment of $60 billion annually to over $100 billion under the Recovery Act, officials put aside their initial work plan and developed a new one which focused on the Recovery Act. Toward this end, the OIG conducted up-front risk assessments by looking at its prior work to identify persistent implementation issues going back to fiscal year 2003. The OIG then issued a 2009 capping report that summarized these issues. This report and additional risk assessments on Recovery Act-specific issues guided the OIG's internal control audits that focused on the use of funds, cash management, subrecipient monitoring, and data quality for Recovery Act education programs. Shortly after the Recovery Act was signed, DOE’s OIG reviewed the challenges the agency would need to address to effectively manage the unprecedented level of funding and to meet the goals of the Recovery Act. The resulting report was based on a body of work by the OIG to improve operations and management practices. The OIG identified specific risks that they discovered during past reviews and investigations. The OIG also suggested actions that should be considered during Recovery Act planning and program execution to help reduce the likelihood that these historical problems would recur. Further, the OIG described the department’s initial efforts to identify risks and to develop strategies to satisfy the Recovery Act’s goals and objectives. 
In addition, the report outlined the OIG’s planned oversight approach which adopted a risk-based strategy that included, among other things, early evaluations of internal controls and assessments of performance outcomes. At HUD, regional offices conducted front-end risk assessments of programs that would be receiving Recovery Act funds. The HUD OIG considered these risk assessments when preparing its work plan and carrying out audits. The office also conducted capacity reviews for programs that field offices had identified as having known issues. The purpose of these capacity reviews was to enable the office to actively address and work to resolve known issues before Recovery Act funds were distributed to programs. At the state level, audit organizations also adjusted their usual approaches when planning and conducting reviews of grant programs that received Recovery Act funds. Several state auditors conducted extra audit work of state programs up front in an effort to identify risks and inform their work moving forward. For example, the Office of the California State Auditor conducted “readiness reviews” that highlighted known vulnerabilities in programs receiving Recovery Act money. The office used the information coming out of these reviews to identify specific issues to focus on in future work as well as to inform the oversight committees of the state legislature and other state officials involved in Recovery Act oversight and implementation. As a result of one such review that focused on DOE’s Weatherization Assistance Program, the State Auditor was able to identify key implementation issues that needed attention at a joint meeting of state and federal officials organized by the Governor’s Recovery Act Task Force. The readiness review identified specific areas where the program needed to improve and informed the frequency with which state auditors would go back to program officials to check on progress. 
According to the California state auditor, among the benefits of this approach was the feedback it provided to state agencies on their level of readiness as well as the detailed information given to both the state legislature and the Governor’s Recovery Act Task Force on the agency’s progress. The use of readiness reviews has continued post-Recovery Act. Most recently, the office employed the approach in 2013 as it prepared to audit the implementation of the Affordable Care Act in California. The Recovery Act’s short time frames prompted the oversight community to carry out some of its reviews in “real time” as Recovery funds were being rolled out, as opposed to the traditional approach of reviewing a program after implementation. Under this approach, members of the oversight community looked for ways to inform program officials of challenges and needed improvements much earlier in the process. For example, as described previously in table 1, the Recovery Act specified several roles for us, including conducting bimonthly reviews of selected states’ and localities’ use of funds made available under the Act. We subsequently selected a core group of 16 states and the District of Columbia to follow over the next few years to provide an ongoing longitudinal analysis of the use of funds provided in conjunction with the Recovery Act. The Recovery Act also assigned us a range of responsibilities to help promote accountability and transparency. Some were recurring requirements such as providing bimonthly reviews of the use of funds made available under various provisions of the Recovery Act by selected states and localities and reviews of quarterly reports on job creation and job retention as reported by Recovery Act fund recipients. Other requirements included targeted studies in several areas such as small business lending, education, and trade adjustment assistance.
In total, we issued approximately 125 reports on, or related to, the Recovery Act, resulting in more than 65 documented accomplishments. The interest in obtaining “real time” feedback concerning Recovery Act implementation was not limited to the oversight community. For example, DOT’s Federal Highway Administration (FHWA) established National Review Teams (NRT) within 3 months of the Recovery Act’s passage to help its division offices attain the greater level of accountability and transparency called for under the Recovery Act. As we previously reported, the NRTs were composed of FHWA staff—separated from the rest of FHWA—to act as a neutral third party to conduct oversight. The mission of the NRTs was to conduct quick reviews of FHWA programs and assess processes and compliance with federal requirements in six key risk areas: (1) preliminary plans, specifications, and estimates; (2) contract administration; (3) quality assurance of construction materials; (4) local public agencies; (5) disadvantaged business enterprises; and (6) eligibility for payments. As a review progressed, the NRT discussed findings with division office and state transportation staff. According to FHWA officials, independent reviews had several benefits: a consistent, comparative perspective on the oversight regularly conducted by division offices, and the collection of information at the national level on both best practices and recurring trouble spots across FHWA division offices; additional “boots on the ground” for project-level oversight and increased awareness of federal oversight activity among states, Metropolitan Planning Organizations, and other transportation organizations receiving Recovery Act funds; and an independent outside voice to examine Recovery Act projects and point out problems, keeping the partnering relationship between the division offices and the state DOTs intact. Division offices and state officials with whom we spoke responded positively to the NRT reviews.
The NRT was viewed as a success for FHWA, which has since added independent reviews based largely on the NRT model to provide independent corporate-level review of projects and programs, in addition to providing other support services. The rapid pace at which Recovery Act funds were being distributed also prompted audit organizations to communicate their findings earlier in the audit process. For example, DOT’s OIG issued periodic advisories within the agency rather than waiting until an audit was completed to share its findings. According to OIG staff, these advisories informed the department of issues or concerns shortly after they were discovered, thereby permitting program staff to take corrective action much more quickly. In our first report on our bimonthly reviews of the use of Recovery Act funds by selected states and localities, we determined that the Single Audit process needed adjustment to provide the necessary level of focus and accountability over Recovery Act funds in a timelier manner than the existing schedule allowed. Subsequently, we recommended that the director of OMB adjust the Single Audit process to provide for a review during 2009 of the design of internal controls over programs receiving Recovery Act funding, before significant expenditures occurred in 2010. In response, in October 2009 OMB implemented the Single Audit Internal Control Project—a collaborative effort between 16 volunteer states receiving Recovery Act funds, their auditors, and the federal government—to achieve more timely communication of internal control deficiencies for higher-risk Recovery Act programs. The project encouraged auditors to identify and communicate significant deficiencies and material weaknesses in internal controls over compliance for selected major Recovery Act programs 3 months sooner than the 9-month time frame required under statute.
The project allowed program management officials at an audited agency to expedite corrective action and help mitigate the risk of improper Recovery Act expenditures. In May 2010, we reported that the project met some of its objectives and was helpful in identifying critical areas where further OMB actions were needed to improve the Single Audit process over Recovery Act funding. Auditors at the local level also communicated their findings early. For example, the Denver City Auditor’s Office adopted new practices to provide more timely information on Recovery Act programs to the Mayor and other key officials, particularly on issues affecting compliance with Recovery Act reporting requirements. Using a tiered notification process, the auditor’s office would initially notify the appropriate city department informally through e-mail or a similar means of potential issues they were finding during an ongoing audit. The auditor’s office would revisit the issues later and, if the office determined the issue had not been addressed, it would then formally communicate any substantive issue on a real-time basis through an “audit alert.” These alerts were typically brief documents and went to the affected departments as well as directly to the Mayor’s work group that oversaw the city’s Recovery Act implementation. If appropriate action was still not forthcoming, the city auditor might issue a public alert or a full public audit report. According to a senior city audit official, the alerts were beneficial because the city auditor did not have to conduct a full audit to communicate risks and findings to decision makers, allowing them to more quickly address problems. The city auditor issued its first audit alert in October 2009 and subsequently issued another one in February 2010 when problems from the first one had not been addressed. After the second alert, the city administration corrected the identified problems.
To further increase accountability under the Recovery Act, the Recovery Board utilized innovative data analytics in carrying out its oversight responsibilities. Data analytics is a term typically used to describe a variety of techniques that can be used to analyze and interpret data to, among other things, help identify and reduce fraud, waste, and abuse. Specifically, predictive analytic technologies can be used to identify potential fraud and errors before payments are made, while other techniques, such as data-mining and data-matching of multiple databases, can identify fraud or improper payments that have already been awarded, thus assisting agencies in recovering these dollars. In October 2009, the Recovery Board established an innovative center to analyze the use of Recovery Act funds by employing data analytics (see figure 3). The Recovery Operations Center (ROC) served as a centralized location for analyzing Recovery Act funds and their recipients through the use of such predictive analytic technologies. According to Recovery Board staff, the results of these approaches provided the OIG community and other oversight authorities with information they could use to focus limited resources on cities, regions, and high-risk government programs where historical data and current trends suggested the likelihood of future risk. ROC analysts would cross-reference lists of grant recipients or sub-recipients against a variety of databases to look for risk indicators such as criminal convictions, lawsuits, tax liens, bankruptcies, risky financial deals, or suspension/debarment proceedings. One tool used to do this is link analysis, which assists the analyst in making connections by visually representing investigative findings. Link analysis charts visually depict how individuals and companies are connected, what awards an entity has received, and how these actors may be linked to any derogatory information obtained from the databases described above.
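The cross-referencing step described above can be sketched as a simple data-matching routine, assuming small in-memory tables; all recipient names, database categories, and dollar amounts below are fabricated for illustration and are not the ROC's actual data sources:

```python
# Minimal sketch of cross-referencing award recipients against risk databases.
# Every record here is fabricated for illustration.

awards = [
    {"recipient": "Acme Paving LLC", "award_id": "A-001", "amount": 2_500_000},
    {"recipient": "Beta Weatherizing Inc", "award_id": "A-002", "amount": 900_000},
    {"recipient": "Acme Paving LLC", "award_id": "A-003", "amount": 1_200_000},
]

# Hypothetical stand-ins for debarment, tax-lien, and bankruptcy databases.
risk_sources = {
    "debarred": {"Acme Paving LLC"},
    "tax_lien": {"Gamma Builders Co"},
    "bankruptcy": {"Beta Weatherizing Inc"},
}

def flag_recipients(awards, risk_sources):
    """Return {recipient: {"indicators": [...], "total_awarded": dollars}}
    for any recipient that matches at least one risk database."""
    flagged = {}
    for award in awards:
        name = award["recipient"]
        indicators = [src for src, names in risk_sources.items() if name in names]
        if indicators:
            entry = flagged.setdefault(
                name, {"indicators": indicators, "total_awarded": 0})
            entry["total_awarded"] += award["amount"]
    return flagged

for name, info in flag_recipients(awards, risk_sources).items():
    print(f"{name}: {', '.join(info['indicators'])} (total ${info['total_awarded']:,})")
```

In practice the ROC matched against far larger databases (criminal convictions, lawsuits, tax liens, bankruptcies, suspension/debarment proceedings), but the core operation is the same: join award records to risk sources on recipient identity and aggregate the flagged dollars for follow-up.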
Such tools, when combined with enhanced Geographic Information System capabilities, enable ROC analysts to conduct geospatial analysis by displaying data from multiple datasets on maps to help them make linkages and discover potential problems. For example, the ROC helped a federal agency investigate possible contract fraud related to over-billing on multiple contracts. ROC analysts found 99 recipient awards made to a single company totaling over $12 million. In another example, the ROC helped to investigate allegations of false claims and major fraud against the United States. ROC analysts found officers of one company were also executives of more than 15 other companies, many of which were located at the same address, and collectively received millions in Recovery Act funds. More recently, the ROC has been used to track funds and help reduce fraud, waste, and abuse related to the tens of billions of dollars that have been awarded to states and communities to assist in their recovery after Hurricane Sandy hit in October 2012. Recovery Board staff have sought to leverage the expertise they have developed in analyzing financial spending and identifying potential fraud and high-risk indicators based on their experience with the Recovery Act.

Figure 3. An Analyst Working in the Recovery Board’s Recovery Operations Center and a Sample Output of One of ROC’s Link Analysis Tools.

To assure the public that their tax dollars were being spent efficiently and effectively, the Recovery Act called for increased oversight and accountability of those funds by oversight and program entities at the federal, state, and local levels. This increased emphasis on oversight and accountability presented challenges for those entities stemming from (1) a lack of financial resources to conduct oversight at the state and local levels, (2) human capital issues, and (3) the accelerated rollout of Recovery Act funds.
Officials with whom we spoke in several states expressed concerns that the Recovery Act did not provide funding to state oversight entities, although it placed additional federal requirements on them to provide proper accounting and to ensure transparency. Federal agency OIG offices received significant funding to conduct oversight of Recovery Act funds, ranging from $1 million to $48.25 million across more than 28 agencies. In contrast, states and localities relied on their existing budgets and human capital resources (supplemented, in some cases, by a small percentage of administrative funds) to carry out their additional oversight activities. Due to fiscal constraints, states reported significant declines in the number of management and oversight staff—limiting states’ ability to ensure proper implementation and management of Recovery Act funds. With oversight capacity already strained in many states, the situation was further exacerbated by increased workloads resulting from implementation of new or expanded grant programs funded by the Recovery Act. For example, Massachusetts officials explained that the state oversight community faced budget cuts of about 10 percent. According to officials from the OIG and the State Auditor’s office, their budgets are almost entirely composed of salaries, and any cuts in funding resulted in fewer staff available to conduct oversight. As a result of the cuts, the Inspector General stated that his department did not have the resources to conduct any additional oversight related to Recovery Act funds. Further, the Massachusetts State Auditor described how his department had to furlough staff for 6 days in fiscal year 2009.
In recognition of this situation and reflective of the state’s desire to pursue fraud in the Recovery Act program, for state fiscal years 2009 through 2012, the Massachusetts Recovery and Reinvestment Office allocated funds from the state’s central administration account to the Attorney General, State Auditor, and OIG offices to ensure that oversight would take place. The California State Auditor also cited the lack of federal funding for state and local oversight as a challenge to ensuring accountability in the implementation of the Recovery Act. In a 2009 testimony to the California state budget committee, the State Auditor said that her office would need to conduct an additional 14 audits based on an initial analysis of the estimated stimulus funds that California would receive. Furthermore, the programs that the office was auditing at the time received additional funds, which potentially increased the workload and cost to audit those programs as well. Finally, new requirements created by the Recovery Act for existing programs also impacted the State Audit Office’s efforts. The California State Auditor noted that given the additional responsibilities her office faced due to the influx of stimulus funds, any budget cuts would adversely affect the office’s ability to conduct audits. In another example, Colorado’s state auditor reported that state oversight capacity was limited during Recovery Act implementation, noting that the Department of Health Care Policy and Financing had three controllers in 4 years and the state legislature’s Joint Budget Committee cut field audit staff for the Department of Human Services in half. In addition, the Colorado DOT’s deputy controller position was vacant, as was the Department of Personnel & Administration’s internal auditor position. Colorado officials noted that these actions were, in part, due to administrative cuts during a past economic downturn in an attempt to maintain program delivery levels. 
The President’s goal for quickly spending Recovery Act funds created a large spike in spending for a number of programs in the 28 agencies receiving Recovery Act funds. The act also created a number of new programs—requiring agencies to move quickly. As a result, under the Recovery Act’s accelerated rollout requirements, some federal agencies and states faced oversight challenges. For example, DOT and states faced numerous challenges in implementing the Recovery Act’s maintenance-of-effort oversight mechanism due to the accelerated rollout of funds. The Recovery Act contains maintenance-of-effort provisions designed to prevent recipients, such as state DOTs, public housing agencies, and private companies, from substituting planned spending for a given program with Recovery Act funds. That is, the provisions ensured that the increased federal spending would supplement rather than replace state, local, or private spending. The maintenance-of-effort provision for DOT in the Recovery Act required the governor of each state to certify that the state would maintain its planned level of transportation spending from February 17, 2009, through September 30, 2010. Twenty-one states did not meet their certified planned spending levels, and a January 2011 preliminary DOT report found that some of these states were unclear on what constituted “state funding.” DOT also found that some states were unclear about how DOT’s guidance on calculating planned expenditures would apply in the many different contexts in which it had to operate. As a result, many problems came to light only after DOT had issued initial guidance and states had submitted their first certifications. DOT issued guidance seven times during the first year after the act was signed to clarify how states were to calculate their planned or actual expenditures for their maintenance-of-effort certifications.
Further, many states did not have an existing means to identify planned transportation expenditures for a specific period, and their financial and accounting systems did not capture that data. Therefore, according to DOT and some state officials, a more narrowly focused requirement applying only to programs administered by state DOTs or to programs that typically receive state funding could have helped address the maintenance-of-effort challenges. DOT and state officials told us that while the maintenance-of-effort requirement can be useful for ensuring continued investment in transportation, future provisions should allow more flexibility for differences among states and programs and for unexpected changes in states’ economic conditions. At DOE, the department initially encountered some challenges with fully developing a management and accountability infrastructure because of the large amount of Recovery Act funding it received in a short period of time. According to an official in the DOE OIG’s office, this was especially true with the new Energy Efficiency Conservation Block Grant program. This official told us that some states and localities also did not have the infrastructure in place (including the necessary training) to manage the large amount of additional federal funding. Further, DOE required recipients’ weatherization plans to address how the respective state’s current and expanded workforce (employees and contractors) would be trained. In May 2010, according to DOE, the agency was in the process of developing national standards for weatherization certification and accreditation. DOE estimated that developing the standards would take about 2 years—a time frame that did not match the accelerated distribution of Recovery Act funds.
Several years after the Recovery Act was implemented, DOE reported that it had completed certain milestones toward developing national standards for weatherization training, certification, and accreditation, but was still working to finalize other elements such as its national certification program. In an April 2009 memorandum, OMB directed agencies to follow leading practices for federal website development and management, such as those listed on HowTo.gov, a website managed by the Federal Web Managers Council and the General Services Administration. HowTo.gov makes available a list of the “Top 10 Best Practices” for federal websites as a resource to improve how agencies communicate and interact with customers and provide services. We found that Recovery.gov, as well as selected state and city Recovery websites, demonstrated several of these leading practices, including establishing a clear purpose for the website, using social networking tools to garner interest in the website, tailoring websites to meet audience needs, and obtaining stakeholder input when designing the website. In addition, we found that some websites enabled place-based performance reporting. Consistent with leading practices for the development of federal websites on HowTo.gov, Recovery.gov and selected state Recovery websites clearly identify for the user the purposes of the site and the ways it can be used to accomplish tasks efficiently. According to HowTo.gov, this is important because people often visit government websites with a specific task in mind, and if they cannot quickly find the information they need to complete that task, they will leave the site. Recovery.gov contains an entire page that outlines what users can do on the site, including how to use the raw data available through the website; report waste, fraud, and abuse; or find job and grant opportunities. 
Further, Recovery.gov has a “Get Started” page with an overview of the information on the site including Recovery Act goals, the Recovery Board’s mission, what information is not available on the website, and what users can do on the website. Similarly, Massachusetts’ Recovery website has tabs on its homepage that link to information on how to use the website to track Recovery Act jobs, spending, vendors, and the impact of Recovery Act dollars in the state. For example, the “track jobs” page informs users how they can track jobs created and retained in their community and provides a user guide to assist them in their query. Another leading practice for federal websites includes the use of social networking tools. According to HowTo.gov, social media is transforming how government engages with citizens, allowing agencies to share information and deliver services more quickly and effectively than ever before. Recovery.gov and selected state and local Recovery websites use social networking tools to garner interest in their websites. These websites integrated Web 2.0 technologies to help people share and use the information they provide. For example, to develop web-based communities of interest, Recovery.gov has a dedicated social media web page that has links to Recovery.gov’s presence on various social-networking tools such as Facebook, Twitter, YouTube, and Flickr. Recovery.gov’s social media page enables users to (1) download a Recovery application for iPhones and for iPads with a mapping feature showing how Recovery Act funds were being spent, (2) sign up for a Recovery.gov month-in-review email, and (3) sign up to receive Recovery RSS web feeds. Finally, Recovery.gov also has a blog, written by Recovery Board staff, with a stated purpose to further a dialogue on transparency and accountability in government, as well as to provide a forum for thoughts, comments, and suggestions from the public. 
New York City also made use of social networking to communicate information regarding Recovery Act implementation through the use of a Tumblr blog. City officials used this blog to communicate stories and examples to residents about how the city was using Recovery Act funds and the impact of those investments. City officials said the blog allowed them to look behind the full-time-equivalent numbers and dollar expenditures so that people could better understand how the Recovery Act was helping them tackle problems where they work and live. For example, the blog described one project that had no net increase in jobs but still made a valuable difference for the city because Recovery Act funds were used to repair 300,000 potholes and move to zero diesel fuel emissions for city vehicles. Organizing a website according to the needs of its audience is also a key leading practice for federal websites, since an agency’s goal is to build the right website for the people who need it and serve them effectively by learning as much as possible about the website’s customers and what they do. Recovery.gov has dedicated pages for different audiences that compile and organize relevant resources according to their needs and interests. On its home page, Recovery.gov has a tab that provides links to pages designed with specific users in mind, such as citizens, the press, and grant recipients. There are also links to pages on neighborhood Recovery Act projects, information on the Recovery Board, and other frequently sought information. For example, grant recipients have a dedicated page that provides resources such as reporting timelines, user guides, a service/help desk, recipient reporting information, and a recipient awards map. (See figure 4.) On Recovery.gov’s “Developer Center” web page, users can access data reported by recipients of Recovery awards through the Recovery application program interface (API) and the Mapping API. 
Users can also find widgets providing data summaries by state, county, congressional district, or ZIP code as reported by recipients. The web page also has a tool for users to build customized charts and graphs displaying information such as funds awarded and received by state, agencies by number of awards, and spending categories by funds awarded. The state of Massachusetts also tailored its Recovery Act website to meet its audience’s needs. Prior to its implementation of the Recovery Act’s transparency provisions, Massachusetts had little experience with electronic reporting and disclosure of federal contracts, grants, and loans. The MassRecovery website provided weekly citizen updates and testimonials about how spending benefited residents’ lives. The Citizens’ Update web page provides a summary of where the state’s Recovery Act dollars are going, where jobs are being created and retained, and information on beneficiaries of funds received. In December 2009, MASSPIRG, an independent consumer research group, issued a brief pointing to the strengths of the Massachusetts Recovery website, including the ability of the Citizens’ Update web page to show money spent and jobs created and retained in easy-to-read pie charts and tables; a summary of funds distributed through the state; and an interactive state map of Recovery Act spending. Further, in January 2010, Good Jobs First, a national policy resource center, reviewed and evaluated states’ Recovery Act websites. The organization ranked Massachusetts’ Recovery website on its top 10 list, citing such beneficial features as the site’s comprehensive search engine, data download capability, and information on five key Recovery Act project elements: description, dollar amount, recipient name, status, and the text of the award. 
Leading website practices also recommend that developers obtain stakeholder input when designing federal websites by engaging potential users through focus groups and other outreach; regularly conducting usability tests to gather insight into navigation, the organization of content, and the ease with which different types of users can complete specific tasks; and collecting and analyzing performance, customer satisfaction, and other metrics. According to leading website practices, these efforts are important for collecting and analyzing information about audiences, their needs, and how they are using, or want to use, the website. The developers of Recovery.gov followed this leading practice by using input from user forums, focus groups, and usability testing with interested citizens to collect feedback and recommendations, which then informed the development of the website from its initial stages. For example, teaming with OMB and the National Academy of Public Administration, the developers of Recovery.gov hosted a week-long electronic town hall meeting at the end of April 2009 entitled “Recovery Dialogue on Information Technology Solutions.” Over 500 citizens, information technology specialists, and website development experts registered for the event and submitted numerous ideas. Recovery.gov adopted some of the ideas right away and included others in the re-launched version of the website in September 2009. These changes included a standardized reporting system for recipients, a greater use of maps, and a feedback section for users. Additionally, in October 2009, Recovery.gov developers conducted remote usability testing with 72 users, from whom they received suggested changes, some of which they later implemented. Further, in 2012, significant changes were made to Recovery.gov based on user feedback on the website. 
These changes included creating a recipient and agency data page, agency profiles, and a new Recipient Projects Map with a series of dropdown menus and checkboxes that enable users to filter data so they can see it in a targeted fashion (for example, by state, agency, or category). For websites covering numerous projects at various locations, a place-based geographic information system can be a useful tool. According to the White House’s Digital Government Strategy, the federal government needs to be customer-centric when designing digital service platforms such as websites. In other words, agencies need to be responsive to customers’ needs by making it easy to find and share electronic information and accomplish important tasks. From the beginning, recipient-reported data on Recovery.gov was geo-coded in a way that made it possible for users to find awards and track the progress of projects on a block-by-block basis. The presentation of information on Recovery.gov and on many state websites generally targeted individual citizens who were not experts in data analysis. The format and content of data prioritized mapping capabilities and invited people to enter their ZIP code and locate projects in their immediate area. For example, figure 5 shows the map a user sees if ZIP code 30318 in Georgia is entered into this web page. From this map, the user can click on any of the dots that represent Recovery projects to find out information such as the project recipient name, award amount, project description, number of jobs created, and completion status. Additional information available to users includes the amount of funds received by recipients as well as the overall distribution of grants by funding categories for that area. States and localities also utilized mapping features on their Recovery websites. For example, in New York City, Recovery officials launched a Recovery Act website, the NYCStat Stimulus Tracker, as an interactive, comprehensive reporting tool. 
The federal government’s website, Recovery.gov, served as the design inspiration and, according to a senior city official, Stimulus Tracker was one of the first publicly accessible websites to report Recovery Act data for a local jurisdiction. City Recovery officials were able to develop and launch New York City’s stimulus website more quickly than other locations—approximately 6 weeks from start to completion—because they were able to leverage a previously implemented information technology platform to support citywide performance reporting. Stimulus Tracker allowed the public to explore several levels deeper than what was at Recovery.gov, which reported at the funding award level. For example, Stimulus Tracker broke down each award into several projects, each of which had its own dashboard page that displayed information such as (1) the status of the project, (2) the percentage of total funds spent, (3) start date and spending deadlines, and (4) the number of jobs created or retained. Visitors to the site could drill into a record of every payment made with stimulus funds through the additional feature “Payment Tracker” and every contract to carry out stimulus-funded work through “Contract Tracker.” Stimulus Tracker also offered an interactive map for site visitors who were interested in knowing how stimulus dollars were allocated geographically and where specific projects were located. This information was layered on top of the city’s existing online map portal. It included such items as the locations of schools, libraries, hospitals, and subways, as well as online property, building, statistics, and census information. As New York City’s existing online map portal could already be navigated either by entering a specific address or simply using zoom and scroll tools, city Recovery Act officials were able to build on this application and include a city mapping tool for Recovery Act funds where the public could find any project with a discrete location. 
See figure 6 for a screen shot of New York City’s mapping tool depicting the city’s Recovery Act projects. The Recovery Act requires recipients to report on their use of funding and requires the agencies that provide those funds to make the reports publicly available. The Recovery Act’s recipient reporting requirements apply only to nonfederal recipients of funding, that is, all entities other than individuals that receive Recovery Act funds directly from the federal government, such as state and local governments, private companies, educational institutions, nonprofits, and other private organizations. As required by section 1512(c) of the Recovery Act, recipients were to submit quarterly reports that included the total amount of Recovery Act funds received, the amount of funds expended or obligated to projects or activities, and a detailed list of those projects or activities. For each project or activity, the detailed list was to include the project’s name, description, and an evaluation of its completion status. The recipient reports were also to include detailed information on any subcontracts or subgrants, including details on sub-awards and other payments, as required by the Federal Funding Accountability and Transparency Act of 2006. With the Recovery Act’s enhanced reporting requirements on spending, agencies and recipients faced several challenges. Many agencies and state and local partners were limited in their capacity to meet the enhanced reporting requirements due to a lack of knowledge and expertise. Others struggled with the burden of double reporting when they had to report to federal systems tracking Recovery dollars as well as to agency systems because, in some cases, agencies required more data to manage their programs. Finally, some had trouble reporting data for certain projects within the operational limitations of place-based data mapping systems. Capacity to meet reporting requirements. 
Many state and local partners were limited in their capacity to meet spending reporting requirements because they lacked knowledge and expertise. Using a centralized mechanism like FederalReporting.gov to capture recipient reporting information was a new process that recipients and agencies had to learn. We have previously reported on the questions raised by state officials regarding the reporting capacities of some local organizations, particularly small rural entities, boards, or commissions, and private entities not used to doing business with the federal government. In addition, some state officials said that the Recovery Act’s requirement that recipients report on the use of funds within 10 days after a quarter ends was a challenge because some sub-recipients were unable to send them the needed data on time. Officials at several agencies suggested that if FederalReporting.gov had allowed certain key award and identifying data fields to be pre-populated each quarter, it would likely have resulted in fewer data errors for agencies to address and eased the reporting burden on recipients. In our September 2013 report and testimony on federal data transparency, we concluded that the transparency envisioned under the Recovery Act for tracking spending was unprecedented for the federal government, requiring the development of a system that could track billions of dollars disbursed to thousands of recipients. Such a system needed to be operational quickly to enable rapid posting of spending information for a variety of programs. However, because agency systems did not collect spending data in a consistent manner, the most expedient approach for Recovery Act reporting was to collect data directly from fund recipients. Recipients bore the additional burden of providing this information, and when the data had to be entered manually, accuracy could suffer. 
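The pre-population idea those officials described can be sketched simply; the field names and award number below are invented for illustration and are not FederalReporting.gov's actual schema.

```python
# Hypothetical sketch of quarterly pre-population: stable identifying
# fields are carried forward from the prior quarter's report so a
# recipient re-keys only the figures that change. Field names and the
# award number are illustrative assumptions.

STABLE_FIELDS = ("award_number", "recipient_name", "awarding_agency")

def prepopulate(prior_report: dict, this_quarter: dict) -> dict:
    """Seed a new quarterly report from the prior one, then overlay
    this quarter's updated figures."""
    report = {field: prior_report[field] for field in STABLE_FIELDS}
    report.update(this_quarter)
    return report

q1 = {
    "award_number": "XX-0001",
    "recipient_name": "City of Example",
    "awarding_agency": "DOE",
    "funds_expended": 1_000_000,
}
q2 = prepopulate(q1, {"funds_expended": 1_750_000})
print(q2["award_number"], q2["funds_expended"])  # XX-0001 1750000
```

Because the identifying fields would never be re-typed, an approach like this could avoid a class of transcription errors of the kind agencies reported spending many staff hours correcting.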
Thus, in September 2013 we recommended that the director of OMB, in collaboration with members of the Government Accountability and Transparency Board, develop a plan to implement comprehensive transparency reform, including a long-term timeline and requirements for data standards, such as establishing a uniform award identification system across the federal government. Earlier this year, the Recovery Board noted that agencies and OIGs also experienced difficulties adapting to the more frequent reporting (every quarter) and more detailed reporting (e.g., jobs created or individual project activities) required of most government grant recipients. Agency officials acknowledged spending considerable staff hours training recipients, providing technical assistance to them, verifying and validating their data, and following up with them when issues arose. Despite efforts to streamline and enhance existing review protocols, agencies still needed skilled people to review and process applications for awards. Although agencies and OIGs credited outreach to recipients for reducing noncompliance with reporting requirements, the amount of staffing resources it took to conduct that outreach was significant. Double reporting. We have previously noted that recipients of Recovery Act funds were required to report similar information to both agency reporting systems and FederalReporting.gov. Several federal agency and state government officials we spoke with also mentioned that reporting to FederalReporting.gov resulted in double reporting for their agencies and grantees because several deemed their existing internal systems superior and therefore ended up reporting to both. For example, at HUD, program offices were unable to abandon their established reporting systems because the agency’s systems collected data necessary to support HUD’s grants management and oversight processes. 
HUD officials told us that requiring grantees to report using two systems resulted in double reporting of data and proved burdensome to recipients and to HUD staff, who spent many hours correcting inaccurate entries. At DOT, officials preferred using the agency’s own data because it was more detailed and was reported monthly—more frequently than the Recovery.gov data. In a focus group involving state transportation officials, several echoed the redundancy of reporting systems. These officials indicated that having to report to three systems—the internal state system, DOT’s system, and FederalReporting.gov—increased their agencies’ burden. As we reported in our previously mentioned September 2013 report and testimony on federal data transparency efforts (GAO-13-871T and GAO-13-758), the lack of consistent data standards and commonality in how data elements are defined places undue burden on federal fund recipients. This can result in them having to report the same information multiple times via disparate reporting platforms. When OMB established procedures for reporting on the use of federal funds, it directed recipients of covered funds to use a series of standardized data elements. Further, rather than report to multiple government entities, each with its own disparate reporting requirements, all recipients of Recovery Act funds were required to centrally report into the Recovery Board’s inbound reporting website, FederalReporting.gov. Place-based data mapping. Some projects were difficult to report within the operational limitations of the geospatial reporting presentation format on the website. For example, according to Recovery Board officials, the website only allowed one location to be reported per project even though some projects spanned multiple locations. Therefore, if a DOT highway project crossed multiple ZIP codes, only one location of performance could be reported. Further, certain locations were difficult to map, such as rural roads, post office boxes, county-level data, and consultant contractors who worked out of their homes. 
The other major performance measure required under the Recovery Act focused on estimates of the number of jobs created or retained as a result of funding provided by the act. In addition to the previously described reporting on funds spent and activities, recipients were required in their quarterly reports to estimate the number of jobs created or retained by each project or activity. OMB issued clarifying guidance for recipient reporting in June 2009, and recipients began reporting on jobs in October 2009. Among other things, the guidance clarified that recipients of Recovery Act funds were to report only on jobs directly created or retained by Recovery Act-funded projects, activities, and contracts. Recipients were not expected to report on the employment impact on materials suppliers (“indirect” jobs) or on the local community. Recipients had 10 days after the end of each calendar quarter to report. OMB’s guidance also provided additional instruction on calculating the number of jobs created or retained by Recovery Act funding on a full-time equivalent (FTE) basis. Recipients faced several challenges meeting these requirements. They had difficulty accurately defining FTEs, as various recipients interpreted and applied the FTE guidance from OMB differently. Further, many recipients struggled to meet reporting deadlines as they had little time to gather, analyze, and pass on information to the federal government at the end of each fiscal quarter. Definitional challenges and discrepancies in reporting FTEs. Under OMB guidance, jobs created or retained were to be expressed as FTEs. In our November 2009 report we found that recipients reported data inconsistently even though OMB and federal agencies provided significant guidance and training. 
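Much of that inconsistency turned on the period of performance used in the FTE denominator. The following is a minimal illustration, assuming OMB's basic approach of dividing Recovery Act-funded hours worked by the hours in a full-time schedule; the 173.3 full-time hours per month (roughly 2,080 hours per year divided by 12) and the funded hours are notional figures, not drawn from any recipient's report.

```python
# Illustrative FTE arithmetic: the same funded hours yield very
# different FTE counts depending on the period of performance chosen.
# FULL_TIME_HOURS_PER_MONTH is a notional assumption (2,080 hours/year).

FULL_TIME_HOURS_PER_MONTH = 2080 / 12  # ~173.3

def fte(funded_hours: float, period_months: float) -> float:
    """FTEs = Recovery Act-funded hours worked divided by the
    full-time hours available in the chosen period."""
    return funded_hours / (period_months * FULL_TIME_HOURS_PER_MONTH)

# One person working half time for a year, paid by the Recovery Act:
hours = 1040
print(round(fte(hours, period_months=2), 2))   # 3.0  (2-month basis)
print(round(fte(hours, period_months=12), 2))  # 0.5  (annual basis)
```

The shorter the basis, the larger the reported FTE count for identical work, which is why guidance that left the period of performance unspecified let recipients report figures that could not be meaningfully aggregated.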
Specifically, we found that while FTE calculations should allow for different types of jobs—part time, full time, or temporary—to be aggregated, differing interpretations of the FTE guidance compromised the recipients’ ability to aggregate the data. For example, in California, two higher education systems calculated FTEs differently. One chose to use a 2-month period as the basis for the FTE performance period. The other chose to use a year as the basis. The result was almost a three-to-one difference in the number of FTEs reported for each university system in the first reporting period. Although the Department of Education provided alternative methods for calculating an FTE, in neither case did the guidance explicitly state the period of performance of the FTE. We recommended that OMB clarify the definition of FTE jobs and encourage federal agencies to provide or improve program-specific guidance for recipients. Further, we recommended that OMB be more explicit that jobs created or retained are to be reported as hours worked and paid for by the Recovery Act. In general, OMB and agencies acted upon our recipient reporting-related recommendations, and later reporting periods indicated significant improvements in FTE calculations. When OMB’s revised guidance changed the original formula, however, agencies had to rush to educate recipients about the changes. Agencies spent extra time and resources that quarter reviewing and validating recipient data to reduce errors. In some cases, agencies communicated daily with recipients via phone or e-mail to ensure their report submissions were accurate. Capacity of recipients to meet deadlines. The requirement to regularly report on jobs created and retained further strained the capacity of some recipients. Recipients only had 10 days after the end of each fiscal quarter to determine this information and pass it on to the federal government. Some state officials told us that the time frame for reporting should have been extended by 1 to 2 weeks so they were not rushing to input data. 
One of these officials said she was directed by other state officials to put in “the best data you have, even if it’s not correct…and go back and correct it later.” City officials also reported concerns with the quick turnaround time for reporting. For example, one city official stated that, in order to meet reporting deadlines, it was necessary to enter data manually, which created additional work. The Recovery Board accommodated such after-the-fact corrections by extending the quality assurance period to provide more time for agencies to review reports and for recipients to make corrections in FederalReporting.gov. As a result, recipients could change their reports up to about 2 weeks before the start of the next reporting period. The administration required agencies receiving Recovery Act funds to submit performance plans that identified additional measures on a program-by-program basis. Consistent with existing GPRA requirements for agencies to set outcome-oriented performance goals and measures, OMB’s initial Recovery Act implementation guidance required federal agencies to ensure that program goals were achieved. OMB required agencies to measure specific program outcomes, supported by corresponding quantifiable output measures, and improved results on broader economic indicators. In their grant programs’ performance plans, however, agencies typically relied on existing measures. This information is reported by agency and by program within each agency, as opposed to government-wide. While Recovery.gov provided a template for facilitating the reporting of this information, the level of detail and specificity of outcomes varied greatly for some of the agencies we reviewed, making it difficult to determine the extent to which some were making progress toward their goals and demonstrating results. See OMB Memorandum M-09-10 (2009). 
This information was to be provided by all agencies receiving Recovery Act funds, covering each grant program using these funds, in the agencies’ “Recovery Program Plans” submitted to OMB. Initially due on May 1, 2009, the plans were to be updated by the agencies as needed and were to be published on Recovery.gov as well as agency websites. These plans included information on each Recovery Act program’s objectives, activities, delivery schedule, accountability plan, monitoring plan, and program performance measures. For example, Education’s performance plan described the agency’s accountability mechanisms, the type and scope of project activities, and specific program performance measures. With the exception of the number of jobs created or retained, Education’s plan stated the agency was primarily using existing established agency performance measures that applied to both Recovery and non-Recovery funds. For example, to measure the success of one type of education grant fund (specifically, Title I of the Elementary and Secondary Education Act of 1965, as amended) which the Recovery Act made available to local educational agencies, Education used existing agency performance measures, such as the percentage of economically disadvantaged students in grades 3 to 8 scoring at the proficient or advanced levels on state reading and mathematics assessments. On the other hand, DOT filled out the templates to report on its 12 programs, and its performance measures were generally less specific and less outcome-oriented. For example, DOT’s Capital Assistance for High Speed Rail Corridors and Intercity Passenger Rail Service performance plan metrics included whether interim guidance was published within time frames, the number of applications received for the program, and the number of grants awarded for the program. 
Further, as we previously reported, DOT released a series of performance plans in May 2009 to measure the impact of Recovery Act transportation programs, but these plans generally did not contain an extensive discussion of the specific goals and measures to assess the impact of Recovery Act projects. For example, while the plan for the highway program contained a section on anticipated results, three of its five measures were the percentage of funds obligated, the percentage of funds expended, and the number of projects under construction. The fourth measure was the percentage of vehicle miles traveled on pavement on the National Highway System rated in good condition, but the plan said that goals for improvement with Recovery Act funds were yet to be determined. The fifth measure was the number of miles of roadway improved, and DOT’s plan reported that even with the addition of Recovery Act funds, the new target would remain the same as previously planned. As a result, we recommended in May 2010 that DOT ensure that the results of these projects were assessed and a determination made about whether these investments produced long-term benefits. DOT did not implement our recommendation. Created in response to the recent serious recession, the Recovery Act represents a significant financial investment in improving the economy. Grant programs were a key mechanism for distributing this support. By increasing accountability and transparency requirements while at the same time setting aggressive timelines for the distribution of funds, the Recovery Act created high expectations as well as uncertainty and risk for federal, state, and local governments responsible for implementing the law. Faced with these challenges, some of these organizations looked beyond their usual way of doing business and adjusted their usual practices to help ensure the accountability and transparency of Recovery Act funds. 
The oversight community adopted a faster and more flexible approach to how they conducted and reported on their audits and reviews so that their findings could inform programs of needed corrections before all Recovery funds were expended. They leveraged technology by using advanced data analytics to reduce fraud and to create easily accessible Internet resources that greatly improved the public’s access to, and ability to make use of, data about grants funded by the Recovery Act. These and other experiences, as well as the challenges identified in this report, provide potentially valuable lessons for the future. Underlying many of these lessons is the importance of increased coordination and collaboration, both vertically—transcending federal, state, and local levels of government—and horizontally—across organizational silos within the federal community—to share information and work towards common goals. One question that remains unresolved is the extent to which good practices developed in response to the Recovery Act’s special challenges and conditions can ultimately be incorporated in everyday practice for managing and overseeing grants. Some of the practices we found, such as the use of the Recovery Operations Center and state readiness reviews, have been able to make this transition. Others, such as some of the information sharing networks established during the Recovery Act, have had more difficulty in doing so. Proposals under consideration by Congress and the administration to extend Recovery Act requirements for spending transparency to all federal grants suggest that this has been the case for tracking dollars. Still to be seen is whether it will be possible to provide this type of government-wide transparency to other measures of performance, such as grant outcomes. 
We provided a draft of this report to the Secretaries of the Departments of Education, Energy, Housing and Urban Development, and Transportation; and to the Director of the Office of Management and Budget. We also provided drafts of the examples included in this report to cognizant officials from the relevant state and local agencies to verify accuracy and completeness, and we made technical changes and clarifications where appropriate. The agencies generally agreed with our findings and provided technical comments, which were incorporated in the report. We are sending copies of this report to other interested congressional committees; the Secretaries of the Departments of Education, Health and Human Services, Housing and Urban Development, and Transportation; and the Director of the Office of Management and Budget. In addition, the report will be available on our web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6806 or by email at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix I.

To better understand grant management lessons resulting from the American Recovery and Reinvestment Act of 2009 (Recovery Act), we focused on two key issues involving grant implementation during the Recovery Act: accountability and transparency. Specifically, this report identifies and provides examples of good practices employed and the challenges faced by select federal, state, and local agencies implementing grant programs funded by the Recovery Act, in the areas of accountability and transparency. 
To obtain a broad view of lessons learned during the implementation of grants funded by the Recovery Act, we conducted a detailed literature review of relevant reports describing lessons learned from implementing grants funded by the Recovery Act from GAO; federal and state inspectors general; federal agencies; state and local governments; accountability boards; state and local government advocacy organizations; think tanks; and academia. We developed selection criteria to identify relevant federal agencies and state and local governments to obtain their views related to the implementation of grant programs funded by the Recovery Act. We then selected four federal agencies, three states, and two localities based on the extent to which they had information related to our focus areas of accountability and transparency; information from our colleagues, subject matter experts, and academics; and citations in the literature. To capture a diverse mix of Recovery Act grants and identify potential good practices and challenges, we selected a variety of grants: some whose funding structures were already well established, others whose funding was greatly increased as a result of the Recovery Act, and some that were entirely new programs. Although Medicaid was the largest grant program funded by the Recovery Act, we deemed it out of scope for the purposes of this review since it is primarily an entitlement and subject to specific rules that are not typical of program grants. Further, Medicare and unemployment insurance were not included in the recipient reports we examined. To obtain illustrative examples of the good practices employed and the challenges faced during the implementation of grants funded by the Recovery Act related to accountability and transparency, we conducted interviews with a wide range of officials and experts. 
We interviewed cognizant officials and obtained supporting documentation from government-wide oversight entities at the federal level including the Recovery Implementation Office, Office of Management and Budget, and the Recovery Accountability and Transparency Board. In addition, we interviewed and obtained supporting documentation from select federal agency officials from the Departments of Education; Energy; Housing and Urban Development; and Transportation; and their respective inspectors general. At the state level, we interviewed and obtained supporting documentation from agency and audit officials from the states of California, Georgia, and Massachusetts. To get a broader state perspective, we also interviewed officials from the state Recovery Act coordinators' network, which included key state officials involved in implementing the Recovery Act from several states; the states represented in the network meeting were Arizona, Arkansas, Delaware, Florida, Maryland, Massachusetts, Michigan, Minnesota, Missouri, Nebraska, Nevada, Oregon, Rhode Island, Tennessee, Texas, Utah, and Wisconsin. At the local level, we interviewed officials from Denver, Colorado, and New York, New York. We also interviewed officials from recipient associations, including the National Association of State Budget Officers and the National Association of Counties. We obtained additional information on lessons learned related to the Recovery Act from officials representing the Government Accountability and Transparency Board, Sunlight Foundation, Council of Government Relations, National Council of Non-profits, Center for Effective Government, the Federal Demonstration Project, and National Association of State Chief Information Officers. In addition, we conducted seven focus groups representing a range of federal fund recipients. Focus groups included: (1) state comptrollers; (2) state education and transportation officials; and (3) local government officials from both large and small municipalities. 
Each focus group had between four and eight participants who were recruited from randomized member lists provided by the recipient associations we interviewed. Lastly, we reviewed and synthesized information provided in previously issued reports related to the Recovery Act that included the following sources: our previous work; inspectors general from the Departments of Education, Energy, Housing and Urban Development, and Transportation; the Recovery Accountability and Transparency Board; the White House; and various non-governmental sources including the IBM Center for The Business of Government. In addition, we reviewed and applied criteria established by HowTo.gov, a source of guidance and leading practices for government websites, to Recovery.gov and state and local Recovery websites. The scope of our work did not include independent evaluation or verification of the effectiveness of the examples we identified. We also did not attempt to assess the prevalence of the practices or challenges we cite either within or across levels of government. Therefore, entities other than those cited for a particular practice may or may not have employed the same or similar practice, and it is not possible to generalize how prevalent the practices and challenges may be across all Recovery Act grants. We conducted this performance audit from December 2012 through January 2014, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Peter Del Toro, Assistant Director; Mark Abraham; and Jyoti Gupta made significant contributions to this report. 
Also contributing to this report were Tom Beall, Robert Gebhart, Jacob Henderson, Donna Miller, Robert Robinson, Beverly Ross, and Andrew J. Stephens.
In response to the recent serious recession, Congress enacted the Recovery Act to promote economic recovery, make investments, and minimize or avoid reductions in state and local government services. Approximately $219 billion was distributed as grants for use in states and localities, making grants a major component of the act. These grants covered a broad range of areas including education, transportation, energy, infrastructure, the environment, health care, and housing. GAO was asked to examine grant management lessons learned resulting from the Recovery Act. This report examines federal, state, and local experiences with implementing grants funded by the Recovery Act by identifying examples of good practices employed and challenges faced in meeting the act's accountability and transparency requirements. GAO reviewed relevant documents including OMB and Recovery Board guidance, relevant literature, and previous reports by GAO, federal inspectors general, and others. GAO also interviewed officials from OMB, the Recovery Board, four federal agencies, three state governments, and two local governments, among others. This report also draws on GAO's past bi-monthly reviews of selected states' and localities' use of Recovery funds. Federal, state, and local officials responsible for implementing grants funded by the American Recovery and Reinvestment Act of 2009 (Recovery Act) as well as the external oversight community reported lessons learned regarding both useful practices and challenges to ensuring accountability. 
Faced with aggressive timelines for distributing billions of dollars, they adopted a number of practices to foster accountability including (1) strong support by top leaders; (2) centrally-situated collaborative governance structures; (3) the use of networks and agreements to share information and work towards common goals; and (4) adjustments to, and innovations in, usual approaches to conducting oversight such as the increased use of up-front risk assessments, the gathering of "real time" information, earlier communication of audit findings, and the use of advanced data analytics. For example, in 2009, the Recovery Accountability and Transparency Board (Recovery Board) established the Recovery Operations Center which used advanced data analysis techniques to identify potential fraud and errors before and after payments were made. The Recovery Act's emphasis on accountability also presented challenges for several states and federal agencies. These included limited resources for oversight at the state and local levels, and the speed with which Recovery Act funds were distributed. One state addressed the challenge of limited resources by transferring funds from its central administration account to Recovery Act oversight. To facilitate the quick distribution of funds, maintenance-of-effort provisions concerning transportation projects (which prevented Recovery funds from being used for planned state projects) were rolled out before the Department of Transportation had time to issue sufficiently detailed definitions of what constituted "state funding." To address this challenge, the department had to issue clarifying guidance to states seven times during the first year of the Recovery Act. Federal, state, and local officials also developed practices and encountered challenges related to the transparency of Recovery Act funds. An example of one good practice that was required by the Recovery Act was the creation of the Recovery.gov website. 
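The kind of pre-payment screening described above—using data analysis to catch potential fraud and errors before funds are disbursed—can be illustrated with a minimal sketch. This is purely hypothetical: the rules, record fields, and figures below are illustrative assumptions, not drawn from the Recovery Operations Center's actual system.

```python
# Hypothetical pre-payment screen in the spirit of the Recovery Operations
# Center's data analytics. All names, rules, and amounts are illustrative.
from collections import Counter

def flag_payments(payments, award_limits):
    """Return payment records that warrant review before disbursement."""
    flags = []
    # Rule 1: the same invoice number submitted more than once by a recipient.
    seen = Counter((p["recipient"], p["invoice"]) for p in payments)
    for p in payments:
        reasons = []
        if seen[(p["recipient"], p["invoice"])] > 1:
            reasons.append("duplicate invoice")
        # Rule 2: a single payment exceeding the recipient's award limit.
        if p["amount"] > award_limits.get(p["recipient"], float("inf")):
            reasons.append("exceeds award")
        if reasons:
            flags.append((p["invoice"], reasons))
    return flags

payments = [
    {"recipient": "City A", "invoice": "INV-1", "amount": 50_000},
    {"recipient": "City A", "invoice": "INV-1", "amount": 50_000},  # duplicate
    {"recipient": "City B", "invoice": "INV-9", "amount": 900_000},
]
limits = {"City B": 500_000}
print(flag_payments(payments, limits))
```

A production system would of course combine many more signals (debarment lists, geographic anomalies, network analysis), but the principle is the same: screen transactions against rules before money moves, rather than auditing only after the fact.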
This site, as well as similar portals created by states and localities, demonstrated several leading practices for effective government websites. These included (1) establishing a clear purpose, (2) using social networking tools to garner interest, (3) tailoring the website to meet audience needs, and (4) obtaining stakeholder input during design. Efforts to increase transparency also led to challenges for several states and federal agencies. For example, some recipients lacked knowledge or expertise in using the data systems needed to report grant spending, while others faced challenges with reporting the same data to multiple systems. Early GAO reviews also found several problems with job reporting data including discrepancies in how full time equivalents were recorded and the capacity of recipients to meet reporting deadlines. The Office of Management and Budget (OMB) addressed these challenges by issuing additional guidance and providing technical support. Finally, agencies receiving Recovery Act funds were required to submit performance plans that identified measures on a program-by-program basis. The level of detail and the specificity of outcomes in these plans varied greatly for the agencies GAO examined, making it difficult to determine the extent to which some were making progress toward their goals and demonstrating results. GAO is not making any recommendations in this report. We provided a draft of this report to relevant agencies for comment. They generally agreed with our findings and provided technical comments.
The federal financial regulators are responsible for examining and monitoring the safety and soundness of approximately 22,000 financial institutions, which together manage more than $6 trillion in assets and hold over $3 trillion in deposits. Specifically:

The Federal Reserve System supervises about 992 state-chartered member banks and bank holding companies, which are responsible for $1.2 trillion in assets.

The Office of the Comptroller of the Currency (OCC) supervises approximately 2,600 federally chartered national banks, which hold about $2.9 trillion in assets—about 58 percent of the total $5 trillion in assets of FDIC-insured commercial banks. OCC also supervises federal branches and agencies of foreign banks.

FDIC supervises about 6,200 state-chartered nonmember banks, which are responsible for $1 trillion in assets. It is also the deposit insurer of approximately 11,000 banks and savings institutions that have insured deposits totaling upwards of $2.7 trillion.

OTS oversees about 1,200 savings and loan associations (thrifts), which primarily emphasize residential mortgage lending and are an important source of housing credit. These institutions hold approximately $770 billion in assets.

NCUA supervises and insures more than 11,000 federally and state-chartered credit unions whose assets total about $345 billion. Credit unions are nonprofit financial cooperatives organized to provide their members with low-cost financial services.

As part of their goal of maintaining safety and soundness, these regulators are responsible for assessing whether the institutions they supervise are adequately mitigating the risks associated with the century date change. To ensure consistent and uniform supervision on Year 2000 issues, the five regulators are coordinating their supervisory efforts through FFIEC. For example, they jointly prepared and issued Year 2000-related guidance and letters to banks, thrifts, and credit unions. 
They also worked together to develop and issue, in May 1997, Year 2000 examination procedures and guidance for all examiners to use in performing their work at the institutions. Additionally, the regulators—under the auspices of FFIEC—are jointly examining the major data service providers and software vendors that support the financial institutions. According to the regulators, virtually every insured financial institution relies on computers—either their own or those of a third-party contractor—to provide for processing and updating of records and a variety of other functions. Because computers are essential to their survival, the regulators believe that all institutions are vulnerable to the problems associated with the year 2000. Failure to address Year 2000 computer issues could lead, for example, to errors in calculating interest and amortization schedules. Moreover, automated teller machines may malfunction, performing erroneous transactions or refusing to process transactions. In addition, errors caused by Year 2000 miscalculations may expose institutions and data centers to financial liability and loss of customer confidence. Other supporting systems critical to the day-to-day business of financial institutions may be affected as well. For example, telephone systems, vaults, and security and alarm systems could malfunction. In addressing the Year 2000 problem, financial institutions must also consider the computer systems that interface with, or connect to, their own systems. These systems may belong to payment system partners, such as wire transfer systems, automated clearinghouses, check clearing providers, credit card merchant and issuing systems, automated teller machine networks, electronic data interchange systems, and electronic benefits transfer systems. Because these systems are also vulnerable to the Year 2000 problem, they can introduce errors into bank, thrift, and credit union systems. 
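The failure mode behind these warnings is simple date arithmetic on two-digit years. A minimal sketch (illustrative only; the function names and loan figures are hypothetical, not drawn from any institution's system) shows how an interest calculation goes wrong when a legacy system that stores years as two digits crosses the century boundary:

```python
# Illustrative sketch of the Year 2000 arithmetic failure: legacy systems
# stored years as two digits, so the year 2000 became "00".

def years_elapsed_two_digit(start_yy, end_yy):
    # A legacy routine: subtracts two-digit years directly.
    return end_yy - start_yy

def simple_interest(principal, rate, years):
    return principal * rate * years

# A loan opened in 1997 ("97") accruing interest through 2000 ("00"):
elapsed = years_elapsed_two_digit(97, 0)          # yields -97, not 3
bad = simple_interest(10_000, 0.05, elapsed)      # nonsense negative interest
good = simple_interest(10_000, 0.05, 2000 - 1997) # correct four-digit result

print(elapsed, bad, good)
```

The same subtraction that silently worked for decades (e.g., 97 - 94 = 3) produces a negative elapsed time at the rollover, which is how interest and amortization schedules, expiry checks, and sort orders all fail at once.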
In addition to these computer system risks, many financial institutions also face business risks from the Year 2000: exposure from their corporate customers’ inability to manage their own Year 2000 compliance efforts successfully. Consequently, in addition to correcting their computer systems, these institutions have to periodically assess the Year 2000 efforts of large corporate customers to determine whether they are sufficient to avoid significant disruptions to operations. FFIEC established a working group to develop guidance on assessing the risk corporate customers pose to financial institutions and the group issued guidance on March 17, 1998. The Year 2000 efforts of the five regulators began in June 1996, when, through FFIEC, they formally alerted banks, thrifts, and credit unions to the potential dangers of the Year 2000 problem by issuing an awareness letter to chief executive officers. This letter described the Year 2000 problem and highlighted concerns about the industry’s Year 2000 readiness. It also called on institutions to perform a risk assessment of how systems are affected and develop a detailed action plan to fix them. In May 1997, the regulators issued a second, more detailed awareness letter that described the five-phase approach to planning and managing an effective Year 2000 program and highlighted external issues requiring management attention, such as reliance on vendors, risks posed by exchanging data with external parties, and the potential effect of Year 2000 noncompliance on corporate borrowers. The letter also related regulatory plans to facilitate Year 2000 evaluations by using uniform examination procedures. It directed institutions to inventory their core computer functions and set priorities for Year 2000 goals by September 30, 1997. It also directed them to complete programming changes and to have testing of mission-critical systems underway by December 31, 1998. 
As regulators alerted institutions to the Year 2000 problem, they began assessing whether banks, thrifts, and credit unions had established a structured process for correcting the problem; estimated the costs of remediation; prioritized systems for correction; and determined the Year 2000 impact on other internal systems important to day-to-day operations, such as vaults, security and alarm systems, elevators, and telephones. This initial assessment was completed during November and December 1997. Among other things, it revealed that most institutions were aware of Year 2000 and taking actions to correct their systems. However, the three regulators we reviewed reported—based on the initial assessment—that in total, over 5,000 institutions were not adequately addressing the problem. For example, OTS designated about 170 thrifts as being at high risk due to poor performance in conducting awareness and assessment phase activities. Additionally, FDIC identified over 200 banks that were not adequately addressing Year 2000 risks and 500 banks that were very reliant on third-party servicers and software providers but had not followed up with them to determine their Year 2000 readiness. Furthermore, NCUA reported that it had formal agreements for corrective action with 4,862 credit unions deemed not to be making sufficient progress in at least one awareness or assessment phase activity. The regulators are now conducting a more detailed assessment of Year 2000 readiness. This assessment will involve on-site examinations of institutions and their major data processing services and software vendors. These visits are expected to be completed by the end of June 1998. The results of the servicer assessments will be provided to the banks, thrifts, and credit unions that use these services. 
Once the on-site assessments are completed, the regulators expect to have a better idea of where the industry stands, which institutions need close attention, and, thus, where to focus supervisory efforts. As noted in our summary, the regulators must successfully address a number of problems to provide adequate assurance that financial institutions will meet the Year 2000 challenge. First, all were behind in assessing individual institutions' readiness because they got a late start. For example, the regulators did not complete their initial institution assessments until November and December 1997. According to OMB guidance and GAO's Assessment Guide, these activities should have occurred by the summer of 1997. Because the regulators are behind the recommended timelines, the time available for assessing institutions' progress during the renovation, validation, and implementation phases and for taking needed corrective actions is compressed. Second, we also found that the FFIEC-developed examination work program and guidance for the initial and follow-on assessments were not designed to collect all the data needed to determine where (i.e., in which phase) the institutions are in the Year 2000 correction process. For example, the guidance for the work program does not contain questions that ask whether specific phases have been completed. In addition, the work program used to perform the on-site assessments is not organized by the five phases of the Year 2000 correction process. Furthermore, the terms used in the guidance to describe progress are vague. For example, it notes that banks should be well into assessment by the end of the third quarter of 1997, that renovation for mission-critical systems should largely be completed, and that testing should be well underway by December 31, 1998. Without defining any of these terms, it would be very hard to deliver uniform assessments of the status of institutions' Year 2000 efforts. 
At the time of our reviews, OTS had issued additional examination guidance and procedures to supplement those of FFIEC. This supplemental guidance, if implemented correctly, will address the FFIEC examination procedure’s shortcomings. However, although we reviewed FDIC and NCUA earlier in the process, we found that both were using or planning to use the FFIEC guidance for their initial and follow-on assessments. We were concerned at the time that by using the FFIEC guidance, FDIC and NCUA would not be able to develop an accurate picture of their institutions’ Year 2000 readiness. In the case of FDIC, this problem was compounded by the fact that the tracking questionnaire FDIC examiners were to complete after their on-site assessment also did not ask enough questions to determine whether the bank had fully addressed the phases. Since our work, FDIC and NCUA have responded to our findings by providing examiners with supplemental guidance, which we think is a positive development. FDIC officials told us that they are also in the process of going back to institutions and asking more detailed questions to provide added assurance that the corporation can tell precisely where each bank is in the Year 2000 correction process. Third, FFIEC is still developing key Year 2000 guidance. For example, as of the time of our review, the regulators had not yet completed critical guidance related to (1) developing contingency plans to mitigate the risk of Year 2000-related disruptions and (2) ensuring that their data processing services, software vendors, and large corporate customers are making adequate Year 2000 progress. In May 1997, the regulators—through FFIEC—recommended that institutions begin these actions. FFIEC recently issued the servicer/vendor and corporate customer guidance on March 17, 1998, but does not plan to provide contingency planning guidance until the end of April 1998. 
This time lag has increased the risk that institutions have taken little or no action on contingency planning and on dealing with servicers, vendors, and corporate customers in anticipation of pending regulator guidance. Moreover, in the absence of guidance, institutions may have initiated actions that do not effectively mitigate the risk of Year 2000 failures. Finally, although the regulators have been working hard to assess industrywide compliance, it is not clear that all have the technical resources needed to adequately evaluate the Year 2000 conversion efforts of the institutions and the service providers and software vendors that service them. As institutions and vendors progress in their Year 2000 efforts, we are concerned that examiners' evaluations will increase in length and technical complexity, putting a strain on an already small pool of technical resources. Without sufficient resources, the regulators could be forced to slip their schedules for completing the current on-site exams or, worse, reduce the scope of their exams in order to meet deadlines. In the first case, institutions would be left with less time to remediate any deficiencies. In the second, regulators might overlook issues that could lead to failures. In either case, the risk of noncompliance by institutions and service bureaus—and the government's exposure to losses—is significantly increased. OTS and NCUA have responded to this concern by adding more technical staff or augmenting their staff with contractors. It will be important for regulators to quickly address problems associated with their late start, since the challenge for them is certain to grow as banks progress into the later and more complex stages of their Year 2000 efforts. For example, regulators will soon have to pinpoint which, if any, of the thousands of banks, thrifts, and credit unions are not going to meet their Year 2000 deadline. 
In doing so, they will have to weigh a range of factors, including the financial condition of the institution, the resources it has to address the problem, how far behind it is in correcting its systems, and whether its service provider's systems are Year 2000 compliant. Once these decisions are made, regulators will then have to determine which enforcement actions—increased on-site supervision, directives to institution boards of directors, written supervisory agreements, cease-and-desist orders, or civil monetary penalties—are appropriate. All of this needs to be done before the Year 2000 deadline, which is less than 21 months away. In addition, as institutions and vendors progress in their Year 2000 efforts, regulatory evaluations will increase in length and technical complexity and put a strain on an already small pool of technical resources. Thus, the regulators will need to ensure that they have the technical capacity to complete their Year 2000 examinations as well as their routine safety and soundness examinations. Already, some are finding this to be a difficult task. OTS officials, for example, expressed the concern that even if they could hire more technical examiners, it is very hard to find and hire staff with these skills. The regulators will be better prepared to handle these challenges once the on-site assessments are completed. This information should give them a good sense of the size and magnitude of the problem—that is, how many institutions are at high risk of not being ready for the millennium and require immediate attention, and which service providers are likely to be problematic. Further, by carefully analyzing available data, the regulators should be able to identify common problems or issues that are generic to institutions of similar size, institutions that use specific service providers, and so on. 
This in turn will allow regulators to develop a much better understanding of which areas require attention and where to focus limited resources. In short, regulators have an opportunity to regroup, develop specific strategies, and gain a more defined sense of the risks and the actions required to mitigate those risks. In conclusion, Mr. Chairman, we believe that the financial regulators have a good appreciation for the Year 2000 problem and have made significant progress in assessing the readiness of banks, thrifts, and credit unions. However, the regulators are facing a finite deadline that offers no flexibility. They need to take several actions to enhance the ability of financial institutions to meet the century deadline with minimal problems and to strengthen their own ability to monitor the industry's efforts and to take appropriate and swift measures against institutions that are neglecting their Year 2000 responsibilities. Accordingly, we have made recommendations to the regulators individually, and collectively via FFIEC, to work together to, among other things, (1) improve their Year 2000 examination and reporting processes, (2) provide additional guidance to the institutions on contingency planning and the latter phases of the Year 2000 correction process, (3) develop a tactical plan that details the results of their on-site assessments, provides a more explicit road map of the actions to be taken based on those results, and includes an assessment of the adequacy of technical resources to evaluate the Year 2000 efforts of institutions and the servicers and vendors that support them, and (4) improve the regulators' internal system mitigation programs. So far, we have been generally pleased with the regulators' responsiveness in implementing our recommendations. Mr. Chairman, that concludes my statement. We welcome any questions that you or Members of the Committee may have. 
GAO discussed the progress of the federal regulatory agencies in ensuring that the thousands of financial institutions they oversee are ready for the upcoming century date change. GAO noted that: (1) because financial institutions are heavily dependent on information technology, their viability hinges on whether they can successfully remediate systems before the Year 2000 deadline; (2) given this possibility, regulators must take every measure possible to assist banks, thrifts, and credit unions in their Year 2000 efforts as well as to identify and take swift enforcement measures against those in danger of failing; (3) regulators have recognized this responsibility and have begun an intense effort to raise awareness of the problem, develop guidance to facilitate remediation efforts, and determine where individual institutions stand in correcting their systems; (4) in doing so, regulators have initially identified several hundred institutions at high risk of missing the deadline due to their poor performance in conducting awareness and assessment phase activities; (5) despite aggressive efforts, the regulators still face significant challenges in providing a high level of assurance that individual institutions will be ready; (6) they were late in addressing the problem and, consequently, are behind the Year 2000 schedule recommended by both GAO and the Office of Management and Budget; (7) they are also late in developing key guidance on contingency planning and dealing with servicers, vendors, and corporate customers; (8) this guidance is needed by financial institutions to complete their own preparations; (9) in addition, their follow-on assessments to be completed by June 1998 were not, in all cases, designed to collect the data required to be definitive about the status of individual institutions; (10) furthermore, it is questionable whether all regulators have an adequate level of technical staff to completely evaluate industry readiness; (11) with regard to 
their own systems, the regulators have generally done much to mitigate the risk to their mission critical systems; and (12) in some areas such as contingency planning, the regulators can do more to provide added assurance that they will be ready for the century date change and any unexpected problems.
In 2005, we reported that 207 federal STEM education programs across 13 different agencies spent $2.8 billion in federal funds in fiscal year 2004. We noted that before increasing investment in STEM education, it is important to know the extent to which existing STEM education programs are appropriately targeted and whether or not they are making the best use of available federal resources. Additionally, information about the effectiveness of these programs could help guide policymakers and program managers. Since then, several other efforts have been conducted to identify federal STEM programs and provide recommendations to improve both coordination and program evaluation as well as reduce potential duplication. For example, in 2006, the Academic Competitiveness Council (ACC), led by the Department of Education, created an inventory and assessed the effectiveness of federal STEM programs. ACC recommended further coordination among federal agencies administering STEM programs, states, and local school districts. In addition, ACC recommended that agencies adjust program designs and operations so that programs can be assessed and measurable results can be achieved and that funding for federal STEM education programs should not be increased unless a plan for rigorous, independent evaluation is in place. In 2010, the President's Council of Advisors on Science and Technology (PCAST), an advisory group of the nation's leading scientists and engineers housed in OSTP, published a report in response to the President's request to develop specific recommendations concerning the most important actions that the administration should take to ensure that the United States is a leader in STEM education in the coming decades. PCAST found that approaches to Kindergarten–12th grade (K-12) STEM education across agencies emerged largely without a coherent vision or careful oversight of goals and outcomes.
PCAST also found that relatively little funding was targeted at efforts with the potential to transform STEM education, too little attention was paid to replication efforts to disseminate proven programs widely, and too little capacity at key agencies was devoted to strategy and coordination. Our past effort to inventory STEM education programs identified a multitude of agencies that administer such programs. The primary missions of these agencies vary, but most often, they are to promote and enhance an area that is related to a STEM field or enhance general education. See table 1 for relevant agencies and their missions. As part of this effort, we also identified the role that the National Science and Technology Council (NSTC), a component of OSTP, plays in coordinating STEM education programs. NSTC was established in 1993 and is the principal means for the administration to coordinate science and technology policy across the federal government's larger research and development effort. NSTC is made up of the Vice President, the Director of the Office of Science and Technology Policy, and officials from other executive branch agencies with significant science and technology responsibilities. One objective of NSTC is to establish clear national goals for federal science and technology investments in areas ranging from information technologies and health research to improving transportation systems and strengthening fundamental research. NSTC is responsible for preparing research and development strategies that are coordinated across federal agencies in order to accomplish these multiple national goals. STEM education programs have been created in two ways—by Congress directly in legislation or through agencies' broad statutory authority to carry out their missions.
The Higher Education Opportunity Act, the No Child Left Behind Act of 2001, and the National Science Foundation Act of 1950 created programs at the Department of Education and the National Science Foundation (NSF)—two key agencies that administer many STEM education programs. In addition, since our 2005 review of STEM education programs, Congress has also passed legislation to examine the overall federal effort to improve STEM education. For example, the Deficit Reduction Act of 2005 established ACC. ACC consisted of officials from the Department of Education and other federal agencies with responsibility for managing mathematics and science education programs and was mandated to (1) identify all federal programs with a mathematics or science education focus, (2) identify the target populations being served by such programs, (3) determine the effectiveness of such programs, (4) identify areas of overlap or duplication in such programs, and (5) recommend processes to integrate and coordinate such programs. While various pieces of legislation directly created some STEM education programs, agencies reported using their broad statutory authority to create many programs as well. For example, according to agency officials, NSF created 25 of its 37 programs and the Department of Health and Human Services (HHS) created 40 of its 46 programs in this manner. More recently, the America COMPETES Act (COMPETES), Pub. L. No. 110-69, 121 Stat. 572, enacted in 2007, authorized several programs to promote STEM education; COMPETES also focused on STEM research programs. In December 2010, Congress reauthorized COMPETES. The reauthorization approved new funding for some STEM education programs and made substantive changes to others by reducing certain nonfederal matching requirements. Additionally, it repealed many of the programs that went unfunded following the original COMPETES passage.
The COMPETES reauthorization also sought to address coordination and oversight issues, including those associated with the coordination and potential duplication of federal STEM education efforts. Specifically, Congress required the Director of OSTP to establish a committee under NSTC to inventory, review, and coordinate federal STEM education programs. Congress also directed this NSTC committee to specify and prioritize annual and long-term objectives for STEM education, and to ensure that federal efforts do not duplicate each other, among other things. NSTC is required to report to Congress annually. Beyond STEM-specific efforts, the federal government as a whole is seeking to identify programmatic areas that could be better tracked and coordinated. One such effort revolves around the Government Performance and Results Act (GPRA) Modernization Act of 2010 (Pub. L. No. 111-352, 124 Stat. 3866). The GPRA Modernization Act established a new framework aimed at taking a more crosscutting and integrated approach to focusing on results and improving government performance. It requires OMB, in coordination with agencies, to develop—at least every 4 years—long-term priority goals, including outcome-oriented goals covering a limited number of crosscutting policy areas. On an annual basis, OMB is to provide information on how these long-term crosscutting goals will be achieved. This approach could provide a basis for more fully integrating a wide array of federal activities as well as a cohesive perspective on the long-term goals of the federal government. In 2010, Congress directed GAO to conduct routine investigations to identify programs, agencies, offices, and initiatives with duplicative goals and activities within departments and governmentwide and report annually to Congress.
In March 2011, GAO issued its first annual report to Congress in response to this requirement. In that report, we identified 81 areas for consideration—34 areas of fragmentation, overlap, and potential duplication and 47 additional areas—where agencies or Congress may wish to consider taking action in an effort to reduce the cost of government operations or enhance revenue collections. Using the framework established in the March 2011 GAO report, we examine the extent to which federal STEM education programs are fragmented, overlapping, and duplicative. For the purposes of this report, the key terms are defined as follows: Fragmentation occurs when more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national need. Overlap occurs when multiple programs offer similar services to similar target groups in similar STEM fields to achieve similar objectives. Duplication occurs when multiple programs offer the same services to the same target beneficiaries in the same STEM fields. Thirteen agencies administered 209 STEM education programs in fiscal year 2010. (See appendix I for our definition of a STEM education program.) Agencies reported that they developed the majority (130) of these programs through their general statutory authority and that Congress specifically directed agencies to create 59 of these programs. The number of programs each agency administered ranged from 3 to 46, with three agencies—HHS, the Department of Energy, and NSF—administering more than half of all programs—112 of 209. Figure 1 provides a summary of the number of programs by agency, and appendix II contains a list of the 209 STEM education programs and reported obligations for fiscal year 2010. Having multiple agencies, with varying expertise, involved in delivering STEM education can be advantageous. One such advantage is that agencies may be better able to tailor programs to suit their specific missions and needs.
For example, Energy officials said that their efforts to support students in pursuing a STEM course of study are related to Energy's mission and work in their labs and can be a way to attract new employees to their workforce. However, this could also make it challenging to develop a coherent federal approach to educating STEM students and creating a workforce with STEM skills. Having multiple agencies involved in the delivery of STEM education could also make it challenging to identify gaps and allocate resources across the federal government. Agencies obligated over $3 billion to STEM education programs in fiscal year 2010. Individual program obligations ranged from $15,000 to hundreds of millions of dollars. NSF and the Department of Education programs account for over half of this funding. Almost a third of the programs had obligations of $1 million or less, with 5 programs having obligations of more than $100 million each. See figure 2 for program obligation ranges. Agencies carried out other activities that did not fit our definition of a STEM education program because STEM education was their secondary or tertiary objective, rather than their primary objective. These efforts include broad-based programs with STEM components, programs that enhance the general public's knowledge of STEM, and research programs that may hire students. Selected examples of agencies' efforts as reported to us by agency officials include the following:

Broad-Based Programs That Include STEM Components
Several of the Department of Education's programs have STEM components. For example, Title I of the Elementary and Secondary Education Act of 1965, as amended, includes funding for the assessment of math for primary and secondary students, putting a renewed focus on educational attainment in these areas.
In addition, the Race to the Top Fund, a competitive grant program, includes bonus points for states that report they will include efforts to enhance STEM education in their grant activities. The Department of Transportation's State Maritime Academy program supports maritime training and education programs in an effort to improve the quality of the U.S. maritime industry, with a secondary objective to encourage students to pursue careers in STEM fields that can contribute to the maritime industry.

Programs to Educate the General Public
The National Institutes of Health's (NIH) Science Education Drug Abuse Partnership Award provides support for the formation of partnerships among scientists and educators, media experts, community leaders, and other interested organizations for the development and evaluation of programs and materials that will enhance knowledge and understanding of science related to drug abuse. The intended focus is on topics not well addressed in existing efforts by educational, community, or media activities.

Research Programs That Include Internships or Assistantships
Energy's national laboratories, most of which are managed by contractors and engage in research activities on behalf of multiple federal agencies, sometimes partner with universities and offer students research opportunities in various disciplines, such as science and technology. The primary focus of these laboratories is on research and development, which is determined by the funding institution, and there is not always a requirement that they hire students. When research programs do hire students, this can enhance students' education and interest in STEM. The Department of Defense has several programs with a primary objective to further research on a specific STEM topic. For example, it has programs that fund university faculty who conduct research on STEM topics and may hire students to assist with the research.
The Department of Homeland Security receives funding for technological research in areas that support its mission, and a portion of this may go to student research activities such as hiring a student for the summer or for several weeks to assist with the research.

Nonmonetary Partnerships with Schools or through Private Partnerships
The Department of the Interior participates in the GeoFORCE program—a precollege program that provides hands-on science learning experiences for middle and high school students (primarily underserved minorities)—which is mostly funded by private donations and the University of Texas. The Environmental Protection Agency has a cooperative agreement with the Hispanic Association of Colleges and Universities that is intended to increase the diversity of students going into science and technology careers. The agreement includes activities such as EPA staff participation in lectures, conferences, and other events, as well as EPA staff members serving as mentors or coaches, among other things.

Dedicated Funds for Education Programs
NASA's Science Mission Directorate (SMD) requires each of its missions to fund SMD-related education and public outreach using a small percentage of the research and development program costs, but these funds are not specifically for STEM education. As figure 3 illustrates, in fiscal year 2010, 83 percent of STEM education programs overlapped to some degree with another program in that they offered at least one similar service to at least one similar target group in at least one similar STEM field to achieve at least one similar objective. These programs ranged from being narrowly focused on a specific group or field of study to offering a range of services to students and teachers across STEM fields.
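The overlap criterion just described is, in essence, a pairwise comparison of four program attributes (services, target groups, STEM fields, and objectives), while duplication requires the same services, targets, and fields. A minimal sketch of that pairwise test, using hypothetical program records (the names, services, and fields below are illustrative, not drawn from the actual inventory):

```python
# Sketch of the report's pairwise overlap/duplication test.
# All program records here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass(frozen=True)
class Program:
    name: str
    services: frozenset    # e.g., scholarships, mentoring
    targets: frozenset     # e.g., K-12 students, postsecondary students
    fields: frozenset      # e.g., physics, chemistry
    objectives: frozenset  # e.g., student education

def classify(a: Program, b: Program) -> str:
    """Duplication: the same services to the same targets in the same fields.
    Overlap: at least one similar service, target group, STEM field,
    and objective shared between the two programs."""
    if (a.services == b.services and a.targets == b.targets
            and a.fields == b.fields):
        return "duplication"
    if (a.services & b.services and a.targets & b.targets
            and a.fields & b.fields and a.objectives & b.objectives):
        return "overlap"
    return "distinct"

p1 = Program("Scholarship Program A",
             frozenset({"scholarships", "mentoring"}),
             frozenset({"postsecondary students"}),
             frozenset({"chemistry", "physics"}),
             frozenset({"student education"}))
p2 = Program("Fellowship Program B",
             frozenset({"scholarships"}),
             frozenset({"postsecondary students", "K-12 students"}),
             frozenset({"physics"}),
             frozenset({"student education"}))

print(classify(p1, p2))  # prints "overlap"
```

Note that under these definitions overlap is far easier to trigger than duplication, which is consistent with the report finding 83 percent of programs overlapping on at least one dimension.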
This complicated patchwork of overlapping programs has largely resulted from federal efforts to both create and expand programs across many agencies in an effort to improve STEM education and increase the number of students going into STEM fields. Program officials reported that approximately one-third of STEM education programs funded in fiscal year 2010 were first funded between 2005 and 2010. Indeed, the creation of new programs during that time frame may have contributed to overlap and, ultimately, to inefficiencies in how STEM programs across the federal government are focused and delivered. Overlap among STEM education programs is not new. In 2007, ACC identified extensive overlap among STEM education programs, and, in 2009, we identified overlap among teacher quality programs, which include several programs focused on STEM education. Many programs provided services to similar target groups, such as K-12 students, postsecondary students, K-12 teachers, and college faculty and staff. The vast majority of programs (170) served postsecondary students. Ninety-five programs served college faculty and staff, 75 programs served K-12 students, and 70 programs served K-12 teachers. In addition, many programs served multiple target groups. In fact, as figure 4 illustrates, 177 programs were primarily intended to serve two or more target groups. As figure 5 illustrates, we also found many STEM programs providing similar services. To support students, 167 different programs provided research opportunities, internships, mentorships, or career guidance. In addition, 144 programs provided short-term experiential learning opportunities and 127 long-term experiential learning opportunities. Short-term experiential learning activities include field trips, guest speakers, workshops, and summer camps. Long-term experiential learning activities last a semester or longer.
Furthermore, 137 programs provided outreach and recognition to generate student interest, 124 provided classroom instruction, and 75 provided student scholarships or fellowships. To support teachers, 115 programs provided curriculum development, 83 programs provided teacher in-service, professional development, or retention activities, and 52 programs provided preservice or recruitment activities. To support STEM research, 68 programs reported conducting research to enhance the quality of STEM education. To support institutions, 65 programs provided institutional support to management and administrative activities, and 46 programs provided support for expanding the facilities, classrooms, and other physical infrastructure of institutions. Many programs provided similar services to similar target groups. For example, 39 programs that listed chemistry as a primary field of focus provided student scholarships or fellowships to postsecondary students. Many of these programs offered scholarships and fellowships to minority, disadvantaged, or underrepresented students across a broad range of STEM fields. Specifically, some programs, like NASA's Minority University Research and Education Program (MUREP) and the Department of Commerce's Dr. Nancy Foster Scholarship Program, offered scholarships, along with a range of other services, to underrepresented and underserved students in overlapping STEM fields even though the programs focused on preparing students to work in fields that support the science mission of each agency. Overall, most programs provided an array of services to target groups—150 programs provided four or more services, while only 16 programs provided one service.

Similar STEM Fields of Focus
In addition to serving multiple target groups, most programs also provided services in multiple STEM fields. Twenty-three programs targeted one specific STEM field, while 121 programs targeted four or more specific STEM fields.
In addition, 26 programs indicated not focusing on any specific STEM field; rather, they provided services eligible for use in any STEM field. Five different STEM fields had over 100 programs that provided services. Biological sciences and technology were the STEM fields most commonly focused on by programs. Agricultural sciences, which was the least commonly selected, still had 27 programs that provided services specifically to that STEM field. While the data show that many programs had similar target groups and similar STEM fields of focus, it is also important to compare programs' target groups and STEM fields of focus to get a better picture of the potential target beneficiaries that could be served within a given STEM discipline. For example, both the National Environmental Satellite, Data, and Information Service (NESDIS) Education and the Graduate Automotive Technology Education Program provided scholarships or fellowships to postsecondary students, but one focused on students in earth, atmospheric, and ocean sciences programs, and one on students in engineering, specifically in the areas of hybrid propulsion systems, fuel cells, biofuels, energy storage systems, lightweight materials, and advanced computation; therefore, the target beneficiaries served by these programs are quite different. Nevertheless, 72 programs provided services to postsecondary students in physics. As table 2 illustrates, many programs offered services to similar target groups in similar STEM fields of focus. Overlapping programs can lead to individuals and institutions being eligible for similar services in similar STEM fields offered through multiple programs and, without information sharing, could lead to the same service being provided to the same individual or institution. Many STEM education programs had similar objectives.
The vast majority (87 percent) of STEM education programs indicated that attracting and preparing students throughout their academic careers in STEM areas was a primary objective. In addition to attracting and preparing students throughout their academic careers in STEM areas, officials also indicated the following primary program objectives: improving teacher education in STEM areas (teacher development)—26 percent, improving or expanding the capacity of K-12 schools or postsecondary institutions to promote or foster education in STEM fields (institution capacity building)—24 percent, and conducting research to enhance the quality of STEM education provided to students (STEM education research)—18 percent. Many programs also reported having multiple primary objectives. While 107 programs focused solely on student education, 82 others indicated having multiple primary objectives, and 9 programs reported having 4 or more primary objectives. Few programs reported focusing solely on teacher development, institution capacity building, or STEM education research. Most of these objectives were part of a larger program that also focused on attracting and preparing students in STEM education. However, even when programs overlapped, the services they provided and the populations they served may differ in meaningful ways and would therefore not necessarily be duplicative: There may be important differences between the specific field(s) of focus and the program's stated goals. For example, both Commerce's National Estuarine Research Reserve System Education Program and the Nuclear Regulatory Commission's Integrated University Program provided scholarships or fellowships to doctoral students in the field of physics.
However, the National Estuarine Research Reserve System Education Program's goal was to increase environmental literacy related to estuaries and coastal watersheds by providing students with an opportunity to conduct research of local and national significance that focuses on enhancing coastal zone management; while the Integrated University Program focused on supporting education in nuclear science, engineering, and related fields with the goal of developing a workforce capable of designing, constructing, operating, and regulating nuclear facilities and capable of handling nuclear materials safely. Programs may be primarily intended to serve different specific populations within a given target group. For example, 65 programs were primarily intended to serve minority, disadvantaged, or underrepresented groups and 10 programs limited their services to students or teachers in specific geographic areas. Indeed, of the 34 programs providing services to K-12 students in the field of technology, 10 were primarily intended to serve specific underrepresented, minority, or disadvantaged groups, and 2 were limited geographically to individual cities or universities. Furthermore, individuals may receive assistance from different programs at different points throughout their academic careers that provide services that complement or build upon each other, simultaneously supporting a common goal rather than serving cross purposes. Despite past recommendations from ACC and others to improve coordination among STEM education programs, efforts to coordinate STEM education programs across the government remain limited. Although 83 percent of STEM education programs overlapped to some degree with at least one other program, only 33 percent of programs reported coordinating with other agencies that provide similar STEM education services to similar program beneficiaries, not including basic governmentwide inventory efforts.
Some program officials mentioned that they coordinate by employing informal mechanisms for information sharing such as conversations and meetings between program staff, sharing resources or best practices, and participating in conferences with other agency officials. Other efforts included developing memorandums of understanding, issuing joint guidance, cofunding programs, and establishing interagency working groups focused on specific science subjects or providing a specific service to a specific target group. With the growing concern for improved federal coordination and planning in STEM education, Congress passed the America COMPETES Reauthorization Act of 2010, which requires the Director of OSTP to establish a committee under NSTC to coordinate STEM education activities and programs among respective federal agencies and OMB. The NSTC Committee on Science, Technology, Engineering, and Math Education (CoSTEM), composed of representatives from 11 different federal agencies, convened its first meeting in March 2011. The statute requires NSTC to develop a 5-year governmentwide STEM education strategic plan and identify areas of duplication among federal programs. CoSTEM provides NSTC with an opportunity to improve coordination and be more strategic with the federal investment in STEM education. Best practices in interagency collaboration include developing ongoing mechanisms and processes to monitor, measure, and report agency progress toward NSTC's strategic planning goals and making the results publicly available to improve accountability. According to OSTP officials, a description of the 5-year strategic plan should be publicly available in early 2012; however, as called for in its charter, the committee will terminate no later than March 31, 2015, before the first 5-year plan is carried out, unless it is renewed by the Director of OSTP.
Pursuant to requirements under the 2010 reauthorization of the COMPETES Act, NSTC has implemented several initiatives to enhance coordination. In December 2011, CoSTEM published a report on the inventory of the federal STEM education portfolio that, according to OSTP officials, will be used to improve coordination and inform the strategic planning process. Specifically, OSTP officials said the inventory will allow agencies to identify similar programs and share information and best practices. Without proper coordination, overlapping programs may not share information about the results of the actions taken or research conducted with other interested agencies, possibly leading to numerous programs providing assistance to address the same issue or area of research. To the extent that CoSTEM identifies duplicative programs, it will be important that it consider the trade-offs associated with program consolidation and assist agencies in determining the most effective and efficient way to reduce duplication. Cost savings might be achieved through the consolidation of duplicative program administrative structures. However, our past work (GAO-11-318P) has shown that program consolidation can be more expensive in the short term, and, in the long term, cost savings could be diminished if the workload associated with certain administrative activities remains the same, such as reviewing and assessing applications, providing technical assistance, and monitoring program recipients. Most programs that reported on administrative costs estimated having administrative costs lower than 10 percent of their total program costs. Last, the consolidation of some programs may require congressional action because some programs may be statutorily mandated. Program officials varied in their ability to provide reliable information on the number of students, teachers, or institutions directly served by their programs—which is a type of output measure.
For example, among programs in our review that served postsecondary teachers and students in 2010, about one-fifth of them did not know the number served. However, depending on the service delivery structure of the program, it may be more difficult to track this number. In some cases, the program’s agency did not maintain databases or contracts that would track the number of students served by the program. In other cases, programs may not have been able to provide information on the numbers of institutions they served because they provided grants to secondary recipients. For example, one program indicated that it gives grants to institutions to provide internships or scholarships but that funding goes directly to students, so it does not have information about the number of institutions served. Programs that provide informal educational activities or online services also reported difficulty in tracking the number of individuals who benefited from their programs. The validity and accuracy of the reported output data for some of these programs may be questionable and may hinder program planning and assessment. Programs that reported the numbers they served used varied approaches to collect this information, including annual reports from grant recipients, student enrollment counts, estimates of the expected number of participants reached, and reviews of funding proposals. Some programs had third parties track the numbers served, but did not always take steps to independently verify the data or review the process for how the information was collected. Further, the inconsistent collection of output measures across programs makes it challenging to aggregate the number of students, teachers, and institutions served and to assess the effectiveness of the overall federal effort. Output data are an important component to understanding whether programs are likely to meet their goals. 
For example, if a K-12 program has the goal of increasing the number of undergraduates pursuing coursework in STEM fields, it is important to know how many K-12 students were in the program. Without such data, it would be challenging to assess the intended outcome of the program—for example, the number of students who actually went on to pursue such coursework. Agencies in our review did not use outcome measures in a way that is clearly reflected in their performance plans and performance reports—publicly available documents they use for performance planning. This may hinder decisionmakers' ability to assess how agencies' STEM efforts contribute to agencywide performance goals and the overall federal STEM effort. In our review of fiscal year 2010 annual performance plans and reports of the 13 agencies with STEM programs, we found that most agencies did not connect STEM education activities to agency goals or measure and report on the progress of those activities. These documents typically lay out agency performance goals that establish the level of performance to be achieved by program activities during a given fiscal year, the measures developed to track progress, and what progress has been made toward meeting those performance goals. As figure 6 illustrates, in our review of agencies' specific references to their overall STEM education initiatives, although 38 percent of agencies mentioned STEM education in their performance plans and 62 percent in their performance reports, fewer cited outcome measures related to STEM education. More specifically, in reporting on their progress toward meeting their performance goals, 46 percent of the agencies mentioned STEM education as contributing to one of these goals in their performance reports. Moreover, agencies that spent the most on STEM education were not necessarily more likely to mention, connect to agency performance goals, or measure and report on progress of their STEM efforts.
For instance, NASA, which administered 9 STEM education programs with obligations of about $209.6 million in fiscal year 2010, mentioned its overall STEM education efforts and connected them to agency performance goals in its planning documents and measured and reported on progress in both its performance plan and report. On the other hand, HHS’s National Institutes of Health, which administered the most STEM education programs (44), with obligations of about $573.6 million, referred to agency performance goals and outcome measures of its STEM education efforts only in some of its institutes’ performance reports, not in its NIH-wide performance plan. As figure 7 illustrates, in our review of agencies’ specific references to their STEM education programs, the 13 agencies combined mentioned 38 percent of their programs in their performance plans, connected 19 percent of their STEM education programs to agency performance goals, and measured and reported on progress for 9 percent of the programs. Agencies’ STEM education obligations and number of programs did not correlate directly with their likelihood of connecting the programs to agency performance goals or measuring and reporting on their progress in performance plans and reports. For example, Interior, through the U.S. Geological Survey, which administered just 3 STEM education programs in fiscal year 2010, mentioned all of its programs in its performance plan. In contrast, NSF, which administered 37 STEM education programs and obligated about $1.1 billion in fiscal year 2010, connected only 2 of its programs to agency performance goals while measuring and reporting on progress in its performance plan and report. The GPRA Modernization Act of 2010 and the America COMPETES Reauthorization Act of 2010 afford agencies the opportunity to better utilize performance measures for both governmentwide and agency-specific STEM education efforts. 
For example, the GPRA Modernization Act will require agencies to identify the program activities and other activities (which may include STEM education activities) that contribute to each performance goal. The act also recognizes the importance of governmentwide performance goals: it requires OMB to develop, in coordination with agencies, long-term, crosscutting federal government priority goals that are to be updated or revised every 4 years and tracked quarterly to review progress and improve government performance. According to OMB guidance, it will announce interim federal government priority goals in February 2012 and finalize its goals in February 2014. The America COMPETES Reauthorization Act of 2010 also focuses on accountability through strategic planning and has specific requirements for agencies with STEM programs. Specifically, it requires NSTC to develop a STEM education strategic plan with long-term objectives, metrics to assess agencies’ progress, and approaches taken by participating agencies to assess the effectiveness of their STEM programs and activities. However, while OSTP will be required to report on agencies’ annual progress toward the long-term objectives, an OSTP official said there is no mechanism to ensure that agencies align their performance measures with the goals and objectives in the strategic plan. Little is known about the effectiveness and performance of STEM education programs because the majority of them (66 percent) have not conducted an evaluation of their entire program since 2005 (as figure 8 illustrates). We define “evaluation” as an individual systematic study conducted periodically or on an ad hoc basis to assess how well a program is working, typically relative to its program objectives. 
Some programs that reported that they did not complete an evaluation reported instead that their grantees had completed one; however, in those cases, few programs used these grantee evaluations to inform a more comprehensive evaluation of the entire program completed by the program or an external evaluator. In total, since 2005, agencies conducting 61 programs (representing about 61 percent of the $3.1 billion obligated in fiscal year 2010) responded that they had completed evaluations, which used a variety of methods and designs. We reviewed evaluations for 35 of the 61 programs. Most of the 35 program evaluations we reviewed used methods and designs that appropriately assessed how well the programs met their stated objectives. For instance, one evaluation selected a random sample of its former program participants and compared them with a sample of students who had applied to the same program but had not participated. While former participants showed statistically significant differences on some academic outcomes when compared with the nonparticipants, the evaluation also noted other factors that may have influenced the program’s favorable outcomes—for example, participants, on average, were more interested in careers in science and math than the nonparticipants, so the true effects of program participation may be overstated. Even though most of the 35 programs we reviewed employed appropriate methods and designs to assess their programs’ effectiveness, we identified, based on our review, several ways to improve evaluations of STEM education. Improved survey response rates: many of the evaluations we reviewed had low response rates, and without better response rates, generalizations from the results may be limited. Better alignment of the methods with other components of the evaluation: specifically, 10 of the programs used evaluation methods that were not fully aligned with the evaluation questions and the program context. 
For example, 3 of these evaluations had data limitations, hindering the use of methods that could collect the full range of data needed to inform program outcomes. Robust use of criteria to measure outcomes: among the 27 programs that measured outcomes, 9 did not evaluate them against any criteria. Without criteria against which to evaluate the outcomes, it may be difficult to establish programmatic impact and assess performance and effectiveness. Furthermore, in order to influence program practice, evaluation results must be disseminated widely. While nearly all of the STEM education programs that reported completing an evaluation reported using various mechanisms to disseminate results, they did not always share results in a way that facilitated knowledge sharing. Program officials reported that the most common means of disseminating their results were their websites and conferences or forums. However, these mechanisms have limits: according to a 2006 NSTC report, such methods require practitioners to actively seek out results and may therefore prevent research findings from being conveyed to them. NSTC also reported that STEM education research results may not reach practitioners because the results often lack applicability, some are ambiguous, and the culture of teaching typically does not involve making decisions based on research findings. NSTC identified other issues with sharing information about STEM education program results and suggested several actions that agencies could take to improve dissemination, such as engaging practitioners to collaborate with researchers in setting research agendas. According to NSTC officials, most agencies do not share or disseminate evaluations in a way that could be useful for coordination. Although the federal government invests billions of dollars annually in STEM education programs, there remain concerns over U.S. 
economic and educational competitiveness, particularly with regard to the national educational system’s ability to produce citizens literate in STEM subjects and to produce future scientists, technologists, engineers, and mathematicians. Prior reports on STEM education highlighted the lack of federal governmentwide planning and coordination. Recently, both Congress and the administration called for a more strategic and effective approach to the federal government’s investment in STEM education. The America COMPETES Reauthorization Act of 2010 requires the Director of OSTP to establish a committee under NSTC to develop a 5-year strategic plan and submit annual reports, including a description of the plan, to Congress. The plan is expected to include common measures to assess progress toward the plan’s goals. In addition, the GPRA Modernization Act of 2010 requires agencies to identify program activities that contribute to each performance goal, and, as agencies implement this provision, more information about STEM education efforts in performance plans and reports can be expected. NSTC’s ongoing strategic planning efforts provide an opportunity to develop guidance on how to incorporate STEM- and program-specific education goals and measures in agencies’ performance planning and reporting process and align their STEM education efforts with a governmentwide STEM education strategy. To further strengthen strategic planning and coordination efforts, an accountability and reporting framework should exist to ensure agencies are adhering to NSTC’s strategic plan. While the STEM education programs we reviewed in this report are fragmented and overlapping to some degree, they are not necessarily duplicative of one another. More analysis is needed to identify areas of duplication among federal STEM education programs and ensure that the federal investment in these programs advances NSTC’s 5-year strategic plan that is under development. 
In this era of budget constraints, governmentwide strategic planning can play a critical role in addressing concerns about program fragmentation, overlap, and duplication. Fragmentation and overlap can (1) frustrate federal officials’ efforts to administer programs in a comprehensive manner, (2) limit the ability to determine which programs are most cost-effective, and (3) ultimately increase program administrative costs. Therefore, if NSTC’s 5-year strategic plan is not developed in a way that aligns agencies’ efforts to achieve governmentwide goals, enhances the federal government’s ability to assess what works, and concentrates resources on the programs that advance the strategy, the federal government may spend limited funds inefficiently and ineffectively, in a manner that does not best help improve the nation’s global competitiveness. Understanding program performance and effectiveness is also key to determining where to strategically invest limited federal funds to achieve the greatest impact in developing a pipeline of future workers in STEM fields. Programs need to be appropriately evaluated to determine what is working and how improvements can be made. However, most agencies have not conducted comprehensive evaluations since 2005 to assess the effectiveness of their STEM education programs. Furthermore, methods for disseminating program evaluations, especially to practitioners, could be improved. Agency and program officials would benefit from guidance and information sharing within and across agencies about what is working and how best to evaluate programs. This could not only help improve individual program performance but also inform agency and governmentwide decisions about which programs should continue to be funded. Without an understanding of what is working in some programs, it will be difficult to develop a clear strategy for how to spend limited federal funds.

The Director of OSTP should direct NSTC to:

1. Develop guidance for how agencies can better incorporate each agency’s STEM education efforts and the goals from NSTC’s 5-year STEM education strategic plan into each agency’s own performance plans and reports.
2. Develop a framework for how agencies will be monitored to ensure that they are collecting and reporting on NSTC strategic plan goals. This framework should include alternatives for a sustained focus on monitoring coordination of STEM programs if the NSTC Committee on STEM terminates in 2015 as called for in its charter.
3. Work with agencies, through its strategic planning process, to identify programs that might be candidates for consolidation or elimination. Specifically, this could be achieved through an analysis that includes information on program overlap, similar to the analysis conducted by GAO in this report, and information on program effectiveness. As part of this effort, OSTP should work with agency officials to identify and report any changes in statutory authority necessary to execute each specific program consolidation identified by NSTC’s strategic plan.
4. Develop guidance to help agencies determine the types of evaluations that may be feasible and appropriate for different types of STEM education programs and develop a mechanism for sharing this information across agencies. This could include guidance and sharing of information that outlines practices for evaluating similar types of programs.

We provided a draft of this report to the Office of Science and Technology Policy (OSTP) and the Office of Management and Budget (OMB) for review and comment. OSTP provided technical comments that we incorporated as appropriate. OMB had no concerns with the report. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We are sending copies of this report to relevant congressional committees, OSTP, OMB, and other interested parties. 
In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The objectives of our report were to determine (1) the number of federal agencies and programs that provided funding for science, technology, engineering, and mathematics (STEM) education programs in fiscal year 2010; (2) the extent to which STEM programs have similar objectives, serve similar target groups, provide similar types of services, and, if necessary, what opportunities exist to increase coordination; and (3) the extent to which STEM programs have measured their effectiveness. To inform all of our objectives, we reviewed relevant federal laws and regulations. We also reviewed previous work that was conducted to catalog and assess the federal investment in STEM education programs, including a 2005 GAO study, the 2007 Academic Competitiveness Council (ACC) report, and the 2010 Office of Management and Budget (OMB) inventory. We reviewed relevant literature and past reports on STEM education, including the 2010 President’s Council of Advisors on Science and Technology (PCAST) report entitled Report to the President: Prepare and Inspire: K-12 Education in Science, Technology, Engineering, and Math (STEM) for America’s Future and the National Academies Press report entitled Rising above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future: Committee on Prospering in the Global Economy of the 21st Century: An Agenda for American Science and Technology. 
In addition, we interviewed officials from OMB, the Office of Science and Technology Policy (OSTP), and 13 other federal agencies that administer STEM education programs to gather information on their STEM education efforts, the extent of coordination between programs, and the existence of program evaluations. We attended several STEM education conferences to gather additional perspectives about federal STEM education programs. Finally, we reviewed evaluations provided by program officials as well as agencies’ annual performance plans and reports. To gather information on federal STEM education programs and to assess the level of fragmentation, overlap, and potential duplication among them, we first reviewed past GAO work on assessing the level of fragmentation, overlap, and duplication among other groups of federal programs. Next, we surveyed over 200 programs across 13 agencies that met our definition of a STEM education program (see below) with questions about program objectives, target populations, services provided, interagency coordination, outcome measures and evaluations, and funding information. In December 2011, NSTC’s Committee on STEM Education released its inventory of the federal STEM education portfolio. The NSTC inventory differs from GAO’s survey in that it counts investments and allocations, whereas GAO asked agencies to report on programs and obligations. 
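The combined program list drew on three earlier inventory efforts (the 2005 GAO study, the 2007 ACC report, and the 2010 OMB inventory). A minimal sketch of how such lists can be merged and deduplicated follows; the sample entries are illustrative, not the actual inventory contents.

```python
# Sketch of merging three earlier STEM inventories into one combined list.
# The entries below are examples only, not the real inventory records.
def normalize(name):
    """Crude normalization so the same program matches across inventories."""
    return " ".join(name.lower().split())

gao_2005 = ["Graduate Research Fellowship Program", "Informal Science Education"]
acc_2007 = ["graduate research fellowship program",
            "Robert Noyce Teacher Scholarship Program"]
omb_2010 = ["Informal  Science Education", "Advanced Technological Education"]

combined = {}
for inventory in (gao_2005, acc_2007, omb_2010):
    for name in inventory:
        combined.setdefault(normalize(name), name)  # first spelling seen wins

# Duplicates across the three inventories collapse to a single entry;
# agencies would then confirm, add to, or trim this merged list.
print(sorted(combined.values()))
```

In practice the merged list was then sent to agency officials for confirmation, as described below.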
For the purposes of our study, we defined a federally funded STEM education program as a program funded in fiscal year 2010 by congressional appropriation or allocation that includes one or more of the following as a primary objective:

- attract or prepare students to pursue classes or coursework in STEM areas through formal or informal education activities (informal education programs provide support for activities offered by a variety of organizations that give students learning opportunities outside of formal schooling through contests, science fairs, summer programs, and other means; outreach programs targeted to the general public were not included);
- attract students to pursue degrees (2-year, 4-year, graduate, or doctoral degrees) in STEM fields through formal or informal education activities;
- provide training opportunities for undergraduate or graduate students in STEM fields (this can include grants, fellowships, internships, and traineeships that are targeted to students; general research grants targeted to researchers who may hire a student to work in the lab were not considered STEM education programs);
- attract graduates to pursue careers in STEM fields;
- improve teacher (preservice or in-service) education in STEM areas;
- improve or expand the capacity of K-12 schools or postsecondary institutions to promote or foster education in STEM fields; or
- conduct research to enhance the quality of STEM education programs provided to students.

In addition, we defined STEM education programs to include grants, fellowships, internships, and traineeships. While programs designed to retain current employees in STEM fields were not included, programs that fund the retraining of workers to pursue a degree in a STEM field were included, because such programs help increase the number of students and professionals in STEM fields by retraining non-STEM workers to work in STEM fields. 
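The definition above amounts to a screening test: a program qualifies if at least one primary objective matches the list, and retention-only programs are excluded. A minimal sketch follows; the objective labels paraphrase the report’s criteria, and the function is a hypothetical illustration, not GAO’s actual survey instrument.

```python
# Hypothetical screening predicate based on the STEM education program
# definition. Labels paraphrase the report's criteria.
QUALIFYING_OBJECTIVES = {
    "attract or prepare students for STEM coursework",
    "attract students to pursue STEM degrees",
    "provide training for undergraduate or graduate students in STEM",
    "attract graduates to pursue STEM careers",
    "improve teacher education in STEM",
    "improve or expand institutional capacity for STEM education",
    "conduct research on STEM education quality",
}

def is_stem_education_program(primary_objectives, retention_only=False):
    """A program qualifies if any primary objective matches the list;
    programs that only retain current STEM employees are excluded."""
    if retention_only:
        return False
    return any(obj in QUALIFYING_OBJECTIVES for obj in primary_objectives)

print(is_stem_education_program({"attract graduates to pursue STEM careers"}))
print(is_stem_education_program({"general public outreach"}))
```

This mirrors the role of the screening questions described later, which excluded programs that did not meet the definition.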
For the purposes of this study, we defined the term “program” as an organized set of activities supported by a congressional appropriation or allocation. Further, we defined a program as a single program even when its funds were allocated to other programs as well. We asked agency officials to provide a list of programs that received funds in fiscal year 2010. This included programs that received one-time, limited funds in fiscal year 2010, such as earmarks. We determined that a STEM field should be considered any of the following broad disciplines: earth, atmospheric, and ocean sciences; social sciences (e.g., psychology, sociology, anthropology, cognitive science, economics, behavioral sciences); or technology. In addition, we determined that our definition of STEM education would include health care programs that train students for careers that are primarily in scientific research. We did not, however, include health care programs that train students for careers that are primarily in patient care, that is, those that trained nurses, doctors, dentists, psychologists, or veterinarians. To identify federally funded STEM education programs, first we developed a combined list of programs based on the findings of three previous STEM education inventory efforts completed by GAO in 2005, ACC in 2007, and OMB in 2010. Second, we shared our list with agency officials, provided our definition of STEM education program, and asked officials to make an initial determination about which programs should remain on the list and which programs should be added to the list. If agency officials indicated they wanted to remove a program from our list, we asked for additional information. For example, programs on our initial list may have been terminated or consolidated, or did not receive federal funds in fiscal year 2010. In addition, we asked officials to provide program descriptions, program names, and contact information. 
Next, we reviewed each agency’s submission and individual program information and determination. We also gathered additional information on the program, mainly through agency websites and program materials, and held discussions with program officials to understand the program in more detail. On the basis of this additional information, we excluded programs that we found did not meet our definition of a STEM education program. Once our determinations were made, we asked each agency to confirm the list of programs and the names and contact information for the officials who would be responsible for completing the survey. In total, we determined that 274 programs should receive a survey. We also included several screening questions in the survey to provide an additional verification to ensure the programs met our definition of a STEM education program. Nineteen programs did not pass our screening questions and therefore were excluded from our analysis. All in all, 209 programs were included in our final analysis. For a list of the 209 STEM education programs by agency, see appendix II. For a summary of excluded programs and their exclusion rationales, see table 3. Furthermore, we provide aggregate survey responses from these programs in an e-supplement (GAO-12-110SP). We developed a web-based survey to collect information on federal STEM education programs. See GAO-12-110SP for a copy of the survey’s full text. The survey included questions on program objectives, target groups served, services provided, academic fields of focus, output metrics, outcome measures, obligations, and program evaluations. To minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same, we conducted pretests with 14 different programs in March and April 2011. 
To ensure that we obtained a variety of perspectives on our survey, we selected 14 programs from 11 different agencies that differed in program scope, objectives, services provided, target groups served, evaluations completed, and funding sources. We included budget staff as well as program officials in the pretests to ensure that budget-related terms in the survey were understandable and that the requested budget information was available. An independent GAO reviewer also reviewed a draft of the survey prior to its administration. On the basis of feedback from these pretests and the independent review, we revised the survey to improve its clarity. After completing the pretests, we administered the survey. On May 3, 2011, we sent an e-mail announcement of the survey to the officials responsible for the programs selected for our review, notifying them that our online survey would be activated within a week. On May 11, 2011, we sent a second e-mail message informing officials that the survey was available online; in that message, we also provided them with unique passwords and usernames. We made telephone calls to officials and sent them follow-up e-mail messages, as necessary, to clarify their responses or obtain additional information. We received completed surveys from 269 programs, for a 100 percent response rate. We collected survey responses through August 31, 2011. We used standard descriptive statistics to analyze responses to the survey. Because this was not a sample survey, there were no sampling errors. To minimize other types of errors, commonly referred to as nonsampling errors, and to enhance data quality, we employed survey design practices in the development of the survey and in the collection, processing, and analysis of the survey data. 
For instance, as previously mentioned, we pretested the survey with federal officials to minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same. We further reviewed the survey to ensure that the ordering of survey sections was appropriate and that the questions within each section were clearly stated and easy to comprehend. To reduce nonresponse bias, another source of nonsampling error, we sent e-mail reminder messages to encourage officials to complete the survey. In reviewing the survey data, we performed automated checks to identify inappropriate answers. We further reviewed the data for missing or ambiguous responses and followed up with agency officials when necessary to clarify their responses. To assess output measures, we asked a series of questions about each agency’s procedures, policies, and internal controls for ensuring the quality of the data provided in the survey. For questions about program obligations, we sampled 10 percent of responses and reviewed documentary evidence to corroborate the survey responses. For evaluation questions, we reviewed the program evaluations provided to corroborate survey responses. To assess the reliability of data provided in our survey, we incorporated questions about the reliability of the programs’ data systems, reviewed documentation for a sample of selected questions, conducted internal reliability checks, and followed up as necessary. While we did not verify all responses, on the basis of our application of recognized survey design practices and follow-up procedures, we determined that the data used in this report were of sufficient quality for our purposes. We did not report on data that we found to be of questionable reliability based on our review of the data reliability questions in the survey—such as the number of students and teachers served. 
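The 10 percent documentary-review sample for the obligations questions can be sketched as follows. The program identifiers and fixed seed are placeholders for illustration, not GAO’s actual selection procedure.

```python
import random

# Sketch of drawing a 10 percent verification sample of survey responses
# for documentary review. Program IDs stand in for the 209 responding
# programs; the seed is arbitrary, chosen only for reproducibility.
program_ids = [f"P{i:03d}" for i in range(1, 210)]  # P001 .. P209

rng = random.Random(2011)
sample_size = round(0.10 * len(program_ids))        # 10 percent of 209
verification_sample = rng.sample(program_ids, sample_size)

print(f"Reviewing documentary evidence for {sample_size} of "
      f"{len(program_ids)} programs")
```

A fixed seed makes the draw repeatable, which helps when a second analyst independently verifies the sample selection.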
All data analysis programs were also independently verified by a GAO data analyst for accuracy. Program officials who responded on the survey that an evaluation of their program had been completed in 2005 or later provided us with information about their most recent evaluations. GAO defines “evaluation” as an individual systematic study conducted periodically or on an ad hoc basis to assess how well a program is working. Studies are often conducted by experts external to the program, inside or outside the agency, as well as by program managers. Furthermore, an evaluation typically examines achievement of program objectives in the context of other aspects of program performance or in the context in which it occurs. After ensuring that the evaluations met this definition, we reviewed them to analyze their characteristics, including their methods and designs, and the extent to which program outcomes were measured. In addition, we examined whether the methods and designs were appropriate given the evaluation questions and program context. In total, 61 programs responded that they had completed a program evaluation since 2005, and we reviewed evaluations from 35 of those programs. Because we requested that officials provide a citation for their most recent evaluation, we selected the most recent one for our review. We did not review evaluations from the remaining 26 programs for a variety of reasons; for example, some were committee of visitors reports or other types of reports that did not have evaluation information aligned with the criteria by which we analyzed the other evaluations. We were also unable to obtain 6 of these reports and, as a result, could not analyze them to determine whether they met GAO’s definition of evaluation. For more details about the evaluations in our review, see appendix III. 
We reviewed agencies’ fiscal year 2010 required strategic planning documents—performance plans and performance reports—to determine the extent to which they incorporated program-specific and broad-based STEM goals and objectives. Some performance plans and reports were done at the agency level, while others were done at other levels, such as the institute or office level—in which case we reviewed the documents that covered the particular STEM program(s) in our review. When reviewing these documents, we determined the extent to which:

- agencies made any reference to agencywide STEM initiatives or particular STEM education programs in general, but not in the context of agency goals or outcome measures;
- agencies connected their STEM initiatives or their individual STEM programs to agency goals; and
- agencies articulated outcome measures for their STEM initiatives or for individual STEM programs.

STEM education programs included in our review (excerpted list):

- Exploration Systems Directorate-STEM Education activities
- Minority University Research and Education Program
- NASA Informal Education Opportunities (NIEO)
- Advanced Technological Education (ATE)
- Alliances for Graduate Education and the Professoriate (AGEP)
- Broadening Participation in Computing (BPC)
- Centers for Ocean Science Education Excellence
- CISE Pathways to Revitalized Undergraduate Computing Education (CPATH)
- Cyberinfrastructure Training, Education, Advancement, and Mentoring for Our 21st Century Workforce (CI-TEAM)
- Discovery Research K-12 (DR-K12)
- East Asia & Pacific Summer Institutes for U.S. Graduate Students (EAPSI)
- Engineering Education (EE)
- Enhancing the Mathematical Sciences Workforce in the 21st Century (EMSW21)
- Ethics Education in Science & Engineering (EESE)
- Federal Cyber Service: Scholarship for Service (SFS)
- Geoscience Teacher Training (GEO-Teach)
- Global Learning and Observations to Benefit the Environment (GLOBE)
- Graduate Research Fellowship Program (GRFP)
- Graduate STEM Fellows in K-12 Education Program (GK-12)
- Historically Black Colleges and Universities Undergraduate Program (HBCU-UP)
- Informal Science Education (ISE)
- Integrative Graduate Education and Research Traineeship (IGERT) Program
- Interdisciplinary Training for Undergraduates in Biological and Mathematical Sciences (UBM)
- International Research Experiences for Students (IRES) Program
- Louis Stokes Alliances for Minority Participation (LSAMP)
- Nanotechnology Undergraduate Education in Engineering
- Opportunities for Enhancing Diversity in the Geosciences
- Research and Evaluation on Education in Science and Engineering (REESE)
- Research Experiences for Teachers (RET) in Engineering and Computer Science
- Research Experiences for Undergraduates (REU)
- Research in Disabilities Education (RDE)
- Research on Gender in Science and Engineering (GSE)
- Robert Noyce Teacher Scholarship Program
- Science, Technology, Engineering, and Mathematics Talent Expansion Program (STEP)
- Transforming Undergrad Education in STEM (TUES)
- Tribal Colleges and Universities Program (TCUP)
- Undergraduate Research and Mentoring in the Biological Sciences (URM)
- Minority Serving Institutions Program (MSIP)
- Animal and Plant Health Inspection Service (APHIS)
- National Institute of Food and Agriculture (NIFA)
- National Institute of Standards and Technology (NIST)
- Awards to Stimulate and Support Undergraduate Research Experience (ASSURE)
- Army Educational Outreach Program (AEOP)
- Consortium Research Fellows Program (CRFP)
- National Science Center (NSC)
- Autonomous Robotic Manipulation (ARM)
- Computer Science in Science, Technology, Engineering, and Mathematics Education (CS-STEM)
- National Defense Education Program (NDEP) K-12
- National Defense Education Program (NDEP) Science, Mathematics And Research for Transformation (SMART)
- Uniformed Services University of the Health Sciences (USUHS) Historically Black College and Universities/Minority Institutions Research Education Partnership
- Science and Engineering Apprentice Program (SEAP)
- The Naval Research Enterprise Intern Program (NREIP)
- University / Laboratory Initiative (ULI)
- Developing Hispanic-Serving Institutions: STEM and Articulation Programs (mandatory)
- Academies Creating Teacher Scientists (DOE Acts)
- HBCU Mathematics, Science & Technology, Engineering and Research Workforce Development Program
- Minority University Research Associates Program (MURA)
- National Undergraduate Fellowship Program in Plasma Physics and Fusion Energy Sciences
- Office of Science Graduate Fellowship (SCGF) program
- Pan American Advanced Studies Institute
- Summer Applied Geophysical Experience (SAGE)
- Bridges to the Baccalaureate Program
- CCR/JHU Master of Science in Biotechnology Concentration in Molecular Targets and Drug Discovery Technologies
- Community College Summer Enrichment Program
- Education Programs for Population Research (R25)
- Initiative for Maximizing Student Development
- Material Development for Environmental Health Curriculum
- National Cancer Institute Cancer Education and Career Development Program
- NCRR Science Education Partnership Award (SEPA)
- NHLBI Minority Undergraduate Biomedical Education Program
- NIH Summer Research Experience Programs
- NINDS Diversity Research Education Grants in Neuroscience
- NLM Institutional Grants for Research Training in Biomedical Informatics
- Office of Science Education K-12 Program
- Post-baccalaureate Intramural Research Training Award Program
- Postbaccalaureate Research Education Program (PREP)
- Recovery Act Limited Competition: NIH Challenge Grants in Health and Science Research
- Research Scientist Award for Minority Institutions
Research Supplements to Promote Diversity in Health-Related Research RISE (Research Initiative for Scientific Enhancement) Ruth L. Kirschstein National Research Service Award Institutional Research Training Grants (T32, T35) U.S. Geological Survey (USGS) EDMAP Component of the National Cooperative Geologic Mapping Program National Association of Geoscience Teachers (NAGT)-USGS Cooperative Summer Field Training Program Student Intern in Support of Native American Relations (SISNAR) Federal Aviation Administration (FAA) National Center of Excellence for Aviation Operations Research (NEXTOR)

Different types of evaluation designs can provide rigorous evidence of effectiveness if designed well and implemented with a thorough understanding of their vulnerability to potential sources of bias. There are four main types of evaluations that GAO has identified: implementation evaluations (which assess the extent to which the program is operating as intended), impact evaluations (which include experimental and quasi-experimental designs), outcome evaluations (which assess the extent to which a program achieves its objectives), and cost-benefit and cost-effectiveness analyses (which compare a program's outputs or outcomes with the costs to produce them). Deciding which evaluation type to use involves a variety of considerations, as no one evaluation type is suitable for all programs. For instance, as we have previously reported, an impact evaluation is more likely to provide useful information about what works when the intervention consists of clearly defined activities and goals and has been well implemented. One type of impact evaluation, the quasi-experimental comparison group design, compares outcomes for program participants with those of a similar group not in the program; it is used in instances when random assignment to the participant and nonparticipant groups is not possible, ethical, or practical. It is most successful in providing credible estimates of program effectiveness when the groups are formed in parallel ways and are not based on self-selection. On the other hand, case studies are recommended for assessing the effectiveness of complex interventions in limited circumstances when assessing comprehensive reforms that are so deeply integrated with the context (for example, the community) that no truly adequate comparison case can be found. Furthermore, every research method has inherent limitations; therefore, it is often advantageous to combine multiple measures or two or more designs in a study or group of studies to obtain a more comprehensive picture of the program's effect. As we have also previously reported, the evaluation methods literature describes a variety of issues to consider in planning which methods to use in carrying out an evaluation, including the expected use of the evaluation, the nature and implementation of program activities, and the resources available for the evaluation.
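The comparison group logic described above can be illustrated with a short numerical sketch (all scores below are hypothetical, invented purely for illustration; a real evaluation would also test statistical significance and adjust for differences between the groups):

```python
# Quasi-experimental comparison group sketch: compare mean outcomes for
# program participants against a similar group that did not participate.
# All data here are hypothetical.
from statistics import mean

participants = [78, 85, 82, 90, 76, 87]  # post-program scores, participants
comparison = [74, 80, 79, 83, 72, 80]    # scores for a matched comparison group

# The simple effect estimate is the difference in group means.
estimated_effect = mean(participants) - mean(comparison)
print(f"Estimated program effect: {estimated_effect:+.1f} points")
```

Such an estimate is credible only when, as noted above, the groups are formed in parallel ways rather than by self-selection.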
We identified the following methods and designs of evaluation in our review, which may be used to carry out one or more of the main types of evaluation listed above: committee of visitors and other report types, which are generally external peer reviews that examine programs' managerial stewardship, compare plans with progress made, and evaluate outcomes to determine whether the research contributes to the agency's mission and goals; experimental methods, which involve randomly assigning one group to participate in a program and another group not to participate, in order to compare the outcomes of both groups; mixed methods, which combine qualitative and quantitative designs; qualitative methods, such as interviews or focus groups; surveys, which involve the systematic collection of data from a respondent using a structured instrument (i.e., a questionnaire) to ensure that the collected data are as accurate as possible; and quasi-experimental comparison groups. In addition, there were two evaluations based solely on a compilation of grantee reports. As stated previously, other evaluations also used grantee evaluations, but these used other data sources to inform their results, and so were classified as using either mixed or qualitative methods. The most common evaluation designs that we classified programs as using were the committee of visitors and mixed methods. We reviewed 35 evaluations from the following agencies and programs, and determined their primary method for assessing effectiveness: The following are different types of reports, including the committee of visitors, that programs used to assess the effectiveness of their STEM education programs. As stated in appendix I, we did consider these to be evaluations but did not review them because they did not align with the criteria we used to assess the evaluations.
The following staff members made key contributions to this report: Bill Keller, Assistant Director; Susan Baxter; James Bennett; Karen Brown; David Chrisinger; Melinda Cordero; Elizabeth Curda; Karen Febey; Jill Lacey; Ben Licht; Amy Radovich; James Rebbe; Nyree Ryder Tee; Martin Scire; Ryan Siegel; and Walter Vance.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-635T. Washington, D.C.: May 25, 2011.
Managing for Results: GPRA Modernization Act Implementation Provides Important Opportunities to Address Government Challenges. GAO-11-617T. Washington, D.C.: May 10, 2011.
Performance Measurement and Evaluation: Definitions and Relationships (Supersedes GAO-05-739SP). GAO-11-646SP. Washington, D.C.: May 2011.
Opportunities to Reduce Potential Duplication in Federal Teacher Quality Programs. GAO-11-510T. Washington, D.C.: April 13, 2011.
Government Performance: GPRA Modernization Act Provides Opportunities to Help Address Fiscal, Performance, and Management Challenges. GAO-11-466T. Washington, D.C.: March 16, 2011.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-441T. Washington, D.C.: March 3, 2011.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.
Program Evaluation: Experienced Agencies Follow a Similar Model for Prioritizing Research. GAO-11-176. Washington, D.C.: January 14, 2011.
America COMPETES Act: It Is Too Early to Evaluate Programs' Long-Term Effectiveness, but Agencies Could Improve Reporting of High-Risk, High-Reward Research Priorities. GAO-11-127R. Washington, D.C.: October 7, 2010.
Federal Education Funding: Overview of K-12 and Early Childhood Education Programs. GAO-10-51. Washington, D.C.: January 27, 2010.
Program Evaluation: A Variety of Rigorous Methods Can Help Identify Effective Interventions. GAO-10-30. Washington, D.C.: November 23, 2009.
Government Performance: Strategies for Building a Results-Oriented and Collaborative Culture in the Federal Government. GAO-09-1011T. Washington, D.C.: September 24, 2009.
Teacher Quality: Sustained Coordination among Key Federal Education Programs Could Enhance State Efforts to Improve Teacher Quality. GAO-09-593. Washington, D.C.: July 6, 2009.
Higher Education: Federal Science, Technology, Engineering, and Mathematics Programs and Related Trends. GAO-06-114. Washington, D.C.: October 12, 2005.
Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.
Science, technology, engineering, and mathematics (STEM) education programs help to enhance the nation’s global competitiveness. Many federal agencies have been involved in administering these programs. Concerns have been raised about the overall effectiveness and efficiency of STEM education programs. GAO examined (1) the number of federal agencies and programs that provided funding for STEM education programs in fiscal year 2010; (2) the extent to which STEM education programs have similar objectives, serve similar target groups, and provide similar types of services, and, if necessary, what opportunities exist to increase coordination; and (3) the extent to which STEM education programs measured effectiveness. To answer these questions, GAO reviewed relevant federal laws, regulations, and plans; surveyed federal STEM education programs; analyzed programs’ STEM evaluations; and interviewed relevant federal officials. An electronic supplement—GAO-12-110SP—provides survey results. In fiscal year 2010, 13 federal agencies invested over $3 billion in 209 programs designed to increase knowledge of STEM fields and attainment of STEM degrees. The number of programs within agencies ranged from 3 to 46, with the Departments of Health and Human Services and Energy and the National Science Foundation administering more than half of these programs. Almost a third of the programs had obligations of $1 million or less, while some had obligations of over $100 million. Beyond programs specifically focused on STEM education, agencies funded other broad efforts that contributed to enhancing STEM education. Eighty-three percent of the programs GAO identified overlapped to some degree with at least 1 other program in that they offered similar services to similar target groups in similar STEM fields to achieve similar objectives. Many programs have a broad scope—serving multiple target groups with multiple services. 
However, even when programs overlap, the services they provide and the populations they serve may differ in meaningful ways and would therefore not necessarily be duplicative. Nonetheless, the programs are similar enough that they need to be well coordinated and guided by a robust strategic plan. Currently, though, less than half of the programs GAO surveyed indicated that they coordinated with other agencies that administer similar STEM education programs. Current efforts to inventory federal STEM education activities and develop a 5-year strategic plan present an opportunity to enhance coordination, align governmentwide efforts, and improve the efficiency of limited resources by identifying opportunities for program consolidation and reducing administrative costs. Agencies' limited use of performance measures and evaluations may hamper their ability to assess the effectiveness of their individual programs as well as the overall STEM education effort. Specifically, program officials varied in their ability to provide reliable output measures—for example, the number of students, teachers, or institutions directly served by their program. Further, most agencies did not use outcome measures in a way that is clearly reflected in their performance planning documents. This may hinder decision makers' ability to assess how agencies' STEM education efforts contribute to agencywide performance goals and the overall federal STEM effort. In addition, a majority of programs had not conducted a comprehensive evaluation since 2005 to assess effectiveness, and the evaluations GAO reviewed did not always align with program objectives. Finally, GAO found that completed STEM education evaluation results had not always been disseminated in a fashion that facilitated knowledge sharing among practitioners and researchers.
GAO recommends that as OSTP leads the governmentwide STEM education strategic planning effort, it should work with agencies to better align their activities with a governmentwide strategy, develop a plan for sustained coordination, identify programs for potential consolidation or elimination, and assist agencies in determining how to better evaluate their programs. OSTP provided technical comments that we incorporated as appropriate. OMB had no concerns with the report.
Virtually all federal operations are supported by automated systems, mobile devices, and electronic media that may contain sensitive information such as Social Security numbers, medical records, law enforcement data, national or homeland security information, and proprietary information that could be inappropriately disclosed, browsed, or copied for improper or criminal purposes. In our survey of 24 major federal agencies, 10 agencies reported having systems that contain sensitive medical information, 16 reported having systems that contain sensitive regulatory information, 19 reported having systems that contain sensitive personal information, and 20 reported having systems that contain sensitive program-specific information. It is important for agencies to safeguard sensitive information because, if left unprotected, the information could be compromised—leading to loss or theft of resources (such as federal payments and collections), modification or destruction of data, or unauthorized use of computer resources, including launching attacks on other computer systems. Many factors can threaten the confidentiality, integrity, and availability of sensitive information. Cyber threats to federal systems and critical infrastructures containing sensitive information can be intentional or unintentional, targeted or nontargeted, and can come from a variety of sources. Intentional threats include both targeted and nontargeted attacks. A targeted attack occurs when a group or individual specifically attacks an information system. A nontargeted attack occurs when the intended target of the attack is uncertain, such as when a virus, worm, or malware is released on the Internet with no specific target. Unintentional threats can be caused by software upgrades or maintenance procedures that inadvertently disrupt systems. 
The Federal Bureau of Investigation has identified multiple sources of threats to our nation's critical information systems, including those from foreign nation states engaged in information warfare, domestic criminals, hackers, virus writers, and disgruntled current and former employees working within an organization. There is increasing concern among both government officials and industry experts regarding the potential for a cyber attack. According to the Director of National Intelligence, "our information infrastructure—including the Internet, telecommunications networks, computer systems, and embedded processors and controllers in critical industries—increasingly is being targeted for exploitation and potentially for disruption or destruction by a growing array of state and non-state adversaries. Over the past year, cyber exploitation activity has grown more sophisticated, more targeted, and more serious. The intelligence community expects these trends to continue in the coming year." Threats to mobile devices are posed by people with malicious intentions, including causing mischief and disruption as well as committing identity theft and other forms of fraud. For example, malware threats can infect data stored on devices, and data in transit can be intercepted through many means, including from e-mail, Web sites, file downloads, file sharing, peer-to-peer software, and instant messaging. Another threat to mobile devices is the loss or theft of the device. Someone who has physical access to an electronic device can attempt to view the information stored on it. The need for effective information security policies and practices is further illustrated by the increasing number of security incidents reported by federal agencies that put sensitive information at risk.
Personally identifiable information about millions of Americans has been lost, stolen, or improperly disclosed, thereby potentially exposing those individuals to loss of privacy, identity theft, and financial crimes. Reported attacks and unintentional incidents involving critical infrastructure systems demonstrate that a serious attack could be devastating. Agencies have experienced a wide range of incidents involving data loss or theft, computer intrusions, and privacy breaches, underscoring the need for improved security practices. When incidents occur, agencies are to notify the federal information security incident center—the U.S. Computer Emergency Readiness Team (US-CERT). As shown in figure 1, the number of incidents reported by federal agencies to US-CERT has increased dramatically over the past 3 years, increasing from 3,634 incidents reported in fiscal year 2005 to 13,029 incidents in fiscal year 2007 (about a 259 percent increase). Data breaches present federal agencies with potentially serious and expensive consequences; for example, a security breach might require an agency to fund the burdensome costs of notifying affected individuals and associated credit monitoring services or it could jeopardize the agency’s mission. Implementation of a risk-based framework of management, operational, and technical controls that includes controls such as encryption technology can help guard against the inadvertent compromise of sensitive information. While encrypting data might add to operational burdens by requiring individuals to enter pass codes or use other means to encrypt and decrypt data, it can also help to mitigate the risk associated with the theft or loss of computer equipment that contains sensitive data. Protecting information has become more challenging in today’s IT environment of highly mobile workers and decreasing device size. 
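The reported growth rate can be checked directly from the incident counts cited above:

```python
# Security incidents reported to US-CERT, from the figures cited in the text.
fy2005_incidents = 3_634
fy2007_incidents = 13_029

pct_increase = (fy2007_incidents - fy2005_incidents) / fy2005_incidents * 100
print(f"Increase: about {pct_increase:.0f} percent")
```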
Using small, easily pilferable devices such as laptop computers, handheld personal digital assistants, thumb-sized Universal Serial Bus (USB) flash drives, and portable electronic media such as CD-ROMs and DVDs, employees can access their agency’s systems and information from anywhere. When computers were larger and stationary, sensitive information that was stored on mainframe computers was accessible by only a limited number of authorized personnel via terminals that were secured within the physical boundaries of the agency’s facility. Now, mobile workers can process, transport, and transmit sensitive information anywhere they work. This transition from a stationary environment to a mobile one has changed the type of controls needed to protect the information. Encryption technologies, among other controls, provide agencies with an alternate method of protecting sensitive information that compensates for the protections offered by the physical security controls of an agency facility when the information is removed from, or accessed from, outside of the agency location. Data breaches can be reduced through the use of encryption, which is the process of transforming plaintext into ciphertext using a special value known as a key and a mathematical process called an algorithm (see fig. 2). Cryptographic algorithms are designed to produce ciphertext that is unintelligible to unauthorized users. Decryption of ciphertext—returning the encoded data to plaintext—is possible by using the proper key. Encryption can protect sensitive information in storage and during transmission. Encryption of data in transit hides information as it moves, for example, between a database and a computing device over the Internet, local networks, or via fax or wireless networks. Stored data include data stored in files or databases, for example, on a personal digital assistant, a laptop computer, a file server, a DVD, or a network storage appliance. 
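The plaintext-to-ciphertext round trip described above can be sketched as follows. This is a deliberately simplified toy (an XOR stream keyed by SHA-256 in a counter construction) meant only to show the roles of the key and the algorithm; it is not a secure cipher, and real products use vetted algorithms such as AES from reviewed cryptographic libraries:

```python
# Toy illustration only, NOT a secure cipher. It shows the round trip the
# text describes: plaintext + key -> ciphertext (encrypt), and
# ciphertext + the same key -> plaintext (decrypt).
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from the key (counter-mode sketch)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR each plaintext byte with the keystream; XOR is its own inverse,
    # so the same operation also decrypts.
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # identical operation run on the ciphertext

message = b"sensitive personal information"
key = b"example-key"  # illustrative key, not a real secret
ciphertext = encrypt(message, key)
assert ciphertext != message                 # unintelligible without the key
assert decrypt(ciphertext, key) == message   # the proper key recovers the data
```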
Encryption may also be used in system interconnection devices such as routers, switches, firewalls, servers, and computer workstations to apply the appropriate level of encryption required for data that pass through the interconnection. Commercially available encryption technologies can help federal agencies protect sensitive information and reduce the risks of its unauthorized disclosure and modification. These technologies have been designed to protect information stored on computing devices or other media and transmitted over wired or wireless networks. Because the capability of each type of encryption technology to protect information is limited by the boundaries of the file, folder, drive, or network covered by that type of technology, a combination of several technologies may be required to ensure that sensitive information is continuously protected as it flows from one point, such as a remote mobile device, to another point, such as a network or portable electronic media. For example, one product that encrypts a laptop's hard drive may not provide any protection for files copied to portable media, attached to an e-mail, or transmitted over a network. Agencies have several options available when selecting an encryption technology for protecting stored data. According to NIST guidance on encrypting stored information, these include full disk, hardware-based, file, folder, or virtual disk encryption. Through the use of these technologies, encryption can be applied granularly, to an individual file that contains sensitive information, or broadly, by encrypting an entire hard drive. The appropriate encryption technology for a particular situation depends primarily on the type of storage, the amount of information that needs to be protected, and the threats that need to be mitigated. Storage encryption technologies require users to authenticate successfully before accessing the information that has been encrypted.
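The authenticate-before-access requirement is commonly implemented by deriving the encryption key from the user's passphrase, so that a wrong passphrase yields a wrong key and the data remain unintelligible. A minimal sketch using the standard library's PBKDF2 follows; the salt, passphrases, and iteration count are illustrative placeholders, not recommendations:

```python
# Sketch of deriving a storage-encryption key from a user passphrase.
# A different passphrase produces a different key, so decryption fails
# unless the user authenticates with the correct passphrase.
import hashlib

salt = b"per-device-salt"  # illustrative; stored with the encrypted data, not secret

def derive_key(passphrase: str) -> bytes:
    # PBKDF2-HMAC-SHA256; iteration count is a placeholder for illustration.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

right = derive_key("correct passphrase")   # hypothetical user passphrase
wrong = derive_key("guessed passphrase")   # an attacker's guess
assert right != wrong  # the wrong passphrase cannot reproduce the key
print(len(right), "byte key derived from passphrase")
```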
The combination of encryption and authentication controls access to the stored information. Full disk encryption software encrypts all data on the hard drive used to boot a computer, including the computer’s operating system, and permits access to the data only after successful authentication to the full disk encryption software. The majority of current full disk encryption products are implemented entirely within a software application. The software encrypts all information stored on the hard drive and installs a special environment to authenticate the user and begin decrypting the drive. Users enter their user identification and password before decrypting and starting the operating system. Once a user authenticates to the operating system by logging in, the user can access the encrypted files without further authentication, so the security of the solution is heavily dependent on the strength of the operating system authenticator. When a computer is turned off, all the information encrypted by full disk encryption is protected, assuming that pre-boot authentication is required. After the computer is booted, full disk encryption provides no protection and the operating system becomes fully responsible for protecting the unencrypted information. Full disk encryption can also be built into a hard drive. Hardware and software-based full disk encryption offer similar capabilities through different mechanisms. When a user tries to boot a device protected with hardware-based full disk encryption, the hard drive prompts the user to authenticate before it allows an operating system to load. The full disk encryption capability is built into the hardware in such a way that it cannot be disabled or removed from the drive. The encryption code and authenticators, such as passwords and cryptographic keys, are stored securely on the hard drive. 
Because the encryption and decryption are performed by the hard drive itself, without any operating system participation, typically there is very little performance impact. A major difference between software- and hardware-based full disk encryption is that software-based full disk encryption can be centrally managed, but hardware-based full disk encryption can usually be managed only locally. This makes key management and recovery actions considerably more resource-intensive and cumbersome for hardware-based full disk encryption than for software-based. Another major difference is that because hardware-based full disk encryption performs all cryptographic processing within the hard drive's hardware, it does not need to place its cryptographic keys in the computer's memory, where they could be exposed to malware and other threats. A third significant difference is that hardware-based full disk encryption does not cause conflicts with software that modifies the master boot record, for example, software that allows the use of more than one operating system on a hard drive. File, folder, and virtual disk encryption are all used to encrypt specified areas of data on a storage medium such as a laptop hard drive. File encryption encrypts files, a collection of information logically grouped into a single entity and referenced by a unique name, such as a file name. Folder encryption encrypts folders, a type of organizational structure used to group files. Virtual disk encryption encrypts a special type of file—called a container—that is used to encompass and protect other files. File encryption is the process of encrypting individual files on a storage medium and permitting access to the encrypted data only after proper authentication is provided. Folder encryption is very similar to file encryption, except that it addresses individual folders instead of files.
Some operating systems offer built-in file and/or folder encryption capabilities, and many third-party programs are also commercially available. File/folder encryption does not provide any protection for data outside the protected files or folders, such as unencrypted temporary files that may contain the contents of any unencrypted files being held in computer memory. Virtual disk encryption is the process of encrypting a container. The container appears as a single file but can hold many files and folders that are not seen until the container is decrypted. Access to the data within the container is permitted only after proper authentication is provided, at which point the container appears as a logical disk drive that may contain many files and folders. Virtual disk encryption does not provide any protection for data created outside the protected container, such as unencrypted temporary files that could contain the contents of any unencrypted files being held in computer memory. Sensitive data are also at risk during transmission across unsecured—untrusted—networks such as the Internet. For example, as reported by NIST, transmission of e-mail containing sensitive information or direct connections for the purpose of processing information between a mobile device and an internal trusted system can expose sensitive agency data to monitoring or interception. According to both NIST and an industry source, agencies can use commercially available encryption technologies such as virtual private networks and digital signatures to encrypt sensitive data while they are in transit over a wired or wireless network. According to NIST, a virtual private network is a data network that enables two or more parties to communicate securely across a public network by creating a private connection, or "tunnel," between them.
Because a virtual private network can be used over existing networks such as the Internet, it can facilitate the secure transfer of sensitive data across public networks. Virtual private networks can also be used to provide a secure communication mechanism for sensitive data such as Web-based electronic transactions and to provide secure remote access to an organization’s resources. Properly implemented digital signature technology uses public key cryptography to provide authentication, data integrity, and nonrepudiation for a message or transaction. As NIST states, public key infrastructures (PKI) can be used not only to encrypt data but also to authenticate the identity of specific users. Just as a physical signature provides assurance that a letter has been written by a specific person, a digital signature is an electronic credential created using a party’s private key with an encryption algorithm. When it is added to a document, it can be used to confirm the identity of a document’s sender since it also contains the user’s public key and name of the encryption algorithm. Validating the digital signature not only confirms who signed it, but also ensures that there have been no alterations to the document since it was signed. Digital signatures may also be employed in authentication protocols to confirm the identity of the user before establishing a session. Specifically, digital signatures can be used to provide higher assurance authentication (in comparison with passwords) when establishing virtual private networks. Digital signatures are often used in conjunction with digital certificates. A digital certificate is an electronic credential that guarantees the association between a public key and a specific entity, such as a person or organization. As specified by NIST, the signature on the document can be validated by using the public key from the digital certificate issued to the signer. 
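The sign-and-verify pattern can be shown in miniature. True digital signatures rely on a public/private key pair (for example, RSA or ECDSA from a cryptographic library); the standard library's HMAC is used here only as a stand-in that demonstrates tamper detection, though, because it uses a shared secret rather than a key pair, it provides integrity but not nonrepudiation:

```python
# Miniature sign/verify pattern. A real digital signature would be created
# with the signer's private key and checked with the matching public key;
# HMAC with a shared secret is a stdlib-only stand-in for illustration.
import hashlib
import hmac

key = b"shared-secret"  # illustrative; a real signature uses a private key
document = b"Official correspondence"

tag = hmac.new(key, document, hashlib.sha256).digest()  # "sign" the document

# "Verify": recompute the tag and compare in constant time.
assert hmac.compare_digest(tag, hmac.new(key, document, hashlib.sha256).digest())

# Any alteration to the document since signing invalidates the tag.
altered = b"Official correspondance"
assert not hmac.compare_digest(tag, hmac.new(key, altered, hashlib.sha256).digest())
```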
By validating the digital certificate, the system can confirm that the user's relationship to the organization is still valid. The most common use of digital certificates is to verify that a user sending a message is who he or she claims to be and to provide the receiver with a means to encode a reply. For example, an agency virtual private network could use these certificates to authenticate the identity of the user, verify that the key is still good, and confirm that he or she is still employed by the agency. NIST guidance further states that encryption software can be used to protect the confidentiality of sensitive information stored on handheld mobile computing devices and mirrored on the desktop computer. The information on the handheld's add-on backup storage modules can also be encrypted when not in use. This additional level of security can be added to provide an extra layer of defense, further protecting sensitive information stored on handheld devices. In addition, encryption technologies can protect data on handheld devices while the data are in transit. Users often subscribe to third-party wireless Internet service providers, which use untrusted networks; therefore, the handheld device would require virtual private network software and a supporting corporate system to create a secure communications tunnel to the agency. Table 1 describes the types of commercial encryption technologies available to agencies. While many technologies exist to protect data, implementing them incorrectly—such as failing to properly configure the product, secure encryption keys, or train users—can result in a false sense of security or even render data permanently inaccessible. See appendix II for a discussion of decisions agencies face and important considerations for effectively implementing encryption to reduce agency risks. Although federal laws do not specifically require agencies to encrypt sensitive information, they give federal agencies responsibilities for protecting it.
Specifically, FISMA, included within the E-Government Act of 2002, provides a comprehensive framework for ensuring the effectiveness of information security controls over federal agency information and information systems. In addition, other laws frame practices for protecting specific types of sensitive information. OMB is responsible for establishing governmentwide policies and for providing guidance to agencies on how to implement the provisions of FISMA, the Privacy Act, and other federal information security and privacy laws. In the wake of recent security breaches involving personal data, OMB issued guidance in 2006 and 2007 reiterating the requirements of these laws and guidance. In this guidance, OMB directed, among other things, that agencies encrypt data on mobile computers or devices and follow NIST security guidelines. In support of federal laws and policies, NIST provides federal agencies with planning and implementation guidance and mandatory standards for identifying and categorizing information types, and for selecting adequate controls based on risk, such as encryption, to protect sensitive information. Although federal laws do not specifically address the use of encryption, they provide a framework for agencies to use to protect their sensitive information. FISMA, which is Title III of the E-Government Act of 2002, emphasizes the need for federal agencies to develop, document, and implement programs using a risk-based approach to provide information security for the information and information systems that support their operations and assets. 
Its purposes include the following: providing a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets; recognizing the highly networked nature of the current federal computing environment and providing effective governmentwide management and oversight of the related information security risks, including coordination of information security efforts throughout the civilian, national security, and law enforcement communities; providing for development and maintenance of minimum controls required to protect federal information and information systems; acknowledging that commercially developed information security products offer advanced, dynamic, robust, and effective information security solutions, reflecting market solutions for the protection of critical information infrastructures important to the national defense and economic security of the nation that are designed, built, and operated by the private sector; and recognizing that the selection of specific technical hardware and software information security solutions should be left to individual agencies choosing from among commercially developed products. This act requires agencies to provide cost-effective controls to protect federal information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction, and it directs OMB and NIST to establish policies and standards to guide agency implementation of these controls, which may include the use of encryption. The E-Government Act of 2002 also strives to enhance protection for personal information in government information systems by requiring that agencies conduct privacy impact assessments. A privacy impact assessment is an analysis of how personal information is collected, stored, shared, and managed in a federal system. 
Additionally, the Privacy Act of 1974 regulates agencies’ collection, use, and dissemination of personal information maintained in systems of records. In this regard, the Privacy Act requires agencies to establish appropriate administrative, technical, and physical safeguards to ensure the security and confidentiality of records and to protect against any threats or hazards to their security or integrity that could result in substantial harm, embarrassment, inconvenience, or unfairness to any individual on whom information is maintained. Congress has also passed laws requiring protection of sensitive information that are agency-specific or that target a specific type of information. These laws include the Health Insurance Portability and Accountability Act of 1996, which requires additional protections for sensitive health care information, and the Veterans Benefits, Health Care, and Information Technology Act, enacted in December 2006, which establishes information technology security requirements for personally identifiable information that apply specifically to the Department of Veterans Affairs. Table 2 summarizes the laws that provide a framework for agencies to use in protecting sensitive information. OMB is responsible for establishing governmentwide policies and for providing guidance to agencies on how to implement the provisions of FISMA, the Privacy Act, and other federal information security and privacy laws. OMB policy expands on the risk-based information security program requirements of FISMA in its 2002 and 2004 guidance; in the wake of recent security breaches involving personal data, OMB also outlined minimum encryption practices required of federal agencies in guidance issued in 2006 and 2007. 
Specifically, OMB memorandum M-04-04, E-Authentication Guidance for Federal Agencies, requires that agencies implement specific security controls recommended by NIST, including the use of approved cryptographic techniques for certain types of electronic transactions that require a specified level of protection. OMB memorandum M-06-16, Protection of Sensitive Agency Information, recommends, among other things, that agencies encrypt all agency data on mobile computers and devices or obtain a waiver from the Deputy Secretary of the agency that the device does not contain sensitive information. The memorandum also recommends that agencies use a NIST checklist, provided in the memorandum, to verify that information requiring protection is appropriately categorized and assigned an appropriate risk impact category. OMB memorandum M-07-16, Safeguarding Against and Responding to the Breach of Personally Identifiable Information, restated the M-06-16 recommendations as requirements and also required the use of NIST-certified cryptographic modules. These OMB memorandums, which are significant to the use of encryption, are briefly described in table 3. In support of federal laws and policies, NIST provides federal agencies with implementation guidance and mandatory standards for identifying and categorizing information types and for selecting adequate controls based on risk, such as encryption, to protect sensitive information. Specifically, NIST Special Publication 800-53 instructs agencies to follow the implementation guidance detailed in supplemental NIST publications, including the following: NIST Special Publication 800-21, Guideline for Implementing Cryptography in the Federal Government, guides the implementation of encryption by agencies. 
It recommends that prior to selecting a cryptographic method, or combination of methods, agencies address several implementation considerations when formulating an approach and developing requirements for integrating cryptographic methods into new or existing systems, including installing and configuring appropriate cryptographic components associated with selected encryption technologies; monitoring the continued effectiveness and functioning of encryption technologies; developing policies and procedures for life cycle management of cryptographic components (such as procedures for management of encryption keys, backup and restoration of services, and authentication techniques); and training users, operators, and system engineers. Special Publication 800-57, Recommendation for Key Management, provides guidance to federal agencies on how to select and implement cryptographic controls for protecting sensitive information by describing cryptographic algorithms, classifying different types of keys used in encryption, and providing information on key management. Special Publication 800-60, Guide for Mapping Types of Information and Information Systems to Security Categories, provides implementation guidance on the assignment of security categories to information and information systems using FIPS 199. Special Publication 800-63, Electronic Authentication Guideline, addresses criteria for implementing controls that correspond to the assurance levels of OMB memorandum M-04-04 such that, if agencies assign a level 2, 3, or 4 to an electronic transaction, they are required to implement specific security controls, including the use of approved cryptographic techniques. 
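The key-management considerations above—generation, backup and restoration, and end-of-life handling—can be illustrated with a minimal standard-library sketch. The key here stands in for one produced inside a FIPS-validated cryptographic module, which this sketch is not; the check-value construction is a simple illustrative device, not a NIST-specified procedure.

```python
import hashlib
import hmac
import secrets

def generate_key(bits: int = 256) -> bytearray:
    """Generate a random symmetric key (mutable so it can be zeroized later)."""
    return bytearray(secrets.token_bytes(bits // 8))

def key_check_value(key: bytes) -> str:
    """Short HMAC-based fingerprint used to confirm that a key restored
    from backup matches the original, without exposing key material."""
    return hmac.new(bytes(key), b"key-check", hashlib.sha256).hexdigest()[:8]

def destroy_key(key: bytearray) -> None:
    """Overwrite key material in place at the end of its life cycle."""
    for i in range(len(key)):
        key[i] = 0

key = generate_key()
kcv = key_check_value(key)
restored_matches = key_check_value(key) == kcv  # backup/restoration sanity check
destroy_key(key)
```

The point of the sketch is that each life-cycle step—creation, verification after restoration, and destruction—is an explicit, auditable operation, which is what documented key-management procedures are meant to guarantee.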
Special Publication 800-77, Guide to IPsec VPNs, provides technical guidance to agencies in the implementation of virtual private networks, such as identifying needs and designing, deploying, and managing the appropriate solution, including the use of Federal Information Processing Standards (FIPS)-compliant encryption algorithms. NIST also issues FIPS, which frame the critical elements agencies are required to follow to protect sensitive information and information systems. Specifically, under FIPS 140-2, Security Requirements for Cryptographic Modules, agencies are required to encrypt agency data, where appropriate, using NIST-certified cryptographic modules. This standard specifies the security requirements for a cryptographic module used within a security system protecting sensitive information in computer and telecommunication systems (including voice systems) and provides four increasing, qualitative levels of security intended to cover a wide range of potential applications and environments. Several standards describe the technical specifications for cryptographic algorithms, including those required when using digital signatures. FIPS 199 provides agencies with criteria to identify and categorize all of their information and information systems based on the objectives of providing appropriate levels of information security according to a range of risk levels. FIPS 200 requires a baseline of minimum information security controls for protecting the confidentiality, integrity, and availability of federal information systems and the information processed, stored, and transmitted by those systems. FIPS 200 directs agencies to implement the baseline control recommendations of NIST Special Publication 800-53. 
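The categorization step can be reduced to a small computation. FIPS 199 expresses a system's security category as an impact level (low, moderate, or high) for each of confidentiality, integrity, and availability, and FIPS 200 applies the highest of the three—the "high water mark"—as the overall system impact level. The sketch below illustrates that rule; the example system and its impact values are hypothetical.

```python
# Numeric ordering of the three FIPS 199 impact levels.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def overall_impact(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the high-water-mark impact level across the three
    security objectives, as FIPS 200 applies to a whole system."""
    return max((confidentiality, integrity, availability), key=LEVELS.__getitem__)

# A hypothetical personnel system holding sensitive PII:
system_level = overall_impact("moderate", "moderate", "low")  # → "moderate"
```

The resulting level then determines which Special Publication 800-53 baseline (and therefore which encryption-related controls) the system must implement.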
The following security-related areas in FIPS 200, whose controls are further detailed in Special Publication 800-53, pertain to the use of encryption: Access control—describes controls for developing and enforcing policies and procedures for access control, including remote access, wireless access, and access for portable and mobile devices, using mechanisms such as authentication and encryption. Contingency planning—includes controls to ensure that the organization protects system backup information from unauthorized modification by employing appropriate mechanisms such as digital signatures. Identification and authentication—describes controls for developing and documenting identification and authentication policies and procedures. Maintenance—includes a remote maintenance control that addresses how an organization approves, controls, and monitors remotely executed maintenance and diagnostic activities, including using encryption and decryption of diagnostic communications. Media protection—describes developing policies and procedures for media protection, including media storage (which may include encrypting stored data) and transport. System and communications protection—includes controls to ensure the integrity and confidentiality of information in transit by employing cryptographic mechanisms if required, including establishing and managing cryptographic keys. NIST publications pertaining to the use of encryption in federal agencies are briefly described in table 4. The extent to which 24 major federal agencies reported that they had implemented encryption and developed plans to implement encryption varied across agencies. Although all agencies had initiated efforts to encrypt stored and transmitted sensitive agency information, none had completed these efforts or developed and documented comprehensive plans to guide their implementation of encryption technologies. 
Our tests at 6 selected agencies revealed weaknesses in encryption implementation practices involving the installation and configuration of FIPS-validated encryption products, the monitoring of the effectiveness of installed encryption technologies, the development and documentation of policies and procedures for managing these technologies, and the training of personnel in the proper use of installed encryption products. As a result of these weaknesses, federal information may remain at increased risk of unauthorized disclosure, loss, and modification. All 24 major federal agencies reported varying degrees of progress in their efforts to encrypt stored and transmitted sensitive agency information. While most of the agencies reported that they had not completed efforts to encrypt stored sensitive information, they reported being further along with efforts to encrypt transmitted sensitive information. Preparing for the implementation of encryption technologies involves numerous considerations. In response to our survey, agencies reported that they had encountered challenges that hinder the implementation of encryption. See appendix III for a discussion of the hindrances identified by agencies. OMB requires agencies to encrypt all agency data on mobile computers and devices or obtain a waiver from the Deputy Secretary of the agency stating that the device does not contain sensitive information. Of 24 agencies that reported from July through September 2007 on the status of their efforts to encrypt sensitive information stored on their laptops and handheld mobile devices, 8 agencies reported having encrypted information on less than 20 percent of these devices and 5 agencies reported having encrypted information on between 20 and 39 percent of these devices (see fig. 3). Overall, the 24 agencies reported that about 70 percent of laptop computers and handheld devices had not been encrypted. 
In addition, 10 of 22 agencies reported having encrypted information on less than 20 percent of portable storage media taken offsite, and 3 of 22 reported having encrypted information on between 20 and 39 percent of such media. Further, 9 of 17 agencies reported encrypting sensitive information on less than 20 percent of offsite backup storage media. However, while agencies were encrypting sensitive data on mobile computers and devices such as laptop computers and handheld devices (e.g., personal digital assistants), 6 agencies reported having other storage devices, such as portable storage media, that could contain sensitive data. Of the 6 agencies, 4 had not encrypted these additional devices. Further, officials at 1 agency had no plans to encrypt sensitive data contained on their portable media. In response to our query in April 2008, OMB officials stated that the term “mobile computers and devices” was intended to include all agency laptops, handheld devices, and portable storage devices such as portable drives and CD-ROMs that contain agency data. Nevertheless, this description is not clear in any of its memorandums. Until OMB clarifies the applicability of the encryption requirement so that agencies can complete encrypting sensitive agency information stored on applicable devices, the information will remain at risk of unauthorized disclosure. Most agencies reported that they had encrypted sensitive information transmitted over wired and wireless networks. Of 23 agencies reporting on their efforts to encrypt wired Internet transmissions of sensitive information, 18 agencies reported encrypting nearly all or all (80 percent to 100 percent) of their transmissions over wired Internet networks. In addition, of 21 agencies reporting on their efforts to encrypt wireless transmissions of sensitive information, 12 reported having encrypted all or nearly all such transmissions (see fig. 4). 
Although 24 major federal agencies reported having encryption efforts under way, none of the agencies had documented a comprehensive plan that considered the security control implementation elements recommended by NIST. According to NIST, cryptography is best designed as an integrated part of a comprehensive information security program rather than as an add-on feature, and it suggests that implementing technical approaches without a plan to guide the process is the least effective approach to making use of cryptography. Specifically, as part of an effective information security program, NIST Special Publication 800-53 requires agencies to inventory and categorize information and systems according to risk as well as to document the baseline security controls—such as encryption—selected to adequately mitigate risks to information. However, of the 24 agencies we surveyed, 18 reported that they had not completed efforts to inventory the sensitive information that they hold. Further, NIST recommends that agencies follow NIST Special Publication 800-21 guidance when formulating their approach for integrating cryptographic methods into new or existing systems and documenting plans for implementing encryption. Such plans consist of the following minimum elements: installing and properly configuring FIPS-validated cryptographic modules associated with selected encryption technologies; monitoring the continued effectiveness of installed cryptographic controls, including the proper functioning of encryption technologies; documenting and implementing policies and procedures for management of cryptographic components, such as the effective implementation and use of FIPS-compliant encryption technologies and the establishment and management of encryption keys; and providing training to users, operators, and system engineers. 
Although several agencies had developed ad hoc encryption technology acquisition or deployment plans, none of the agencies had documented comprehensive plans that addressed the elements recommended by NIST. In response to our query, OMB officials stated that they monitor agencies’ progress toward implementing encryption through quarterly data submitted by the agencies as part of the President’s Management Agenda scorecard. However, OMB did not provide us with evidence to demonstrate monitoring of the agencies’ efforts to inventory the sensitive information they hold or to develop implementation plans. As previously noted, agencies did not have such plans and often did not have inventories. Until agencies develop and document comprehensive plans to guide their approach for implementing encryption, including completing an inventory of the sensitive information they hold, agencies will have limited assurance that they will be able to effectively complete implementation and manage life cycle maintenance of encryption technologies, as we observed at selected agencies and discuss later in this report. Encryption implementation practices displayed weaknesses at the 6 federal agencies we selected for testing. Specifically, 2 of the 6 agencies had not installed FIPS-validated encryption technologies, and 4 had not configured installed encryption technologies in accordance with FIPS 140-2 specifications. In addition, none of the 6 agencies had fully developed and documented policies and procedures for managing encryption implementation, and 3 of these agencies had not adequately trained personnel in the effective use of installed encryption technologies. Protection of information and information systems relies not only on the technology in place but also on establishing a foundation for effective implementation, life cycle management, and proper use of the technologies. Until agencies resolve these weaknesses, their data may not be fully protected. 
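One of the weaknesses noted above—the lack of a mechanism to confirm that installed encryption is actually functioning—lends itself to even a very simple automated check. The sketch below is illustrative only: the inventory records and field names are invented, standing in for data a real encryption product's management console would export.

```python
# Hypothetical device inventory of the sort an encryption management
# console might export; fields and data are invented for illustration.
inventory = [
    {"device": "laptop-001", "agent_installed": True,  "drive_encrypted": True},
    {"device": "laptop-002", "agent_installed": True,  "drive_encrypted": False},
    {"device": "laptop-003", "agent_installed": False, "drive_encrypted": False},
]

def noncompliant(devices):
    """Flag devices where encryption is missing, or installed but not
    functioning (software present, drive still unencrypted)."""
    return [d["device"] for d in devices
            if not (d["agent_installed"] and d["drive_encrypted"])]

flagged = noncompliant(inventory)
```

Run periodically, a check of this kind would have surfaced the laptops discussed later in this report whose drives carried the encryption software but never completed encryption.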
OMB requires agencies to protect sensitive agency data stored on mobile devices by installing a FIPS-validated encryption product, and NIST Special Publication 800-53 recommends that agencies install FIPS-compliant products when the agency requires encryption of stored or transmitted sensitive information. Agencies can now acquire FIPS-validated cryptographic module products for encrypting stored information through the General Services Administration’s (GSA) SmartBUY program (see app. IV for a description of this program). Use of encryption technologies approved by NIST as compliant with FIPS 140-2 provides additional assurance that the product implemented—if configured correctly—will help protect sensitive information from unauthorized disclosure. Laptop computers. The Department of Housing and Urban Development (HUD) had not installed encryption products on any of its laptop computers despite an agency official’s assertions to the contrary. HUD officials explained that they had planned to implement FIPS-compliant encryption in fiscal year 2008 but that implementation was delayed until late in fiscal year 2008 due to a lack of funding and is now part of the department’s fiscal year 2009 budget request. In addition, the Department of Education had not installed a FIPS-validated product to encrypt sensitive information on any of the 15 laptop computers that we tested at one of its components. Of the 4 remaining agencies, 3—the Department of Agriculture (USDA), the State Department, and GSA—had installed FIPS-compliant technologies on all 58 of the laptop computers that we tested at specific locations within each agency. At the National Aeronautics and Space Administration (NASA) location we tested, we confirmed that the agency’s selected FIPS-compliant encryption software had been installed on 27 of 29 laptop computers. 
Although the agency asserted that it had installed the software on all 29 laptops, officials explained that they did not have a mechanism to detect whether the encryption product was successfully installed and functioning. Handheld devices. All 6 of the agencies had deployed FIPS-compliant handheld mobile devices (specifically, BlackBerry® devices) for use by personnel. BlackBerry software and the BlackBerry enterprise server software enable users to store, send, and receive encrypted e-mail and access data wirelessly using FIPS-validated encryption algorithms. Virtual private networks. One of three virtual private networks installed by the Department of Education was not a FIPS-compliant product. The remaining 5 agencies had installed FIPS-validated products to protect transmissions of sensitive information. Although most agencies had installed FIPS-compliant products to encrypt information stored on devices and transmitted across networks, some did not monitor whether the product was functioning or did not configure the product to operate only in a FIPS-validated secure mode. Until agencies configure FIPS-compliant products in a secure mode as directed by NIST—for example, by enabling only FIPS-validated encryption algorithms—protection against unauthorized decryption and disclosure of sensitive information will be diminished. Laptop computers. Of the 4 agencies with FIPS-compliant products installed on laptop computers, 3 had configured the product to operate in a secure mode as approved by NIST on all devices that we examined. However, a component of the Department of Agriculture had not effectively monitored the effectiveness and continued functioning of encryption products on 5 of the 52 laptop computers that we examined. Agency officials were unaware that the drives of these devices had not been correctly encrypted. 
The drives, while having the encryption software installed, did not encrypt the data they contained. This agency’s system administrator attributed the noncompliance to the failure of a step in the installation process; specifically, the laptop had not been connected to the agency’s network for a sufficient period of time to complete activation of the user’s encryption key by the central server, and the agency had no mechanism in place to monitor whether the installed product was functioning properly. Handheld devices. Three of the 6 agencies—the Department of Education, HUD, and NASA—had not configured their handheld BlackBerry devices to encrypt the data contained on the devices. All 6 of the agencies encrypted data in transit because FIPS-validated encryption was built into the BlackBerry device software. However, agencies must enable the encryption that protects information stored on the device itself by making a selection to do so and requiring the user to input a password. Officials at these 3 agencies stated that they had not enabled this protective feature on all their BlackBerry devices due to operational issues with enabling content protection and that they are awaiting a solution from the vendor. Virtual private networks. Two of the 6 agencies—the Department of Education and HUD—had not configured their virtual private networking technologies to use only strong, FIPS-validated encryption algorithms for encrypting data and to ensure message integrity. The use of weak encryption algorithms—ones that have not been approved by NIST or that have explicitly been disapproved by NIST—provides limited assurance that information is adequately protected from unauthorized disclosure or modification. The weaknesses in encryption practices identified at the 6 selected agencies existed in part because agencywide policies and procedures did not address federal guidelines related to implementing and using encryption. 
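Procedures of this kind often reduce to concrete configuration rules. As a hedged illustration (Python standard library; a generic TLS endpoint, not any agency VPN product's actual configuration, and not itself FIPS 140-2 validation), the sketch below restricts a server to modern protocol versions and to ECDHE key exchange with AES-GCM bulk encryption, excluding weak suites such as RC4 and 3DES:

```python
import ssl

def make_restricted_context() -> ssl.SSLContext:
    """Build a TLS server context limited to strong algorithm families."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    ctx.set_ciphers("ECDHE+AESGCM")               # excludes RC4, 3DES, and other weak suites
    return ctx

ctx = make_restricted_context()
cipher_names = [c["name"] for c in ctx.get_ciphers()]
```

Inspecting `cipher_names` after configuration is exactly the kind of verification step that documented procedures would require, so that a misconfigured endpoint offering weak algorithms is caught before deployment.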
NIST Special Publication 800-53 recommends that agencies develop a formal, documented policy that addresses the system and communications controls as well as a formal, documented procedure to facilitate implementation of these controls. While policies should address the agency’s position regarding use of encryption and management of encryption keys, the implementation procedures should describe the steps for performance of specific activities such as user registration, system initialization, encryption key generation, key recovery, and key destruction. However, 4 of the 6 agencies did not have a policy that addressed the establishment and management of cryptographic keys as directed by NIST, and none of the 6 agencies had detailed procedures for implementing this control. Furthermore, according to NIST guidance, agency policies and procedures are to address how agencies will install and configure FIPS-compliant encryption products. All agencies’ policies addressed how the agency planned to comply with these requirements. However, 4 agencies did not have detailed procedures requiring installation and configuration of FIPS-compliant cryptography (see table 5). Policies and procedures for installing and configuring encryption technologies and for managing encryption keys provide the foundation for the effective implementation and use of encryption technologies and are a necessary element of agency implementation plans. Until these agencies develop, document, and implement these policies and procedures, the agencies’ implementation of encryption may be ineffective in protecting the confidentiality, integrity, and availability of sensitive information. Also contributing to the weaknesses at 3 of 6 agencies was the failure to adequately train personnel in the proper use of installed encryption technology. 
Specifically, USDA officials stated that users had not been trained to check for continued functioning of the software after installation but that they were in the process of including encryption concepts in the department’s annual security awareness training required for all computer users. At the conclusion of our review, USDA had not yet completed this effort. At the component of the Department of Education where testing was conducted, some users were unaware that the agency had installed encryption software on their laptop computers. These users, therefore, had never used the software to encrypt sensitive files or folders. Further, while an agency official asserted that encryption training was provided, the training documents supplied pertained only to the protection of personally identifiable information and did not provide specifics on how to use the available encryption products. Users we spoke with were unaware of any available training. At NASA, several users stated that they had refused to allow the encryption software to be installed on their devices, while other users said they were unfamiliar with the product. Although NASA requires users to receive training when encryption is installed and has developed a training guide, there was no mechanism in place to track whether users complete the necessary training. Until these agencies provide effective training to their personnel in the proper management and use of installed encryption technologies, they will have limited confidence in the ability of the installed encryption technologies to function as intended. Despite the availability of numerous types of commercial encryption technologies and federal policies requiring encryption, most federal agencies had just begun to encrypt sensitive information on mobile computers and devices. In addition, agencies had not documented comprehensive plans to guide activities for effectively implementing encryption. 
Although governmentwide efforts were under way, agency uncertainty about OMB requirements hampered progress. In addition, weaknesses in encryption practices at six selected federal agencies—including practices for installing and configuring FIPS-validated products, monitoring the effectiveness of these technologies, developing encryption policies and procedures, and training personnel—increased the likelihood that the encryption technologies used by the agencies will not function as intended. Until agencies address these weaknesses, sensitive federal information will remain at increased risk of unauthorized disclosure, modification, or loss. We are making 20 recommendations to the Director of the Office of Management and Budget and to six federal departments and agencies to strengthen encryption of federal systems. To assist agencies with effectively planning for and implementing encryption technologies to protect sensitive information, we recommend that the Director of the Office of Management and Budget take the following two actions: clarify governmentwide policy requiring agencies to encrypt sensitive agency data through the promulgation of additional guidance and/or through educational activities and monitor the effectiveness of the agencies’ encryption implementation plans and efforts to inventory the sensitive information they hold. 
To assist the Department of Agriculture as it continues to deploy its departmentwide encryption solutions and to improve the life cycle management of encryption technologies, we recommend that the Secretary of Agriculture direct the chief information officer to take the following three actions: establish and implement a mechanism to monitor the successful installation and effective functioning of encryption products installed on devices, develop and implement departmentwide procedures for encryption key establishment and management, and develop and implement a training program that provides technical support and end-user personnel with adequate training on encryption concepts, including proper operation of the specific encryption products used. We also recommend that the Secretary of the Department of Education direct the chief information officer to take the following five actions to improve the life cycle management of encryption technologies: evaluate, select, and install FIPS 140-compliant products for all encryption needs and document a plan for implementation that addresses protection of all sensitive information stored and transmitted by the agency; configure installed FIPS-compliant encryption technologies in accordance with FIPS-validated security settings for the product; develop and implement departmentwide policy and procedures for encryption key establishment and management; develop and implement departmentwide procedures for use of FIPS-compliant cryptography; and develop and implement a training program that provides technical support and end-user personnel with adequate training on encryption concepts, including proper operation of the specific encryption products used. 
To ensure that the Department of Housing and Urban Development is adequately protecting its sensitive information and to improve the life cycle management of encryption technologies at the department, we recommend that the Secretary of Housing and Urban Development direct the chief information officer to take the following three actions: evaluate, select, and install FIPS 140-compliant products for all encryption needs and document a plan for implementation that addresses protection of all sensitive information stored and transmitted by the agency; configure installed FIPS-compliant encryption technologies in accordance with FIPS-validated security settings for the product; and develop and implement departmentwide procedures for the use of FIPS-compliant cryptography and for encryption key establishment and management. To improve the life cycle management of encryption technologies at the Department of State, we recommend that the Secretary of State direct the chief information officer to take the following two actions: develop and implement departmentwide policy and procedures for encryption key establishment and management and develop and implement departmentwide procedures for use of FIPS-compliant cryptography. To improve the life cycle management of encryption technologies at the General Services Administration, we recommend that the Administrator of the General Services Administration direct the chief information officer to take the following two actions: develop and implement departmentwide policy and procedures for encryption key establishment and management and develop and implement departmentwide procedures for use of FIPS-compliant cryptography. 
As the National Aeronautics and Space Administration continues to plan for a departmentwide encryption solution and to improve the life cycle management of encryption technologies, we recommend that the Administrator of the National Aeronautics and Space Administration direct the chief information officer to take the following three actions: establish and implement a mechanism to monitor the successful installation and effective functioning of encryption products installed on devices, develop and implement departmentwide policy and procedures for encryption key establishment and management, and develop and implement a training program that provides technical support and end-user personnel with adequate training on encryption concepts, including proper operation of the specific encryption products used. We received written comments on a draft of this report from the Administrator, Office of E-Government and Information Technology at OMB (reproduced in app. V). OMB generally agreed with the report’s contents and stated that it would carefully consider our recommendations. We also received written comments from Education’s Chief Information Officer (reproduced in app. VI), from HUD’s Acting Chief Information Officer (reproduced in app. VII), from the Department of State (reproduced in app. VIII), from the Acting Administrator of the GSA (reproduced in app. IX), and from the Deputy Administrator of NASA (reproduced in app. X). We received comments via email from the Department of Agriculture. In these comments, the Departments of Agriculture, Education, HUD, and State; the GSA; and NASA agreed with our recommendations to their respective departments. Agencies also stated that they had implemented or were in the process of implementing our recommendations. In addition, NIST and the Social Security Administration provided technical comments, which we have incorporated as appropriate. 
As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to interested congressional committees and the agency heads and inspectors general of the 24 major federal agencies. We will also make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions or wish to discuss this report, please contact me at (202) 512-6244 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XI. The objectives of our review were to determine (1) how commercially available encryption technologies could help federal agencies protect sensitive information and reduce risks; (2) the federal laws, policies, and guidance for using encryption technologies to protect sensitive information; and (3) the extent to which agencies have implemented, or planned to implement, encryption technologies to protect sensitive information. To address the first objective, we reviewed prior GAO reports and reviewed documentation regarding products validated by the National Institute of Standards and Technology (NIST) Cryptographic Module Validation Program to identify commercially available encryption technologies. Additionally, we met with a vendor of an encryption product and interviewed NIST encryption experts regarding the characteristics of products that can reduce risks to agencies. 
To address the second objective, we reviewed prior GAO and agency inspector general reports and relevant laws and guidance, such as the Federal Information Security Management Act of 2002 (FISMA) and the Privacy Act of 1974, to identify mandatory and optional practices for protecting sensitive information (including personally identifiable information but excluding classified national security information) that federal agencies collect, process, store, and transmit. We examined the laws to identify federal agencies responsible for promulgating federal policies and standards on the use of encryption. Additionally, we researched official publications issued by the Office of Management and Budget and NIST and interviewed officials from these agencies to identify the policies, standards, and guidance on encryption that have been issued. To address the third objective, we collected and analyzed agency-specific policies, plans, and practices through a data request and also conducted a survey of the 24 major federal agencies. A survey specialist designed the survey instrument in collaboration with subject matter experts. The survey was then pretested at 4 of these agencies to ensure that the questions were relevant and easy to comprehend. For each agency surveyed, we identified the appropriate point of contact, notified each one of our work, and distributed the survey along with a data request to each via e-mail in June 2007. In addition, we discussed the purpose and content of the survey and data request with agency officials when requested. All 24 agencies responded to our survey and data request from June to September 2007; results are reported as of this date. We contacted agency officials when necessary for additional information or clarification of agencies' status of encryption implementation. 
We did not verify the accuracy of the agencies' responses; however, we reviewed supporting documentation that agencies provided to corroborate the information in their responses. We then analyzed the results from the survey and data request to identify the types of information encrypted when stored and in transit; the technologies used by the agency to encrypt information; whether the technologies implemented by the agency met federal requirements; the extent to which the agency has implemented, or plans to implement, encryption; and any challenges faced and lessons learned by agencies in their efforts to encrypt sensitive but unclassified information. Conducting any survey may introduce errors. For example, differences in how a particular question is interpreted, the sources of information that are available to respondents, or how the data are entered or analyzed can introduce variability into the survey results. We took steps in the development of the survey instrument, the data collection, and the data analysis to minimize errors. In addition, we tested the implementation of encryption technologies at 6 agencies to determine whether each agency was complying with federal guidance that required agencies to use NIST-validated encryption technology. Out of 24 major federal agencies, we selected 6 that met one or more of the following conditions: they (1) had not been tested under a recent GAO engagement, (2) had reported having initiated efforts to install encryption technologies with FIPS-validated cryptographic modules, (3) had experienced publicized incidents of data compromise, or (4) were reasonably expected to collect, store, and transmit a wide range of sensitive information. Specifically, we tested the implementation of encryption for BlackBerry servers, virtual private networks, or a random selection of laptop computers at specific locations at the following 6 agencies within the Washington, D.C. area: U.S. 
Department of Agriculture, Department of Housing and Urban Development, General Services Administration, and National Aeronautics and Space Administration. At each of these agencies, we requested an inventory of laptop computers that were located at agency facilities in the Washington, D.C., metro area and that were encrypted. For each agency, we nonstatistically selected one location at which to perform testing, and thus the encryption test results for each agency cannot be projected to the entire agency. We performed the testing between September and December 2007. At each location where laptop computers were tested, a small number of laptop computers that we requested as part of our sample were not made available to us for testing. Department officials cited several reasons for this, including that the user of the device could not bring it to the location in time for our testing. In testing the laptop computers, we determined whether encryption software had been installed on the device and whether the software had been configured properly to adhere to federally required standards. Although we identified unencrypted laptop computers at each agency, we were not able to make statistical estimates of the percentage of unencrypted devices at each location because the small number of devices in each sample not made available to us for testing compromised the randomness of each sample. Additionally, for each of the selected locations among the 6 agencies, we requested information on their BlackBerry servers, chose the server with the greatest number of users for testing, and reviewed through observation the specific security configuration settings. We also requested and examined agency-provided information for their virtual private networks to determine if encrypted networks were using products validated by NIST. 
Finally, we interviewed agency officials regarding their practices for encrypting stored data as well as data in transit, and for encryption key establishment and management. Furthermore, we reviewed and analyzed data on the General Services Administration’s SmartBUY program to determine the extent of savings the program provides to federal agencies and how certain agencies have already benefited from the program. We conducted this performance audit from February 2007 through June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Encryption technology may help protect sensitive information from compromise; however, there are several important implementation and management considerations when selecting and implementing this technology. Encryption can be a powerful tool, but implementing it incorrectly—such as by failing to properly configure the product, secure encryption keys, or train users—can, at best, result in a false sense of security and, at worst, render data permanently inaccessible. Designing, building, and effectively implementing commercially available cryptographic technologies involves more than installing the technology. Decisions must be made for managing encryption keys and related cryptographic components, as well as for managing mobile devices and using public key infrastructure (PKI) technologies. 
Ultimately, the effectiveness of the encryption technologies used to protect agency information and reduce risk depends on how an agency implements and manages these technologies and the extent to which they are an integral part of an effectively enforced information security policy that includes sound practices for managing encryption keys. Policies and procedures. Comprehensive policies for the management of encryption and decryption keys—the secret codes that lock and unlock the data—are an important consideration. Providing lifetime management of private keys and digital certificates across hundreds of applications and thousands of servers, end-users, and networked devices can quickly strain an agency’s resources. For example, if a key is lost or damaged, it may not be possible to recover the encrypted data. Therefore, it is important to ensure that all keys are secured and managed properly by planning key management processes, procedures, and technologies before implementing storage encryption technologies. According to NIST, this planning would include all aspects of key management, including key generation, use, storage, and destruction. It would also include a careful consideration of how key management practices can support the recovery of encrypted data if a key is inadvertently destroyed or otherwise becomes unavailable (for instance, because a user unexpectedly resigns or loses a cryptographic token containing a key). An example of recovery preparation is storing duplicates of keys in a centralized, secured key repository or on physically secured removable media. Additional considerations for the encryption of removable media are how changing keys may affect access to encrypted storage on the media and what compensating controls could be developed, such as retaining the previous keys in case they are needed. Key storage location. Another consideration that NIST describes is deciding where the local keys will be stored. 
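The key-recovery preparation described above (storing duplicates of keys in a centralized, secured repository) can be sketched in a few lines of Python. This is a toy illustration under stated assumptions: the repository and device names are hypothetical, and a real deployment would use a FIPS 140-2 validated key management product rather than an in-memory dictionary.

```python
import secrets

# Minimal sketch of NIST's key-recovery preparation: a duplicate of each data
# key is escrowed in a (hypothetical) centralized, secured repository so that
# encrypted data remains recoverable if the local copy of the key is lost.

escrow = {}  # stands in for a secured, centralized key repository

def generate_key(device_id: str) -> bytes:
    """Generate a 32-byte data key and escrow a duplicate before first use."""
    key = secrets.token_bytes(32)
    escrow[device_id] = key
    return key

local_key = generate_key("laptop-0042")

# Simulate losing the local copy (e.g., a user unexpectedly resigns or loses
# the cryptographic token holding the key).
local_key = None

# The escrowed duplicate allows recovery of the encrypted data.
recovered_key = escrow["laptop-0042"]
assert len(recovered_key) == 32
```

The same pattern applies to retaining previous keys for removable media, as the paragraph above notes: older keys stay in the repository so archived data encrypted under them remains readable.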
For some encryption technologies, such as full disk encryption and many file/folder encryption products, there are often several options for key location, including the local hard drive, a flash drive, a cryptographic token, or a trusted platform module chip. Some products also permit keys to be stored on a centralized server and retrieved automatically after the user authenticates successfully. For virtual disk encryption, the main encryption key is often stored encrypted within the disk or container itself. Access to encryption keys. Another consideration is properly restricting access to encryption keys. According to NIST, storage encryption technologies should require the use of one or more authentication mechanisms, such as passwords, smart cards, and cryptographic tokens, to decrypt or otherwise gain access to a storage encryption key. The keys themselves should be logically secured (encrypted) or physically secured (stored in a tamper-resistant cryptographic token). The authenticators used to retrieve keys should also be secured properly. Managing cryptographic components related to encryption keys. In addition to key management, NIST describes several other considerations when planning a storage encryption technology. Setting the cryptography policy involves choosing the encryption algorithm, mode of cryptographic operation, and key length. Federal agencies must also use NIST-validated cryptographic modules configured for FIPS-compliant algorithms and key lengths. In addition, several FIPS-compliant algorithms are available for integrity checking. Another consideration for managing cryptographic components is how easily an encryption product can be updated when stronger algorithms and key sizes become available in the future. Centralized management of mobile devices. 
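As a rough illustration of the considerations above, the sketch below shows an authenticator (here a passphrase) guarding access to a logically secured storage key: a key-encryption key is derived from the passphrase and used to wrap a randomly generated data key. This is a toy, not a FIPS 140-2 validated module; the XOR wrapping and all names are illustrative assumptions, and agencies would use NIST-validated products.

```python
import hashlib
import secrets

# Toy sketch: a storage encryption key is itself encrypted ("wrapped") under a
# key derived from a user passphrase, so the key can only be recovered after
# successful authentication. Not FIPS-validated; illustration only.

def derive_kek(passphrase: str, salt: bytes) -> bytes:
    """Derive a 32-byte key-encryption key (KEK) from a passphrase."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def wrap_key(data_key: bytes, passphrase: str, salt: bytes) -> bytes:
    """XOR-wrap (or unwrap) the storage key under the passphrase-derived KEK."""
    kek = derive_kek(passphrase, salt)
    return bytes(a ^ b for a, b in zip(data_key, kek))

salt = secrets.token_bytes(16)        # per-user salt, stored alongside the key
data_key = secrets.token_bytes(32)    # the actual storage encryption key

wrapped = wrap_key(data_key, "correct horse battery staple", salt)   # on disk
recovered = wrap_key(wrapped, "correct horse battery staple", salt)  # unwrap

assert recovered == data_key                                  # right passphrase
assert wrap_key(wrapped, "wrong passphrase", salt) != data_key  # wrong one fails
```

In a real product the wrapped key might live on the local hard drive, a cryptographic token, or a trusted platform module chip, as the paragraph above describes.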
NIST recommends centralized management for most storage encryption deployments because of its effectiveness and efficiency for policy verification and enforcement, key management, authenticator management, data recovery, and other management tasks. Centralized management can also automate these functions: deployment and configuration of storage encryption software to end user devices, distribution and installation of updates, collection and review of logs, and recovery of information from local failures. PKI technology. Because PKI technology uses a public key as part of its encryption system, PKI systems with key management can be used to avoid the problem of lost keys. Data encrypted with PKI relies on one public key, so the private key of the person encrypting the data is not necessarily required to decrypt it. However, if an unauthorized user is able to obtain a private key, the digital certificate could then be compromised. Agencies considering PKI technology must ensure that the key systems of different agencies are compatible for cross-agency collaboration on tasks such as security incident information sharing. Further, users of certificates are dependent on certification authorities to verify the digital certificates. If a valid certification authority is not used, or a certification authority makes a mistake or is the victim of a cyber attack, a digital certificate may be ineffective. Ongoing maintenance of encryption technologies. Systems administrators responsible for encryption technology maintenance should be able to configure and manage all components of the technology effectively and securely. According to NIST, it is particularly important to evaluate the ease of deployment and configuration, including how easily the technology can be managed as the technology is scaled to larger deployments. Another consideration is the ability of administrators to disable configuration options so that users cannot circumvent the intended security. 
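The public/private key relationship underlying PKI can be illustrated with textbook RSA and deliberately tiny primes. This is purely conceptual: real PKI deployments use NIST-validated cryptographic modules, certificates issued by certification authorities, and keys thousands of bits long.

```python
# Toy textbook RSA with tiny primes, illustrating the public/private key idea
# behind PKI. Conceptual sketch only; never usable as real cryptography.

p, q = 61, 53                 # secret primes
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (modular inverse of e)

message = 65                  # a message encoded as an integer smaller than n

# Anyone holding the public key (n, e) can encrypt...
ciphertext = pow(message, e, n)
# ...but only the holder of the private key d can decrypt.
decrypted = pow(ciphertext, d, n)
assert decrypted == message

# Reversing the roles gives a digital signature: sign with the private key,
# verify with the public key.
signature = pow(message, d, n)
assert pow(signature, e, n) == message
```

This also makes concrete the risk noted above: anyone who obtains d can decrypt and sign as the key's owner, which is why private keys and the certificates binding public keys to identities must be protected.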
Other maintenance considerations NIST describes include the effects of patching/upgrading software, changing software settings (such as changing cryptographic algorithms or key sizes), uninstalling or disabling encryption software, changing encryption/decryption keys, and changing user or administrator passwords. Preparing for encryption presents numerous challenges to agencies, including selecting an adequate combination of cost-effective baseline security controls, properly configuring the networks and user devices within the information technology (IT) infrastructure to accommodate selected encryption technologies, providing training to personnel, and managing encryption keys. In response to our survey, all 24 agencies reported conditions that hinder their ability to encrypt sensitive information as required by the Office of Management and Budget. These hindrances included prohibitive costs; user acceptance; user training; data backup and recovery; data archival and retrieval; interoperability; infrastructure; vendor support for encryption products acquired; availability of FIPS-compliant products to meet the needs of uncommon or unique devices, applications, or environments within the agency's IT infrastructure; and management support for migration to encryption controls. Agencies noted that the level of hindrance caused by these challenges ranged from little or no hindrance to great or very great hindrance. The most challenging conditions are discussed below. Prohibitive costs. Nine agencies reported that the cost of acquiring and implementing encryption was their greatest hindrance, and 13 agencies cited this condition as somewhat of a hindrance or a moderate hindrance. As reported in appendix IV, a governmentwide initiative (SmartBUY) has been established to assist agencies with overcoming this hindrance. User acceptance and training. 
Some encryption technologies can be burdensome to users and can require specialized training on encryption concepts and proper installation, maintenance, and use of encryption products. Sixteen agencies reported facing somewhat of a hindrance or a moderate hindrance in obtaining user acceptance of encryption implementations and in training personnel. Four agencies reported a great or very great hindrance from lack of user acceptance, and 2 agencies reported a great hindrance from insufficient training. Data backup, recovery, archiving, and retrieval. Agencies must establish policies and procedures for management of encryption keys, which are necessary to recover data from back-ups in the event of a service interruption or disaster, or to retrieve data in archived records, perhaps many years in the future. For example, if the key is not properly backed up and is on a server that has been destroyed in a fire or the key used to encrypt archived records changes over time, data encrypted with the key may be irretrievably lost. Sixteen agencies reported facing somewhat of a hindrance or a moderate hindrance with backup and recovery, and 15 agencies reported the same level of hindrance with data archiving and retrieval. Interoperability. Key systems and technologies of different agencies need to be compatible with each other for cross-agency collaboration. Five agencies reported that lack of interoperability was a great or very great hindrance, and 13 reported somewhat of a hindrance or a moderate hindrance. Infrastructure considerations. Six agencies reported facing a great or very great hindrance in readying their IT infrastructure for encryption and 11 reported this was somewhat of a hindrance or a moderate hindrance. Table 6 summarizes the number of agencies reporting the extent to which 10 conditions affect their agency’s ability to implement encryption. 
Although agencies reported facing hindrances to implementing encryption, a new program (GSA SmartBUY agreements specific to encryption products), established after we started our review, offers agencies options to overcome key hindrances. For example, prohibitive costs and acquiring FIPS-compliant products are two hindrances that agencies may be able to address through SmartBUY. As discussed in appendix IV, discounted pricing is available for data-at-rest encryption software. In addition, all products available through SmartBUY use cryptographic modules validated under FIPS 140-2 security requirements. To help agencies comply with OMB requirements for encrypting information on mobile devices, a governmentwide acquisition vehicle was established for encryption products for stored data. Through a governmentwide program known as SmartBUY (Software Managed and Acquired on the Right Terms), agencies can procure encryption software at discounted prices. According to the General Services Administration (GSA), SmartBUY is a federal government procurement vehicle designed to promote effective enterprise-level software management. By leveraging the government's immense buying power, SmartBUY could save taxpayers millions of dollars through governmentwide aggregate buying of commercial off-the-shelf software products. SmartBUY officially began in 2003, when OMB issued a memo emphasizing the need to reduce costs and improve quality in federal purchases of commercial software. The memo designates GSA as the executive agent to lead the interagency initiative in negotiating governmentwide enterprise licenses for software. SmartBUY establishes strategic enterprise agreements with software publishers (or resellers) via blanket purchase agreements. 
OMB Memorandum 04-08, Maximizing Use of SmartBUY and Avoiding Duplication of Agency Activities with the President’s 24 E-Gov Initiatives, requires agencies to review SmartBUY contracts to determine whether they satisfy agency needs—such as for products to encrypt stored data—and, absent a compelling justification for doing otherwise, acquire their software requirements from the SmartBUY program. The issuance of OMB’s May 2006 recommendation to encrypt mobile devices contributed to the addition of 11 SmartBUY agreements for stored data encryption products established in June 2007. The products offered fall into one of three software and hardware encryption product categories: full disk encryption, file encryption, or integrated full disk/file encryption products. All products use cryptographic modules validated under FIPS 140-2 security requirements. Volume discounts on encryption products are available when purchasing in tiers of 10,000, 33,000, and 100,000 users. Each of the 11 agreements has its own pricing structure, which may include maintenance and training in addition to licenses for users. Discounts on volume pricing can range up to 85 percent off GSA schedule prices. Table 7 provides an example of the discounted pricing available from 1 of the 11 SmartBUY agreements for encryption software. As of January 2008, 10 agencies had purchased encryption products—such as software licenses, annual maintenance services, and training—from the stored data SmartBUY list, realizing significant cost savings. One of those agencies—the Social Security Administration—purchased 250,000 licenses of one of the stored data products at a savings of $6.7 million off the GSA schedule prices. Additionally, USDA negotiated an agreement for 180,000 licenses at $9.63 each, as opposed to the GSA unit price of $170 per license. The large number of licenses acquired allowed USDA to negotiate the low price. 
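The scale of the USDA example above can be checked with back-of-the-envelope arithmetic, assuming the quoted unit prices apply uniformly to all 180,000 licenses (setting aside any maintenance and training terms):

```python
# Rough check of the USDA SmartBUY savings cited above, assuming the quoted
# per-license prices apply uniformly to all 180,000 licenses.

licenses = 180_000
gsa_schedule_price = 170.00   # GSA schedule unit price per license
smartbuy_price = 9.63         # negotiated SmartBUY unit price per license

savings_per_license = gsa_schedule_price - smartbuy_price
total_savings = licenses * savings_per_license

print(f"Savings per license: ${savings_per_license:.2f}")
print(f"Total savings: ${total_savings:,.2f}")  # roughly $28.9 million
```

Under these assumptions the negotiated price implies savings of roughly $28.9 million against the GSA schedule, illustrating why the large license count gave USDA such leverage.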
Several agencies noted that considering an enterprisewide deployment of encryption can be helpful with issues of standardization, interoperability, and infrastructure readiness. While 10 agencies have already acquired encryption products through the SmartBUY program, several agencies are still in the process of assessing which encryption products (including those available under the SmartBUY program) will best suit agency needs. In addition to the individual named above, Nancy DeFrancesco (Assistant Director), James Ashley, Debra Conner, Season Dietrich, Neil Doherty, Nancy Glover, Joel Grossman, Ethan Iczkovitz, Stanley J. Kostyla, Lowell Labaro, Rebecca Lapaze, Anjalique Lawrence, Harold Lewis, Lee McCracken, and Tammi L. Nguyen made key contributions to this report. Process of determining the permissible activities of users and authorizing or prohibiting activities by each user. Process of confirming an asserted identity with a specified or understood level of confidence. Granting the appropriate access privileges to authenticated users. A system that manages life cycle maintenance tasks associated with the credentials, such as unlocking the personal identity verification cards during issuance or updating a personal identification number or digital certificate on the card. A digital representation of information that (1) identifies the authority issuing the certificate; (2) names or identifies the person, process, or equipment using the certificate; (3) contains the user's public key; (4) identifies the certificate's operational period; and (5) is digitally signed by the certificate authority issuing it. A certificate is the means by which a user is linked—or bound—to a public key. Data in an encrypted form. The file used by a virtual disk encryption technology to encompass and protect other files. An object, such as a smart card, that identifies an individual as an official representative of, for example, a government agency. 
The result of the transformation of a message by means of a cryptographic system using digital keys, so that a relying party can determine (1) whether the transformation was created using the private key that corresponds to the public key in the signer’s digital certificate and (2) whether the message has been altered since the transformation was made. Digital signatures may also be attached to other electronic information and programs so that the integrity of the information and programs may be verified at a later time. The electronic equivalent of a traditional paper-based credential—a document that vouches for an individual’s identity. The encryption of information at its origin and decryption at its intended destination without any intermediate decryption. A collection of information that is logically grouped into a single entity and referenced by a unique name, such as a file name. An organizational structure used to group files. The process of encrypting all the data on the hard drive used to boot a computer, including the computer’s operating system, that permits access to the data only after successful authentication with the full disk encryption product. Encryption that is normally performed by dedicated hardware in the client/host system. The process of determining to what identity a particular individual corresponds. A value used to control cryptographic operations, such as decryption, encryption, signature generation, or signature verification. A program that is inserted into a system, usually covertly, with the intent of compromising the confidentiality, integrity, or availability of the victim’s data, applications, or operating system or of otherwise annoying or disrupting the victim. A computer’s master boot record is a reserved sector on its bootable media that determines which software (e.g., operating system, utility) will be executed when the computer boots from the media. 
The program that, after being initially loaded into the computer by a boot program, manages all the other programs in a computer. Examples of operating systems include Microsoft Windows, MacOS, and Linux. The other programs are called applications or application programs. The application programs make use of the operating system by making a request for service through a defined application program interface. In addition, users can interact directly with the operating system through a user interface such as a command language or a graphical user interface. The secret part of an asymmetric key pair that is typically used to digitally sign or decrypt data. The public part of an asymmetric key pair that is typically used to verify signatures or encrypt data. A system of hardware, software, policies, and people that, when fully and properly implemented, can provide a suite of information security assurances—including confidentiality, data integrity, authentication, and nonrepudiation—that are important in protecting sensitive communications and transactions. The expectation of loss expressed as the probability that a threat will exploit a vulnerability with a harmful result. Any information that an agency has determined requires some degree of heightened protection from unauthorized access, use, disclosure, disruption, modification, or destruction because of the nature of the information, e.g., personal information required to be protected by the Privacy Act of 1974, proprietary commercial information, information critical to agency program activities, and information that has or may be determined to be exempt from public release under the Freedom of Information Act. 
A statement published on a given topic by organizations such as the National Institute of Standards and Technology, the Institute of Electrical and Electronics Engineers, the International Organization for Standardization, and others specifying characteristics—usually measurable ones—that must be satisfied to comply with the standard. A tamper-resistant integrated circuit built into some computer motherboards that can perform cryptographic operations (including key generation) and protect small amounts of sensitive information such as passwords and cryptographic keys. The process of encrypting a container, which can hold many files and folders, and of permitting access to the data within the container only after proper authentication is provided. In this case, the container is typically mounted as a virtual disk; it appears to the user as a logical disk drive. A virtual private network is a logical network that is established, at the application layer of the open systems interconnection model, over an existing physical network and typically does not include every node present on the physical network.
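The digital signature entry in the glossary above hinges on detecting whether a message has been altered since the transformation was made. The sketch below illustrates that integrity-checking idea with an HMAC tag standing in for a true public-key signature; the shared key is a hypothetical placeholder, and real digital signatures use asymmetric key pairs and certificates as defined above.

```python
import hashlib
import hmac

# Toy sketch of the integrity-checking idea behind digital signatures: any
# alteration of the message after the tag was computed is detectable. An HMAC
# (shared-key) tag is used here for simplicity; real digital signatures use
# asymmetric keys so that anyone can verify without the signing key.

key = b"hypothetical-shared-secret"
message = b"Quarterly report, final version"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The relying party recomputes the tag over the received message and compares.
check = hmac.new(key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, check)

# Even a single altered byte produces a different tag, revealing tampering.
tampered = b"Quarterly report, FINAL version"
bad = hmac.new(key, tampered, hashlib.sha256).hexdigest()
assert not hmac.compare_digest(tag, bad)
```

`hmac.compare_digest` is used rather than `==` because constant-time comparison avoids leaking information through timing, a standard precaution when verifying authentication tags.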
Many federal operations are supported by automated systems that may contain sensitive information such as national security information that, if lost or stolen, could be disclosed for improper purposes. Compromises of sensitive information at numerous federal agencies have raised concerns about the extent to which such information is vulnerable. The use of technological controls such as encryption (the process of changing plaintext into ciphertext) can help guard against the unauthorized disclosure of sensitive information. GAO was asked to determine (1) how commercially available encryption technologies can help agencies protect sensitive information and reduce risks; (2) the federal laws, policies, and guidance for using encryption technologies; and (3) the extent to which agencies have implemented, or plan to implement, encryption technologies. To address these objectives, GAO identified and evaluated commercially available encryption technologies, reviewed relevant laws and guidance, and surveyed 24 major federal agencies. Commercially available encryption technologies can help federal agencies protect sensitive information that is stored on mobile computers and devices (such as laptop computers, handheld devices such as personal digital assistants, and portable media such as flash drives and CD-ROMs) as well as information that is transmitted over wired or wireless networks by reducing the risks of its unauthorized disclosure and modification. For example, information stored in individual files, folders, or entire hard drives can be encrypted. Encryption technologies can also be used to establish secure communication paths for protecting data transmitted over networks. While many products to encrypt data exist, implementing them incorrectly, such as failing to properly configure the product, secure encryption keys, or train users, can result in a false sense of security and render data permanently inaccessible. 
Key laws frame practices for information protection, while federal policies and guidance address the use of encryption. The Federal Information Security Management Act of 2002 mandates that agencies implement information security programs to protect agency information and systems. In addition, other laws provide guidance and direction for protecting specific types of information, including agency-specific information. For example, the Privacy Act of 1974 requires that agencies adequately protect personal information, and the Health Insurance Portability and Accountability Act of 1996 requires additional protections for sensitive health care information. The Office of Management and Budget has issued policy requiring federal agencies to encrypt all data on mobile computers and devices that carry agency data and to use products that have been approved by the National Institute of Standards and Technology (NIST) cryptographic validation program. Further, NIST guidance recommends that agencies adequately plan for the selection, installation, configuration, and management of encryption technologies. The extent to which the 24 major federal agencies reported having implemented encryption, and having developed plans to encrypt sensitive information, varied across agencies. From July through September 2007, the major agencies collectively reported that they had not yet installed encryption technology to protect sensitive information on about 70 percent of their laptop computers and handheld devices. Additionally, agencies reported uncertainty regarding the applicability of OMB's encryption requirements for mobile devices, specifically portable media. 
While all agencies had initiated efforts to deploy encryption technologies, none had documented comprehensive plans to guide encryption implementation activities such as installing and configuring appropriate technologies in accordance with federal guidelines, developing and documenting policies and procedures for managing encryption technologies, and training users. As a result, federal information may remain at increased risk of unauthorized disclosure, loss, and modification.
DOD has a long history of troubled space systems acquisitions. Over the past decade, most of the large DOD space systems acquisition programs have collectively experienced billions of dollars in cost increases and schedule delays. In particular, a long-standing problem in DOD space systems acquisitions is that program costs have tended to go up significantly from initial cost estimates. As shown in figure 1, estimated costs for selected major space systems acquisition programs have increased by about $22.6 billion—nearly 230 percent—from fiscal years 2012 through 2017. The gap between original and current estimates shows that DOD has fewer dollars available to invest in new programs or add to existing ones. DOD’s overall level of investment over the five-year period decreases until fiscal year 2014, at which point it levels off. The declining investment in the later years is the result of mature programs that have planned lower out-year funding, cancellation of a major space system acquisition program and several development efforts, and the exclusion of several space systems acquisition efforts for which total cost data were unavailable. These efforts include the Joint Space Operations Center Mission System (JMS), Space Fence, Space Based Space Surveillance (SBSS) Follow-on, Precision Tracking Space System (PTSS), and Weather Satellite Follow-on. We have previously reported that programs have experienced cost increases and schedule delays that have resulted in potential capability gaps in missile warning, military communications, and weather monitoring. For instance, unit costs for one of the most troubled programs, the Space Based Infrared System (SBIRS), have climbed about 230 percent to over $3 billion per satellite, with the launch of the first satellite about 9 years later than predicted. 
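The relationship between the dollar and percentage figures above can be checked with simple arithmetic; the implied original estimate below is an approximation derived from the two figures GAO cites, not a number stated in the report.

```python
# Back-of-envelope check: a $22.6 billion increase that equals
# "nearly 230 percent" implies an original combined estimate of
# roughly $10 billion for the selected programs.
increase_billions = 22.6
growth_fraction = 2.30  # 230 percent expressed as a fraction of the original
implied_original = increase_billions / growth_fraction
implied_current = implied_original + increase_billions
print(f"implied original ~= ${implied_original:.1f}B, current ~= ${implied_current:.1f}B")
```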
Similarly, 8 years after a development contract for the National Polar-orbiting Operational Environmental Satellite System (NPOESS) program was awarded in 2002, the cost estimate had more than doubled—to about $15 billion, launch dates had been delayed by over 5 years, significant functionality had been removed from the program, and the program’s tri-agency management structure had proven to be ineffective. In February 2010, it was announced that the National Oceanic and Atmospheric Administration (NOAA) and DOD would no longer jointly procure the NPOESS satellite system and, instead, each agency would undertake separate acquisitions. Consequently, the risks of gaps in weather satellite monitoring data have increased. Other programs, such as the Transformational Satellite Communications System, were canceled several years earlier because they were found to be too ambitious and not affordable at a time when DOD was struggling to address critical acquisition problems elsewhere in the space systems portfolio. Our past work has identified a number of causes of acquisition problems, but several consistently stand out. At a higher level, DOD tended to start more weapon programs than was affordable, creating a competition for funding that focused on advocacy at the expense of realism and sound management. DOD also tended to start its space systems programs before it had the assurance that the capabilities it was pursuing could be achieved within available resources and time constraints. For example, when critical technologies planned for a satellite system are still in relatively early stages of discovery and invention, there is no way to accurately estimate how long it would take to design, develop, and build the system. Finally, programs typically attempted to satisfy all requirements in a single step, regardless of the design challenges or the maturity of the technologies necessary to achieve the full capability. 
DOD’s preference for building larger, complex satellites that perform a multitude of missions has in some cases stretched technology challenges beyond current capabilities. In the past, funding instability, poor contractor oversight, and relaxed quality standards have also contributed to acquisition problems. We have also reported that fragmented leadership and lack of a single authority in overseeing the acquisition of space programs have created challenges for optimally acquiring, developing, and deploying new space systems. Past studies and reviews have found that responsibilities for acquiring space systems are diffused across various DOD organizations, even though many of the larger programs, such as the Global Positioning System (GPS) and those to acquire imagery and environmental satellites, are integral to the execution of multiple agencies’ missions. We reported that with multiagency space programs, success is often only possible with cooperation and coordination; however, successful and productive coordination appears to be the exception and not the rule. This fragmentation is problematic not only because of a lack of coordination that has led to delays in fielding systems, but also because no one person or organization is held accountable for balancing governmentwide needs against wants, resolving conflicts and ensuring coordination among the many organizations involved with space systems acquisitions, and ensuring that resources are directed where they are most needed. Over the past 5 years, our work has recommended numerous actions that can be taken to address the problems we identified. Generally, we have recommended that DOD separate technology discovery from acquisition, follow an incremental path toward meeting user needs, match resources and requirements at program start, and use quantifiable data and demonstrable knowledge to make decisions to move to next phases. 
We have also identified practices related to cost estimating, program manager tenure, quality assurance, technology transition, and an array of other aspects of acquisition program management that could benefit space programs. DOD has generally concurred with our recommendations and has undertaken a number of actions to establish a better foundation for acquisition success. For newer satellite acquisition efforts, DOD has attempted to incorporate lessons learned from its experiences with earlier efforts. For example, the GPS III program, which began product development in 2008, is using a “back to basics” approach, emphasizing rigorous systems engineering, use of military specifications and standards, and an incremental approach to providing capability. Thus far, the work performed on the development of the first two satellites is costing more than expected—but not on the scale of earlier programs—and its schedule remains on track. Our prior testimonies have cited an array of actions as well. For example, the Office of the Secretary of Defense created a new office under the Under Secretary of Defense for Acquisition, Technology and Logistics to oversee all major DOD space and intelligence-related acquisitions, and it began applying its broader weapon system acquisition policy (DOD Instruction 5000.02, Operation of the Defense Acquisition System (Dec. 8, 2008)) to space systems, instead of allowing a tailored policy for space that had enabled DOD to commit to major investments before knowing what resources would be required to deliver promised capability. Among other initiatives, the Air Force undertook efforts to improve cost estimating and revitalize its acquisition workforce and program management assistance programs. 
Further, for major weapon programs, Congress enacted the Weapon Systems Acquisition Reform Act of 2009, which required greater emphasis on front-end planning—for example, refining concepts through early systems engineering, strengthening cost estimating, building prototypes, holding early milestone reviews, and developing preliminary designs before starting system development. Most of DOD’s major satellite programs are in mature phases of acquisition, and cost and schedule growth is not as widespread as it was in prior years. However, the satellites, ground systems, and user terminals are not optimally aligned, and launching satellites continues to be expensive. (See GAO, Space Acquisitions: DOD Faces Challenges in Fully Realizing Benefits of Satellite Acquisition Improvements, GAO-12-563T (Washington, D.C.: Mar. 21, 2012); and Space Acquisitions: DOD Delivering New Generations of Satellites, but Space System Acquisition Challenges Remain, GAO-11-590T (Washington, D.C.: May 11, 2011).) Most of DOD’s major satellite programs are in mature phases of acquisition—that is, the initial satellites have been designed, fabricated, and launched into orbit while additional satellites of the same design are being produced. Only two major satellite programs are in earlier phases of acquisition: the GPS III program and the PTSS program. For the portfolio of major satellite programs, new cost and schedule growth is not as widespread as it was in prior years, but DOD is still experiencing problems in these programs. For example, though the first two SBIRS satellites have launched, program officials are predicting a 14-month delay in the production of the third and fourth geosynchronous earth orbit (GEO) satellites due in part to technical challenges, parts obsolescence, and test failures. As we reported in March 2013, program officials are predicting about a $440 million cost overrun for these satellites. 
Also, the work performed to date for development of the first two GPS III satellites continues to cost more than DOD expected. Since the program entered system development, total program costs have increased approximately $180 million. The GPS III program office has attributed this to a variety of factors, such as inefficiencies in the development of the satellite bus and the navigation payload. Program officials stated that the cost growth was partially due to the program’s use of a back to basics approach, which they stated shifted costs to earlier in the acquisition as a result of more stringent parts and materials requirements. They anticipate these requirements will result in fewer problems later in the acquisition. Table 1 describes the status of the satellite programs we have been tracking in more detail. Though satellite programs are not experiencing cost and schedule problems as widespread as in years past, we have reported that ground control systems and user terminals in most of DOD’s major space systems acquisitions are not optimally aligned, leading to underutilized on-orbit satellite resources and limited capability provided to the warfighter. For example: Over 90 percent of the Mobile User Objective System’s (MUOS) planned capability is dependent on the development of compatible user terminals. Although the first MUOS satellite was launched over a year ago, operational testing of MUOS with production-representative user terminals is not expected to occur until the second quarter of fiscal year 2014. The SBIRS program revised its delivery schedule of ground capabilities to add increments that will provide the warfighter some capabilities sooner than 2018, but complete and usable data from a critical sensor will not be available until about 7 years after the satellite is on orbit. 
The Family of Advanced Beyond Line-of-Sight Terminals (FAB-T) program, which is developing user terminals intended to communicate with Advanced Extremely High Frequency (AEHF) satellites, has experienced numerous cost increases and schedule delays and is currently not synchronized with the AEHF program, which launched its second satellite last year while the FAB-T program has yet to deliver any capabilities. Current estimates show that FAB-T will reach initial operational capability for some requirements in 2019, about 5 years after AEHF is scheduled to reach its initial operational capability. The next generation GPS operational control system (OCX) is required for the launch of the first GPS III satellite because the existing ground control software is not compatible with the new GPS satellites. Realizing that the new ground control system would not be delivered in time to launch the first GPS III satellite, the Air Force added funding to the contract to accelerate development of the software that can launch and check out the GPS III satellite, leaving the other capabilities—like the ability to command and control the satellite—to be delivered in late 2016. Subsequently, the launch of the first GPS III satellite has been delayed to May 2015 to better synchronize with the availability of the launch software. Though there are inherent difficulties in aligning delivery of satellites, ground control systems, and user terminals, we reported in 2009 that the lack of synchronization between segments of space acquisition programs is largely the result of the same core issues that hamper acquisitions in general—requirements instability, funding instability, insufficient technology maturity, underestimation of complexity, and poor contractor oversight, among other issues. In addition, user terminals are not optimally aligned because of a lack of coordination and effective oversight over the many military organizations that either develop user terminals or have some hand in development. 
We recommended that the Secretary of Defense take a variety of actions to help ensure that DOD space systems provide more capability to the warfighter through better alignment and increased commonality, and to provide increased insight into ground asset costs. DOD generally agreed with these recommendations. Another acquisition challenge facing DOD is the cost of launching satellites into space. DOD has benefited from a long string of successful launches, including three military and four intelligence community satellites this year. However, each launch can range from $100 million to over $200 million. Additional money is spent to support launch infrastructure. An analysis we performed this year showed that from fiscal years 2013 through 2017, the government can expect to spend approximately $46 billion on launch activities. Meanwhile, we reported in prior years that too little was known about the factors that were behind cost and price increases. The Air Force has developed a new launch acquisition strategy which includes a block buy approach for future launches. At the same time, it is implementing an effort to introduce new launch providers. Both efforts are designed to help lower costs for launch, but they face challenges, which are discussed further in the next section. Over the past year, we have reported on DOD’s progress in closing knowledge gaps in its new Evolved Expendable Launch Vehicle (EELV) acquisition strategy, DOD’s efforts to introduce new launch providers, opportunities to help reduce satellite program costs, and the Air Force’s satellite control operations and modernization efforts with comparisons to commercial practices. These reports further highlight the successes and challenges that have faced the space community as it has sought to mitigate rising costs and deliver modernized capabilities. 
We reported in September 2011 that DOD needed to ensure the new acquisition strategy was based on sufficient information, as there were significant uncertainties relating to the health of the launch industrial base, contractor cost or pricing data, mission assurance costs and activities, numbers of launch vehicles needed, and future engine prices which were expected to double or triple in the near term. As a result, DOD was at risk of committing to an acquisition strategy—including an expensive, multi-billion dollar block buy of launch vehicle booster cores— before it had information essential to ensuring business decisions contained in the strategy were sound. Among other things, we recommended DOD assess engine costs and mission assurance activities, reassess the length of the proposed block buy, and consider how to address broader launch acquisition and technology development issues. DOD generally concurred with the recommendations. The Air Force issued its new EELV acquisition strategy in November 2011. Following our review, the National Defense Authorization Act for Fiscal Year 2012 required that DOD report to congressional committees a description of how it implemented the recommendations contained in our report and for GAO to assess that information. We reported in July 2012 that DOD had numerous efforts in progress to address the knowledge gaps and data deficiencies identified in our September 2011 report, such as completing or obtaining independent cost estimates for two EELV engines and completing a study of the liquid rocket engine industrial base. We reported that officials from DOD, NASA, and NRO had initiated several assessments to obtain needed information, and had worked closely to finalize new launch provider certification criteria for national security space launches. 
However, we found that more action was needed to ensure that launch mission assurance activities were not excessive, to identify opportunities to leverage the government’s buying power through increased efficiencies in launch acquisitions, and to strategically address longer-term technology investments. We reported that some information DOD was gathering could set the stage for longer-term strategic planning for the program, especially in critical launch technology research and development decisions, and that investing in a longer-term perspective for launch acquisitions was important to fully leverage the government’s buying power and maintain a healthy industrial base. In 2011, the Air Force, National Aeronautics and Space Administration (NASA), and National Reconnaissance Office (NRO) began implementing a coordinated strategy—called the Air Force Launch Services New Entrant Certification Guide (Guide)—to certify new entrants to provide launch capability on EELV-class launch vehicles. New entrants are launch companies that are working toward certifying their launch vehicle capabilities so that they may be allowed to compete with the current sole-source contractor for government launches. Launch vehicle certification is necessary to ensure that only proven, reliable launch vehicles will be used to launch government satellites. The House Armed Services Committee Report accompanying the National Defense Authorization Act for Fiscal Year 2013 directed GAO to review and analyze the implementation of the Guide. In February 2013, we reported that the Air Force based its Guide on existing NASA policy and procedures with respect to payload risk classification and launch vehicle certification. We also found that the Air Force, NASA, and NRO were working to coordinate and share information to facilitate launch vehicle certification efforts, but that each agency would determine for itself when certification had been achieved. As a result, some duplication and overlap of efforts could occur. 
We also found that the Air Force had added other prerequisites to certification for new entrants that were not captured within the Guide. In our April 2013 report on reducing duplication, overlap, and fragmentation within the federal government, we found that government agencies, including DOD, could achieve considerable cost savings on some missions by leveraging commercial spacecraft through innovative mechanisms. These mechanisms include hosted payload arrangements, where government instruments are placed on commercial satellites, and ride sharing arrangements, where multiple satellites share the same launch vehicle. We reported that DOD is among the agencies that are actively using or beginning to look at these approaches in order to save costs. For instance, DOD has two ongoing hosted payload pilot missions and has taken preliminary steps to develop a follow-on effort. We reported that the Commercially Hosted Infrared Payload Flight Demonstration Program answered the majority of the government’s technical questions through its commercial partnership, while saving it over $200 million over a dedicated technical demonstration mission. In addition, DOD is investigating ride sharing to launch GPS satellites beginning in fiscal year 2017, which could save well over $60 million per launch. While hosted payloads and ride sharing hold promise for providing lower-cost access to space in the future, we found that there are a variety of challenges. For instance, government agencies that have traditionally managed their own space missions face cultural challenges in using hosted payload arrangements, and in November 2010, we found that the DOD space community is highly risk-averse to adopting technologies from commercial providers that are new to DOD. Agency officials expressed concerns about using a commercial host for their payloads, noting that they would lose some control over their missions. 
DOD officials noted that their security and mission assurance requirements and processes may make integrating hosted payloads on commercial satellites more complicated to manage. Further, agency officials expressed concerns about scheduling launches and noted that commercial providers may not be flexible about changing launch dates if the instruments or satellites experience delays. (See GAO, Space Acquisitions: Challenges in Commercializing Technologies Developed under the Small Business Innovation Research Program, GAO-11-21 (Washington, D.C.: Nov. 10, 2010).) As agencies gain experience with hosted payloads, actual data on cost savings and cost avoidances should become more readily available. DOD manages the nation’s defense satellites, which are worth at least $13.7 billion, via ground stations located around the world. These ground stations and supporting infrastructure perform, in part, the function of maintaining the health of the satellite and ensuring it stays in its proper orbit (activities collectively known as satellite control operations). Some of DOD’s ground stations are linked together to form networks. The Air Force Satellite Control Network (AFSCN) is the largest of these networks. Based on direction in a House Armed Services Committee report and discussions with defense committee staff, we reviewed the Air Force’s satellite control operations and modernization efforts. We reported this month that DOD’s satellite control networks are fragmented and potentially duplicative. DOD has increasingly deployed standalone satellite control operations networks, which are designed to operate a single satellite system, as opposed to shared systems that can operate multiple kinds of satellites. Dedicated networks can offer many benefits to programs, including possible lower risks and customization for a particular program’s needs. 
However, they can also be more costly and have led to a fragmented, and potentially duplicative, approach which requires more infrastructure and personnel than shared operations. We reported that, according to Air Force officials, DOD has not worked to move its current dedicated operations towards a shared satellite control network, which could better leverage DOD investments. We also reported that the AFSCN was undergoing modernization efforts, but these would not increase the network’s capabilities. The efforts—budgeted at about $400 million over the next 5 years—primarily focus on sustaining the network at its current level of capability and do not apply a decade of research recommending more significant improvements to the AFSCN that would increase its capabilities. (See GAO, Satellite Control: Long-Term Planning and Adoption of Commercial Practices Could Improve DOD’s Operations, GAO-13-315 (Washington, D.C.: Apr. 18, 2013).) Additionally, we found that commercial practices like network interoperability, automation, and use of commercial off-the-shelf products have the potential to increase the efficiency and decrease the costs of DOD satellite control operations. Both DOD and commercial officials we spoke to agreed that there were opportunities for DOD to increase efficiencies and lower costs through these practices. Numerous studies by DOD and other government groups have recommended implementing or considering these practices, but DOD has generally not incorporated them into DOD satellite control operations networks. Finally, we found that DOD faced barriers that complicate its ability to make improvements to its satellite control networks and adopt commercial practices. 
For example, DOD did not have a long-term plan for satellite control operations; DOD lacked reliable data on the costs of its current control networks and was unable to isolate satellite control costs from other expenses; there was no requirement for satellite programs to establish a business case for their chosen satellite control operations approach; and even if program managers wanted to make satellite control operations improvements, they did not have the autonomy to implement changes at the program level. We concluded that until DOD begins addressing these barriers, the department’s ability to achieve significant improvements in satellite control operations capabilities would be hindered. We recommended that the Secretary of Defense direct future DOD satellite acquisition programs to determine a business case for proceeding with either a dedicated or shared network for that program’s satellite control operations, and to develop a department-wide long-term plan for modernizing the AFSCN and any future shared networks and for implementing commercial practices to improve DOD satellite control networks. DOD agreed with our recommendations. Congress and DOD continue to take steps towards reforming the defense acquisition system to increase the likelihood that acquisition programs will succeed in meeting planned cost and schedule objectives. For example, in December 2012, we reported that DOD had taken steps to implement fundamental Weapon Systems Acquisition Reform Act of 2009 (the Reform Act) provisions, including those for approving acquisition strategies and better monitoring weapon acquisition programs. The offices established by the Reform Act are in the process of developing, issuing, and implementing policies in response to the Reform Act’s provisions. 
We reported that DOD has taken steps to develop policy and guidance for the military services on conducting work in their respective areas; approve acquisition documents prior to milestone reviews; monitor and assess weapon acquisition program activities on a regular basis; and develop performance measures to assess acquisition program activities. Fundamentally, these Reform Act provisions should help (1) programs replace cost and schedule risk with knowledge and (2) set up more executable programs. Additionally, as part of its Better Buying Power initiative, DOD in November 2012 issued descriptions of 36 initiatives aimed at increasing productivity and efficiency in DOD acquisitions. DOD plans to solicit industry and stakeholder comments on these initiatives and ultimately to provide detailed requirements on implementing these initiatives to the acquisition workforce. Further, in January 2013, Congress passed the National Defense Authorization Act for Fiscal Year 2013, which required that DOD’s Under Secretary of Defense for Acquisition, Technology and Logistics submit a report on schedule integration and funding for each major satellite acquisition program. The report must include information on the segments of the programs; the amount of funding approved for the program and for each segment that is necessary for full operational capability of the program; and the dates by which the program and each segment are anticipated to reach initial and full operational capability, among other items. If the program is considered to be non-integrated, DOD must submit the required report to Congress annually. Tracking the schedules of major satellite programs and the ground systems and user equipment necessary to utilize the satellites may help DOD synchronize its systems. 
Additionally, officials from the Space and Intelligence Office, within the Office of the Secretary of Defense, told us that DOD has undertaken additional actions to improve space systems acquisitions since we last reported on its efforts in March 2012. According to these officials, the actions include chartering Defense Space Council architecture reviews, ongoing or completed, in key space mission areas such as resilient protected, narrowband, and wideband satellite communications; environmental monitoring; overhead persistent infrared; and space control. The architecture reviews are to inform DOD’s programming, budgeting, and prioritization for each space mission area. According to the officials, the Defense Space Council has brought a high-level focus on space issues through active senior-level participation in monthly meetings. DOD also participates in the newly re-formed Space Industrial Base Council, which is made up of senior-level personnel at agencies across the federal government that develop space systems. The purpose of the council is to understand how DOD’s and other agencies’ acquisition strategies impact the space industrial base. Additionally, according to the officials, the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics completed a major study on space acquisition reform to assess the root causes of poor performance in the space acquisition enterprise, focusing on the largest areas of cost growth. Furthermore, the officials stated that they are continuing efforts to buy blocks of AEHF and SBIRS satellites to realize savings that will be reinvested in high-priority research and development for space programs, to mitigate the challenges associated with planned use of critical technologies when a satellite system is in the early stages of development. 
The officials stated that these block buys will also encourage stable production and help to achieve affordability targets DOD has set for the majority of the large, critical space programs. While these actions are encouraging, we have not evaluated their effectiveness. The changes DOD has been making to leadership and oversight appear to be increasing senior management attention on space programs, but it is unclear whether the changes will be enough to overcome the problems we identified with fragmented leadership in the past. We have consistently found that the lack of a single authority for crosscutting missions, such as GPS or space situational awareness, has contributed to disconnects in the delivery of related systems as well as delays in the development of architectures and other tools important to balancing wants versus needs. Fragmented leadership has also been a contributing factor to other challenges we have noted in this statement—increasing launch service costs, synchronizing ground and satellite systems, and improving satellite operations. This condition persists. As part of our April 2013 annual report on reducing duplication, overlap, and fragmentation within the federal government, we reported that the administration has taken an initial step to improve interagency coordination, but has not fully addressed the issues of fragmented leadership and a lack of a single authority in overseeing the acquisition of space programs. Lastly, the Air Force and other offices within DOD are also considering different acquisition models for the future, including the use of hosted payloads as well as developing larger constellations of smaller, less-complex satellites that would require small, less-costly launch vehicles and offer more resilience in the face of growing threats to space assets. However, such a transition could also carry risk and require significant changes in acquisition processes, requirements setting, organizational structures, and culture. 
The long-standing condition of fragmented leadership and the risk-averse culture of space could stand in the way of making such a change. In conclusion, DOD has made credible progress in stabilizing space programs. However, there are challenges still to be dealt with, such as disconnects between the delivery of satellites and their corresponding ground control systems and user equipment, and the rising cost of launch. The ultimate challenge, however, will be preparing for the future, as budget constraints will require DOD to make tough tradeoff decisions in an environment where leadership is fragmented. We look forward to continuing to work with the Congress and DOD in assessing both today's and tomorrow's challenges in space acquisition and identifying actions that can be taken to help meet these challenges. Chairman Udall, Ranking Member Sessions, this completes my prepared statement. I would be happy to respond to any questions you and Members of the Subcommittee may have at this time. For further information about this statement, please contact Cristina Chaplain at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Art Gallegos, Assistant Director; Erin Cohen; Rich Horiuchi; Jeff Sanders; Roxanna Sun; Bob Swierczek; and Marie Ahearn.

Satellite Control: Long-Term Planning and Adoption of Commercial Practices Could Improve DOD's Operations. GAO-13-315. (Washington, D.C.: April 18, 2013).

2013 Annual Report: Actions Needed to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-13-279SP. (Washington, D.C.: April 9, 2013).

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-13-294SP. (Washington, D.C.: March 28, 2013).

High-Risk Series: An Update. GAO-13-283. (Washington, D.C.: February 2013).

Launch Services New Entrant Certification Guide. GAO-13-317R. (Washington, D.C.: February 7, 2013).

Evolved Expendable Launch Vehicle: DOD Is Addressing Knowledge Gaps in Its New Acquisition Strategy. GAO-12-822. (Washington, D.C.: July 26, 2012).

Environmental Satellites: Focused Attention Needed to Mitigate Program Risks. GAO-12-841T. (Washington, D.C.: June 27, 2012).

Polar-Orbiting Environmental Satellites: Changing Requirements, Technical Issues, and Looming Data Gaps Require Focused Attention. GAO-12-604. (Washington, D.C.: June 15, 2012).

Missile Defense: Opportunities Exist to Strengthen Acquisitions by Reducing Concurrency and Improving Parts Quality. GAO-12-600T. (Washington, D.C.: April 25, 2012).

Missile Defense: Opportunity Exists to Strengthen Acquisitions by Reducing Concurrency. GAO-12-486. (Washington, D.C.: April 20, 2012).

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-12-400SP. (Washington, D.C.: March 29, 2012).

Space Acquisitions: DOD Faces Challenges in Fully Realizing Benefits of Satellite Acquisition Improvements. GAO-12-563T. (Washington, D.C.: March 21, 2012).

2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. (Washington, D.C.: February 28, 2012).

Evolved Expendable Launch Vehicle: DOD Needs to Ensure New Acquisition Strategy Is Based on Sufficient Information. GAO-11-641. (Washington, D.C.: September 15, 2011).

Space Research: Content and Coordination of Space Science and Technology Strategy Need to Be More Robust. GAO-11-722. (Washington, D.C.: July 19, 2011).

Space and Missile Defense Acquisitions: Periodic Assessment Needed to Correct Parts Quality Problems in Major Programs. GAO-11-404. (Washington, D.C.: June 24, 2011).

Space Acquisitions: DOD Delivering New Generations of Satellites, but Space System Acquisition Challenges Remain. GAO-11-590T. (Washington, D.C.: May 11, 2011).

Space Acquisitions: Challenges in Commercializing Technologies Developed under the Small Business Innovation Research Program. GAO-11-21. (Washington, D.C.: November 10, 2010).

Global Positioning System: Challenges in Sustaining and Upgrading Capabilities Persist. GAO-10-636. (Washington, D.C.: September 15, 2010).

Space Acquisitions: DOD Poised to Enhance Space Capabilities, but Persistent Challenges Remain in Developing Space Systems. GAO-10-447T. (Washington, D.C.: March 10, 2010).

Defense Acquisitions: Challenges in Aligning Space System Components. GAO-10-55. (Washington, D.C.: October 29, 2009).

Space Acquisitions: Uncertainties in the Evolved Expendable Launch Vehicle Program Pose Management and Oversight Challenges. GAO-08-1039. (Washington, D.C.: September 26, 2008).

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Each year, DOD spends billions of dollars to acquire space-based capabilities that support military and other government operations. Just a few years ago, the majority of DOD's space programs were characterized by significant cost and schedule growth. In 2012, GAO reported that the worst of those space acquisition problems appeared to be behind the department. While new major satellite acquisitions face potential cost growth and schedule slips, these problems are not as widespread or significant as they were several years ago. However, the department still faces serious challenges, such as the high cost of launching satellites, fragmented satellite control operations, and disconnects between the fielding of satellites and the synchronization of ground systems. To address the progress DOD has made this year, this testimony focuses on (1) the current status and cost of DOD space systems acquisitions, (2) the results of GAO's space system-related reviews this past year, and (3) recent actions taken to address acquisition problems. This testimony is based on previously issued GAO products over the past 5 years, interviews with DOD officials, and an analysis of DOD funding estimates. GAO is not making recommendations in this testimony. However, in previous reports, GAO has generally recommended that DOD adopt best practices for developing space systems. DOD agreed and is in the process of implementing such practices. DOD agreed with GAO's characterization of recent actions it has taken to improve space acquisitions. Most of the Department of Defense's (DOD) major satellite programs are in mature phases of development; that is, the initial satellites have been designed, fabricated, and launched into orbit while additional satellites of the same design are being produced. For the portfolio of major satellite programs, new cost and schedule growth is not as widespread as it was in prior years, but DOD is still experiencing problems. 
For example, total program costs have increased approximately $180 million from a baseline of $4.1 billion for one of two satellite programs that are in the earlier phases of acquisition. Though satellite programs are not experiencing problems as widespread as in years past, ground control systems and user terminals in most of DOD's major space system acquisitions are not optimally aligned, leading to underutilized satellites and limited capability provided to the warfighter. For example, the development and fielding of user terminals for a Navy communications satellite program lag behind the launch of new satellites by more than a year. Additionally, the development of ground software needed to extract capabilities of new missile warning satellites is not expected to be complete until at least 2018, even though satellites are being launched. Another acquisition challenge facing DOD is the cost of launching satellites into space, which ranges from around $100 million to over $200 million per launch. Recent GAO space system-related reviews highlight other difficulties facing the space community as it has sought to mitigate rising costs and deliver modernized capabilities. For instance, in July 2012 GAO reported that DOD had numerous efforts in progress to address knowledge gaps and data deficiencies in its Evolved Expendable Launch Vehicle acquisition strategy. However, GAO also reported that more action was needed to identify opportunities to leverage the government's buying power through increased efficiencies in launch acquisitions. In April 2013 GAO reported that satellite control networks are fragmented and potentially duplicative. Moreover, GAO found that DOD faced barriers, such as a lack of long-term plans and reliable cost data, that complicate its ability to make improvements to its satellite control networks and adopt commercial practices. 
GAO recommendations included determining business cases for proceeding with either dedicated or shared satellite control networks for future satellite programs and implementing commercial practices to improve DOD satellite control networks. Congress and DOD continue to take steps towards reforming the defense acquisition system to increase the likelihood that acquisition programs will succeed in meeting planned cost and schedule objectives. For instance, in response to legislation passed in 2009, DOD has taken steps that should help improve the department's acquisition process and create more executable programs, such as developing performance measures to assess acquisition program activities. DOD has also undertaken actions such as chartering senior-level reviews of space programs and participating in governmentwide space councils. The changes DOD has been making to leadership and oversight appear to be increasing senior management attention on space programs, but it is unclear whether the changes will overcome the problems GAO has identified with fragmented leadership in the past.
Created in 2008, CPP was the primary initiative under TARP to help stabilize the financial markets and banking system by providing capital to qualifying regulated financial institutions through the purchase of senior preferred shares and subordinated debt. Rather than purchasing troubled mortgage-backed securities and whole loans, as initially envisioned under TARP, Treasury used CPP investments to strengthen financial institutions’ capital levels. Treasury determined that strengthening capital levels was the more effective mechanism to help stabilize financial markets, encourage interbank lending, and increase confidence in lenders and investors. Treasury believed that strengthening the capital positions of viable financial institutions would enhance confidence in the institutions themselves and the financial system overall and increase the institutions’ capacity to undertake new lending and support the economy. On October 14, 2008, Treasury allocated $250 billion of the original $700 billion in overall TARP funds for CPP. The allocation was subsequently reduced in March 2009 to reflect lower estimated funding needs, as evidenced by actual participation rates, and the program was closed to new investments on December 31, 2009. Under CPP, qualified financial institutions were eligible to receive an investment of between 1 and 3 percent of their risk-weighted assets, up to a maximum of $25 billion. In exchange for the investment, Treasury generally received senior preferred shares that would pay dividends at a rate of 5 percent annually for the first 5 years and 9 percent annually thereafter. CPP investments made in late 2008 began paying the higher dividend or interest rate in late 2013, whereas the remaining investments will see the increase begin sometime in 2014. EESA required that Treasury also receive warrants to purchase shares of common or preferred stock or a senior debt instrument to further protect taxpayers and help ensure returns on the investments. 
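The dividend step-up described above lends itself to a quick illustration. The sketch below is hypothetical (the function name and the $10 million figure are ours, not Treasury's); it simply applies the 5 percent/9 percent schedule:

```python
def cpp_annual_dividend(investment: float, years_since_issue: int) -> float:
    """Annual dividend on a CPP senior preferred investment:
    5 percent for the first 5 years, 9 percent annually thereafter."""
    rate = 0.05 if years_since_issue < 5 else 0.09
    return round(investment * rate, 2)

# Hypothetical $10 million investment made in late 2008:
print(cpp_annual_dividend(10_000_000, 4))  # final year at 5 percent -> 500000.0
print(cpp_annual_dividend(10_000_000, 5))  # first year at 9 percent -> 900000.0
```

The jump from $500,000 to $900,000 per year on a $10 million position is the pressure that, as discussed below, pushed many remaining participants to seek an exit.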
Institutions are allowed to repay CPP investments with the approval of their primary federal bank regulator and afterward to repurchase warrants. As of January 31, 2014, a total of 624 of the 707 institutions that originally participated in CPP, about 88 percent, had exited the program. Of the 624 institutions that exited CPP, 239 institutions repurchased their preferred shares or subordinated debentures in full (see fig. 1). Another 165 institutions refinanced their shares through other federal programs: 28 through the Community Development Capital Initiative (CDCI) and 137 through the Small Business Lending Fund (SBLF). An additional 162 institutions had their investments sold through auction, and 29 institutions went into bankruptcy or receivership. The remaining 29 had their investments sold by Treasury (25) or merged with another institution (4). Repayments and income from dividends, interest, and warrants from CPP investments have exceeded the amounts originally disbursed. Treasury disbursed $204.9 billion to 707 financial institutions nationwide from October 2008 through December 2009. As of January 31, 2014, Treasury had received $225 billion in repayments and income from its CPP investments, exceeding the amount originally disbursed by $20.1 billion (see fig. 2). The repayments and income amount includes $195.3 billion in repayments and $2.8 billion in auction sales of original CPP investments, as well as $18.9 billion in dividends, interest, and other income and $8.0 billion in warrants sold. After accounting for write-offs and realized losses totaling $4.7 billion, CPP had $2.1 billion in outstanding investments as of January 31, 2014. Treasury estimated a lifetime gain of $16.1 billion for CPP as of November 30, 2013. 
About half of the institutions (37 of 72) that responded to our questionnaire about the impact of the upcoming or recent increase in the dividend or interest rate on CPP securities stated that it impacted or is impacting their efforts to exit the CPP program and/or retire any outstanding CPP securities. The responses indicated that these institutions are taking a range of actions. Some institutions indicated that the increase led them to raise alternative capital. For example, one institution completed a public offering of common stock and used the proceeds to redeem its CPP shares. Others indicated that they are working with Treasury to participate in a future auction or attempting to negotiate with Treasury to restructure the CPP debt. Still others commented that the increase in the interest or dividend rate will increase the burden on the institution, make it more difficult to raise alternative capital, and further reduce their ability to exit the program. Thirty-three institutions responded that the increase did not or is not impacting their efforts to exit the program or retire CPP securities. As of January 31, 2014, the 83 remaining institutions accounted for the $2.1 billion in outstanding investments, or about 1 percent of the original investment. The outstanding investments were concentrated in a relatively small number of institutions. Specifically, the 10 largest remaining CPP investments accounted for $1.5 billion (73 percent) of outstanding investments, and 2 institutions accounted for more than half of this amount (see fig. 3). In contrast, the remaining $557 million (27 percent) was spread among the other 73 institutions. As of January 31, 2014, the number of states with at least one institution with CPP investments outstanding was 28, and the number of states with at least 5 such institutions was 7 (see fig. 4). California had the highest number of remaining CPP institutions with 8, followed by Illinois with 7. 
In terms of total CPP investments outstanding, Puerto Rico had the largest amount ($1.2 billion), followed by North Carolina ($108 million), Virginia ($91 million), and Florida ($74 million). Treasury began selling its investments in banks through auctions in March 2012 as a way to balance the speed of exiting the investments with maximizing returns for taxpayers. As of January 31, 2014, Treasury had conducted a total of 23 auctions and received a total of about 80 percent of the principal amount (see fig. 5). As figure 5 shows, the total proceeds from selling securities do not include any income received from repurchases, dividends, or other sources or any missed dividend or interest payments, the rights to which are sold with the securities. For example, if an institution whose securities were being sold by Treasury at auction had missed $100,000 worth of dividend payments, the purchaser of the securities would own the right to receive those past-due dividends if the institution can pay them. As of January 31, 2014, Treasury had sold all or part of its investments in 162 institutions through the auction process, including the rights to approximately $207 million in missed dividend and interest payments. In 2013, we reported that according to Treasury officials, the auction results reflected the potential risk associated with the liquidity of the investments, the credit quality of the financial institutions (including their ability to make future dividend or interest payments), and the prospect of receiving previously missed payments that had accrued. For example, later auctions have tended to include smaller institutions with more cumulative missed payments. In a few cases, the prospect of recouping these missed payments made the institutions particularly attractive to investors and helped raise the sale price of those securities above their par value. 
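The pricing dynamic described above, a discount to par offset by the value of accrued missed payments, can be sketched as follows. The function and all figures are hypothetical, intended only to show the mechanics, not Treasury's valuation model:

```python
def buyer_value(par: float, missed_payments: float, recovery_prob: float) -> float:
    """Rough expected value of an auctioned CPP position to a buyer:
    the par claim plus the right to accrued missed dividends or interest,
    weighted by the chance the institution can eventually pay them."""
    return par + recovery_prob * missed_payments

# Hypothetical position: $1 million par with $100,000 in missed dividends.
# The likelier a catch-up payment, the more a rational bid can exceed par:
print(buyer_value(1_000_000, 100_000, 0.8))  # -> 1080000.0
print(buyer_value(1_000_000, 100_000, 0.2))  # -> 1020000.0
```

This is consistent with the observation that securities carrying large, likely-recoverable missed payments sometimes sold above par, while riskier positions sold at a discount.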
Although Treasury has not generally recouped its full investment in individual institutions through the auctions, in 2013, Treasury officials told us that accepting a discount and transferring ownership of these institutions to the private sector was in the best interest of the taxpayer. Because of the inherent risk factors of these institutions, Treasury officials did not anticipate that they would be able to make full repayments in the near future. The officials added that had they chosen not to auction these positions, their values could have decreased later. Treasury officials also said that while auctions were generally priced at a discount to the principal amount, the prices were generally equal to or above Treasury’s internal valuations. Institutions that remain in CPP tend to be financially weaker than institutions that have exited the program and institutions that did not receive CPP capital. Our analysis considered various measures that describe banking institutions’ profitability, asset quality, capital adequacy, and ability to cover losses. We analyzed financial data on the 83 institutions remaining in CPP as of January 31, 2014, and 482 former CPP institutions, which we split into three groups: (1) those that repaid their investments, (2) those that exited through an auction, and (3) those that refinanced their investments through SBLF. The current and former CPP institutions in our analysis accounted for 565 of the 707 institutions that participated in CPP. We compared the 565 institutions to a non-CPP group (i.e., institutions that have not participated in CPP) of 7,177 active financial institutions for which financial information was available. All financial information generally reflects quarterly regulatory filings on December 31, 2013. Table 1 provides the results of our analysis of these measures, including the following. Mostly smaller institutions remain in the program and larger institutions tended to exit through repayment. 
For example, institutions that exited through repayment had a median asset size of $1.7 billion, compared with $548 million for those that refinanced through SBLF and $385 million for those that exited through an auction. In the aggregate, the remaining institutions were noticeably less financially healthy than each of the groups of former CPP participants. As a group, institutions that exited through auctions were significantly less financially healthy than the group of institutions that repaid their investments or refinanced through SBLF. Overall, the institutions that remain in CPP are less financially healthy than both the group of institutions that never participated in CPP and the aggregate group that had exited CPP. In particular, remaining CPP institutions had noticeably higher median Texas Ratios than each group of former CPP institutions as well as the non-CPP group. The Texas Ratio helps determine a bank’s likelihood of failure by comparing its troubled loans to its capital. The higher the ratio, the more likely the institution is to fail. As of December 31, 2013, remaining CPP institutions had a median Texas Ratio of 53.21, compared with 19.58 for former CPP institutions and 12.37 for the non-CPP group. Further, of the institutions that exited CPP, those that exited through auctions had the highest median Texas Ratio (33.90), compared with those that exited through full repayments (17.18) or by refinancing to SBLF (15.03). Profitability measures for remaining CPP institutions were lower than those for former CPP participants and the non-CPP group. For example, the median return on average assets measure shows how profitable a company is relative to its total assets and how efficient management is at using its assets to generate earnings. For the quarter ending December 31, 2013, remaining CPP institutions had a median return on average assets of 0.30, compared with 0.77 for former CPP institutions and 0.75 for the non-CPP group. 
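The Texas Ratio cited above has a simple mechanical form. Its exact inputs vary somewhat by analyst; the sketch below uses one common formulation (noncurrent loans plus other real estate owned, over tangible common equity plus loan loss reserves) with invented figures:

```python
def texas_ratio(noncurrent_loans: float, other_real_estate_owned: float,
                tangible_common_equity: float, loan_loss_reserves: float) -> float:
    """Texas Ratio as a percentage: troubled assets relative to the capital
    and reserves available to absorb losses. Higher values indicate a
    greater likelihood of failure; 100 is a common warning threshold."""
    troubled_assets = noncurrent_loans + other_real_estate_owned
    loss_cushion = tangible_common_equity + loan_loss_reserves
    return 100.0 * troubled_assets / loss_cushion

# Hypothetical bank: $40 million of troubled assets against a $75 million cushion.
print(round(texas_ratio(30e6, 10e6, 60e6, 15e6), 2))  # -> 53.33
```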
Further, among the institutions that had exited CPP, those that participated in Treasury’s auctions had the lowest return on average assets at 0.56, compared with 0.86 for those that repaid their investments and 0.76 for those that refinanced to SBLF. Remaining CPP institutions also held relatively more poorly performing assets. For example, remaining CPP institutions had a higher median percentage of noncurrent loans than former CPP institutions and the non-CPP group. As of December 31, 2013, a median of 2.83 percent of loans for remaining CPP institutions were noncurrent, compared with 1.33 percent for former CPP institutions and 1.05 percent for the non-CPP group. Remaining CPP institutions had a median ratio of net charge-offs to average loans (0.18) about equal to that of former CPP institutions (0.20), but a higher median ratio than the non-CPP group (0.09), as of December 31, 2013. For both of these ratios, the auction participants had higher values than institutions that made full repayments or refinanced to SBLF. Compared with former CPP institutions and the non-CPP group, remaining CPP institutions held less regulatory capital as a percentage of assets. Regulators require minimum amounts of capital to lessen an institution’s risk of default and improve its ability to sustain operating losses. Regulatory capital can be measured in several ways, but we focused on Tier 1 capital, which includes both a common equity capital ratio and a Tier 1 capital ratio, because it is the most stable form of regulatory capital. The Tier 1 risk-based capital ratio measures Tier 1 capital as a share of risk-weighted assets, and the common equity Tier 1 ratio measures common equity Tier 1 as a share of risk-weighted assets, which generally does not include TARP funds. Using these measures, the remaining CPP institutions had lower median Tier 1 capital levels than former CPP institutions and the non-CPP group. 
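The two capital ratios discussed here differ only in their numerator, with CPP preferred shares counting toward Tier 1 but not toward common equity Tier 1. A minimal sketch with invented balance-sheet figures (the function names and amounts are ours, not regulatory definitions in full):

```python
def tier1_ratio(tier1_capital: float, risk_weighted_assets: float) -> float:
    """Tier 1 risk-based capital ratio as a percentage of risk-weighted assets."""
    return 100.0 * tier1_capital / risk_weighted_assets

def common_equity_tier1_ratio(tier1_capital: float, preferred_in_tier1: float,
                              risk_weighted_assets: float) -> float:
    """Common equity Tier 1 ratio: non-common instruments, such as CPP
    preferred shares, are excluded from the numerator."""
    return 100.0 * (tier1_capital - preferred_in_tier1) / risk_weighted_assets

# Hypothetical institution: $120M of Tier 1 capital, $20M of it CPP preferred
# stock, against $1 billion of risk-weighted assets.
print(tier1_ratio(120e6, 1e9))                        # -> 12.0
print(common_equity_tier1_ratio(120e6, 20e6, 1e9))    # -> 10.0
```

The gap between the two ratios shows why an institution can look adequately capitalized on a Tier 1 basis while still depending on its CPP investment for that cushion.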
The remaining CPP institutions also had a median common equity Tier 1 capital ratio below that of the former CPP institutions and the non-CPP group. As of December 31, 2013, the median common equity Tier 1 capital ratio for remaining CPP institutions was 10.48 percent of risk-weighted assets, compared with 11.76 percent for former CPP institutions and 15.48 percent for the non-CPP group. Finally, remaining CPP institutions had significantly lower reserves for covering losses compared with former CPP institutions and the non-CPP group. As of December 31, 2013, the median ratio of reserves to nonperforming loans was lower for remaining CPP institutions (41.92) than for former CPP participants (70.37) and the non-CPP group (76.16). Of those institutions that have exited the program, auction participants had the lowest ratio (55.35), compared with 77.34 for those that repaid their investments and 84.06 for those that refinanced to SBLF. The number of CPP participating institutions missing dividend or interest payments in a given quarter increased steadily from 8 in February 2009 to 159 in August 2011 and has since declined each quarter, to 60 in November 2013 (see fig. 6). Almost 84 percent, or 75 of the 89 financial institutions remaining in CPP as of November 30, 2013, have missed a dividend payment. Most of the institutions with missed payments have missed them in several quarters. In particular, all but one of the institutions that missed payments in November 2013 had also missed payments in each of the previous three quarters. Moreover, the 60 institutions that missed payments in November 2013 had an average of 13 missed payments. Institutions can elect whether to pay dividends and may choose not to pay for a variety of reasons, including decisions that they or their federal and state regulators make to conserve cash and maintain (or increase) capital levels. 
Institutions are required to pay dividends only if they declare dividends, although unpaid cumulative dividends generally accrue and the institution must pay them before making payments to other types of shareholders, such as holders of common stock. However, investors view a company’s ability to pay dividends as an indicator of its financial strength and may see failure to pay full dividends as a sign of financial weakness. Showing a similar trend to missed dividend or interest payments, the number of CPP institutions on the Federal Deposit Insurance Corporation’s (FDIC) “problem bank list” has decreased in recent months after months of steady increases. This list is a compilation of banks with demonstrated financial, operational, or managerial weaknesses that threaten their continued financial viability and is publicly reported on a quarterly basis. As of December 31, 2013, 47 CPP institutions were on the problem bank list (see fig. 7). The number of these institutions increased every quarter beginning in March 2009, hitting a high of 134 in June 2011, even as the number of institutions participating in CPP declined. As figure 7 shows, the number of problem banks fell slightly for the first time in the third quarter of 2011 and has declined to 47 as of December 31, 2013. Federal and state bank regulators may not allow institutions on the problem bank list to make dividend payments in an effort to preserve their capital and promote safety and soundness. These observations are consistent with the analysis in our May 2013 and March 2012 reports, which also showed that the remaining CPP institutions were financially weaker than institutions that had exited the program and institutions that did not receive CPP capital. We provided a draft of this report to Treasury for its review and comment. Treasury provided written comments that we have reprinted in appendix II. In its written comments, Treasury generally concurred with our findings. 
Treasury noted that it had realized a positive return of $20.19 billion as of April 1, 2014, and that 71 institutions remained in the program, representing a remaining investment of $1.96 billion. Treasury also emphasized its commitment to keeping the public informed of its progress in winding down CPP. We are sending copies of this report to the Special Inspector General for TARP, interested congressional committees and members, and Treasury. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact A. Nicole Clowers at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The objectives of our report were to examine (1) the status of the Capital Purchase Program (CPP), including repayments and other proceeds, as well as investments outstanding; and (2) the financial condition of institutions remaining in CPP. To assess the status of CPP at the program level, we analyzed data from the Department of the Treasury (Treasury). In particular, we used Treasury’s January 2014 Monthly Report to Congress to determine the dollar amounts of outstanding investments, the number of remaining and former participants, and the geographical distribution of each as of January 31, 2014. To assess the financial condition of institutions that received investments under CPP, we used data from Treasury’s Dividends and Interest reports from February 2009 through November 2013 to determine the extent to which participants had missed payments throughout the life of the program. 
To assess whether the upcoming 2014 increase in the dividend or interest rate on CPP securities was impacting CPP participants’ efforts to exit the program and/or retire CPP securities, we sent an email containing two questions to all current CPP participants (89) and any past CPP participants that raised capital in calendar year 2013. The first question asked whether the increase had affected them (yes or no), and the second asked those answering yes to describe the actions taken. We sent the questionnaire to 104 institutions and received responses from 72. We used the “actions taken” responses to provide examples of how the increase in the dividend or interest rate on CPP securities is impacting CPP participants. We defined current CPP participants to be those institutions that Treasury classifies as “full investment outstanding; warrants outstanding,” “full investment outstanding; warrants not outstanding,” and “sold in part, warrants outstanding” in its November 20, 2013, TARP transaction report. We identified those CPP participants that raised capital in calendar year 2013 using the SNL database. We also obtained from the Federal Deposit Insurance Corporation (FDIC) summary information on its quarterly problem bank list to show the trend of CPP institutions appearing on the list from December 2008 through December 2013. We used financial measures for depository institutions that we had identified in our previous reporting on CPP. These measures help demonstrate an institution’s financial health as it relates to a number of categories, including profitability, asset quality, capital adequacy, and loss coverage. We obtained such financial data for depository institutions using a private financial database provided by SNL Financial that contains publicly filed regulatory and financial reports. 
We merged the data with SNL Financial’s CPP participant list to create the three comparison groups: remaining CPP institutions, former CPP institutions, and a non-CPP group composed of all institutions that did not participate in CPP. We analyzed financial data on the 83 institutions remaining in CPP as of January 31, 2014, and 482 former CPP institutions that exited CPP through full repayments, conversion to the Small Business Lending Fund, or Treasury’s sale of its investments through an auction, accounting for 565 of the 707 CPP participants. We identified the 83 institutions remaining in CPP as of January 31, 2014, using Treasury’s January 2014 Monthly Report to Congress. The 142 CPP institutions our analysis excluded had no data available in SNL Financial, had been acquired, or were defunct. We compared the remaining and former CPP institutions to a non-CPP group of 7,177 active financial institutions for which financial information was available. We chose to present median values. Financial data were available from SNL Financial for 440 of the 565 CPP institutions, and we accounted for the remaining 125 institutions using SNL Financial information for the holding company or its largest subsidiary. Although this approach has limitations, such as excluding other financial subsidiaries, we deemed it to be sufficient for the purpose of our work. All financial information reflects quarterly regulatory filings on December 31, 2013, unless otherwise noted. We downloaded all financial data from SNL Financial on March 4, 2014. Finally, we leveraged our past reporting on the Troubled Asset Relief Program (TARP), as well as that of the Special Inspector General for TARP, as appropriate. We determined that the financial information used in this report, including CPP program data from Treasury and financial data on institutions from SNL Financial, was sufficiently reliable to assess the condition and status of CPP and institutions that participated in the program. 
For example, we tested the Office of Financial Stability’s internal controls over financial reporting as they relate to our annual audit of the office’s financial statements and found the information to be sufficiently reliable based on the results of our audits of fiscal years 2009, 2010, 2011, and 2012 financial statements for TARP. We have assessed the reliability of SNL Financial data—which are obtained from financial statements submitted to the banking regulators—as part of previous studies and found the data to be reliable for the purposes of our review. We verified that no changes had been made that would affect the data’s reliability. We conducted this performance audit from December 2013 to April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Karen Tremba (Assistant Director), Emily Chalmers, William Chatlos, Chris Forys, Matthew Keeler, Risto Laboski, Marc Molino, and Patricia Moye made significant contributions to this report.
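The grouping-and-median approach described in the methodology above can be sketched in a few lines of code. The following is a minimal illustration only; the institution names and return-on-assets figures below are invented placeholders, not data from SNL Financial:

```python
from statistics import median

# Hypothetical institutions: (name, comparison group, return on assets, %).
# Values are illustrative placeholders, not actual regulatory data.
institutions = [
    ("Bank A", "remaining CPP", -0.4),
    ("Bank B", "remaining CPP", 0.1),
    ("Bank C", "remaining CPP", -0.2),
    ("Bank D", "former CPP", 0.8),
    ("Bank E", "former CPP", 1.0),
    ("Bank F", "non-CPP", 0.9),
    ("Bank G", "non-CPP", 1.1),
]

def median_by_group(rows):
    """Collect each comparison group's values and report the group median,
    mirroring the report's use of median values for each group."""
    groups = {}
    for _name, group, value in rows:
        groups.setdefault(group, []).append(value)
    return {g: median(vals) for g, vals in groups.items()}

result = median_by_group(institutions)
print(result)
```

Reporting medians rather than means keeps a handful of outlier institutions from dominating a group comparison, which matters when group sizes differ as widely as 83, 482, and 7,177.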
CPP was established as the primary means of restoring stability to the financial system under the Troubled Asset Relief Program (TARP). Under CPP, Treasury invested almost $205 billion in 707 eligible financial institutions between October 2008 and December 2009. CPP recipients have made dividend and interest payments to Treasury on the investments. TARP's authorizing legislation requires GAO to report every 60 days on TARP activities. This report examines (1) the status of CPP and (2) the financial condition of institutions remaining in the program. To assess the program's status, GAO reviewed Treasury reports on the status of CPP. GAO also used financial and regulatory data to compare the financial condition of institutions remaining in CPP with those that had exited the program and those that did not participate. GAO also obtained information through a questionnaire from CPP participants as of November 20, 2013, and former CPP participants that raised capital in calendar year 2013. GAO received completed questionnaires from 72 of the 104 institutions. GAO provided a draft of this report to Treasury for its review and comment. Treasury generally concurred with GAO's findings. The Department of the Treasury (Treasury) continues to make progress in winding down the Capital Purchase Program (CPP). As of January 31, 2014, Treasury's data showed that 624 of the original 707 institutions, or about 88 percent, had exited CPP. Treasury had received about $225 billion from its CPP investments, exceeding the approximately $205 billion it had disbursed. Most institutions exited by repurchasing their preferred shares in full or by refinancing their investments through other federal programs. Treasury also continues to sell its investments in the institutions through auctions, a strategy first implemented in March 2012 to expedite the exit of a number of CPP participants. 
As of January 31, 2014, Treasury had sold all or part of its CPP investment in 162 institutions through auctions, receiving a total of about 80 percent of the principal amount. A relatively small number of the remaining 83 institutions accounted for most of the outstanding investments. Specifically, 10 institutions accounted for $1.5 billion, or about 73 percent, of the $2.1 billion in outstanding investments. Treasury estimated a lifetime gain of $16.1 billion for CPP as of November 30, 2013. GAO's analysis of financial data found that the institutions remaining in CPP were generally less financially healthy than those that had exited or had never participated. In particular, the remaining CPP institutions tended to be less profitable, hold riskier assets, and have lower capital levels and reserves. Most remaining participants also had missed scheduled dividend or interest payments, with 60 missing their November 2013 payment. Further, 47 of the remaining CPP institutions were on the Federal Deposit Insurance Corporation's problem bank list in December 2013—that is, they demonstrated financial, operational, or managerial weaknesses that threatened their continued financial viability. Institutions that continue to miss dividend payments or find themselves on the problem bank list may have difficulty fully repaying their CPP investments because federal and state bank regulators may not allow these institutions to make dividend payments or repurchase outstanding CPP shares, in an effort to preserve their capital and promote safety and soundness.
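As a quick arithmetic check, the headline CPP status figures above can be reproduced from the rounded amounts in the text. This is a sketch only; small differences from the report's percentages reflect rounding of the dollar amounts:

```python
# Back-of-the-envelope check of the CPP status figures cited above,
# using the rounded dollar amounts reported in the text.
original_participants = 707
exited = 624
disbursed_bn = 205  # approximate amount Treasury invested, in $ billions
received_bn = 225   # approximate amount Treasury collected, in $ billions

exit_share = exited / original_participants   # about 0.88, i.e., "about 88 percent"
net_proceeds_bn = received_bn - disbursed_bn  # roughly $20 billion above disbursements

# Concentration among the remaining investments: 10 institutions held
# $1.5 billion of the $2.1 billion still outstanding. Dividing the
# rounded figures gives about 71 percent; the report's "about 73 percent"
# reflects the unrounded amounts.
top10_share = 1.5 / 2.1

print(f"{exit_share:.0%} exited; ${net_proceeds_bn} billion net collected")
```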
Recovery Act funds are being distributed to states, localities, other entities, and individuals through a combination of formula and competitive grants and direct assistance. Nearly half of the approximately $580 billion associated with Recovery Act spending programs will flow to states and localities through about 50 state formula and discretionary grants as well as about 15 entitlement and other countercyclical programs. As noted above, three of the largest streams of funds flowing to states and localities are (1) the temporary increase in FMAP funding, which will provide states with approximately $87 billion in assistance; (2) the State Fiscal Stabilization Fund, which will provide nearly $54 billion to help state and local governments avert budget cuts, primarily in education; and (3) highway infrastructure investment funds of approximately $27 billion. Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the FMAP. Across states, the FMAP may range from 50 to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. Under the Recovery Act, states are eligible for an increased FMAP for expenditures that states make in providing services to their Medicaid populations. The Recovery Act provides eligible states with this increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, CMS made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. 
Generally, for fiscal year 2009 through the first quarter of fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. For the first two quarters of 2009, the increases in the FMAP for the 16 states and the District ranged from 7.09 percentage points in Iowa to 11.59 percentage points in California. (See table 1.) The Recovery Act provides approximately $48 billion to fund grants to states, localities, regional authorities, and others for transportation projects, of which the largest piece is $27.5 billion for highway and related infrastructure investments. The Recovery Act largely provides for increased transportation funding through existing programs, such as the Federal-Aid Highway Surface Transportation Program, a federally funded, state-administered program. Under this program, funds are apportioned annually to each state department of transportation (or equivalent) to construct and maintain roadways and bridges on the federal-aid highway system. The Federal-Aid Highway Program refers to the separately funded grant programs, mostly funded by formula, administered by the Federal Highway Administration (FHWA) in the U.S. Department of Transportation. The Recovery Act provided $53.6 billion in appropriations for the State Fiscal Stabilization Fund (SFSF) to be administered by the U.S. Department of Education. The Recovery Act requires that the Secretary of Education set aside $5 billion for State Incentive Grants, referred to by the department as the Race to the Top program, and the establishment of an Innovation Fund. 
After reserving these and certain other funds, the remaining funds are to be distributed to states by formula, with 61 percent of the state award based on the state’s relative share of the population aged 5 to 24 and 39 percent based on the state’s relative share of the total U.S. population. The Recovery Act specifies that 81.8 percent (about $39.5 billion) of these remaining funds are to be distributed to states for support of elementary, secondary, and postsecondary education and early childhood education programs. The remaining 18.2 percent of SFSF (about $8.8 billion) is available for public safety and other government services, including for educational purposes. The Department of Education announced on April 1, 2009, that it will award the SFSF in two phases. The first phase—$32.6 billion—represents about two-thirds of the SFSF. Figure 1 shows the distribution of Recovery Act funds to states by broad functional categories over the next several years. The timeline of Recovery Act spending has been a key issue in the debate and design of the Recovery Act because of the elapsed time between when policy changes are first proposed and when actual spending begins to flow from enacted changes. Figure 2 shows the projected timing of state and local-administered Recovery Act spending. Over time, the programmatic focus of Recovery Act spending will change. As shown in figure 3, about two-thirds of Recovery Act funds expected to be spent by states in the current 2009 fiscal year will be health related, primarily temporary increases in Medicaid FMAP funding. Health, education, and transportation are estimated to account for approximately 90 percent of fiscal year 2009 Recovery Act funding for states and localities. However, by fiscal year 2012, transportation will be the largest share of state and local Recovery Act funding. 
Taken together, transportation spending, along with investments in the community development, energy, and environmental areas that are geared more toward creating long-run economic growth opportunities, will represent approximately two-thirds of state and local Recovery Act funding in 2012. The administration has stipulated that every taxpayer dollar spent on economic recovery must be subject to unprecedented levels of transparency and accountability. To that end, the Recovery Act established the Recovery Accountability and Transparency Board to coordinate and conduct oversight of funds distributed under the Act in order to prevent fraud, waste, and abuse. The Board includes a Chairman appointed by the President and ten Inspectors General specified by the Act. The Board has a series of functions and powers to assist it in its mission of providing oversight and promoting transparency regarding expenditure of funds at all levels of government. The Board will report on the use of Recovery Act funds and may also make recommendations to agencies on measures to avoid problems and prevent fraud, waste, and abuse. The Board is also charged under the Act with establishing and maintaining a website, www.recovery.gov (Recovery.gov), to foster greater accountability and transparency in the use of covered funds. The website currently includes overview information about the Recovery Act, a timeline for implementation, a frequently asked questions page, and an announcement page that is to be regularly updated. The administration plans to develop the site to encompass information about available funding, distribution of funds, and major recipients. The website is required to include plans from federal agencies; information on federal awards of formula grants and awards of competitive grants; and information on federal allocations for mandatory and other entitlement programs by state, county, or other appropriate geographical unit. 
Eventually, prime recipients of Recovery Act funding will provide information on how they are using their federal funds. Currently, Recovery.gov features projections for how, when, and where the funds will be spent, as well as which states and sectors of the economy are due to receive what proportion of the funds. As money starts to flow, additional data will become available. In addition to Recovery.gov, OMB has also issued guidance directing executive branch agencies to develop a dedicated portion of their web sites for information related to the recovery. To ensure a high level of accountability, OMB has issued guidance to the heads of federal departments and agencies for implementing and managing activities enacted under the Recovery Act. OMB has also issued for comment detailed reporting requirements for Recovery Act fund recipients that include the number of jobs created and jobs retained as a result of Recovery Act funding. OMB’s guidance documents are available on Recovery.gov. In addition, the Civilian Acquisition Council and the Defense Acquisition Regulations Council have issued an interim rule revising the Federal Acquisition Regulation (FAR) to require a contract clause that implements these reporting requirements for contracts funded with Recovery Act dollars. The Recovery Act also assigns GAO a range of responsibilities to help promote accountability and transparency. Some are recurring requirements such as providing bimonthly reviews of the use of funds made available under Division A of the Recovery Act by selected states and localities and reviews of quarterly reports on job creation and job retention as reported by Recovery Act fund recipients. Other requirements include targeted studies in several areas such as small business lending, education, and trade adjustment assistance. 
We completed the first of these mandates on April 3, 2009, by announcing the appointment of 13 members to the Health Information Technology Policy Committee, a new advisory body established by the Recovery Act. The committee will make recommendations on creating a policy framework for the development and adoption of a nationwide health information technology infrastructure, including standards for the exchange of patient medical information. On April 16, 2009, we issued a report completing a second mandate to report on the actions of the Small Business Administration (SBA) to, among other things, increase liquidity in the secondary market for SBA loans. Officials in the 16 selected states and the District indicated they have used certain Recovery Act funds and continue planning for the use of additional funds they have not yet received. States’ existing intergovernmental programs—such as Medicaid, transportation, and education—have been among the first programs to receive Recovery Act funds. Planning continues for the use of Recovery Act funds for these and other program areas. States’ planning actions include appointing recovery czars, establishing task forces and other entities, and developing public websites to solicit input and publicize selected projects. In some cases, according to state officials, state legislation will be required to receive and expend funds or to make required changes to program eligibility before the funds can be used. States’ approaches to planning for Recovery Act funds also vary in response to state legislative and budget processes regarding the use of federal funds and states’ fiscal situations. The three largest programs making funds available to the states and localities so far have been the Medicaid FMAP, highway funds, and the SFSF. Table 2 shows the breakout of funding available for these three programs in the 16 selected states and the District that GAO visited. 
Recovery Act funding for these 17 jurisdictions accounts for a little less than two-thirds of total Recovery Act funding for these three programs. Under the Recovery Act, states are eligible for an increased FMAP for expenditures that states make in providing services to their Medicaid populations. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. Generally, for fiscal year 2009 through the first quarter of fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. In our sample of 16 states and the District, officials from 15 states and the District indicated that they had drawn down increased FMAP grant awards totaling $7.96 billion for the period of October 1, 2008, through April 1, 2009—47 percent of their increased FMAP grant awards. In our sample, the extent to which individual states and the District accessed these funds varied widely, ranging from 0 percent in Colorado to about 66 percent in New Jersey. Nationally, the 50 states and several territories combined had drawn down approximately $11 billion as of April 1, 2009, which represents almost 46 percent of the increased FMAP grants awarded for the first three quarters of federal fiscal year 2009 (Table 3). In order for states to qualify for the increased FMAP available under the Recovery Act, they must meet certain requirements, including the following. Maintenance of Eligibility: In order to qualify for the increased FMAP, states generally may not apply eligibility standards, methodologies, or procedures that are more restrictive than those in effect under their state Medicaid programs on July 1, 2008. 
In guidance to states, CMS noted that examples of restrictions of eligibility could include (1) the elimination of any eligibility groups since July 1, 2008, or (2) changes in an eligibility determination or redetermination process that are more stringent than what was in effect on July 1, 2008. States that fail to initially satisfy the maintenance of eligibility requirements have an opportunity to reinstate their eligibility standards, methodologies, and procedures before July 1, 2009, and become retroactively eligible for the increased FMAP. Compliance with Prompt Payment: Under federal law, states are required to pay claims from health practitioners promptly. Under the Recovery Act, states are prohibited from receiving the increased FMAP for days during any period in which that state has failed to meet this requirement. Although the increased FMAP is not available for any claims received from a practitioner on each day the state is not in compliance with these prompt payment requirements, the state may receive the regular FMAP for practitioner claims received on days of non-compliance. CMS officials told us that states must attest that they are in compliance with the prompt payment requirement but that enforcement is complicated due to differences across states in the methods used to track this information. CMS officials plan to issue guidance on reporting compliance with the prompt payment requirement and are currently gathering information from states on the methods they use to determine compliance. Rainy Day Funds: States are not eligible for an increased FMAP if any amounts attributable (either directly or indirectly) to the increased FMAP are deposited or credited into any reserve or rainy day fund of the state. Percentage Contributions from Political Subdivisions: In some states, political subdivisions—such as cities and counties—may be required to help finance the state’s share of Medicaid spending. 
States that have such financing arrangements are not eligible to receive the increased FMAP if the percentage contributions required to be made by a political subdivision are greater than what was in place on September 30, 2008. In addition to meeting the above requirements, states that receive the increased FMAP must submit a report to CMS no later than September 30, 2011, that describes how the increased FMAP funds were expended, in a form and manner determined by CMS. In guidance to states, CMS has stated that further guidance will be developed for this reporting requirement. CMS guidance to states also indicates that, for federal reimbursement, increased FMAP funds must be drawn down separately, tracked separately, and reported to CMS separately. Officials from several states told us they require additional guidance from CMS on tracking receipt of increased FMAP funds and on reporting on the use of these funds. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of this increased FMAP may reduce states’ share of spending for their Medicaid programs. States have reported using these available funds for a variety of purposes. In our sample, individual states and the District reported that they would use the funds to maintain their current level of Medicaid eligibility and benefits, cover their increased Medicaid caseloads (primarily populations that are sensitive to economic downturns, including children and families), and offset their state general fund deficits, thereby avoiding layoffs and other measures detrimental to economic recovery. Ten states and the District reported using these funds to maintain program eligibility. Nine states and the District reported using these funds to maintain benefits. 
Specifically, Massachusetts reported that during a previous financial downturn, the state limited the number of individuals eligible for some services and reduced certain program benefits that were optional for the state to cover. However, with the funds made available as a result of the increased FMAP, the state did not have to make such reductions. Similarly, New Jersey reported that the state used these funds to eliminate premiums for certain children in its State Children’s Health Insurance Program, allowing it to retain coverage for children whose enrollment in the program would otherwise have been terminated for non-payment of premiums. Nine states and the District reported using these funds to cover increases to their Medicaid caseloads, primarily among populations that are sensitive to economic downturns, such as children and families. For example, New Jersey indicated that these funds would help the state meet the increased demand for Medicaid services. According to a New Jersey official, due to significant job losses, the state’s proposed 2010 budget would not have accommodated all the applicants newly eligible for Medicaid; the funds made available as a result of the increased FMAP have allowed the state to maintain a “safety net” of coverage for uninsured and unemployed people. In addition, 10 states and the District indicated that the increased funds made available would help offset deficits in their general funds. Pennsylvania reported that because funding for its Medicaid program is derived, in part, from state revenues, program funding levels fluctuate as the economy rises and falls. However, the state was able to use the funds made available to offset the effects of lower state revenues. Arizona officials also reported that the state used funds made available as a result of the increased FMAP to pay down some of its debt and make payroll payments, thus allowing the state to avoid a serious cash flow problem. 
Finally, six states in our sample also reported that they used funds made available as a result of the increased FMAP to comply with prompt payment requirements. Specifically, Illinois reported that these funds will permit the state to move from a 90-day payment cycle to a 30-day payment cycle for all Medicaid providers. Three states also reported using these funds to restore or to increase provider payment rates. In our sample, many states and the District indicated that they need additional guidance from CMS regarding eligibility for the increased FMAP funds. Specifically, five states raised concerns about whether certain programmatic changes could jeopardize the state’s eligibility for these funds. For example, Texas officials indicated that guidance from CMS is needed regarding whether certain programmatic changes being considered by Texas, such as a possible extension of the program’s eligibility period, would affect the state’s eligibility for increased FMAP funds. Similarly, Massachusetts wanted clarification from CMS as to whether certain changes in the timeframe for the state to conduct eligibility redeterminations would be considered a more restrictive standard. Four states also reported that they wanted additional guidance from CMS regarding policies related to the prompt payment requirements or changes to the non-federal share of Medicaid expenditures. For example, California officials noted that the state reduced Medicaid payments for in-home support services, but that counties could voluntarily choose to increase these payments without altering the cost-sharing arrangements between the counties and the state. The state wants clarification from CMS on whether such an arrangement would be allowable in light of the Recovery Act requirements regarding the percentage of contributions by political subdivisions within a state toward the non-federal share of expenditures. 
In response to states’ concerns regarding the need for guidance, CMS told us that it is in the process of developing draft guidance on the prompt payment provisions in the Recovery Act. One official noted that this guidance will include defining the term “practitioner,” describing the types of claims applicable under the provision, and addressing the principles that are integral to determining a state’s compliance with prompt payment requirements. Additionally, CMS plans to have a reporting mechanism in place through which states would report compliance under this provision. With regard to Recovery Act requirements regarding political subdivisions, CMS described its current activities for providing guidance to states. Due to the variability of state operations, funding processes, and political structures, CMS has been working with states on a case-by-case basis to discuss particular issues associated with this provision and to address the particular circumstances of each state. A CMS official told us that if an issue or circumstance had applicability across the states, or if there were broader themes having national significance, CMS would consider issuing guidance. Of the $27.5 billion provided in the Recovery Act for highway and related infrastructure investments, $26.7 billion is provided to the 50 states for restoration, repair, construction, and other activities allowed under the Federal-Aid Highway Surface Transportation Program and for other eligible surface transportation projects. Nearly one-third of these funds are required to be suballocated to metropolitan and other areas. 
States must follow the requirements of the existing program. In addition, the Recovery Act requires the governor to certify that the state will maintain its current level of transportation spending, and requires the governor or other appropriate chief executive to certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. The certifications must include a statement of the amount of funds the state planned to expend from state sources as of the date of enactment, during the period beginning on the date of enactment through September 30, 2010, for the types of projects that are funded by the appropriation. The U.S. Department of Transportation is reviewing the governors’ certifications regarding maintaining their level of effort for highways. According to the department, of the 16 states in our review and the District of Columbia, three states had submitted a certification free of explanatory or conditional language—Arizona, Michigan, and New York. Eight submitted “explanatory” certifications—certifications that used language that articulated assumptions used or stated the certification was based on the “best information available at the time” but did not clearly condition the expected maintenance of effort on the assumptions proving true or the information not changing in the future. Six submitted “conditional” certifications, meaning that the certification was subject to conditions or assumptions, future legislative action, future revenues, or other conditions. Recovery Act funding for highway infrastructure investment differs from the usual practice in the Federal-Aid Highway Program in a few important ways. Most significantly, for projects funded under the Recovery Act, the federal share is 100 percent; under the usual practice, the federal share is typically 80 percent, with a 20 percent state match. 
Under the Recovery Act, priority is also to be given to projects that are projected to be completed within three years. In addition, within 120 days after the apportionment by the Department of Transportation to the states (March 2, 2009), and specifically before June 30, 2009, 50 percent of the apportioned funds must be obligated. Any amount of this 50 percent of apportioned funding that is not obligated may be withdrawn by the Secretary of Transportation and redistributed to other states that have obligated their funds in a timely manner. Furthermore, one year after enactment, the Secretary will withdraw any remaining unobligated funds and redistribute them based on states’ need and ability to obligate additional funds. These provisions apply only to those funds apportioned to the states and not to those funds required by the Recovery Act to be suballocated to metropolitan, regional, and local organizations. Finally, states are required to give priority to projects that are located in economically distressed areas, as defined by the Public Works and Economic Development Act of 1965, as amended. In March 2009, FHWA directed its field offices to provide oversight and take appropriate action to ensure that states gave adequate consideration to economically distressed areas in selecting projects. Specifically, field offices were directed to discuss this issue with the states and to document their reviews and oversight of this process. States are undertaking planning activities to identify projects, obtain approval at the state and federal levels, and move projects to contracting and implementation. However, because of the steps necessary before implementation, states generally had not yet expended significant amounts of Recovery Act funds. States are required to reach agreement with the Department of Transportation (DOT) on a list of projects. 
States will then request reimbursement from DOT as the state makes payments to contractors working on approved projects. As of April 16, 2009, the U.S. Department of Transportation reported that nationally $6.4 billion of the $26.6 billion in Recovery Act highway infrastructure investment funding provided to the states had been obligated, meaning that DOT and the states had reached agreements on projects worth this amount. As shown in Table 4 below, for the locations that GAO reviewed, the extent to which DOT had obligated funds apportioned to the states and Washington, D.C., ranged from 0 to 65 percent. For two of the states, DOT had obligated over 50 percent of the states’ apportioned funds; for four, it had obligated 30 to 50 percent of the states’ funds; for nine states, it had obligated under 30 percent of funds; and for three, it had not obligated any funds. Although most of the states we visited had not yet expended significant funds, they were planning to solicit bids in April or May. They also stated that they planned to meet statutory deadlines for obligating the highway funds. A few states had already executed contracts. As of April 1, 2009, the Mississippi Department of Transportation (MDOT), for example, had signed contracts for 10 projects totaling approximately $77 million. These projects include the expansion of State Route 19 in eastern Mississippi into a four-lane highway. This project fulfills part of MDOT’s 1987 Four-Lane Highway Program, which seeks to link every Mississippian to a four-lane highway within 30 miles or 30 minutes. Similarly, as of April 15, 2009, the Iowa Department of Transportation had competitively awarded 25 contracts valued at $168 million. Most often, however, we found that highway funds in the states and the District had not yet been spent because highway projects were at earlier stages of planning, approval, and competitive contracting. 
For example, in Florida, the Department of Transportation (FDOT) plans to use the Recovery Act funds to accelerate road construction programs in its preexisting 5-year plan, which will result in some projects being reprioritized and selected for earlier completion. On April 15, 2009, the Florida Legislative Budget Commission approved the Recovery Act-funded projects that FDOT had submitted. For the most part, states were focusing their selection of Recovery Act-funded highway projects on construction and maintenance, rather than planning and design, because they were seeking projects that would have employment impacts and could be implemented quickly. These included road repairs and resurfacing, bridge repairs and maintenance, safety improvements, and road widening. For example, in Illinois, the Department of Transportation is planning to spend a large share of its estimated $655 million in Recovery Act funds for highway and bridge construction and maintenance projects in economically distressed areas, those that are shovel-ready, and those that can be completed by February 2012. In Iowa, the contracts awarded have been for projects such as bridge replacements and highway resurfacing—shovel-ready projects that could be initiated and completed quickly. Knowing that the Recovery Act would include opportunities for highway investment, states told us they worked in advance of the legislation to identify appropriate projects. For example, in New York, the state DOT began planning to manage anticipated federal stimulus money in November 2008. A key part of the New York DOT’s strategy was to build on existing planning and program systems to distribute and manage the funds. The states and D.C. must apply to the Department of Education for State Fiscal Stabilization Fund (SFSF) funds. Education will award funds once it determines that an application contains key assurances and information on how the state will use the funds.
As of April 20, applications from three states had met that determination: South Dakota and two of GAO’s sample states, California and Illinois. Other states’ applications are still being developed and submitted, and funds have not yet been awarded to them. The states and the District report that SFSF funds will be used to hire and retain teachers, reduce the potential for layoffs, cover budget shortfalls, and restore funding cuts to programs. The applications to Education must contain certain assurances. For example, states must assure that, in each of fiscal years 2009, 2010, and 2011, they will maintain state support at fiscal year 2006 levels for elementary and secondary education and also for public institutions of higher education (IHEs). However, the Secretary of Education may waive maintenance of effort requirements if the state demonstrates that it will commit an equal or greater percentage of state revenues to education than in the previous applicable year. The state application must also contain (1) assurances that the state is committed to advancing education reform in increasing teacher effectiveness, establishing state-wide education longitudinal data systems, and improving the quality of state academic standards and assessments; (2) baseline data that demonstrate the state’s current status in each of the education reform areas; and (3) a description of how the state intends to use its stabilization allocation. Within two weeks of receipt of an approvable SFSF application, Education will provide the state with 67 percent of its SFSF allocation. Under certain circumstances, Education will provide the state with up to 90 percent of its allocation. In the second phase, Education intends to conduct a full peer review of state applications before awarding the final allocations.
After maintaining state support for education at fiscal year 2006 levels, states are required to use the education portion of the SFSF to restore state support to the greater of fiscal year 2008 or 2009 levels for elementary and secondary education, public IHEs, and, if applicable, early childhood education programs. States must distribute these funds to school districts using the primary state education funding formula but retain discretion in how funds are allocated to public IHEs. If, after restoring state support for education, additional funds remain, the state must allocate those funds to school districts according to the funding formula found in Title I, Part A, of the Elementary and Secondary Education Act of 1965 (ESEA), commonly known as the No Child Left Behind Act. However, if a state’s education stabilization fund allocation is insufficient to restore state support for education, then a state must allocate funds in proportion to the relative shortfall in state support to public schools and IHEs. Education stabilization funds must be allocated to school districts and public IHEs and cannot be retained at the state level. Once stabilization funds are awarded to school districts and public IHEs, those entities have considerable flexibility over how they use the funds. School districts are allowed to use stabilization funds for any allowable purpose under ESEA, the Individuals with Disabilities Education Act (IDEA), the Adult Education and Family Literacy Act, or the Perkins Act, subject to some prohibitions on using funds for, among other things, sports facilities and vehicles. In particular, because allowable uses under the Impact Aid provisions of ESEA are broad, school districts have discretion to use Recovery Act funding for things ranging from salaries of teachers, administrators, and support staff to purchases of textbooks, computers, and other equipment.
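The restoration and allocation rules just described can be summarized in a short sketch; the function name and dollar figures below are illustrative assumptions, not taken from the act. Here "k12_gap" and "ihe_gap" stand for the amounts needed to restore state support to the greater of fiscal year 2008 or 2009 levels for school districts and public IHEs, respectively.

```python
# Illustrative sketch (hypothetical figures) of the SFSF education
# allocation rules described above.
def allocate_sfsf(education_funds, k12_gap, ihe_gap):
    total_gap = k12_gap + ihe_gap
    if education_funds >= total_gap:
        # Fully restore state support; any remainder flows to school
        # districts under the ESEA Title I, Part A formula.
        return {"k12": k12_gap, "ihe": ihe_gap,
                "title_i_remainder": education_funds - total_gap}
    # Insufficient funds: allocate in proportion to each sector's
    # relative shortfall in state support.
    share = education_funds / total_gap
    return {"k12": k12_gap * share, "ihe": ihe_gap * share,
            "title_i_remainder": 0.0}
```

For example, a state with a $60 million K-12 shortfall and a $20 million IHE shortfall that received only $40 million in education stabilization funds would allocate $30 million and $10 million, respectively, in proportion to the shortfalls.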
The Recovery Act allows public IHEs to use SFSF funds in such a way as to mitigate the need to raise tuition and fees, as well as for the modernization, renovation, and repair of facilities, subject to certain limitations. However, the Recovery Act prohibits public IHEs from using stabilization funds for such things as increasing endowments; modernizing, renovating, or repairing sports facilities; or maintaining equipment. According to Education officials, there are no maintenance of effort requirements placed on local school districts. Consequently, as long as local districts use stabilization funds for allowable purposes, they are free to reduce spending on education from local-source funds, such as property tax revenues. States have broad discretion over how the $8.8 billion in SFSF funds designated for basic government services is used. The Recovery Act provides that these funds can be used for public safety and other government services and that these services may include assistance for education, as well as for modernization, renovation, and repair of public schools or IHEs, subject to certain requirements. Education’s guidance provides that the funds can also be used to cover state administrative expenses related to the Recovery Act. However, the act also places several restrictions on the use of these funds. For example, these funds cannot be used to pay for casinos (a general prohibition that applies to all Recovery Act funds), financial assistance for students to attend private schools, or construction, modernization, renovation, or repair of stadiums or other sports facilities. States expected that SFSF uses by school districts and public IHEs would include retaining current staff and spending on programmatic initiatives, among other uses.
Some states’ fiscal condition could affect their ability to meet maintenance of effort (MOE) requirements in order to receive SFSF monies, but they are awaiting final guidance from Education on procedures to obtain relief from these requirements. For example, due to substantial revenue shortages, Florida has cut its state budget in recent years, and the state will not be able to meet the maintenance-of-effort requirement to readily qualify for these funds. The state will apply to Education for a waiver from this requirement; however, it is awaiting final instructions from Education on submission of the waiver. Florida plans to use SFSF funds to reduce the impact of any further cuts that may be needed in the state education budget. In Arizona, state officials generally expect that SFSF recipients, such as local school districts, will use their allocations to improve the tools they use to assess student performance and determine to what extent performance meets federal academic standards, to rehire teachers who were let go because of prior budget cuts, to retain teachers, and to meet the federal requirement that all schools have equal access to highly qualified teachers, among other things. Funds for the state universities will help them maintain services and staff as well as avoid tuition increases. Illinois officials stated that the state plans to use all of its $2 billion in State Fiscal Stabilization funds, including the 18.2 percent allowed for government services, for K-12 and higher education activities and hopes to avert the layoffs and other cutbacks many districts and public colleges and universities are facing in their fiscal year 2009 and 2010 budgets. State Board of Education officials also noted that U.S. Department of Education guidance allows school districts to use stabilization funds for education reforms, such as prolonging school days and school years, where possible.
However, officials said that Illinois districts will focus these funds on filling budget gaps rather than implementing projects that will require long-term resource commitments. While planning is underway, most of the selected states reported that they have not yet fully decided how to use the 18.2 percent of the SFSF that is discretionary. In addition to the funds for Medicaid, transportation, and SFSF, which flow primarily directly to the states, the Recovery Act provided funds for other program areas ranging from housing to training to alternative energy. Localities’ planning for the use of Recovery Act education funds varied according to both the status of federal guidance in place at the time of our review and individual states’ and localities’ own planning processes. New Jersey state education officials said they were initially limited in their ability to provide guidance to local institutions because they were awaiting guidance from the U.S. Department of Education. As a result, school district officials we interviewed in Newark and Trenton said they are waiting for state officials to tell them what their allocations are for each of the federal Recovery Act education programs. The timing of the federal and state guidelines for these funds is important because local school districts are planning their upcoming fiscal year budgets and would like to know how the Recovery Act funds would complement their upcoming school spending. According to the governor’s chief of staff, the state already funds local school districts with $8.8 billion in state funds, so ensuring accountability for the use of funds by so many school districts is not a new challenge to the state oversight agencies. On April 1, 2009, the U.S. Department of Education issued guidance to the states on how Recovery Act funds could be used for education.
State officials are continuing to review the guidance, and on April 16, 2009, the state issued guidance to local school districts outlining each district’s allocation of additional funds made available under the Recovery Act for programs authorized under Title I of the Elementary and Secondary Education Act (ESEA) and the Individuals with Disabilities Education Act. In Arizona, Tempe School District No. 3 plans to use the vast majority of its Recovery Act ESEA Title I funding for existing programs, but it has tentative plans to use portions of it each year to hire two temporary regional facilitators and to fund five existing preschool programs, among other uses. Officials from the selected states and the District said there were plans in place to apply for and use Recovery Act funds. For example, Michigan plans to apply for $67 million in Recovery Act funds for crime control and prevention activities under the Department of Justice’s Edward Byrne Memorial Justice Assistance Grants program. Michigan Department of Community Health officials told us that about $41 million of these funds will support, among other things, state efforts to reduce the crime lab backlog, funding for multi-jurisdictional courts, and localities’ efforts regarding law enforcement programs, community policing, and local correctional resources. An additional $26 million in Recovery Act funds will go directly to localities to support efforts against drug-related and violent crime. On April 13, 2009, Michigan began accepting grant applications for the Byrne program and will continue to accept them until May 11, 2009. In another example, officials in the District told us that as of April 3, 2009, the District Department of Employment Services had received about $1.5 million for adult Workforce Investment Act (WIA) programs, about $3.8 million for dislocated worker programs, and almost $4 million for youth programs. They said that D.C. plans to use these Recovery Act funds in accordance with the U.S.
Department of Labor’s guidance, which states that the Recovery Act intends WIA Adult funds to be used to provide the necessary services to substantially increased numbers of adults to support their entry or reentry into the job market, and WIA Dislocated Worker funds to be used to provide the necessary services to dislocated workers to support their reentry into the job market. Officials in all of the selected states indicated they were able to reduce or eliminate expected budget shortfalls through the inclusion of Recovery Act funds in their budget projections. In Texas, some representatives told us that absent the availability of Recovery Act funds, state agencies likely would have been asked to make cuts of about 10 percent for the state’s fiscal year 2010-2011 biennial budget, in addition to the state drawing upon its rainy day fund. However, other officials representing the Texas Office of the Governor said that budget deficit situations do not necessarily result in the state using its rainy day fund. The officials stressed that, to meet the requirement to pass a balanced budget, a variety of other solutions could be considered, such as budget reallocations among state agencies and programs, as well as spending cuts. Colorado officials said Recovery Act funds will help prevent cuts to state programs such as transportation. Illinois officials said the state hopes to avert layoffs and create new jobs with Recovery Act funds. Officials in Massachusetts also said that federal Recovery Act funds are critical to addressing the Commonwealth’s immediate fiscal pressures. State officials expect to use a significant portion of the state-projected $8.7 billion in Recovery Act funds (over 2 years) for budget stabilization. As of April 2009, the Commonwealth is addressing a budget shortfall of approximately $3.0 billion, driven largely by lower-than-anticipated revenues.
The combination of funds made available as a result of the increased FMAP and state rainy day funds—a reserve built up during more favorable economic conditions to be used during difficult economic times—will help the state avoid cuts in several areas, including health care, education, and public safety. Faced with declining revenue projections since fiscal year 2008, Pennsylvania officials believe that funds made available as a result of the Recovery Act are critical to help alleviate the immediate fiscal pressure and help balance the state budget. Based on February 2009 projections, Pennsylvania faces a $2.3 billion shortfall in fiscal year 2009, largely because of lower-than-expected revenues. Despite the infusion of Recovery Act funds into state budgets, some state officials reported that the current fiscal situation still requires action to maintain balanced budgets. These actions include budget reductions, fee increases, and scaling back state rebates of local property taxes. In Georgia, officials amended the state budget by reducing revenue estimates, using reserves, and cutting program funding. These actions were necessary despite the inclusion of additional Medicaid funds made available as a result of the Recovery Act. The largest budget cuts in New Jersey come from scaling back state rebates of local property taxes by $500 million and reducing state payments to the pension funds by $895 million. Officials in the selected states acknowledged the Recovery Act’s contributions to easing immediate fiscal pressures but remain wary of the fiscal pressures likely to remain after federal assistance ends. Officials in several states reported that their planning efforts focused on maintaining existing services rather than creating new programs or staff positions that could extend their state’s financial liabilities beyond the end date for Recovery Act funds.
Officials generally expected to use Recovery Act funds to fill gaps in existing programs rather than to fund new initiatives. In the midst of program budget cuts, state officials acknowledged the challenge of ensuring that, where required to do so, they use Recovery Act funds to supplement and not supplant current state program funds. For example, in Arizona, programs receiving Recovery Act funds may have their share of the state general fund reduced to help balance the fiscal year 2010 budget, so demonstrating that the state has complied with the prohibition on supplanting state funds could be a challenge. The Arizona Treasurer’s Office estimated that even with Recovery Act funding, Arizona’s expenditures were expected to exceed revenues through about 2014, and the state’s “rainy day” fund has been depleted. In California, even when the state Legislative Analyst’s Office factors in the state’s anticipated Recovery Act funding and a package of state budget solutions that will be voted on in a May 19, 2009, special election, it estimates an $8 billion deficit in fiscal year 2009-10. Further, since the release of the governor’s budget in January 2009, the state’s economic condition has continued to deteriorate, and the state legislature and governor may need to develop additional budgetary solutions to rebalance the 2009-10 budget following an update of the budget in May. All of the 16 selected states and the District reported taking action to plan for and monitor the use of Recovery Act funding. Some states reported that Recovery Act planning activities for funds received by the state are directed primarily by the governor’s office. In New York, for example, the governor provides program direction to the state’s departments and offices, and he established a Recovery Act Cabinet comprised of representatives from all state agencies and many state authorities to coordinate and manage Recovery Act funding throughout the state.
In North Carolina, Recovery Act planning efforts are led by the newly created Office of Economic Recovery and Investment, which was established by the governor to oversee the state’s economic recovery initiatives. Other states reported that their Recovery Act planning efforts were less centralized. In Mississippi, the governor has little influence over the state Departments of Education and Transportation, as they are led by independent entities. In Texas, oversight of federal Recovery Act funds involves various stakeholders, including the Office of the Governor, the Office of the Comptroller of Public Accounts, and the State Auditor’s Office, as well as two entities established within the Texas legislature specifically for this purpose—the House Select Committee on Federal Economic Stabilization Funding and the House Appropriations Subcommittee on Stimulus. Several states reported that they have appointed “Recovery Czars” or identified a similar key official and established special offices, task forces, or other entities to oversee the planning and monitor the use of Recovery Act funds within their states. In Michigan, the governor appointed a recovery czar to lead a new Michigan Economic Recovery Office, which is responsible for coordinating Recovery Act programs across all state departments and with external stakeholders such as GAO, the federal OMB, and others. Some states began planning efforts before Congress passed the Recovery Act. For example, the state of Georgia recognized the importance of accounting for and monitoring Recovery Act funds and directed state agencies to take a number of steps to safeguard Recovery Act funds and mitigate identified risks. Georgia established a small core team in December 2008 to begin planning for the state’s implementation of the Recovery Act. Within 1 day of enactment, the governor appointed a Recovery Act Accountability Officer, and she formed a Recovery Act implementation team shortly thereafter.
The implementation team includes a senior management team, officials from 31 state agencies, an accountability and transparency support group comprised of officials from the state’s budget, accounting, and procurement offices, and five cross-agency implementation teams. At one of the first implementation team meetings, the Recovery Act Accountability Officer disseminated an implementation manual to agencies, which included multiple types of guidance on how to use and account for Recovery Act funds; new and updated guidance is disseminated at the weekly implementation team meetings. In contrast, officials in some states are using existing mechanisms rather than creating new offices or positions to lead Recovery Act efforts. For example, a District official stated that the District would not appoint a Recovery Czar and instead would use its existing administrative structures to distribute and monitor Recovery Act funds to ensure quick disbursement of funds. In Mississippi, officials from the Governor’s Office said that the state did not establish a new office to provide statewide oversight of Recovery Act funding, in part because they did not believe that the act provided states with funds for administrative expenses, including additional staff. The Governor did, however, designate a member of his staff to act as a stimulus coordinator for Recovery Act activities. All 16 states we visited and the District have established Recovery Act web sites to provide information on state plans for using Recovery Act funding and uses of funds to date and, in some instances, to allow citizens to submit project proposals. For example, Ohio has created www.recovery.Ohio.gov, which represents the state’s efforts to create an open, transparent, and equitable process for using Recovery Act funds. The state has encouraged citizens to submit proposals for use of Recovery Act funds, and as of April 8, 2009, individuals and organizations from across Ohio had submitted more than 23,000 proposals.
Iowa officials indicated they want to use the state’s recovery web site (www.recovery.Iowa.gov) to host a “dashboard” function to report updated information on Recovery Act spending that is easily searchable by the public. Similarly, Colorado plans to create a web-based map of projects receiving Recovery Act funds to help inform the public about the results of Recovery Act spending in the state. In many states, officials reported that their planning efforts were affected by the need for the state legislature to approve state agencies’ use of Recovery Act funds. For example, in Florida, the state legislature must authorize the use of all Recovery Act funds received by the state, including those passed on to local governments. In Colorado, some Recovery Act funds, including those going to the Child Care and Development Block Grant (CCDBG) and the Temporary Assistance for Needy Families (TANF) Emergency Contingency Fund, must be allocated by the Colorado General Assembly, which is in session only through early May. Mississippi officials also plan to use Recovery Act funds to address the state’s fiscal challenges. Mississippi legislative officials we met with told us that the state legislature was considering adding escalation language to the current fiscal year’s appropriations bills that would authorize state agencies to spend any Recovery Act funds received. The legislature normally conducts its regular session between the beginning of January and the end of March. However, the legislature recessed early during the 2009 regular session in part because of uncertainty regarding how the Recovery Act funds that the state will receive should be spent. The legislature plans to reconvene in early May 2009 to complete its work on the state’s fiscal year 2010 budget. The selected states’ and localities’ tracking and accounting systems are critical to the proper execution and accurate and timely recording of transactions associated with the Recovery Act.
OMB has issued guidance to the states and localities that provides for separate “tagging” of Recovery Act funds so that specific reports can be created and transactions can be traced. Officials from all 16 of the selected states and the District told us they have established or were establishing methods and processes to separately identify (i.e., tag), monitor, track, and report on the use of the Recovery Act funds they receive. The states and localities generally plan on using their current accounting systems for recording Recovery Act funds, but many are adding identifiers to account codes to track Recovery Act funds separately. Many said this involved adding digits to the end of existing accounting codes for federal programs. In California, for instance, officials told us that while their plans for tracking, control, and oversight are still evolving, they intend to rely on existing accountability mechanisms and accounting systems, enhanced with newly created codes, to separately track and monitor Recovery Act funds that are received by and pass through the state. Several officials told us that the state’s accounting system should be able to track Recovery Act funds separately. In one state, Arizona, officials told us that state agencies will primarily be responsible for administering, tracking, reporting on, and overseeing Recovery Act funds for their respective programs because the state government is highly decentralized. The state’s existing accounting system will have new accounting codes added in order to segregate and track the Recovery Act funds separately from other funds that will flow through the state government. Under Arizona’s decentralized government, some larger agencies, and program offices within them, have their own accounting systems that will need to code and track Recovery Act funds as well.
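The tagging approach states described, appending identifier digits or a suffix to existing account codes so that Recovery Act transactions can be filtered and reported separately, might look like the following sketch. The code format, the "-ARRA" suffix, and the ledger entries are illustrative assumptions, not any state's actual chart of accounts.

```python
# Hypothetical sketch of tagging existing federal-program account codes
# so Recovery Act transactions can be tracked separately, as several
# states described. The "-ARRA" suffix and the account codes below are
# illustrative only.
ARRA_SUFFIX = "-ARRA"

def tag_code(account_code):
    """Append the Recovery Act identifier to an existing account code."""
    return account_code + ARRA_SUFFIX

def recovery_act_total(transactions):
    """Sum only the transactions posted to tagged Recovery Act codes."""
    return sum(amount for code, amount in transactions
               if code.endswith(ARRA_SUFFIX))

ledger = [("4510", 1000.0),           # regular federal highway program
          (tag_code("4510"), 250.0),  # same program, Recovery Act funds
          (tag_code("6200"), 75.0)]   # Recovery Act education funds
```

Because the tagged codes extend the existing ones, regular program reporting continues to work unchanged while Recovery Act activity can be pulled out on its own, which is the property the OMB guidance is after.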
The Arizona General Accounting Office has issued guidance to state agencies on their responsibilities, including how they are to receive, disburse, tag or code in their accounting systems, track separately, and, to some extent, report on these federal resources. A concern expressed by state officials is that agencies within a state often use different accounting software, making it difficult to ensure consistent and timely reporting. For example, Georgia officials stated that the majority of state agencies use the same software; however, some agencies do not use this software and others have greatly customized it. Similarly, officials from the Illinois Office of the Internal Auditor said that the state is assessing an issue that could affect reporting: there are currently more than 100 separate financial systems used throughout the Illinois state government. Furthermore, Colorado state officials are concerned that their accounting system is outdated and said they faced challenges in meeting federal reporting requirements. Some state departments do not use the state financial system’s grant module and therefore manually post aggregate revenue and expenditure data. As a result, they may have to compile a list of Recovery Act funding received outside of their central financial management system. Colorado officials are determining what approach they will use in tracking funds and told us they plan to create an accounting fund and a centrally defined budget coding structure through which to track state agencies’ use of Recovery Act funds. State officials reported a range of concerns regarding the federal requirements to identify and track Recovery Act funds going to sub-recipients, localities, and other non-state entities.
These concerns include their inability to track these funds with existing systems, uncertainty regarding state officials’ accountability for the use of funds that do not pass through state government entities, and their desire for additional federal guidance establishing specific expectations for sub-recipient reporting requirements. Officials from many of the 16 selected states and the District told us that they had concerns about the ability of sub-recipients, localities, and other non-state entities to separately tag, monitor, track, and report on the Recovery Act funds they receive. For example, in New Jersey, officials noted that certain towns and cities, as well as regional planning organizations, can apply for and directly receive federal funds under the terms of the Recovery Act. According to the state Inspector General, the risk of waste, fraud, and abuse increases the farther removed an organization is from state government controls. While some state officials said that they have statewide investigative authority, they would not be able to readily track the funding going directly to local and regional government organizations and nonprofits as a result of the funding delivery and reporting requirements set up in the Recovery Act. In addition, staff from the State Auditor’s office noted that some smaller cities and towns in New Jersey are not used to implementing state or federal guidance on how they use program funds, which could result in those localities using funds for ineligible purposes. Officials in many states expressed concern about being held accountable for funds flowing directly from federal agencies to localities or other recipients.
For example, officials in Colorado expressed concern that they will be held accountable for all Recovery Act funds flowing to the state, including funds over which they have no oversight or about which they have no information, because some funds flow directly to non-state entities within Colorado (such as school districts and transportation districts). Officials in some states said they would like to at least be informed about funds provided to non-state entities in order to facilitate planning for the use of these funds and so they can coordinate Recovery Act activities. For example, Georgia officials do not expect to track and report on funds going directly to localities but would like to be informed about these funds so that the state can coordinate with localities. They cited Recovery Act-funded broadband initiatives and health funding to nonprofit hospitals as areas where a lack of coordination could result in a duplication of services or missed opportunities to leverage resources. Officials at the Colorado Department of Public Safety told us that, because Colorado and other states expressed interest in receiving data on localities’ grant funding, the federal Bureau of Justice Assistance in the U.S. Department of Justice began providing data to the states on localities’ funding. In another example, Ohio officials told us that the Ohio Administrative Knowledge System (OAKS) will allow the state to tag Recovery Act funding. However, they said in many cases state agencies will rely on grantees and contractors to track the funds to their end use. Because the state intends to code each Recovery Act funding stream separately and recipients typically manage more than one funding stream at a time, state officials said recipients should be able to track Recovery Act funds separately from other funding sources. However, state and local officials we interviewed raised concerns about the capacity of grantees and contractors to track funds spent by sub-recipients.
For example, officials with the Ohio Department of Education said they can track Recovery Act funds to school districts and charter schools, but they have to rely on the recipients’ financial systems to track funds beyond that. An official with the Columbus City Schools said that while they could provide assurances that Recovery Act funds were spent in accordance with program rules, they could not report back systematically how each federal Recovery Act dollar was spent. Officials with the Columbus Metropolitan Housing Authority also noted limitations in how far they could reasonably be expected to track Recovery Act funds. They said they could track Recovery Act dollars to specific projects but could not systematically track funds spent by subcontractors on materials and labor. These officials added, however, that if they required the contractors to collect this information from their subcontractors, they would be able to report back in great detail. Still, they said, without additional guidance from the federal government on specific reporting requirements, they were hesitant to specify requirements for their contractors to collect the data. Pennsylvania officials said that the state will rely on sub-recipients to meet reporting requirements at the local level. Recipients and sub-recipients can be local governments or other entities such as transit agencies. For example, about $367 million in Recovery Act money for transit capital assistance and fixed guideway (such as commuter rails and trolleys) modernization was allocated directly to areas such as Philadelphia, Pittsburgh, and Allentown. State officials also told us that the state would not track or report Recovery Act funds that go straight from the federal government to localities and other entities, such as public housing authorities. 
Officials in several states indicated either that their states would not be tracking Recovery Act funds going to the local level or that they were unsure how much data would be available on the use of these funds. For example, Massachusetts officials told us that the portion of recovery funds going directly to recipients other than Massachusetts state government agencies, such as independent state authorities, local governments, or other entities, will not be tracked through the Office of the Comptroller. While state officials acknowledged that the Commonwealth lacks authority to ensure adequate tracking of these funds, they also are concerned about the ability of smaller entities to manage Recovery Act funds, particularly smaller municipalities that traditionally do not receive federal funds and are not familiar with Massachusetts tracking and procurement procedures, as well as recipients receiving significant increases in federal funds. In order to address this concern, the state administration introduced emergency legislation that, according to state officials, includes a provision requiring all entities within Massachusetts that receive Recovery Act money to provide information to the state on their use of Recovery Act funds. Nevertheless, two large non-state government entities we spoke with—the city of Boston and the Massachusetts Bay Transportation Authority (an independent authority responsible for metropolitan Boston’s transit system)—believe that their current systems, with some modifications, will allow them to meet Recovery Act requirements. For example, the city of Boston hosted the Democratic National Convention in 2004, and officials said that their system was then capable of segregating and tracking a sudden influx of temporary funds. Responses like Massachusetts’, in which states indicated they would not track funds flowing directly to localities and other non-state entities, were common among the selected states. 
For example, officials in Florida told us that the state’s accounting system will not track the portion of Recovery Act funds that flow directly to local entities from federal agencies. Officials in Michigan’s Auditor General’s Office told us that their oversight responsibilities do not include most sub-recipients that receive direct federal funding, so any upfront safeguards to track or ensure accountability have not been determined. Mississippi officials also said that although special accounting codes will be added to the Statewide Automated Accounting System in order to track the expenditure of Recovery Act funds, the system would not track Recovery Act funds allocated directly to local and regional government organizations and nonprofit organizations. In Arizona, the portion of recovery funds going directly to recipients other than Arizona government agencies, such as independent state authorities, local governments, or other entities, may not be tracked by the state. State officials expressed concern that they may not be able to attest to localities’ ability to tag, track, and report on Recovery Act funds when these entities receive the moneys directly from federal agencies rather than through state agencies. Department heads and program officials generally expected that they could require sub-recipients receiving funds from the state, through agreements, grant applications, and revised contract provisions, to separately track and report Recovery Act funding. For example, unemployment program managers said they were issuing new intergovernmental agreements with localities to cover new reporting requirements. However, several of the state officials did raise questions about the ability of some local organizations to do this, such as small, rural entities, boards or commissions, or private entities not used to doing business with the federal government. 
Furthermore, several state department officials acknowledged either that some state agency information systems have data reliability problems that will have to be resolved or that some sub-recipients had problems in the past providing timely and accurate reporting; these officials said that they would work with the entities to comply and also had sanctions to use as a last resort. Officials in Arizona, Florida, Georgia, and New York also expressed concern that the new requirement to provide reports on the use of Recovery Act funds within 10 days after a quarter ends may be challenging for both state and local entities to meet. In some program areas, state officials raised concerns that the Recovery Act requirement will create much shorter deadlines for processing financial data that local areas will have difficulty meeting. The selected states and the District are taking various approaches to ensure that internal controls are in place to manage risk up front, rather than after problems develop and deficiencies are identified, and have different capacities to manage and oversee the use of Recovery Act funds. Many of these differences result from the underlying differences in approaches to governance, organizational structures, and related systems and processes that are unique to each jurisdiction. A robust system of internal control specifically designed to deal with the unique and complex aspects of the Recovery Act funds will be key to helping the management of states and localities achieve the desired results. Effective internal control can be achieved through numerous different approaches, and, in fact, we found significant variation in planned approaches by state. 
For example, New York’s Recovery Act cabinet plans to establish a working group on internal controls; the Governor’s office plans to hire a consultant to review the state’s management infrastructure and capabilities to achieve accountability, effective internal controls, compliance, and reliable reporting under the Act; and the state plans to coordinate fraud prevention training sessions. Michigan’s Recovery Office is developing strategies for effective oversight and tracking of the use of Recovery Act funds to ensure compliance with accountability and transparency requirements. Ohio’s Office of Internal Audit plans to assess the adequacy and effectiveness of the current internal control framework and test whether state agencies adhere to the framework. Florida’s Chief Inspector General established an enterprise-wide working group of agency program Inspectors General who are updating their annual work plans to include the Recovery Act funds in their risk assessments and will leave flexibility in their plans to address issues related to those funds. Massachusetts’s Joint Committee on Federal Recovery Act Oversight will hold hearings regarding the oversight of Recovery Act spending. Georgia’s State Auditor plans to provide internal control training to state agency personnel in late April. The training will discuss basic internal controls, designing and implementing internal controls for Recovery Act programs, best practices in contract monitoring, and reporting on Recovery Act funds. Internal controls include management and program policies, procedures, and guidance that help ensure effective and efficient use of resources; compliance with laws and regulations; prevention and detection of fraud, waste, and abuse; and the reliability of financial reporting. Because Recovery Act funds are to be distributed as quickly as possible, controls are evolving as various aspects of the program become operational. 
Effective internal control is a major part of managing any organization to achieve desired outcomes and manage risk. GAO’s Standards for Internal Control include five key elements: control environment, risk assessment, control activities, information and communication, and monitoring. The control environment should create a culture of accountability by establishing a positive and supportive attitude toward improvement and the achievement of established program outcomes. The control environment includes the integrity and ethical values maintained and demonstrated by management, the organizational structure, and management’s philosophy and operating style. As detailed earlier in this report, although the implementation has varied, many locations we reviewed have attempted to enhance their control environment through the appointment of a Recovery czar or the establishment of boards or working groups that focus on the Recovery Act. Also, as noted earlier, state officials expressed concerns about the reliability and accuracy of data coming from localities. The second element of strong internal controls is risk assessment—that is, performing comprehensive reviews and analyses of program operations to determine whether risks exist and to identify their nature and extent. Some states told us that they are conducting such risk assessments, and the existing body of work by state auditors and others provides a good roadmap for states to use to pinpoint key areas of concern and to strengthen internal controls and subsequent oversight. For example, the Illinois Office of Internal Audit is performing a risk assessment of all programs related to the Recovery Act, and North Carolina’s Office of Internal Audit is assessing the risk of the state department’s financial management system and internal controls. Michigan’s major state departments are conducting self-assessments of controls, including identification of internal control and programmatic weaknesses. 
In Georgia, the budget office is requiring state agencies to complete a tool that assesses risk as part of the budget process for the Recovery Act funds. The selected states have thus far identified various risks facing Recovery Act funds and programs. For example, Georgia officials identified three state departments with increased risk: the Georgia Department of Labor, which is on a different accounting system than other state departments; the Georgia Department of Transportation, which had previously identified accounting problems and is currently being reorganized; and the Georgia Department of Human Resources, which is currently being divided into three parts. Additionally, Massachusetts’ fiscal year 2007 Single Audit report identified deficiencies, especially in the Department of Education’s sub-recipient monitoring. Officials in several of the selected states told us that risk assessments are being conducted of programs receiving Recovery Act funds. Officials in Texas’ State Auditor’s Office noted that relatively high risks generally can be anticipated with certain types of programs, such as new programs with completely new processes and internal controls, programs that distribute significant amounts of funds to local governments or boards, and programs that rely on sub-recipients for internal controls and monitoring. Officials from New York, North Carolina, and Pennsylvania commented that the weatherization program was an example of a program at increased risk. The results of recent audits are a readily available source of information to use in the risk assessment process. Material weaknesses and other conditions identified in an audit represent potential risks that can be analyzed for their significance and likelihood of occurrence, allowing management and others to decide how to manage each risk and what actions should be taken. 
A readily available source of information on internal control weaknesses and other risks present in the states and other jurisdictions receiving Recovery Act funding is the Single Audit report, prepared to meet the requirements of the Single Audit Act, as amended (Single Audit Act), and OMB’s implementing guidance in OMB Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations. The Single Audit Act adopted a single audit concept to help meet the needs of federal agencies for grantee oversight and accountability as well as grantees’ needs for single, uniformly structured audits. The Single Audit Act requires states, local governments, and nonprofit organizations expending over $500,000 in federal awards in a year to obtain an audit in accordance with requirements set forth in the Act. A single audit consists of (1) an audit and opinions on the fair presentation of the financial statements and the Schedule of Expenditures of Federal Awards (SEFA); (2) an understanding and testing of internal control over financial reporting and the entity’s compliance with laws, regulations, and contract or grant provisions that have a direct and material effect on certain federal programs (i.e., the program requirements); and (3) an audit and an opinion on compliance with applicable program requirements for certain federal programs. The audit report also includes the auditor’s schedule of findings and questioned costs, and the auditee’s corrective action plans and a summary of prior audit findings that includes planned and completed corrective actions. Auditors are also required to report on significant deficiencies in internal control and on compliance associated with the audit of the financial statements. For example, in California, the most recent single audit conducted by the State Auditor, for fiscal year 2007, identified 81 material weaknesses, 27 of which were associated with programs we reviewed for purposes of this report. 
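The Act’s core applicability rule described above is simple enough to express as a short sketch. This is an illustrative example only, not an official determination tool; the constant and function names are invented for the illustration.

```python
# Illustrative sketch (not an official tool) of the Single Audit Act's
# applicability rule as described above: entities expending over
# $500,000 in federal awards in a year must obtain a single audit.
SINGLE_AUDIT_THRESHOLD = 500_000  # dollars, per the Act as amended

def requires_single_audit(total_federal_expenditures):
    """Return True if an entity's annual federal award expenditures
    exceed the Single Audit Act threshold."""
    return total_federal_expenditures > SINGLE_AUDIT_THRESHOLD

# Example: a locality expending $750,000 in federal awards in a year
print(requires_single_audit(750_000))  # True
print(requires_single_audit(400_000))  # False
```

The rule turns only on total annual federal award expenditures; which programs within an audited entity receive detailed compliance testing is a separate, risk-based determination under Circular No. A-133.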
The State Auditor plans to use past audit results to target state agencies and programs with a high number and history of problems, including data reliability concerns, and is closely coordinating with us on these efforts. For example, the fiscal year 2007 State Single Audit Report identified 8 material weaknesses pertaining to the ESEA Title I program and the Individuals with Disabilities Education Act programs. The audit findings included a material weakness in the California Department of Education’s management of cash because it disbursed funds without assurances from LEAs that the time between the receipt and disbursement of federal funds was minimized, contrary to federal guidelines. Education officials told us that they have addressed some of these material weaknesses and, in other cases, they are still working to correct them. If these and other material weaknesses are not corrected, they may affect the state’s ability to appropriately manage certain Recovery Act funds. The State Auditor’s Office told us that it is in the process of finalizing the fiscal year 2007 State Single Audit Report and plans to issue the report within the next 30 days. In addition, the State Auditor’s Office is summarizing the results of the single audit to identify those programs that continue to have material weaknesses. Finally, the State Auditor’s Office plans to use the results of other audits it has conducted in conjunction with the single audit to develop its approach for determining the state’s readiness to receive the large influx of federal funds and comply with the requirement regarding the use of those funds under the Recovery Act. Arizona’s fiscal year 2007 Single Audit report identified a number of material weaknesses related to the state Department of Education. 
The report identified a material weakness involving IDEA where the state department had not reviewed sub-recipients to ensure that federal awards were used for authorized purposes in compliance with laws, regulations, and the provisions of contracts or grant agreements. The audit report also identified one financial reporting material weakness related to the state Department of Administration’s ability to prepare timely financial statements, including its Comprehensive Annual Financial Report (CAFR). The fiscal year 2007 CAFR was issued in June 2008, approximately 6 months after the scheduled deadline. According to the Auditor General’s Office, the fiscal year 2008 CAFR will also be completed late, as the last agency did not submit its financial statement until March 9, 2009. According to the Auditor General’s Office, this control deficiency affects the timeliness of financial reporting, which in turn affects the needs of users. It is especially important that Arizona address this timeliness issue given the numerous strict reporting timelines imposed on states under the Recovery Act. The third element of a comprehensive system of internal controls is control activities, which involve taking actions to address identified risk areas and help ensure that management’s decisions, directives, and plans are carried out and program objectives are met. Various control activities already exist and others are being put in place in the states related to the Recovery Act. Control activities for states and localities consist of the policies, procedures, and guidance that enforce management’s directives and achieve effective internal control over specific program activities. 
Examples of such policies and procedures particularly relevant to Recovery Act spending are (1) proper execution and accurate and timely recording of transactions and events, (2) controls to help ensure compliance with program requirements, (3) establishment and review of performance measures and indicators, and (4) appropriate documentation of transactions and internal control. Documented policies, procedures, and guidance that are effectively implemented will be critical tools for state and local management and staff, as well as program recipients, in achieving good management of Recovery Act programs. Effective control activities and monitoring are also key to achieving accurate, reliable reporting of information and results. Pennsylvania’s Auditor General also found potential weaknesses and vulnerabilities in programs expected to receive Recovery Act funds. For example, a recent Auditor General report found, among other things, weak internal controls, weaknesses in contracting, and inconsistent verification and inspection of subcontractor work in the state’s Weatherization Assistance Program. States and localities that receive and administer Recovery Act funds will be expected to minimize fraud, waste, and abuse in contracting. According to Florida state officials, the state completed an initiative to strengthen contracting requirements several years ago. For example, the majority of state contracts greater than $1 million are required to be reviewed for certain criteria by the Department of Financial Services’ Division of Accounting and Auditing before the first payment is processed. The contract must also be negotiated by a contract manager certified by the Florida Department of Management Services’ Division of State Purchasing Training and Certification Program. 
In another example of efforts to enhance contracting processes and oversight, officials in New Jersey told us that controls and reports will be put into place by the state’s centralized purchasing department, the Division of Purchase and Property (DPP). The current accounting system will be able to account for and control the use of Recovery Act funds used for procurement because DPP will create special accounting codes for these funds. New Jersey officials stated that their accounting systems had the capability to track funds using special accounting codes and that they were confident no special enhancements were needed to their accounting software, although they would monitor the accounting system to ensure it was functioning properly. DPP will also publicly advertise bids for projects funded with Recovery Act funds, include terms and conditions in each request for proposals and contract for these projects specifying the detailed reports required by the Act, and post contract award notices for Recovery Act-funded projects. Information should be communicated to management and within the entity to enable accountable officials and others throughout the entity to carry out their responsibilities and determine whether they are meeting their goals of accountability and efficient use of resources. The states have undertaken a variety of information and communication methods. For the Recovery Act, internal state communication is being conducted through newly created task forces or working groups such as those in California and the District, implementation teams such as those in Florida and Georgia, and state offices such as that in North Carolina. Texas also uses a periodic forum of the internal audit staff of Texas state agencies as another statewide communication method. Various officials are developing guidance related to the Recovery Act and disseminating the information to state agencies. 
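The special-accounting-code approach that New Jersey and Ohio officials describe can be sketched in simplified form. This is a hypothetical illustration, not either state’s actual system; the ledger entries, fund codes, and function below are invented for the example.

```python
# Hypothetical sketch of fund tagging via special accounting codes:
# each transaction carries a fund code, so Recovery Act dollars can be
# totaled separately from other funding sources. Codes are invented.
from collections import defaultdict

ledger = [
    {"fund_code": "ARRA-HWY", "amount": 120_000, "payee": "Contractor A"},
    {"fund_code": "STATE-GEN", "amount": 50_000, "payee": "Contractor B"},
    {"fund_code": "ARRA-EDU", "amount": 80_000, "payee": "School District"},
]

def totals_by_source(entries, recovery_prefix="ARRA"):
    """Sum expenditures, separating Recovery Act funds (codes starting
    with the given prefix) from all other funding sources."""
    totals = defaultdict(int)
    for entry in entries:
        key = ("recovery_act"
               if entry["fund_code"].startswith(recovery_prefix)
               else "other")
        totals[key] += entry["amount"]
    return dict(totals)

print(totals_by_source(ledger))  # {'recovery_act': 200000, 'other': 50000}
```

The design point the states describe is the same as in this sketch: as long as every transaction is coded at entry, separate Recovery Act reporting falls out of a simple aggregation rather than requiring new accounting software.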
Monitoring activities include the systematic process of reviewing the effectiveness of the operation of the internal control system. These activities are conducted by management, oversight boards and entities, and internal and external auditors. Monitoring enables stakeholders to determine whether the internal control system continues to operate effectively over time. It also improves the organization’s overall effectiveness and efficiency by providing timely evidence of changes that have occurred, or might need to occur, in the way the internal control system addresses evolving or changing risks. Many of the boards or offices discussed under the control environment above have responsibilities related to monitoring the Recovery Act funds. States have undertaken various other activities to monitor Recovery Act funds, including Arizona’s budget director meeting with the heads of programs potentially receiving Recovery Act funds to gauge each program’s preparedness; Arizona’s Comptroller conducting a survey to inventory current internal controls at state agencies to help ensure controls are in place to limit the risk of fraud, waste, abuse, and mismanagement of Recovery Act funds; California’s Governor appointing the state’s first Inspector General specifically to oversee Recovery Act funds as they are disbursed in the state; Massachusetts’ legislature creating the Joint Committee on Federal Recovery Act Oversight with the goals of ensuring compliance with federal regulations and reviewing current state laws, regulations, and policies to ensure they allow access to Recovery Act funds and streamline the processes to quickly stimulate the economy; and the Texas State Auditor’s Office planning to hire 10 additional staff. An important aspect of monitoring Recovery Act funding is sub-recipient monitoring. As noted, significant concerns exist regarding sub-recipient monitoring, as this is an area with limited experience and known vulnerabilities. 
Some state auditors do not have authority to monitor local operations of internal controls. For example, in Pennsylvania, officials from the Auditor General’s office have different views about what authority they have to audit federal money that flows directly to localities, such as housing authorities and municipalities. In Texas, the State Auditor’s Office made a recommendation regarding the monitoring of sub-recipients in its most recent audit of the Texas Education Agency. The audit report did not find that sub-recipients were improperly spending federal funds or failing to meet federal requirements; however, the report did note that the agency had “a limited number of resources available to monitor fiscal compliance.” The audit report recommended that the Texas Education Agency continue to add resources, within its budget constraints, to increase the amount of federal fiscal compliance work performed. According to the State Auditor, following the audit in February 2009, the Texas Education Agency created a comprehensive correction plan to address this resource issue, which the agency is implementing. OMB’s Circular No. A-133 sets out implementing guidelines for the single audit and defines roles and responsibilities related to the implementation of the Single Audit Act, including detailed instructions to auditors on how to determine which federal programs are to be audited for compliance with program requirements in a particular year at a given grantee. The Circular No. A-133 Compliance Supplement is issued annually to guide auditors on what program requirements should be tested for programs audited as part of the single audit. OMB has stated that it will use its Circular No. A-133 Compliance Supplement to notify auditors of program requirements that should be tested for Recovery Act programs, and will issue interim updates as necessary. Both the Single Audit Act and OMB Circular No. 
A-133 call for a “risk-based” approach to determine which programs will be audited for compliance with program requirements as part of a single audit. In general, the prescribed approach relies heavily on the amount of federal expenditures during a fiscal year and whether findings were reported in the previous period to determine whether detailed compliance testing is required for a given program that year. Under the current approach for risk determination in accordance with Circular No. A-133, certain risks unique to the Recovery Act programs may not receive full consideration. Recovery Act funding carries with it some unique challenges. The most significant of these challenges are associated with (1) new government programs, (2) the sudden increase in funds or programs that are new for the recipient entity, and (3) the expectation that some programs and projects will be delivered faster so as to inject funds into the economy. This makes timely and efficient evaluations in response to the Recovery Act’s accountability requirements critical. Specifically, new programs and recipients participating in a program for the first time may not have the management controls and accounting systems in place to help ensure that funds are distributed and used in accordance with program regulations and objectives; Recovery Act funding that applies to programs already in operation may cause total funding to exceed the capacity of management controls and accounting systems that have been effective in past years; the more extensive accountability and transparency requirements for Recovery Act funds will require the implementation of new controls and procedures; and risk may be increased by the pressure to spend funds quickly. In response to the risks associated with Recovery Act funding, the single audit process needs adjustment to put appropriate focus on Recovery Act programs and provide the necessary level of accountability over these funds in a timely manner. 
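In simplified terms, the risk determination described above, together with the adjustment discussed here, could be sketched as follows. The thresholds and function are invented for illustration and do not reflect Circular No. A-133’s actual major program formulas.

```python
# Hypothetical, simplified sketch of risk-based compliance-testing
# selection: under the current approach, expenditure size and
# prior-period findings drive detailed testing; the adjustment
# discussed in the text would also treat Recovery Act funding itself
# as a risk factor. The dollar threshold is invented for illustration.
def needs_compliance_testing(expenditures, prior_findings,
                             is_recovery_act,
                             dollar_threshold=3_000_000):
    if expenditures >= dollar_threshold:  # large programs carry more risk
        return True
    if prior_findings:                    # past findings elevate risk
        return True
    if is_recovery_act:                   # proposed: ARRA funds raise risk
        return True
    return False

# A modest program with no prior findings would be selected only
# because it carries Recovery Act funds:
print(needs_compliance_testing(500_000, False, True))   # True
print(needs_compliance_testing(500_000, False, False))  # False
```

The point of the adjustment is visible in the last branch: without the Recovery Act flag, a new or modest-sized ARRA program could escape detailed testing under purely expenditure-and-history criteria.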
The single audit process could be adjusted to require the auditor to perform procedures such as the following as part of the routine single audit: provide for review of the design and implementation of internal control over compliance and financial reporting for programs under the Recovery Act; consider risks related to Recovery Act programs in determining which federal programs are major programs; and specifically test Recovery Act programs to determine whether the auditee complied with laws and regulations. The first two items above should preferably be accomplished during 2009, before significant expenditures of funds in 2010, so that the design of internal control can be strengthened prior to the majority of those expenditures. We further believe that OMB Circular No. A-133 and/or the Circular No. A-133 Compliance Supplement could be adjusted to provide some relief from current audit requirements for low-risk programs to offset additional workload demands associated with Recovery Act funds. OMB told us that it is developing audit guidance that would address the above audit objectives. OMB also said that it is considering reevaluating potential options for providing relief from certain existing audit requirements in order to provide some balance to the increased requirements for Recovery Act program auditing. Officials in several states expressed concerns regarding the lack of funding provided to state oversight entities in the Recovery Act, given the additional federal requirements placed on states to provide proper accounting and ensure transparency. Due to fiscal constraints, many states reported significant declines in the number of management and oversight staff, limiting states’ ability to ensure proper implementation and management of Recovery Act funds. To the extent that states’ management infrastructures were already strained due to resource issues, risks will be exacerbated by increased workloads and new program implementation. 
While the majority of states indicated that they lack the necessary resources to conduct additional management and oversight related to the Recovery Act, some states indicated that they are taking measures to either hire new staff or reallocate existing staff to ensure adequate oversight of Recovery Act funds. Officials we interviewed in several states said the lack of funding for state oversight entities in the Recovery Act presents them with a challenge, given the increased need for oversight and accountability. According to state officials, state budget and staffing cuts have limited the ability of state and local oversight entities to ensure adequate management and implementation of the Recovery Act. For example, Colorado’s state auditor reported that state oversight capacity is limited, noting that the Department of Health Care Policy and Financing has had 3 controllers in the past 4 years and that the state legislature’s Joint Budget Committee recently cut field audit staff for the Department of Human Services in half. In addition, the Colorado Department of Transportation’s deputy controller position is vacant, as is the Department of Personnel & Administration’s internal auditor position. Colorado officials noted that these actions are, in part, due to administrative cuts made during a past economic downturn in an attempt to maintain program delivery levels. In Massachusetts, the task forces the Governor convened in December 2008 concluded that it is critical that the Inspector General and State Auditor have resources to audit Recovery Act contracts and the management of Recovery Act funds, and recommended that the Attorney General’s office be provided with the resources to promptly and effectively pursue fraud and abuse. Massachusetts officials explained that the oversight community is facing budget cuts of about 10 percent at a time when increased oversight and accountability are critically needed. 
To illustrate the impact of the impending budget situation, the Inspector General stated that his department does not have the resources to conduct any additional oversight related to Recovery Act funds. This significantly affects the Inspector General’s capacity to conduct oversight because the budget is almost entirely composed of salaries, and any cuts in funding would result in fewer staff available to conduct oversight. In addition, the Massachusetts State Auditor described how the department has already had to furlough staff for 6 days and is anticipating further layoffs before the end of fiscal year 2009. Similarly, 94 percent of that department’s budget is labor, and any cuts in funding generally result in cuts in staff. Much like Colorado and Massachusetts, Arizona and Florida state officials report significant declines in oversight staff. The Florida Auditor General told us that the office has not hired new staff for over a year and that about 10 percent of the office’s positions are unfilled. In addition, officials from the Office of Policy Analysis and Government Accountability told us that their staff has decreased by 10 percent in the past two years. State officials stated that these staff resource constraints may lead them to reassess priorities and reallocate staff to ensure adequate oversight of Recovery Act funds. Officials within Arizona state executive offices that are coordinating oversight activities, such as the Office of Strategic Planning and Budgeting, the Office of Economic Recovery, and the Comptroller’s Office, stated that they will need additional people to help ensure compliance with Recovery Act funding requirements but that the state has a hiring freeze in place to help address budget deficits.
For example, the General Accounting Office within the state Department of Administration has seen its staff reduced from 74 to 50, posing challenges to its increased oversight responsibilities, and the state Department of Economic Security, which manages workforce investment programs, had 8,214 staff on furloughs of five or nine days, depending on pay grade, and has also laid off about 800 staff members. Similarly, a state Department of Housing official stated that the office currently has a vacancy rate of about 15 percent because of the hiring freeze. Furthermore, the state Auditor General reported that its staffing is nearly 25 percent below the authorized level of 229 full-time equivalents. Although most states indicated that they lack the resources needed to provide effective monitoring and oversight, some states indicated that they will hire additional staff to help ensure the prudent use of Recovery Act funds. For example, according to officials with North Carolina’s Governor’s Crime Commission, the management capacity currently in place is not sufficient to implement the Recovery Act. Officials explained that Recovery Act funds for the Edward Byrne Memorial Justice Assistance Grant program have created an increase in workload that will require the department to hire additional staff over the next 3 years. Officials explained that these staff will be hired for the short term, since the money will run out in 3 years. Additionally, officials explained that they are able to use 10 percent of the Justice Assistance Grant funding to pay for the administrative positions that are needed. In addition, officials from Ohio’s Office of Budget and Management (OBM) stated that its Office of Internal Audit plans to increase its internal audit staff from the current 9 to 33 by transferring internal audit personnel from other state agencies and hiring new staff by July 2009.
OBM officials say that the increase in Office of Internal Audit staff will provide the resources needed to implement its objectives and to ensure that current safeguards are in place and followed as the state manages its Recovery Act funded programs. Additionally, some Georgia state officials who directly administer programs stated that overseeing the influx of funds could be a challenge, given the state’s current budget constraints and hiring freeze. For example, the State Auditor, whose fiscal year 2009 budget was cut by 11 percent, expressed concerns about the lack of additional funds for Recovery Act oversight. The Georgia State Auditor noted that, if state fiscal conditions do not improve or federal funding does not become available for audit purposes, additional budget and staffing cuts may occur within the department. In some cases, state officials told us that they planned to use Recovery Act funds to cover their administrative costs. Meanwhile, other state officials want additional clarity on when they could use program funds to cover such costs. A number of states expressed concerns regarding their ability to track Recovery Act funds because of state hiring freezes resulting from budget shortfalls. For instance, New Jersey has not increased its number of state auditors or investigators, nor has there been an increase in funding specifically for Recovery Act oversight. In addition, the state hiring freeze has prevented many state agencies from increasing their Recovery Act oversight efforts. For example, despite an increase of $469 million in Recovery Act funds for state highway projects, no additional staff will be hired to help with those projects or with tasks directly associated with the Recovery Act, such as reporting on the number of jobs created.
While the state’s Department of Transportation has committed to shift resources to meet any expanded need for internal Recovery Act oversight, one person is currently responsible for reviewing contractor-reported payroll information for disadvantaged business enterprises, ensuring compliance with Davis-Bacon wage requirements, and developing job creation figures. State education officials in North Carolina also said that greater oversight capacity is needed to manage the increase in federal funding. However, due to the state’s hiring freeze, the agency will be unable to use state funds to hire the additional staff needed to oversee Recovery Act funds. The North Carolina Recovery Czar said that his office will work with state agencies to authorize hiring additional staff when directly related to Recovery Act oversight. Michigan officials reported that the state’s hiring freeze may not allow state and local agencies to hire the additional staff needed to increase Recovery Act oversight efforts. For example, an official with the state’s Department of Community Health said that because the department has been downsizing for several years through attrition and early retirement, it does not have sufficient staff to cover its current responsibilities, and further reductions are planned for fiscal year 2010. However, state officials told us that they will take the actions necessary to ensure that state departments have the capacity to provide proper oversight and accountability for Recovery Act funds. In contrast, two states indicated that they have or will have sufficient levels of existing personnel to track funds. Texas state officials noted that state agencies plan to use existing staff to manage the stimulus funds. Agency officials will monitor the situation and, as needs arise, will determine whether additional staff should be hired to ensure adequate oversight of the state’s Recovery Act funds.
Additionally, in preparation for the infusion of Recovery Act funds, the Illinois Governor is seeking approximately 350 additional positions statewide in the fiscal year 2010 budget to help implement Recovery Act programs, according to officials from the Governor’s Office of Management and Budget. With respect to oversight of Recovery Act funding at the local level, state and local officials reported varying degrees of preparedness. While California Department of Transportation (Caltrans) officials stated that extensive internal controls exist at the state level, they noted that there may be control weaknesses at the local level. Caltrans is collaborating with local entities to identify and address these weaknesses. Likewise, Colorado officials expressed concerns that effective oversight of funds provided to Jefferson County may be limited due to the recent termination of its internal auditor and the elimination of its internal control audit function. Arizona state officials expressed some concerns about the ability of rural, tribal, and some private entities, such as boards, commissions, and nonprofit organizations, to manage funds, especially if the Recovery Act does not provide administrative funding for some programs. As recipients of Recovery Act funds and as partners with the federal government in achieving Recovery Act goals, states and local units of government are expected to invest Recovery Act funds with a high level of transparency and to be held accountable for results under the Recovery Act. As a means of implementing that goal, guidance has been issued, and will continue to be issued, to federal agencies as well as to direct recipients of funding. To date, OMB has issued two broad sets of guidance to the heads of federal departments and agencies for implementing and managing activities enacted under the Recovery Act.
OMB has also issued for public comment detailed proposed standard data elements that federal agencies will require from all recipients of Recovery Act funding except individuals. When reporting on the use of funds, recipients must show the total amount of recovery funds received from a federal agency; the amount expended or obligated to the project; project-specific information, including the name and description of the project and an evaluation of its completion status; the estimated number of jobs created and retained by the project; and information on any subcontracts awarded by the recipient, as specified in the Recovery Act. In addition, the Civilian Agency Acquisition Council and Defense Acquisition Regulations Council have issued an interim rule revising the Federal Acquisition Regulation (FAR) to require a contract clause that implements these reporting requirements for contracts funded with Recovery Act dollars. State reactions vary widely and often include a mixture of responses to the reporting requirements. Some states will use existing federal program guidance or performance measures to evaluate impact, particularly for ongoing programs. Other states are waiting for additional guidance from federal departments or from OMB on how and what to measure to assess impact. While Georgia awaits further federal guidance, the state is adapting an existing system, used by the State Auditor to fulfill its Single Audit Act responsibilities, to help the state report on Recovery Act funds. The statewide web-based system will be used to track expenditures, project status, and job creation and retention. The Georgia governor is requiring all state agencies and programs receiving Recovery Act funds to use this system. Some states indicated that they have not yet determined how they will assess impact. Preserving existing jobs, stimulating job creation, and promoting economic recovery are among the Recovery Act’s key objectives.
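The recipient reporting fields described above can be thought of as a single record per award. The sketch below illustrates one way such a record might be structured; the field names and example values are our own illustration, not OMB’s official data elements or schema.

```python
from dataclasses import dataclass, field

@dataclass
class RecipientReport:
    """Illustrative recipient-report record; field names are hypothetical, not OMB's."""
    funds_received: float                 # total Recovery Act funds received from a federal agency
    funds_expended_or_obligated: float    # amount expended or obligated to the project
    project_name: str
    project_description: str
    completion_status: str                # evaluation of completion, e.g., "less than 50% complete"
    jobs_created: float                   # estimated number of jobs created
    jobs_retained: float                  # estimated number of jobs retained
    subawards: list = field(default_factory=list)  # subcontracts awarded by the recipient

# Hypothetical example report
report = RecipientReport(
    funds_received=1_000_000.00,
    funds_expended_or_obligated=250_000.00,
    project_name="Example bridge repair",
    project_description="Deck replacement on a two-lane rural bridge",
    completion_status="less than 50% complete",
    jobs_created=12.0,
    jobs_retained=3.0,
)
print(report.completion_status)   # prints "less than 50% complete"
```

A structure like this makes plain why states were concerned about aggregation: each prime recipient and subrecipient produces such records, which must then be rolled up for posting on Recovery.gov.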
Officials in 9 of the 16 states and the District expressed concern about the definitions of jobs retained and jobs created under the Recovery Act, as well as about the methodologies that can be used to estimate each. Officials from several of the states we met with expressed a need for clearer definitions of “jobs retained” and “jobs created.” Officials from a few states expressed the need for clarification on how to track indirect jobs, while others expressed concern about how to measure the impact of funding that is not designed to create jobs. Mississippi state officials suggested the need for clearly defined distinctions among time-limited, part-time, full-time, and permanent jobs, since each state may define these categories differently. Officials from Massachusetts expressed concern that contractors may overestimate the number of jobs retained and created. Some existing programs, such as highway construction, have methodologies for estimating job creation, but other programs, existing and new, do not. State officials that we spoke with are pursuing a number of different approaches to measuring the effects of Recovery Act funding. For example, Florida’s state workforce agency is encouraging recipients of Recovery Act funds throughout the state to list jobs created with the funds in the state’s existing online job bank. The Iowa Department of Transportation tracks the number of worker hours by highway project on the basis of contractor reports and will use these reports to estimate jobs created. In New Jersey, state and local agencies will collect or estimate data on the number of jobs created or retained as a result of Recovery Act funds in different ways. For example, the Newark Housing Authority will use payroll data to track the exact number of union tradesmen and housing authority residents employed to turn damaged vacant units into rentable ones.
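A worker-hour approach like Iowa’s implies a simple full-time-equivalent (FTE) calculation: divide total reported hours by the hours one full-time job represents over the reporting period. The sketch below illustrates the arithmetic; the 520-hour quarterly divisor (40 hours per week for 13 weeks) and the project figures are illustrative assumptions, not an official methodology.

```python
def estimate_fte_jobs(total_worker_hours, hours_per_fte=520.0):
    """Convert reported worker hours into a full-time-equivalent (FTE) job estimate.

    520 hours = 40 hours/week x 13 weeks (one calendar quarter);
    this divisor is an illustrative assumption.
    """
    return total_worker_hours / hours_per_fte

# Hypothetical contractor-reported hours for three highway projects
project_hours = {
    "I-80 resurfacing": 10_400,
    "US-30 bridge deck": 5_200,
    "county road widening": 2_600,
}
total_hours = sum(project_hours.values())          # 18,200 hours
print(round(estimate_fte_jobs(total_hours), 1))    # prints 35.0
```

The choice of divisor matters: the same 18,200 hours yields 35 FTEs against a quarterly schedule but far fewer against an annual one, which is one reason states sought standard definitions before reporting.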
In contrast, New Jersey Transit is using an academic study that examined job creation from transportation investment to estimate the number of jobs created by contractors on its Recovery Act-funded construction projects. Beyond employment issues, some Michigan state universities and the state’s economic development department are expected to participate in analyses of the potential impact of Recovery Act funds. Some of the questions that states and localities have about Recovery Act implementation may have been answered in part by the guidance OMB provided for the data elements and in the Federal Acquisition Regulation, as well as by guidance issued by federal departments. For example, OMB provided definitions for employment, as well as for jobs retained and jobs created via Recovery Act funding. However, OMB did not specify methodologies for estimating jobs retained and jobs created, which has been a concern for some states. Data elements were presented in the form of templates with section-by-section data requirements and instructions. OMB provided a comment period during which it is likely to receive many questions and requests for clarification from states, localities, and other direct recipients of Recovery Act funding. OMB plans to update this guidance again within 30 to 60 days of its April 3, 2009, issuance. Some federal agencies have also provided guidance to the states. The U.S. Departments of Education, Housing and Urban Development, Justice, Labor, and Transportation; the Corporation for National and Community Service; the National Institutes of Health; and the Centers for Medicare & Medicaid Services have provided guidance for program implementation, particularly for established programs. Although guidance is expected, some new programs, such as the Broadband Deployment Grants, are awaiting issuance of implementation instructions. It has been a little over two months since enactment of the Recovery Act, and OMB has moved out quickly.
In this period, OMB has issued two sets of guidance, first on February 18 and next on April 3, with another round to be issued within 60 days. OMB has sought formal public comment on its April 3 guidance update and, before this, according to OMB, reached out informally to Congress; federal, state, and local government officials; and grant and contract recipients to get a broad perspective on what is needed to meet the high expectations set by Congress and the Administration. In addition, OMB is standing up two new reporting vehicles: Recovery.gov, which will be turned over to the Recovery Accountability and Transparency Board and is expected to provide unprecedented public disclosure on the use of Recovery Act funds; and a second system to centrally capture information on the number of jobs created or retained. As OMB’s initiatives move forward and it continues to guide the implementation of the Recovery Act, OMB has opportunities to build upon its efforts to date by addressing several important issues. These issues can be characterized broadly in three categories: (1) Accountability and Transparency Requirements, (2) Administrative Support and Oversight, and (3) Communications. Recipients of Recovery Act funding face a number of implementation challenges in this area. The Act includes many programs that are new or new to the recipient, and even for existing programs, the sudden increase in funds falls outside normal cycles and processes. Add to this the expectation that many programs and projects will be delivered faster so as to inject funds into the economy, and it becomes apparent that timely and efficient evaluations are needed. The following are our recommendations to help strengthen ongoing efforts to ensure accountability and transparency. The single audit process is a major accountability vehicle but should be adjusted to provide appropriate focus and the necessary level of accountability over Recovery Act funds in a more timely manner than the current schedule allows.
OMB has been reaching out to stakeholders to obtain input and is considering a number of options related to the single audit process and related issues. We Would Recommend: To provide additional leverage as an oversight tool for Recovery Act programs, the Director of OMB should adjust the current audit process to: focus the risk assessment auditors use to select programs to test for compliance with 2009 federal program requirements on Recovery Act funding; provide for review during 2009 of the design of internal controls over programs that will receive Recovery Act funding, before significant expenditures occur in 2010; and evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act. Responsibility for reporting on jobs created and retained falls to non-federal recipients of Recovery Act funds. As such, states and localities have a critical role in determining the degree to which Recovery Act goals are achieved. Senior Administration officials and OMB have been soliciting views and developing options for recipient reporting. In its April 3 guidance, OMB took an important step by issuing definitions and standard award terms and conditions and by clarifying requirements for tracking and documenting Recovery Act expenditures. Furthermore, OMB and the Recovery Accountability and Transparency Board are developing the data architecture for the new federal reporting system that will be used to collect recipient reporting information. According to OMB, state chief information officers commented on an early draft, and OMB expects to provide an update for further state review.
We Would Recommend: Given questions raised by many state and local officials about how best to determine both direct and indirect jobs created and retained under the Recovery Act, the Director of OMB should continue OMB’s efforts to identify appropriate methodologies that can be used to: assess jobs created and retained from projects funded by the Recovery Act; determine the impact of Recovery Act spending when job creation is indirect; and identify those types of programs, projects, or activities that in the past have demonstrated substantial job creation or are considered likely to do so in the future, and consider whether the approaches taken to estimate jobs created and jobs retained in these cases can be replicated or adapted to other programs. There are a number of ways that the needed methodologies could be developed. One option would be to establish a working group of federal, state, and local officials and subject matter experts. Given that governors have certified to the use of funds in their states, state officials are uncertain about their reporting responsibilities when Recovery Act funding goes directly to localities. Additionally, they have concerns about the capacity of reporting systems within their states, specifically whether these systems will be capable of aggregating data from multiple sources for posting on Recovery.gov. Some state officials are concerned that too many federal requirements will slow the distribution and use of funds, and others have expressed reservations about the capacity of smaller jurisdictions and nonprofits to report data. Even those who are confident about their own systems are uncertain about the cost and speed of making any required modifications for Recovery.gov reporting or further data collection. Problems also have been identified with the federal systems that support the Recovery Act.
For example, questions have been raised about the reliability of USAspending.gov and the ability of Grants.gov to handle the increased volume of grant applications. OMB is taking concerted actions to address these concerns. It plans to reissue USAspending guidance shortly to include changes in operations that are expected to improve data quality. In a memorandum dated March 9, OMB said that it is working closely with federal agencies to identify system risks that could disrupt effective Recovery Act implementation and acknowledged that Grants.gov is one such system. A subsequent memorandum, issued April 8, offered a short-term solution to the significant increase in Grants.gov usage while longer-term alternative approaches are being explored. GAO has work underway to review differences in agency policies and methods for submitting grant applications using Grants.gov and will issue a report shortly. OMB addressed earlier questions about reporting coverage in its April 3 guidance. According to OMB, there are limited circumstances in which prime and subrecipient reporting will not be sufficient to capture information at the project level. OMB stated that it will expand its current model in future guidance. OMB guidance described the recipient reporting requirements under the Recovery Act’s section 1512 as the minimum that must be collected, leaving it to federal agencies to determine whether additional information would be required for program oversight. We Would Recommend: In consultation with the Recovery Accountability and Transparency Board and the states, the Director of OMB should evaluate current information and data collection requirements to determine whether sufficient, reliable, and timely information is being collected before adding further data collection requirements. As part of this evaluation, OMB should weigh the cost and burden of additional reporting on states and localities against expected benefits.
At a time when states are experiencing cutbacks, state officials expect the Recovery Act to bring new regulations, increase accounting and management workloads, change agency operating procedures, require modifications to information systems, and strain staff capacity, particularly for contract management. Although federal program guidelines can make a percentage of grant funding available for administrative or overhead costs, the percentage varies by program. In considering other sources, states have asked whether the portion of the State Fiscal Stabilization Fund that is available for government services could be used for this purpose. Others have suggested a global approach that would increase the percentage of all Recovery Act grant funding that can be applied to administrative costs. As noted earlier, state auditors also are concerned with meeting increased audit requirements for Recovery Act funding with a reduced number of staff and without a commensurate reduction in other audit responsibilities or increase in funding. OMB and senior administration officials are aware of the states’ concerns and have a number of options under consideration. We Would Recommend: The Director of OMB should promptly clarify what Recovery Act funds can be used to support state efforts to ensure accountability and oversight, especially in light of enhanced oversight and coordination requirements. State officials expressed concerns regarding communication on the release of Recovery Act funds and their inability to determine when to expect federal agency program guidance. Once funds are released, there is no consistent procedure for ensuring that the appropriate officials in states and localities are notified.
According to OMB, agencies must immediately post guidance to the Recovery Act web site and inform, to the “maximum extent practical,” a broad array of external stakeholders. In addition, since nearly half of the estimated spending programs in the Recovery Act will be administered by non-federal entities, state officials have suggested opportunities to improve communication in several areas. For example, they wish to be notified when funds are made available to prime recipients that are not state agencies. Some of the uncertainty can be attributed to evolving reports, and the timing of those reports, at the federal level, as well as to the recognition that the different terms used by federal assistance programs add to the confusion. A reconsideration of how best to publicly report on federal agency plans and actions led to OMB’s decision to continue the existing requirement to report on the federal status of funds in the Weekly Financial and Activity Reports and to eliminate a planned Monthly Financial Report. The Formula and Block Grant Allocation Report has been replaced by the Funding Notification Report. This expanded report includes all types of awards, not just formula and block grants, and is expected to better capture the point in the federal process when funds are made available. We Would Recommend: To foster timely and efficient communications, the Director of OMB should develop an approach that provides dependable notification to (1) prime recipients in states and localities when funds are made available for their use, (2) states, where the state is not the primary recipient of funds but has a statewide interest in this information, and (3) all non-federal recipients, on planned releases of federal agency guidance and, if known, whether additional guidance or modifications are expected. We provided the Director of the Office of Management and Budget with a draft of this report for comment on April 20, 2009.
OMB staff responded the next day, noting that in its initial review, OMB concurred with the overall objectives of our recommendations. OMB staff also provided some clarifying information, adding that OMB will complete a more thorough review in a few days. We have incorporated OMB’s clarifying information as appropriate. In addition, OMB said it plans to work with us to define the best path forward on our recommendations and to further the accountability and transparency of the Recovery Act. The Governors of each of the 16 states and the Mayor of the District were provided drafts for comment on each of their respective appendixes in this report. Those comments are included in the appendixes. We are sending copies of this report to the Office of Management and Budget and relevant sections to the selected states and the District. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-5500. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III-XX. The Recovery Act specifies several roles for GAO, including conducting bimonthly reviews of selected states’ and localities’ use of funds made available under the act. As a result, our objectives for this report were to describe (1) selected states’ and localities’ uses of and planning for Recovery Act funds, (2) approaches taken by the selected states and localities to ensure accountability for Recovery Act funds, and (3) states’ plans to evaluate the impact of the Recovery Act funds they have received to date. To address our objectives, we selected a core group of 16 states and the District that we will follow over the next few years to provide an ongoing longitudinal analysis of the use of funds provided in conjunction with the Recovery Act. 
The selected states are Arizona, California, Colorado, Florida, Georgia, Iowa, Illinois, Massachusetts, Michigan, Mississippi, New Jersey, New York, North Carolina, Ohio, Pennsylvania, and Texas. We selected these states and the District on the basis of outlay projections, percentage of the U.S. population represented, unemployment rates and changes in those rates, and a mix of states’ poverty levels, geographic coverage, and representation of both urban and rural areas. These states and D.C. contain about 65 percent of the U.S. population and are estimated to receive about two-thirds of the intergovernmental grant funds available through the Recovery Act. Furthermore, they strike a balance between covering a significant portion of Recovery Act funding and obtaining a mix that reflects the breadth of circumstances facing states and localities throughout the country. To focus our analysis, we examined a set of programs receiving Recovery Act funding that are administered by states and localities. To do this, we reviewed analyses and estimates of Recovery Act funds flowing to states and localities that were done by state and local associations, including the National Governors Association, the National Conference of State Legislatures, and Federal Funds Information for States (FFIS). We also analyzed data from congressional appropriations committees and the Congressional Budget Office (CBO) on the distribution, allocation, and spend-out rates of Recovery Act funding. The programs we selected were streams of Recovery Act funding flowing to states and localities through increased Medicaid Federal Medical Assistance Percentage (FMAP) grant awards, funding for highway infrastructure investment, and the State Fiscal Stabilization Fund (SFSF). Together, they are expected to account for about 91 percent of fiscal year 2009 Recovery Act spending by states and localities. For the FMAP grant awards, we conducted a web-based inquiry, asking the 16 states and D.C.
to provide data and information on enrollment, expenditures, and changes to their Medicaid programs and to report their plans to use state funds made available as a result of the increased FMAP. We reviewed states’ responses for internal consistency and conducted follow-up with the states as needed. We also spoke with individuals from the U.S. Department of Health and Human Services regarding the changes to the FMAP and the disbursement of increased FMAP funds. In addition, we spoke with individuals from the Centers for Medicare & Medicaid Services regarding their oversight of and guidance to states. For highway infrastructure investment, we reviewed status reports and guidance to the states and discussed these with U.S. Department of Transportation (DOT) and Federal Highway Administration (FHWA) officials. To understand how the U.S. Department of Education is implementing the SFSF, we reviewed relevant laws, guidance, and communications to the states and interviewed Education officials. Our review of related documents and interviews with federal agency officials focused on determining and clarifying how states, school districts, and public institutions of higher education would be expected to implement various provisions of the SFSF. We considered programs with large amounts of funding, programs receiving significant increases in funding, new programs, and those with known risks. For example, the Medicaid program is on the GAO high-risk list. In addition, we consulted with our internal program experts and outside experts, including federal agency inspectors general, state and local auditors, and state and local government associations. Our teams visited the 16 selected states, localities within those states, and D.C. during March and April 2009 to collect documentation on the plans, uses, and tracking of Recovery Act funds and to conduct interviews with state and local officials.
The teams met with a variety of state and local officials from executive-level offices, including Governors and their key staff, Comptrollers' Offices, Treasurers' Offices, State Auditors' Offices, Recovery Czars, Inspectors General, and senior finance and budget officials, as well as with local officials from housing authorities, school districts, police departments, and other key audit community stakeholders, to determine how they planned to conduct oversight of Recovery Act funds. The teams also met with state and local agencies administering programs receiving Recovery Act funds, including state Departments of Education, Transportation, and Health and Human Services, and with selected legislative offices in the states. In support of these interviews, we developed a series of program review and semi-structured interview guides that addressed state plans for management, tracking, and reporting of Recovery Act funds and activities. These guides focused on identification of risk, risk mitigation, contracting, the internal control environment, and safeguards against fraud, waste, and abuse. While in the 16 states and D.C., the teams also met with and interviewed a number of local government officials, whose offices are identified in Appendix II. To determine how states and localities plan to track the receipt, planning, and use of Recovery Act funds, the state and D.C. teams asked cognizant officials to describe the accounting systems and conventions that would be used to execute transactions and to monitor and report on expenditures. In addition, to assist in the planning of the audit work and for inclusion in their risk assessment framework, we provided the state and D.C. teams with fiscal year 2007 single audit summary information, which was the most recent single audit information available. Single audit information was obtained from the Federal Audit Clearinghouse (FAC) single audit data collection forms and the single audit reports.
The single audit summary information provided included: (1) total federal awards expended; (2) whether there were questioned costs; (3) the financial statement audit opinion, the number of material weaknesses, and a brief description of each material weakness; and (4) the major federal program audit opinion, the number of material weaknesses, and a brief description of each material weakness. We examined the Single Audit reports to identify these issues and used that information when interviewing state officials in order to ascertain how they have addressed or plan to address the weaknesses. We also asked auditors to address how they planned to monitor and oversee the Recovery Act funds and whether they felt their offices had sufficient capacity to handle any new or increased responsibilities related to the Recovery Act. To understand the reporting requirements of the Recovery Act, we reviewed the guidance issued by OMB on February 18 and April 3, 2009, and selected federal agency guidance related to grants and to states and localities. We also reviewed an interim rule amending the Federal Acquisition Regulation containing interim reporting requirements for the Recovery Act, issued March 31, 2009. Additionally, we studied the OMB-issued Information Collection Requirements: Proposed Collection (April 1, 2009), which contains the data elements for the quarterly recipient reports specified in Section 1512 of the Recovery Act. Each of the states and D.C. provided information on its plans to provide assessment data required by Section 1512. We conducted this performance audit from February 17, 2009, through April 20, 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Data on states' and localities' plans, uses, and tracking of Recovery Act funds were provided during interviews and follow-up meetings with state and local officials. Given that much of the Recovery Act funding had not yet reached the states and localities, we could not validate or test the accuracy of the statements made by these officials regarding their accounting and tracking systems. Overall, we determined that the data were sufficiently reliable for the purposes of providing the background information on Recovery Act funding for this report. Our sample of selected states is not a random selection and therefore cannot be generalized to the total population of state and local governments.

Appendix II: Localities Visited by GAO in Selected States

Use of funds: An estimated 90 percent of fiscal year 2009 Recovery Act funding provided to states and localities will be for health, transportation, and education programs. The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage (FMAP) awards, the State Fiscal Stabilization Fund, and highways.

Medicaid Federal Medical Assistance Percentage (FMAP) Funds

As of April 3, 2009, the Centers for Medicare & Medicaid Services (CMS) had made about $534.6 million in Medicaid FMAP grant awards to Arizona. As of April 1, 2009, the state had drawn down about $286.3 million, or almost 54 percent of its initial increased FMAP grant awards. Officials plan to use a significant portion of funds made available as a result of the increased FMAP to offset statewide general fund shortfalls. Arizona was apportioned about $522 million for highway infrastructure investment on March 2, 2009, by the U.S. Department of Transportation. As of April 16, 2009, the U.S.
Department of Transportation had obligated $148.1 million for 26 Arizona projects. As of April 20, 2009, the Arizona Department of Transportation (ADOT) had selected 41 highway transportation projects worth almost $350 million and had advertised competitive bids on 27 of these projects totaling about $190 million. The earliest bids will close on April 24, 2009, with projects expected to begin work later this spring. These projects include activities such as preserving pavement, widening lanes and adding shoulders, and repairing bridges and interchanges. Arizona will request reimbursement from the Federal Highway Administration as the state makes payments.

U.S. Department of Education State Fiscal Stabilization Fund (Initial Release)

Arizona was allocated about $681.4 million from the initial release of these funds on April 2, 2009, by the U.S. Department of Education. Before receiving the funds, states are required to submit an application that provides several assurances to the Department of Education. These include assurances that they will meet maintenance-of-effort requirements (or that they will be able to comply with waiver provisions) and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. The state plans to submit its application by April 24, 2009, once officials review the latest estimates for the state's fiscal year 2010 budget situation. The state expects funds to be used to improve student assessments, obtain more teachers, and meet federal standards, among other things, in compliance with federal requirements.
Arizona is also receiving additional Recovery Act funds under other programs, such as programs under Title I, Part A of the Elementary and Secondary Education Act (ESEA), (commonly known as No Child Left Behind); programs under the Individuals with Disabilities Education Act (IDEA); several housing programs such as the Low-Income Housing Tax Credit (LIHTC) Assistance program; and programs under the Workforce Investment Act to help provide employment-related services, among other things. Plans to use these funds are discussed throughout this appendix. Safeguarding and transparency: The state government created a new Office of Economic Recovery within the Office of the Governor, the purpose of which is to coordinate the use of Recovery Act funds across state agencies and to ensure accountability for and transparency in the use of these funds. In addition, to meet Recovery Act requirements, the state comptroller noted that Arizona intends to add new codes to its central accounting system to track Recovery Act funds separately and work with state agencies that have their own accounting systems to ensure that they can also track funds separately. The state has issued guidance on managing the funds, and has plans to publicly report its Recovery Act spending, although officials have said that the state may not be aware of all funds sent directly by federal agencies to other entities, such as municipalities and independent authorities. The officials also identified other challenges, such as ensuring that recipients can report on their use of funds and that, where applicable, funds are used to supplement and not supplant state funds that support relevant affected programs. State and local officials noted that they expect to use existing internal controls and monitoring techniques to safeguard Recovery Act funds, but are concerned about having enough resources to do so. 
State departments were in the early stages of addressing some of these challenges, and are awaiting further guidance from the federal government on these issues. Assessing the effects of spending: Arizona state agencies and select localities that we met with expect to use or enhance existing performance metrics to assess the results achieved through Recovery Act funding, unless the federal government requires new metrics that will need to be developed. State officials were unclear, however, on how to determine the number of jobs created and saved by certain Recovery Act funds and were awaiting further guidance from the federal government. Arizona has begun to use some of its Recovery Act funds as follows: Increased Federal Medical Assistance Percentage Funds: Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the Federal Medical Assistance Percentage (FMAP). Across states, the FMAP may range from 50 to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. 
Generally, for federal fiscal year 2009 through the first quarter of federal fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states' prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states' FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of the increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. As of April 1, 2009, Arizona had drawn down $286.3 million in increased FMAP grant awards, which is almost 54 percent of its total awards of $534.5 million. Officials plan to use a significant portion of funds made available as a result of the increased FMAP to offset shortfalls created by reductions implemented to balance the budget. The state used the initial funds made available as a result of the increased FMAP to meet payroll and to avoid serious cash-flow problems.

Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways and to undertake other surface transportation projects. States must follow the requirements for the existing programs. In addition, the governor must certify that the state will maintain its current level of transportation spending, and the governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds.
Arizona has provided this certification. As of April 20, 2009, the Arizona Department of Transportation (ADOT) had selected 41 highway transportation projects to be funded with Recovery Act dollars. These projects are worth approximately $350 million of the state's total $521.9 million apportionment. They include projects such as pavement preservation, widening lanes and adding shoulders, and bridge and interchange repair. As of April 20, 2009, the state had advertised 27 projects worth about $190 million, with the earliest bids to close on April 24, 2009, and projects expected to begin work this spring. Among the projects that have been advertised for bid are the widening of Interstate 10 in Maricopa County, repaving of state routes, safety improvements to a state route, and intersection improvements. Among the first advertisements to close will be the widening of a shoulder within the Tonto National Forest, on State Route 87. This project is estimated to cost approximately $6.8 million and to take 150 days to complete. Bids will close on April 24, 2009.

U.S. Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF) to be administered by the U.S. Department of Education (Education). The SFSF provides funds to states to help avoid reductions in education and other essential public services. The initial award of SFSF funding requires each state to submit an application to Education that assures it will take action to meet certain educational requirements, such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. Arizona's initial SFSF allocation is $681.4 million. The state plans to submit its application for funds by May 4, 2009, but according to state education officials, they are waiting for the legislature to propose a 2010 budget for their programs before they can definitively decide how they will spend the funds.
Generally, the state expects that recipients, such as local school boards, will use their allocations to improve the tools they use to assess student performance and determine to what extent performance meets federal academic standards, rehire teachers who were let go because of prior budget cuts, retain teachers, and meet the federal requirement that all schools have equal access to highly qualified teachers, among other things. Funds for the state universities will help them maintain services and staff as well as avoid tuition increases. In addition to stabilization funding to support education through the State Fiscal Stabilization Fund, a senior official from the Arizona Department of Education noted that, as of April 3, 2009, Arizona had received $97.5 million for programs under Title I, Part A of ESEA. The funds will be used to improve assessments to meet federal standards, enrich teacher qualifications, avoid more teacher layoffs, improve poorer-performing schools, and ultimately improve student performance, among other things. The state had also received about $89.2 million for programs under IDEA, Part B, which provides funds for public education to children with disabilities. According to state Department of Education officials, these funds will be used to hire more teachers to serve students with special needs, among other things. State education officials said that they had prepared estimated allocations for the No Child Left Behind Recovery Act funds to the local school districts, which in turn will prepare and submit applications before they can use the funds. Arizona is also eligible to receive Recovery Act funds for several housing programs, including the Low-Income Housing Tax Credit (LIHTC) Assistance program. The Arizona Department of Housing received notice that it will receive approximately $32 million to provide gap financing for LIHTC projects, which provide funding for development of low-income housing.
Finally, the state Department of Economic Security had received approximately $43 million in Recovery Act funding anticipated for Workforce Investment Act programs to be used for adult, youth (including a summer youth program), and dislocated worker services. Faced with deteriorating revenue projections, declining consumer confidence, a depressed real estate market, and a requirement to balance its budget, Arizona officials believe that much of the money the state will receive in Recovery Act funds will relieve some of the state’s immediate fiscal pressures. State officials envision that funds made available as a result of the Recovery Act will be used to support program budgets that had been reduced in the state’s efforts to balance the budget. Arizona has about $7 billion in its General Fund with a current budget of about $10 billion. State officials are working to close a budget gap of about $1.3 billion for fiscal year 2008, an estimated budget gap of about $2.1 billion for state fiscal year 2009 and about $2.8 billion for fiscal year 2010 through reductions and other strategies. These strategies were limited to some extent, because voter propositions protect major programs from significant cuts, including Medicaid, education, and corrections, meaning other programs must absorb the cuts. The state’s budget imbalance has been complicated by lower-than-anticipated revenues. For example, state fiscal year 2009 revenue is significantly lower than estimated and has left the state unable to support previously approved spending levels. Arizona’s Budget Office has estimated its future revenues and expenditures for each fiscal year through 2014. It projects an increasing deficit in each fiscal year, from $2.1 billion in 2009 to $4.1 billion in 2014, a situation which most likely would mean continued cuts. 
The state’s Budget Stabilization Fund, known as its “rainy day” fund—a reserve fund built up during more favorable economic conditions to be used during difficult economic times—has been depleted. As of April 13, 2009, decisions about finalizing the fiscal year 2010 budget were still in flux in part because Governor Brewer—only in office since January after the former Governor, Janet Napolitano, became Secretary of the U.S. Department of Homeland Security—has not issued a formal budget proposal. The Governor recognized that further reductions in government services may be necessary to help close the significant deficit between state revenues and expenditures. Given this, in early March, the Governor certified that the state would accept the funds made available by the Recovery Act and use certain funds to create jobs and promote economic growth within the state. Because of the state’s economic and budgetary challenges, some state agency and local officials we met with expected to use the funds as they had been using them under their existing programs and did not expect to use Recovery Act funding on new initiatives. They also were confident recipients had sufficient critical uses for the funds and could use them immediately. However, state officials expressed concerns that using Recovery Act funds to make longer term operational and program commitments would mean higher future state spending that would not be sustainable once Recovery Act funds were no longer available, given the state of the economy. As a result, officials from one state agency explained that they are advising subrecipients to spend their funds on shorter term projects. Furthermore, with program budgets being cut to help relieve fiscal pressures, some state officials have said it may be challenging to ensure compliance with provisions requiring certain Recovery Act funds to be used to supplement and not supplant FY 2010 program funds. 
Officials with the state Department of Education, however, had one concern about passing the supplanting test. They said that it was unclear whether states could treat Recovery Act funds provided under the fiscal stabilization program as “state” funds versus “federal” funds. If they could use the funds as state resources, they would be able to meet the supplanting restrictions, but if not, they would have serious challenges in complying, jeopardizing the use of the funds. On the other hand, some state officials and program managers did not think it would be difficult to demonstrate they were not supplanting state funds in part because state funding for the programs had already been cut so significantly—in other words, there were few state funds to supplant. For example, they did not think it would be difficult to show that activities supported with Recovery Act resources, such as keeping teachers, could only be accomplished with federal support. One issue raised by officials in the Office of the Governor and within some state and local program offices was covering the costs to oversee and track the use of the Recovery Act funds, given past budget cuts, staff reductions, and increasing workloads—for example, increasing numbers of unemployed individuals who want services. These officials noted that their service delivery capacity will be challenged to administer funds flowing into eligible programs. Some of the officials wondered what flexibility they had to use some of the Recovery Act funds to cover administrative costs. On the other hand, some state agency officials said that they expected to be able to oversee and track Recovery Act funds with existing resources because funding to current programs that had administrative processes in place would be increased. In still other cases, Recovery Act funds will be disbursed through existing grant programs that may provide for a certain percentage of funds to be used for administration. 
The state comptroller told us that the state’s existing accounting system will have new accounting codes added in order to segregate and track the Recovery Act funds separately from other funds that will flow through the state government. Because some larger agencies and program offices maintain their own accounting systems, the Arizona General Accounting Office has issued guidance to state agencies on their responsibilities, including how they are to receive, disburse, tag, or code funds in their accounting systems; track funds separately; and, to some extent, report on these federal resources. State officials we spoke with noted that they do not foresee that it will be difficult to track Recovery Act funds separately from other funds. However, an official in the state Department of Economic Security noted that the Recovery Act funds will stress the tracking and reporting capacity of the financial management systems they use because the systems are old, are not very flexible, and were not designed for these purposes. The official said that the systems must be enhanced to provide the capacity needed for Recovery Act funds and that they are working to design a solution for this problem. Department heads and program officials generally expect that they will require subrecipients, through agreements, grant applications, and revised contract provisions, to track and report Recovery Act funding separately. For example, unemployment program managers said they were issuing new intergovernmental agreements with localities to cover new reporting requirements. However, several of the state officials raised questions about the tracking and reporting abilities of some local organizations, such as small, rural entities, boards or commissions, or private entities not used to doing business with the federal government. 
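As a rough sketch of the fund segregation approach the comptroller describes (new accounting codes that keep Recovery Act dollars separate from other funds), transactions tagged with distinct codes can be summed per code. The code names and amounts below are hypothetical illustrations, not Arizona's actual chart of accounts.

```python
from collections import defaultdict

# Hypothetical fund codes; the report does not disclose the actual codes
# Arizona added to its central accounting system.
RECOVERY_CODES = {"ARRA-FMAP", "ARRA-HWY", "ARRA-SFSF"}

transactions = [
    {"fund_code": "ARRA-FMAP", "amount": 286_300_000},  # illustrative amounts
    {"fund_code": "GEN-FUND",  "amount": 1_000_000},
    {"fund_code": "ARRA-HWY",  "amount": 148_100_000},
]

def totals_by_code(txns):
    """Sum disbursements per fund code so each funding stream stays segregated."""
    totals = defaultdict(int)
    for t in txns:
        totals[t["fund_code"]] += t["amount"]
    return dict(totals)

def recovery_total(txns):
    """Total only the transactions carried under Recovery Act fund codes."""
    return sum(t["amount"] for t in txns if t["fund_code"] in RECOVERY_CODES)

print(recovery_total(transactions))  # 434400000
```

Tagging every transaction at entry time, rather than reclassifying later, is what allows Recovery Act funds to be reported separately without changing the rest of the accounting workflow.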
Furthermore, several state department officials acknowledged either that some state agency information systems have data reliability problems that will have to be resolved or that some subrecipients had problems in the past providing timely and accurate reporting; the officials said that they would work with these entities to comply and that they had sanctions to use as a last resort. State officials also expressed some concern that the new requirement to provide financial reports on subrecipients' use of funds within 10 days after a quarter ends may be challenging for both state and local entities to meet, because they may not have actual data in time to meet this reporting time frame. Finally, the state may lack the ability to track the portion of Recovery Act funds going directly to recipients other than Arizona government agencies, such as independent state authorities, local governments, or other entities. State officials expressed concern that they may not be able to track and report Recovery Act funds when these entities receive the monies directly from federal agencies rather than through state agencies. Overall, the state agency and local officials that we spoke with expect that their existing internal controls and techniques to manage any potential risks posed to Recovery Act funding will be sufficient and effective to safeguard Recovery Act funds, unless additional requirements are mandated by the federal government that generate the need to change business processes. These controls and techniques include submitting financial and performance reports for review, as well as conducting supervisory and compliance reviews, on-site inspections, external audits, and audits by the state Auditor General. Although Arizona is largely decentralized—state agencies and localities have responsibility for monitoring and are accountable for their respective Recovery Act funds—the state executives are reaching out to the state agencies to help ensure they are ready.
For example, the state budget director met with the heads of the programs potentially receiving Recovery Act funds to gauge each program's preparedness. In addition, a number of state agencies were conducting or had plans to conduct meetings, training, and outreach to funding recipients to help them understand the goals and objectives of the act and their responsibilities for managing the funding it would provide. Similarly, in early April 2009, the state's General Accounting Office released a technical bulletin, the purpose of which was to establish consistent policies and procedures that all state agencies receiving Recovery Act funds must “immediately implement in order to effectively manage activities under the act.” A senior official in the state comptroller's office said that the office plans to conduct a survey to inventory current internal controls at state agencies to help ensure controls are in place to limit the risk of fraud, waste, abuse, and mismanagement of Recovery Act funds. Audits used as an internal control have also identified several risks that remain to be addressed. For example, Arizona's fiscal year 2007 Single Audit report identified a number of material weaknesses related to the state Department of Education. The report identified a material weakness involving IDEA in which the state department had not reviewed subrecipients to ensure that federal awards were used for authorized purposes in compliance with laws, regulations, and the provisions of contracts or grant agreements. The audit report also identified one financial reporting material weakness related to the state Department of Administration's ability to prepare timely financial statements, including its Comprehensive Annual Financial Report (CAFR). This is largely because many of the larger state agencies maintain separate accounting systems and submit financial data to the Department of Administration for inclusion in its consolidated financial statements.
The fiscal year 2007 CAFR was issued in June 2008, approximately 6 months after the scheduled deadline. According to the Auditor General's Office, the fiscal year 2008 CAFR will also be completed late, as the last agency submitted its financial statement on March 9, 2009. The office noted that this control deficiency affects the timeliness of financial reporting, which in turn affects the needs of users. It is especially important that Arizona try to address the timeliness of its financial statements given the number of reports and the strict reporting timelines imposed on the state under the Recovery Act. For most of the other programs, managers stated that they had no outstanding material weaknesses and that any past weaknesses had been brought into compliance. According to state officials, another area of risk the state is trying to manage is that some Recovery Act funds, particularly in the transportation area, are reimbursable, meaning that either ADOT or localities will have to spend funds from their own budgets until they are reimbursed by Recovery Act funds. Given the state's challenging financial situation, it may be difficult for some state and local government entities to spend the funds up front with the limited cash they have on hand. This is particularly true for rural transit projects. According to an ADOT official, to address this risk, the department is vetting applications for rural transit funds closely, with an eye toward granting funds only to those localities that have shown they have the cash on hand to pay up front for the costs of the rural transit projects. Representatives of a number of state executive offices, state agencies, and select localities reported that they would at a minimum continue to monitor Recovery Act funding as they had monitored federal funding provided to these same programs in the past.
They expected to meet the financial monitoring, performance measurement, and accountability requirements using existing systems and reports, unless the federal government institutes any new requirements that would require changes to their systems and processes. The entities were still waiting for further guidance from the federal government to determine any needed changes. In some cases, agencies had plans to increase monitoring. For example, according to officials for the Arizona Division of the Federal Highway Administration (FHWA), they plan on increasing the number of site visits on projects that use Recovery Act funds. Similarly, state transportation officials will require that contractors report the Recovery Act dollars spent and the jobs they created as part of their regular reports to the state. To some extent, Arizona is providing the public an opportunity to monitor how the state is using Recovery Act funding and what it is achieving with these funds through a Web site, azrecovery.gov, where the state has posted links to program funding levels, guidance, and intended uses of Recovery Act money, and intends to post reports on the use of funds, among other things. However, several state officials expressed concern that the Recovery Act did not provide funding specifically for state oversight activities, despite their importance in ensuring that the Recovery Act funds are used appropriately and effectively. Officials within state executive offices that are coordinating oversight activities—such as the Office of Economic Recovery and the Comptroller’s Office—stated that they will be challenged to oversee compliance with Recovery Act funding requirements within their existing staffing levels, given that the state currently has a hiring freeze to help relieve its budget deficits. 
For example, the Arizona General Accounting Office within the state Department of Administration has experienced a reduction of staff from 74 to 50, posing challenges to its increased oversight responsibilities. The Department of Economic Security, which manages workforce investment programs and human services programs, among other responsibilities, has an estimated 8,214 staff on furlough and has also laid off about 800 staff members. Similarly, a Department of Housing official stated that the office currently has a vacancy rate of about 15 percent because of the hiring freeze. Furthermore, the state Auditor General reported that its staffing levels are nearly 25 percent below the authorized staffing level of 229 full-time equivalents. State agencies and the select localities that we spoke with expected to use existing performance metrics to assess results achieved through Recovery Act funding, but were also looking for more guidance from the federal government on how to comply with new assessment requirements under the act. Agency officials generally stated that because the Recovery Act funds are for pre-existing programs, they will continue to use their existing performance metrics to assess impacts. For example, the Arizona Criminal Justice Commission, which oversees, among other things, the Edward Byrne Memorial Justice Assistance Grants, tracks a range of short-term and long-term performance measures that assess the effectiveness of law enforcement projects funded by the grants. Short-term measures include increasing the number of units that report high program quality, while long-term measures include changing crime rate percentages in communities. Commission officials stated that they will continue to track these measures for Recovery Act funding, in addition to any new measures required under the act. 
Likewise, administrators at a local school district we visited stated that they have a department that uses a system to track the performance of every school and every student in the school district. The officials stated that they will use the same measures to track school and student performance improvements using Recovery Act funds. However, officials were unclear as to how to determine the number of jobs created and saved by certain Recovery Act funds, which are new measures required by the act. State education officials noted that the act is vague about determining the number of teachers who would have been laid off in the absence of Recovery Act funding. Although a state housing official expected that her office would have the capabilities to assess results, such as job creation and economic output, local housing officials stated they may have difficulty doing so. State and local officials were waiting for additional guidance from the federal government on how to implement measures for jobs created and saved, as well as any new measures required under the act. We provided the Governor of Arizona with a draft of this appendix on April 17, 2009. The Director of the Office of Economic Recovery responded for the Governor on April 20, 2009. In general, the state agreed with our draft and provided some clarifying information, which we incorporated. The state also provided technical suggestions that were incorporated, as appropriate. In addition to the contacts named above, Kirk Kiester, Assistant Director; Joseph Dewechter, analyst-in-charge; Lisa Brownson; Aisha Cabrer; Alberto Leff; Jeff Schmerling; and Margaret Vo made major contributions to this report.

Use of funds: An estimated 90 percent of fiscal year 2009 Recovery Act funding provided to states and localities will be for health, transportation and education programs. 
The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage (FMAP) awards, the State Fiscal Stabilization Fund, and highways.

Medicaid Federal Medical Assistance Percentage (FMAP) Funds
As of April 3, 2009, the Centers for Medicare & Medicaid Services (CMS) had made about $3.331 billion in increased FMAP grant awards to California. As of April 1, 2009, the state had drawn down about $1.5 billion, or 45.4 percent, of its initial increased FMAP grant awards. Funds made available as a result of the increased FMAP will help offset the state’s general fund budget deficit, according to California officials. California was apportioned about $2.570 billion for highway infrastructure investment on March 2, 2009, by the U.S. Department of Transportation. Under a state law enacted in late March 2009, 62.5 percent of the funds ($1.606 billion) will go to local governments for projects of their selection. Of the remaining 37.5 percent ($964 million), $625 million will go to State Highway Operation and Protection Program (SHOPP) projects for highway rehabilitation and eligible maintenance and repair; $29 million will fund Transportation Enhancement projects; and $310 million will be loaned to fund stalled capacity expansion projects. As of April 16, 2009, the U.S. Department of Transportation had obligated $261.4 million for 20 California projects. California will request reimbursement from the U.S. Department of Transportation as the state makes payments to contractors.

U.S. Department of Education State Fiscal Stabilization Fund (Initial Release)
California was allocated about $3.993 billion from the initial release of these funds on April 2, 2009, by the U.S. Department of Education. Before receiving the funds, states are required to submit an application that provides several assurances to the Department of Education. 
These include assurances that they will meet maintenance of effort requirements (or that they will be able to comply with waiver provisions) and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. California’s application was approved by the U.S. Department of Education on April 17, 2009, and the state is now eligible to draw funds for local school districts and universities. Approximately $3.266 billion of the $3.993 billion (81.8 percent) must be spent on education. The remaining $727 million (18.2 percent) can be spent at the Governor’s discretion and is expected to be directed to public safety. Of the funds devoted to education, the majority will be spent on primary and secondary education. California is receiving additional Recovery Act funds under other programs, such as Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA, commonly known as No Child Left Behind); the Individuals with Disabilities Education Act, Part B; and workforce training programs under the Workforce Investment Act (WIA). Safeguarding and transparency: The Governor established the California Federal Economic Stimulus Task Force to ensure both accountability and transparency in how funds are spent, consistent with the Recovery Act and the state’s own goals. The Task Force will also manage California’s recovery Web site (www.recovery.ca.gov), the state’s principal vehicle for reporting on the use and status of Recovery Act funds. In addition, on April 3, 2009, California appointed a Recovery Act Inspector General to make sure Recovery Act funds are used as intended and to identify instances of waste, fraud, and abuse. California intends to use its existing accounting system to track funds flowing through the state government. 
Although California will publicly report its Recovery Act spending, officials have said that the state may not be aware of all federal funds sent directly to other entities, such as municipalities and independent authorities. The California State Auditor has raised concerns about internal controls at various state agencies that could affect accountability for Recovery Act funds, and will take this into account when assessing risk during her current audit planning efforts. Assessing the effects of spending: According to state officials, California has begun to develop plans to assess the effects of Recovery Act spending. However, they are waiting for further guidance from the federal government, particularly related to measuring job creation. California has begun to use some of its Recovery Act funds, as follows: Increased Federal Medical Assistance Percentage Funds: Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the Federal Medical Assistance Percentage (FMAP). Across states, the FMAP may range from 50 to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. 
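To make the matching arithmetic concrete, the short sketch below uses hypothetical figures (a state at the 50 percent FMAP floor and an assumed $1 billion in quarterly Medicaid service spending; these are illustrative numbers, not actual state claims) to show how an across-the-board increase of 6.2 percentage points in the FMAP, one component of the Recovery Act increase, raises the federal share of Medicaid service spending:

```python
# Illustrative only: a state at the 50 percent FMAP floor and a
# hypothetical $1 billion in quarterly Medicaid service spending.
# The Recovery Act's full quarterly calculation also includes a
# hold-harmless provision and an unemployment-based add-on.

def federal_share(medicaid_service_spending: float, fmap: float) -> float:
    """Federal reimbursement: service spending multiplied by the matching rate."""
    return medicaid_service_spending * fmap

regular_fmap = 0.50                    # state's regular matching rate (the floor)
increased_fmap = regular_fmap + 0.062  # across-the-board 6.2-point increase

spending = 1_000_000_000               # hypothetical quarterly service spending

additional = (federal_share(spending, increased_fmap)
              - federal_share(spending, regular_fmap))
print(f"Additional federal funds for the quarter: ${additional:,.0f}")  # $62,000,000
```

The act's actual quarterly calculation also holds states harmless against declines in their prior-year FMAPs and adds a further increase for states with qualifying unemployment growth, which is how California's rate rises well above the 6.2-point floor increase.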
Generally, for federal fiscal year 2009 through the first quarter of federal fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of the increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. Under the Recovery Act, California will receive an increased FMAP of at least 61.6 percent, up from its regular 50 percent. As of April 1, 2009, California had drawn down $1.5 billion, or 45.4 percent, of its initial increased FMAP grant awards. Initially, the state could not obtain increased FMAP funds because the state reduced its eligibility period for children from 12 months of continuous eligibility to 6 months, effective January 1, 2009. However, because this change was suspended on March 27, 2009, and eligibility was restored to any children affected, the state has been able to draw down increased FMAP funds. Officials plan to use funds made available as a result of the increased FMAP to offset the state’s general fund budget deficit. Transportation—Highway Infrastructure Investment: The Recovery Act provides funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways and for other surface transportation projects. 
States must follow the requirements for the existing programs, and in addition, the governor must certify that the state will maintain its current level of transportation spending, and the governor or other appropriate chief executive must certify that the state or local government to which the funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. California provided these certifications but noted that the state’s level of funding was based on the best information available at the time of the state’s certification. According to state sources, under a state law enacted in late March 2009, 62.5 percent of funds ($1.606 billion) will go to local governments for projects of their selection. Of the remaining 37.5 percent ($964 million), $625 million will go to State Highway Operation and Protection Program (SHOPP) projects for highway rehabilitation, eligible maintenance and repair; $29 million will fund transportation enhancement projects; and $310 million will be loaned to fund stalled capacity expansion projects. As of April 16, 2009, the U.S. Department of Transportation had obligated $261.4 million for 20 California projects. These projects consist of rehabilitating roadways, pavement, and rest areas as well as upgrading median barriers and guardrails. For example, a $33 million project is being funded to rehabilitate a road in San Jose. U.S. Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF) to be administered by the U.S. Department of Education (Education). The SFSF provides funds to states to help avoid reductions in education and other essential public services. 
The initial award of SFSF funding requires each state to submit an application to Education that assures it will take action to meet certain educational requirements, such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. California’s initial SFSF allocation is $3.993 billion. Approximately $3.266 billion of this money (81.8 percent) must be spent on education. The remaining $727 million (18.2 percent) can be spent on public safety and other government services (including education). California officials told us that the Governor plans to recommend to the State Legislature that the funds be spent on the Department of Corrections. Like other states, California will receive its SFSF funds in two phases. California’s application was approved by the U.S. Department of Education on April 17, 2009, and the state is now eligible to draw funds for local school districts and universities. Of the $3.266 billion for education, the state plans to spend the maximum amount possible under Recovery Act formulas (approximately $2.57 billion on primary and secondary education and $537 million on higher education) for the purpose of restoring funding to 2008-2009 levels. The remaining $164 million will be used to restore education funding in future years. These funds will help ensure that primary and secondary schools and institutions of higher education have the resources they need to avert cuts and retain teachers and professors. The Governor and his administration are setting the overall policy for coordination of and accountability for Recovery Act funds. Prior to the enactment of the Recovery Act, the Governor’s office formed nine working groups organized around broad program areas (e.g., transportation, environment, etc.) and comprising representatives of the Department of Finance, program departments, the legislative branch, and California’s Washington, D.C. office. 
The working groups worked with the California congressional delegation to estimate the effects of the Recovery Act and to lobby for changes helpful to the state. The Recovery Act was enacted on February 17, 2009, and California signed a state certification letter on March 5 stating that the state would request and use certain Recovery Act funds to create jobs and promote economic growth (California was the first state to do so). Initially, the Department of Finance, the Director of which is appointed by the Governor, was the focal point for working with state agencies to prepare to meet Recovery Act accountability and reporting requirements. In late March 2009, the Governor’s office established the California Federal Economic Stimulus Task Force, which is responsible both for tracking Recovery Act funds that come into the state and ensuring that those funds are spent efficiently and effectively. The task force is chaired by the Deputy Chief of Staff to the Governor and Director of the Governor’s Office of Planning and Research, and will include one representative from the administration for each of the main program areas that will receive funds. The Chief Deputy Director of Finance will serve as deputy coordinator of the task force and will be responsible for, among other things, tracking the funds coming into the state. The Chief Operating Officer of the Department of Finance will oversee the accountability and auditing functions of the task force. In total, as of March 27, 2009, the state of California estimates that the state and its localities will receive approximately $48.3 billion for various programs, including health, education, and infrastructure (see figure 4). Of this, about $14 billion will go directly to local governments and the other $34 billion will go to the state. The extent to which spending decisions have been made varies by program in California, with some uses determined while others are still unknown. 
For example, for some funding, like the $10 billion made available as a result of the increased FMAP, all or most is formula driven, and the application of funds is already determined. Likewise, for public transit investment grants and fixed-guideway infrastructure programs (due to receive approximately $1.019 billion in Recovery Act funds, according to Federal Transit Administration officials), all or most of the funding is formula driven, but local priority-setting processes will determine which projects will be funded. For education (receiving about $11.8 billion in Recovery Act funds), while the majority of allocations to school districts are based on formulas, education officials told us that spending decisions will largely be made at the local level. Officials from the Sacramento Housing and Redevelopment Agency (SHRA)—one of the state’s 55 public housing authorities hoping to receive a portion of Recovery Act funding from the formula-based Public Housing Capital Fund—stated that they have begun to prioritize how funds will be used. Contracts will be awarded by SHRA for bids received within 120 days on projects listed in its 5-year Capital Fund Plan. State officials from the Department of Housing and Community Development are not sure how much funding another program, the Neighborhood Stabilization Program, will receive. Officials told us that their plans for spending the money will be determined by the amount received. In some instances, state officials have sought federal guidance on the use of certain funds. 
For example, California Employment Development Department (EDD) officials told us that they hoped to receive additional federal guidance clarifying whether California, through its legislative budget process, can use all discretionary Workforce Investment Act funding provided through the Recovery Act to offset employment and training program general fund costs in either the California Department of Corrections and Rehabilitation or the California Conservation Corps. EDD officials noted that using the discretionary funds in this way might contradict recent U.S. Department of Labor guidance, which only allows funds to be used for new programs and not to replace state or local funding for existing programs. State officials are also seeking guidance from CMS regarding policies on payments for in-home support services funded by Medicaid. State officials are also uncertain whether Recovery Act funds can help pay for the increased costs of administering, overseeing, and auditing Recovery Act program funds and stated that federal guidance, thus far, has not addressed these questions. In some cases, state agencies face deadlines for using their funds. Caltrans must obligate at least half of certain Recovery Act funds within 120 days of when the funds were apportioned by the Department of Transportation or the funds will be redistributed to other states. Caltrans did not foresee problems meeting this deadline. Caltrans officials further stated that most projects could be completed within 1 year; however, project completion timelines and specific project funding outlays by year have not been finalized. Caltrans officials stated that some project construction may begin by early May 2009. In another case, the Tax Credit Allocation Committee (TCAC) must commit at least 75 percent of the $325.9 million in Recovery Act Tax Credit Assistance Program funds by February 17, 2010. TCAC did not foresee problems meeting this deadline. 
TCAC officials told us that they have a system in place to quickly identify recipients and that they are planning to make sure to comply with the timeline as reflected in regulations. The state’s economy and California state revenues have been severely affected by the national recession and financial market credit crunch. In March 2009, California’s unemployment rate rose to 11.2 percent, 2.7 percentage points higher than the national average. In February, according to RealtyTrac, California posted the nation’s third highest state foreclosure rate, behind Nevada and Arizona, with 1 in every 165 housing units in foreclosure. On March 19, Fitch Investor Services downgraded California General Obligation bonds to an “A” rating, the lowest current rating of any state. State general fund revenues are projected to fall in state fiscal year 2008-2009 by $15.1 billion, or 14.7 percent, from fiscal year 2007-2008. In January 2009, the fiscal year 2009-2010 Governor’s Budget projected that the state would end the state fiscal year with a $41.6 billion deficit if no corrective actions were taken. In response, the State Legislature and the Governor agreed to a $42 billion package of solutions. As described by state sources, this package includes reducing spending, temporarily increasing taxes, using funds made available as a result of the Recovery Act, and borrowing from future lottery profits. The budget package depends, in part, on voter approval of six different propositions at a May 19, 2009, special election. If three of these propositions are approved, the state Legislative Analyst’s Office (LAO) estimates the package will reduce the state’s budget deficit by $6 billion. However, the state’s economic condition has continued to deteriorate since the release of the Governor’s budget in January 2009. 
Even if the May 19, 2009, propositions pass, and the state uses $8.2 billion in funds made available as a result of the Recovery Act, the LAO estimates an $8 billion deficit in 2009-2010. Consequently, the State Legislature and the Governor may need to work on additional budgetary solutions to rebalance the 2009-2010 budget following the May 2009 budget update. On February 3, 2009, the California State Auditor added the state’s budget condition to its list of high-risk issues facing the state. State officials are working to put in place the guidance and systems needed to provide a comprehensive and accurate accounting of California’s Recovery Act funds. As previously mentioned, the California Federal Economic Stimulus Task Force is responsible for tracking Recovery Act funds and ensuring that they are spent efficiently and effectively. The state’s new recovery Web site (www.recovery.ca.gov) will serve as the primary tool to fulfill federal reporting and accountability requirements consistently throughout the state. A representative from each state agency is tasked with ensuring that data required by federal Recovery Act reporting requirements are available on the state Web site. Development of the related processes and procedures to accumulate and consolidate the spending data is underway. State officials also plan to use the Web site to provide the public with up-to-date information about federal funds received by the state, how those dollars are being spent, and, through the use of digital mapping, the geographic distribution of expenditures. The state intends to rely heavily on existing systems to track and account for Recovery Act funds. State agency officials generally told us that their existing accounting systems, enhanced with newly created codes for Recovery Act funds, will enable them to separately track and monitor how state and local agencies spend Recovery Act funds that pass through the state. 
For example, California Department of Education officials told us that the department already has a consistent accounting structure in place for tracking and reporting on how federal funds are used. The department plans to create separate accounting codes within that structure to track and report how the different programmatic funds received through the Recovery Act are used. According to the officials, the department will provide those codes to the local education agencies (LEA), as well as instruct them on what the codes mean. However, some officials still expressed concerns about the ability of LEAs to consistently maintain accountability for funds. For example, a Department of Finance official with responsibility for education program budgets stated that there are over 1,000 school districts in California, and their accounting systems vary in sophistication. While the state will be providing guidance to help ensure proper accountability, this official expects that some districts may face challenges in complying. Most state program officials told us that they will apply the same controls and oversight processes that they currently apply to other program funds. For example, the California Employment Development Department has an independent division that conducts monitoring, audits, and evaluations to guard against mismanagement, waste, fraud, and abuse. The effectiveness of internal controls at the local level, however, is unknown for some programs. Caltrans officials, for example, stated that while extensive internal controls exist at the state level, there may be control weaknesses at the local level. Caltrans is collaborating with local entities to identify and address these weaknesses. 
Additionally, Caltrans has conducted workshops and other outreach activities to ensure that regions and localities are fully informed regarding requirements for the tracking and expenditure of Recovery Act funds, and would like to increase its capacity to provide oversight, particularly at the local level. California intends to use existing internal and independent audit functions and a new inspector general to oversee Recovery Act funds received by the state. The Office of State Audits and Evaluations (OSAE) is an internal audit function within the Department of Finance that performs audits of various state funds and programs, including those receiving Recovery Act funds. According to state officials, OSAE is also responsible for ensuring compliance with the state’s Financial Integrity and State Manager’s Accountability Act of 1983 (FISMA) and oversees the activities of internal audit functions within most state agencies. According to state sources, FISMA requires each state agency to maintain effective systems of internal accounting and administrative control, to evaluate the effectiveness of these controls on an ongoing basis, and to review and report biennially on the adequacy of the agency’s systems of internal accounting and administrative control. OSAE has not yet determined the scope or approach for its review of Recovery Act funds or the extent to which it can utilize FISMA in assessing compliance with Recovery Act requirements. In addition, the State Controller audits claims for payment submitted by state agencies and provides internal audit services to some state agencies, such as Caltrans, for Recovery Act funds. The State Auditor, California’s independent audit and evaluation office, conducts financial and performance audits as authorized or required by law and requested by the State Legislature. The State Auditor is also annually responsible for conducting California’s statewide single audit of numerous federal programs administered in California. 
Based on the State Auditor’s initial analysis of Recovery Act funds the state expects to receive and the formula for determining which programs require an audit, the State Auditor anticipates it will likely need to expand single audit coverage to capture additional programs receiving Recovery Act funds. Finally, on April 3, 2009, the Governor appointed the nation’s first Recovery Act Inspector General, whose role is to make sure Recovery Act funds are used as intended and to identify instances of waste, fraud, and abuse. The most recent single audit, conducted by the State Auditor for fiscal year 2007, identified 81 material weaknesses, 27 of which were associated with programs we reviewed for purposes of this report. The State Auditor plans to use past audit results to target state agencies and programs with a high number and history of problems, including data reliability concerns, and is closely coordinating with us on these efforts. For example, the fiscal year 2007 State Single Audit Report identified eight material weaknesses pertaining to the ESEA Title I program and the Individuals with Disabilities Education Act programs. The audit findings included a material weakness in the California Department of Education’s management of cash because it disbursed funds without assurances from LEAs that the time between the receipt and disbursement of federal funds was minimized, contrary to federal guidelines. Education officials told us that they have addressed some of these material weaknesses and, in other cases, they are still working to correct them. If these and other material weaknesses are not corrected, they may affect the state’s ability to appropriately manage certain Recovery Act funds. The State Auditor’s Office told us that it is in the process of finalizing the fiscal year 2008 State Single Audit Report and plans to issue the report within the next 30 days. 
In addition, the State Auditor’s Office is summarizing the results of the single audit to identify those programs that continue to have material weaknesses. Finally, the State Auditor’s Office plans to use the results of other audits it has conducted in conjunction with the single audit to assess risk and develop its approach for determining the state’s readiness to receive the large influx of federal funds and to comply with the Recovery Act’s requirements governing the use of those funds. State officials with whom we spoke have not yet established plans or processes for assessing the impacts of Recovery Act funds. According to Department of Finance officials, the newly created California Federal Economic Stimulus Task Force will assume this responsibility. Several state agency officials and a local public housing authority believe that additional guidance is needed from the U.S. Office of Management and Budget (OMB) before they can fully address the issue of impact assessments. State officials told us that assessing the impact of Recovery Act funds on job creation in particular will be difficult. That is, while they believe that tracking the impact for contracts, grants, or discrete projects is possible, it is extremely difficult to separate out the specific impact of Recovery Act funds when they are combined with other federal, state, or local funds, as they will be in many situations. The state program officials with whom we spoke raised a number of specific concerns about their ability to measure the impact of Recovery Act funds. For example, California education officials told us they did not yet know how the state will measure the impact of the Recovery Act funds spent on education. The officials said that, although it should be possible to track Recovery Act education spending separately from non-Recovery Act money, this does not mean that they will be able to report on specific outcomes that result from this spending. 
One concern mentioned by several officials is that it may not be possible to link the spending categories used in the accounting system to specific outcomes. Furthermore, even if such links could be made, another difficulty would be determining the extent to which an outcome was the result of the Recovery Act funds received in April 2009 versus the non-Recovery Act funds received earlier in the year for the same program. Finally, officials expressed concern about the incompatibility between desired Recovery Act outcomes and Recovery Act funding. One of the Recovery Act’s desired outcomes is job creation and preservation, which requires ongoing funds, but the Recovery Act provides only temporary funds. According to Caltrans officials, measuring the full economic impact of highway funds presents challenges. Caltrans officials told us that since Recovery Act funds may be combined with other funds to complete projects, isolating the number of jobs created using just the Recovery Act funds may be difficult. In addition, Caltrans officials told us that guidance on measuring and reporting the effect of Recovery Act funds for transit and fixed-guideway investments has not yet been issued; however, they anticipate it will be difficult to report on jobs preserved or created. California Employment Development Department officials told us that its existing accounting system can report output, such as how many more participants are registered and enrolled in Workforce Investment Act programs and the increase in program services due to the Recovery Act. They also said that the existing system can track certain performance indicators for program participants, such as successful employment, wage increases, and job retention. 
However, these officials noted that they anticipate challenges determining whether such outcomes are specifically due to services supported by the additional Recovery Act funds versus services previously or currently provided to program participants through existing Workforce Investment Act funds. We provided the Governor of California with a draft of this appendix on April 17, 2009. Members of the California Federal Economic Stimulus Task Force responded for the Governor on April 20, 2009. These officials provided clarifying and technical comments that we incorporated where appropriate. In addition to the contacts named above, Paul Aussendorf, Candace Carpenter, Joonho Choi, Brian Chung, Nancy Cosentino, Kerry Dunn, Michelle Everett, Chad Gorman, Richard Griswold, Bonnie Hall, Delwen Jones, Brooke Leary, Jeff Schmerling, Steve Secrist, and Eddie Uyekawa made major contributions to this report. Use of funds: An estimated 90 percent of Recovery Act funding provided to states and localities nationwide in fiscal year 2009 (through Sept. 30, 2009) will be for health, transportation and education programs. The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage awards, the State Fiscal Stabilization Fund, and highways. Medicaid Federal Medical Assistance Percentage (FMAP) Funds As of April 3, 2009, the Centers for Medicare & Medicaid Services had made about $227 million in increased FMAP grant awards to Colorado. As of April 16, 2009, the state had not drawn down any of its increased FMAP grant awards. State officials noted they are working to ensure that the state is in compliance with Recovery Act provisions governing eligibility for the increased FMAP. Colorado was apportioned about $404 million for highway infrastructure investment on March 2, 2009, by the U.S. Department of Transportation. As of April 16, 2009, the U.S. 
Department of Transportation had obligated $118.4 million for 19 projects; the Colorado Department of Transportation had advertised 17 of these projects, and 5 of the 17 had been awarded. Colorado’s Recovery Act transportation funds are being directed to projects that can be advertised within 90 to 180 days of the passage of the act, can be completed within 3 years, and will result in job creation. Projects include resurfacing roads and replacing highway bridges in the Denver metropolitan area, as well as improvements to mountain highways. Colorado will request reimbursement from the U.S. Department of Transportation as the state makes payments to contractors. U.S. Department of Education State Fiscal Stabilization Fund (Initial Release) Colorado was allocated about $509 million from the initial release of these funds on April 2, 2009, by the U.S. Department of Education. Before receiving the funds, states are required to submit an application that provides several assurances to the U.S. Department of Education. These include assurances that they will meet maintenance of effort requirements (or that they will be able to comply with waiver provisions) and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. The Governor is working with the state legislature on a plan for spending the fiscal stabilization funds Colorado will receive to support education. Once legislative concurrence is obtained, the plan will be submitted to the U.S. Department of Education. A state official estimated that could happen as early as the week of April 20, 2009. 
Colorado is also receiving additional Recovery Act funds under other programs, such as those under Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA) (commonly known as No Child Left Behind); programs under the Individuals with Disabilities Education Act (IDEA), Part B; programs under the Workforce Investment Act; and Edward Byrne Memorial Justice Assistance Grants. These are described throughout this appendix. Safeguarding and transparency: As the state makes its plans, some officials raised concerns about how well the state is positioned to track and oversee Recovery Act expenditures and identified general areas of vulnerability in spending Recovery Act funds. For example, Colorado’s accounting system is 18 years old, which will make it challenging for the state to tag and track Recovery Act funds, according to state officials. State officials are determining what approach they will use in tracking funds and told us they currently plan to create an accounting fund to track state agencies’ use of Recovery Act funds, employing a centrally defined budget-coding structure to distinguish between Recovery Act and non- Recovery Act federal funds. State officials were also concerned about tracking funds that bypass the state and flow directly to local entities. Assessing the effects of spending: The state is making plans to assess the effects of Recovery Act spending on Colorado’s economy. Some agencies plan to use their existing performance indicators to assess the effects of recovery, while others have received guidance including new indicators. Some officials identified concerns with recipients’ ability to submit reports more quickly or more frequently than normal, while some questioned how precisely economic effects can be measured. 
Colorado has begun to use some of its Recovery Act funds, as follows: Increased Federal Medical Assistance Percentage Funds: Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the Federal Medical Assistance Percentage (FMAP). Across states, the FMAP may range from 50 percent to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. Generally, for fiscal year 2009 through the first quarter of fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of this increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. As of April 3, 2009, CMS had made about $227 million in increased FMAP grant awards to Colorado. 
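The three components of the quarterly increased FMAP calculation described above can be sketched in code. This is an illustrative sketch only, using hypothetical figures: the unemployment-related increase is supplied directly as a parameter rather than derived from the tiers actually set out in the Recovery Act.

```python
# Illustrative sketch of the three components of the Recovery Act's increased
# FMAP, as described above. Not the official CMS methodology; the
# unemployment-related component is passed in directly, whereas in practice it
# is derived from tiers defined in the Act.

def increased_fmap(current_fmap, prior_year_fmap, unemployment_add_on=0.0):
    """Return the increased FMAP in percentage points.

    (1) Hold harmless: use the prior year's FMAP if it is higher.
    (2) Add the general across-the-board increase of 6.2 percentage points.
    (3) Add any further unemployment-related increase (hypothetical input).
    """
    base = max(current_fmap, prior_year_fmap)  # component (1)
    result = base + 6.2                        # component (2)
    result += unemployment_add_on              # component (3)
    return result

# Hypothetical example: a state at the 50 percent statutory floor whose
# prior-year FMAP was also 50 percent, with no unemployment add-on, would
# receive an increased FMAP of 56.2 percent for the quarter.
print(increased_fmap(50.0, 50.0))
```

Because regular FMAPs range from 50 percent to no more than 83 percent across states, the dollar effect of the same 6.2-percentage-point increase varies with each state's Medicaid service expenditures.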
As of April 16, 2009, state officials had not drawn down any of the state’s increased FMAP grant awards. State officials noted they are working to ensure that the state is in compliance with Recovery Act provisions governing eligibility for the increased FMAP. Officials also indicated that, in order to account for the increased FMAP funds available through the Recovery Act, the state has created unique codes that will calculate the additional federal reimbursement. The state will use these codes to assist with the proper drawing down and reporting of these expenditures on quarterly Medicaid reports. Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways and to undertake other surface transportation projects. States must follow the requirements for the existing programs, and in addition, the Governor must certify that the state will maintain its current level of transportation spending, and the Governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. Colorado provided this certification but noted that the state’s level of funding was based on “planned nonbond state expenditures” and represented the best information available at the time of the state’s certification. Colorado was apportioned about $404 million in Highway Infrastructure Investment Recovery Act funds by the U.S. Department of Transportation on March 2, 2009. As of April 16, 2009, the U.S. Department of Transportation had obligated $118.4 million for 19 Colorado projects. 
Seventeen of these projects, which include resurfacing roads and replacing highway bridges in the Denver metropolitan area and improvements to mountain highways, had been advertised for bid, and 5 of the 17 projects had been awarded. According to Colorado Department of Transportation officials, the department has a well-established process for distributing funds and contracting projects and has already begun to use this process in applying for Recovery Act funds. In order to spend funds quickly and create jobs, Colorado is directing Recovery Act transportation funds to projects that can be advertised within 90 to 180 days of the passage of the Recovery Act, can be completed within 3 years, and will result in job creation. Department officials told us they are emphasizing construction projects rather than projects in planning or design phases, in order to maximize job creation. U.S. Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF) to be administered by the U.S. Department of Education (Education). The SFSF provides funds to states to help avoid reductions in education and other essential public services. The initial award of SFSF funding requires each state to submit an application to Education that assures it will take action to meet certain educational requirements such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. The Governor has proposed a plan for spending the majority of the $760 million in stabilization funds Colorado will receive to support education, focusing on offsetting current and planned reductions in state funding for higher education. 
Officials told us that funding cuts were directed primarily toward higher education rather than kindergarten through 12th grade education because of a state constitutional provision requiring guaranteed annual increases in state funding of kindergarten through 12th grade education—and as a result, SFSF funds are more urgently needed in higher education. The state will receive its first allocation of funds—$509 million, or 67 percent of the total—after it has applied to Education, which it plans to do once the Governor’s office and legislature agree on the plan and the state’s budget. As of April 20, 2009, the state’s General Assembly was negotiating the final budget and a school finance bill that could affect the specific use of the SFSF funds. A Colorado official said that if the state approves a budget the week of April 20, 2009, the proposal could go to Education soon after that date. The Governor is also developing a plan for the Government Services Fund, a component of the SFSF, which will provide $138 million of SFSF funds that may be used for public safety and other government services. Following passage of the Recovery Act, Colorado’s Governor established an oversight board, the Colorado Economic Recovery Accountability Board, to oversee Colorado’s Recovery Act funding and ensure funds are spent effectively and transparently. The board is chaired by the Director of the Colorado Office of Economic Development, who has also been charged with being Colorado’s recovery coordinator. The board is composed of 12 public- and private-sector leaders from across the state, including the state treasurer, a state senator and a state representative, and a number of business leaders. To date, the board has held three public meetings during which members discussed the short time frames for disbursing Recovery Act funds and a lack of federal guidance, among other issues. The board has also developed a Web site to publicize information about the Recovery Act. 
Management of and decisions about Recovery Act funds are the responsibility of the Governor, according to state officials. The Governor’s office is directly responsible for exercising discretion with regard to certain funds such as portions of the SFSF. The Governor is working in consultation with the executive directors of Colorado’s state departments and agencies to develop plans for spending Recovery Act funds, which are to be publicly available on the state’s Web site. Officials told us the Governor has directed that all departmental decisions on spending Recovery Act funds are to be made in line with the original charge of the Recovery Act to promote job creation or preservation and economic development, as well as the Governor’s agenda. The decision process for using Recovery Act funds depends on the program, consistent with federal and state statutes and guidance. Officials from several departments, such as the Departments of Public Safety, Labor and Employment, and Local Affairs, told us they have made initial programmatic decisions for Recovery Act funds. Other programs have not made such decisions; for example, Colorado Department of Education officials told us the department will distribute funds such as those under the ESEA and IDEA programs directly to local school districts to make programmatic decisions about the funds. Many Colorado officials said the Recovery Act would increase their departments’ workloads and said they would like to add personnel and perhaps systems to manage the funds, but the overall extent to which Recovery Act funds are permitted to be used for those costs is uncertain. While some officials we interviewed said their departments had received or would receive Recovery Act funds to cover administrative or management activities, officials in other departments did not know whether they would receive funds for that purpose. 
Officials at the Colorado Department of Labor and Employment, for example, said they can spend about $1.5 million in Recovery Act funding to cover administrative costs associated with Workforce Investment Act programs, consistent with their normal procedures for administration of the programs, while officials from the Colorado Department of Education said they were uncertain what, if any, funds they were going to receive to administer and manage recovery programs. State officials told us they believe the government services portion of the SFSF can be used by the Colorado Department of Education and other state departments to cover administrative costs. Colorado officials identified general areas of vulnerability in spending Recovery Act funds, as well as specific concerns about their ability to oversee Recovery Act funds coming into the state. Areas of vulnerability include new programs and localities that may be ill-equipped to manage the influx of new funds. In addition, state officials are concerned about their ability to oversee Recovery Act funds because of three primary challenges: (1) the state’s accounting system is 18 years old, which may make it challenging to tag and track Recovery Act funds; (2) adequate resources to administer and audit expenditures of Recovery Act funds may not be available; and (3) state officials are still determining what they will be required to track and report on and are particularly concerned about tracking funds that bypass the state and flow directly to local entities. The state’s departments have begun to identify potential areas of vulnerability in spending Recovery Act funds, according to officials. One area that officials identified is the influx of new Recovery Act funds that must be adequately managed as they are spent quickly. 
For example, some programs, such as Medicaid, already have known weaknesses in managing existing funds (identified, for example, in audits conducted by the Colorado state auditor) and may be challenged in managing large amounts of additional funds. A second vulnerable area, according to officials, involves new programs that do not have well-established processes, or programs that will need to establish additional processes, to accommodate significant funding increases, such as the state’s energy program, which will receive funds for weatherization and other energy projects. Funds that go directly to localities are a third area that may be vulnerable because, according to officials, the state does not currently oversee these funds and cannot provide assistance to local entities, some of which may not be well- equipped to manage the increased funds. State officials were concerned that Colorado’s accounting system—the Colorado Financial Reporting System (COFRS)—is 18 years old, which may make it difficult for the state to use and track Recovery Act funds. For example, state officials are concerned about Colorado’s ability to report quickly on Recovery Act expenditures. Because of limitations associated with COFRS, officials told us the state will have difficulties meeting reporting requirements established for certain Recovery Act expenditures, such as the requirement in section 1512 of Title I, Division A of the Recovery Act calling for recipient reports within 10 days of the end of the calendar quarter. In addition, some individual state departments do not use the COFRS grant module and therefore must manually post aggregate revenue and expenditure data to COFRS. 
Consequently, given the state’s current capabilities, the state may not be able to draw data on total Recovery Act funding it receives from COFRS and may have to compile the data through a manual exercise outside of the central financial management system, raising internal control concerns among some officials we talked with. These concerns include inadequate audit documentation on how the information is compiled, potential human error in inputting and aggregating information, and potentially inconsistent or duplicative reporting from various agencies on the extent and nature of Recovery Act funding received and used. Finally, state officials also voiced concerns that COFRS uses Catalog of Federal Domestic Assistance numbers to track grants from each federal agency, but some federal departments are not establishing unique Catalog of Federal Domestic Assistance numbers for some Recovery Act funds, which will make automated reporting difficult. Officials with the Colorado Department of Personnel & Administration were concerned that vacancies in procurement positions posed an impediment to effective tracking and control over the state’s Recovery Act funds. Many Colorado state agencies have vacancies for procurement officers, which have been left unfilled due to the state budget shortfall and a consequent hiring freeze. For example, the Department of Personnel & Administration, which administers statewide contracts and supports several state agencies that have little or no purchasing authority, currently has three vacancies in its purchasing agent and contracting positions. Filling these vacancies would enable this department to better assist state agencies receiving Recovery Act funds, according to department officials. Similar purchasing agent vacancies exist, according to these officials, in the Colorado Departments of Corrections, Education, Human Services, Labor and Employment, and Local Affairs. 
Colorado Department of Personnel & Administration officials hope to hire former or retired state employees with procurement experience on a 6-month basis to alleviate this problem, but additional funding—and possibly legislative and budgetary approval—may be needed in order to hire temporary procurement personnel, which could potentially delay hiring if the state needs to await legislative action. State officials were also concerned with the amount of audit coverage throughout the state. For example, officials with the Colorado state auditor’s office told us their office would have difficulty absorbing additional work associated with the Recovery Act, and believed that state oversight capacity was limited. For example, according to these officials, the Department of Health Care Policy and Financing (the state’s Medicaid agency) has had three controllers in the past 4 years; these officials also told us the state legislature’s Joint Budget Committee recently cut field audit staff levels for the state Department of Human Services in half. Officials with the Department of Personnel & Administration told us their department’s internal auditor position is vacant, while officials with the Colorado Department of Transportation told us that two of their department’s financial management positions, including the deputy controller position, are vacant. At the county level, Jefferson County recently terminated its internal auditor and eliminated its internal control audit office. The reduced number of staff in oversight positions resulted in part from budget cuts and staffing decisions during the state’s last economic downturn, and state officials told us certain positions would be difficult to fill because of the state’s current hiring freeze. Officials said because the “ratchet effect” of Colorado’s constitutional and legislative requirements limits the growth of spending, it can be difficult to re-establish and fill positions that are eliminated during economic downturns. 
Officials told us, for example, that some state agencies have not refilled all of the staff positions they lost to budget cuts during Colorado’s 2001-2003 downturn. Colorado officials said they have not received state-specific guidance on Recovery Act reporting from the federal Office of Management and Budget. They said the guidance provided in February and April 2009 was addressed to federal departments and agencies, and it was necessary to determine whether and how this guidance applied to state governments. Officials wondered, for example, whether the state would be required to report centrally on all funds coming through the state or whether state agencies will report as normal through federal departments, or both; what the frequency and form of reports will be; and the level to which funds will need to be tracked and reported (e.g., at the recipient level, subrecipient level, etc.). Officials were especially concerned that a substantial portion of funds provided to Colorado will go directly to local entities, making it difficult for state officials to be aware of and track all funds within the state. In the absence of state-specific guidance, state officials were taking some steps on their own to track the use of Recovery Act funds. Department of Personnel & Administration officials said they anticipated that statewide reporting on the use of Recovery Act funds will be necessary, in addition to having individual state departments and agencies reporting directly to their respective federal granting agencies. 
The department discussed various tracking and reporting methodologies with state department controllers to determine what tracking method would be the most effective and least disruptive; the department determined that the state would create an accounting fund through which it could track state agencies’ use of Recovery Act funds and would employ a centrally defined budget-coding structure for Recovery Act funds, which should be able to distinguish between Recovery Act funds and other federal non-Recovery Act funds. This accounting process would capture only those funds flowing through state agencies. State officials said they are still determining how they will capture funds that do not flow through the state and said that guidance will be important in order to prevent duplicate reporting of Recovery Act funds by state and federal agencies. Although they are moving forward, state officials are hesitant to establish statewide reporting requirements for fear they could waste state resources developing and implementing an approach that is not consistent with the federal guidance ultimately established. Colorado’s state departments with responsibility for the funds we examined described a range of approaches to assess and report on the effects of recovery spending in the state. Some agencies plan to use their existing performance indicators to assess the effects of Recovery Act funding, as they have not yet received reporting guidance from the federal departments involved. For example, Colorado Housing and Finance Authority officials said they plan to use existing indicators, such as the number of affordable housing units created and the relative income levels of populations served by those units, to assess the effects of Recovery Act funding for the Low-Income Housing Tax Credit. 
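As a hypothetical illustration of the kind of centrally defined budget-coding structure described above, the sketch below tags each transaction with a fund code so that Recovery Act amounts can be aggregated separately from other federal funds. All codes, field names, and amounts here are invented for the sketch and are not Colorado's actual COFRS coding.

```python
# Hypothetical sketch of a budget-coding approach: each transaction carries a
# fund code, and a centrally defined code (here "ARRA", an invented value)
# distinguishes Recovery Act money from other federal funds so totals can be
# reported separately. Agencies and amounts are illustrative only.

RECOVERY_FUND_CODE = "ARRA"  # hypothetical code for Recovery Act funds

transactions = [
    {"agency": "Transportation", "fund": "ARRA", "amount": 118_400_000},
    {"agency": "Education",      "fund": "FED",  "amount": 25_000_000},
    {"agency": "Education",      "fund": "ARRA", "amount": 509_000_000},
]

def totals_by_fund(txns):
    """Aggregate transaction amounts by fund code."""
    totals = {}
    for t in txns:
        totals[t["fund"]] = totals.get(t["fund"], 0) + t["amount"]
    return totals

# Recovery Act funds roll up separately from other federal funds:
print(totals_by_fund(transactions)[RECOVERY_FUND_CODE])  # 627400000
```

A structure like this captures only transactions posted through the state's accounting system, which mirrors the limitation officials noted: funds flowing directly to local entities would not appear in these totals.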
Other agencies, such as the Colorado Department of Transportation, have received guidance to report on existing and new indicators, such as direct jobs associated with Recovery Act projects; the indicators will involve a significant increase in data collection and reporting by the department, including gathering data from more entities and reporting more frequently than the department has reported in the past, according to department officials. In another example, the Colorado Department of Public Safety, which did not report on jobs in the past, will report on the jobs created or retained with the spending of justice assistance grants. In addition, it will report on a set of new performance measures being developed by the federal Department of Justice Bureau of Justice Assistance. Department of Public Safety officials are concerned about the timing of reporting job creation and retention data, however, because the Recovery Act requires states to report within 10 calendar days after the end of each quarter, which is faster than the normal reporting time frames and, according to officials, will necessitate that recipients report to the department within 5 calendar days of the end of the quarter. Some grantees will have difficulty reporting within such short time frames, according to one department official, because they still mail or hand deliver their reports. State and local officials raised other concerns about tracking the economic effects of Recovery Act funds. Officials with the state auditor’s office, for example, said that tying specific funding to the creation of particular jobs is problematic. One state official pointed out that increased FMAP available under the Recovery Act would reduce the amount of funds that Colorado will need to spend on its Medicaid program, allowing the state to use these funds for other purposes and avoid cutting other programs to balance the state budget. 
However, because specific program cuts were not determined, identifying the preserved programs and their economic effects is impossible. While some state departments have received guidance on counting jobs created or retained, officials from at least one local department said they needed more guidance about how to measure the number of new jobs created. Another official said that her department will report jobs created or retained but questioned how indirect jobs would be counted. According to this official, spending Recovery Act funds to purchase items such as equipment or vehicles will have substantial economic effects, particularly the creation of indirect jobs, but she was not certain how these jobs would be counted and asked whether clarification would come through Office of Management and Budget or other guidance. To measure such impacts for the state, an economic impact assessment would need to be conducted, according to a member of the Colorado Economic Recovery Accountability Board. The board is considering contracting for such an assessment, according to the member, but has not yet decided on whether or when to do it. We provided the Governor of Colorado with a draft of this appendix on April 17, 2009. State officials from the Governor’s office responded for the Governor on April 20, 2009. In general, they agreed with this summary of Colorado’s recovery efforts to date. The officials also provided technical comments that were incorporated, as appropriate. In addition to the contacts named above, Steve Gaty, Susan Iott, Tony Padilla, Ellen Phelps Ranen, Lesley Rinner, Glenn Slocum, and Mary Welch made significant contributions to this report. Use of funds: An estimated 90 percent of Recovery Act funding provided to states and localities nationwide in fiscal year 2009 (through Sept. 30, 2009) will be for health, transportation and education programs. 
The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage (FMAP) awards, the State Fiscal Stabilization Fund, and highways. Medicaid Federal Medical Assistance Percentage (FMAP) Funds As of April 3, 2009, the Centers for Medicare & Medicaid Services (CMS) had made about $1.4 billion in increased FMAP grant awards to Florida. As of April 1, 2009, Florida had drawn $817 million, or 58.6 percent, of its increased FMAP grant awards to date. From January 2008 to January 2009, the state’s Medicaid enrollment increased from 2,151,917 to 2,391,569, with most enrollment changes attributable to two population groups: (1) children and families and (2) other individuals, including those with disabilities. While funds are made available as a result of the increased FMAP, the state legislature is still determining how to make use of these funds. Florida was apportioned about $1.3 billion for highway infrastructure investment on March 2, 2009, by the U.S. Department of Transportation. As of April 16, 2009, the U.S. Department of Transportation had not obligated any Recovery Act funds for Florida highway projects. On April 1, 2009, the Florida Department of Transportation (FDOT) prepared a final listing of potential Recovery Act funded projects, and on April 15, 2009, the Florida Legislative Budget Commission approved the list of projects. The U.S. Department of Transportation, Federal Highway Administration must also approve the final listing of projects before the state can advertise bids for contracts. These projects include activities such as resurfacing roads, expanding existing highways, repairing bridges, and installing sidewalks. U.S. Department of Education State Fiscal Stabilization Fund (Initial Release) Florida was allocated about $1.8 billion from the initial release of these funds on April 2, 2009, by the U.S. Department of Education. 
Before receiving the funds, states are required to submit an application that provides several assurances to the Department of Education. These include assurances that they will meet maintenance-of-effort requirements (or that they will be able to comply with waiver provisions) and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. According to Florida officials, Florida plans to apply for a waiver to obtain these funds after the Department of Education issues final instructions for waiver applications. Florida is also receiving Recovery Act funds under other programs, such as programs under Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA) (commonly known as No Child Left Behind); programs under the Individuals with Disabilities Education Act (IDEA); and Workforce Investment Act employment and training programs. The status of plans for using these funds is described throughout this appendix. Safeguarding and transparency: The Governor has created the Florida Office of Economic Recovery to oversee, track and provide transparency in how Recovery Act funds are spent. In addition, according to Florida officials, Florida’s accounting system will be able to separately track the Recovery Act funds flowing through the state government. Florida plans to publicly report its Recovery Act spending on a state Web site. Florida state accountability organizations have identified areas where Recovery Act funds may be at greater risk of fraud, waste, and abuse, such as Medicaid, and have begun to collaborate in developing plans for oversight. 
Assessing the effects of spending: Florida state officials are in the early stages of developing plans to assess the effects of Recovery Act spending and told us that guidance from the federal government would be instrumental in developing their plans. On April 3, 2009, the U.S. Office of Management and Budget (OMB) issued guidance indicating that it will be developing a comprehensive system to collect information, including jobs retained and created, on Recovery Act funds sent to all recipients. Florida state officials told us that they will ask OMB to allow the state to obtain data from this system on local entities in Florida that receive Recovery Act funds directly from federal agencies. Florida has begun to use some of its funds made available as a result of the Recovery Act, as follows: Increased Federal Medical Assistance Percentage Funds: Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the Federal Medical Assistance Percentage (FMAP). Across states, the FMAP may range from 50 percent to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008 and December 31, 2010. On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act.
Generally, for federal fiscal year 2009 through the first quarter of federal fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for: (1) the maintenance of states’ prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of this increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. As of April 1, 2009, Florida had drawn down $817 million in increased FMAP grant awards, which is about 58.6 percent of its awards to date. The state is determining how to make use of the state funds made available as a result of the increased FMAP grant awards. Officials told us that each state agency with a budget impact resulting from Recovery Act funding has prepared budget amendments for the current state fiscal year (July 1, 2008, to June 30, 2009) for consideration by the Executive Office of the Governor and the Legislative Budget Commission (LBC). On April 15, 2009, the LBC approved 17 amendments to the 2008-2009 state appropriation to authorize the use of Recovery Act funds. The state has drawn down funds that are for Medicaid expenditures retroactive to October 1, 2008. Florida officials told us they require additional guidance from CMS on the prompt payment requirements, and for CMS to provide the state guidance, if applicable, on any additional reporting requirements.
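As a rough illustration, the three components just described can be sketched as follows. This is a simplified sketch, not the statutory computation: CMS calculated the actual quarterly FMAPs, the hold-harmless and unemployment-related components followed detailed statutory rules not modeled here, and all figures in the example other than the 6.2-percentage-point increase and Florida’s reported drawdown are hypothetical.

```python
# Simplified sketch of the Recovery Act's increased FMAP components.
# Not the statutory computation: the hold-harmless and unemployment
# adjustments followed detailed rules; the bonus here is a placeholder.

def increased_fmap(prior_year_fmap, current_year_fmap, unemployment_bonus=0.0):
    """Return an illustrative increased FMAP, in percentage points."""
    # (1) maintenance of the prior year FMAP: keep the higher rate
    base = max(prior_year_fmap, current_year_fmap)
    # (2) general across-the-board increase of 6.2 percentage points,
    # (3) plus any further unemployment-related increase
    return min(base + 6.2 + unemployment_bonus, 100.0)

# Hypothetical state whose FMAP would have fallen from 56.8 to 55.4:
# it keeps 56.8 under the hold-harmless, then gets the 6.2-point increase.
print(round(increased_fmap(56.8, 55.4), 1))   # 63.0

# Florida's reported drawdown rate: $817 million drawn of roughly
# $1,394 million awarded as of early April 2009.
print(round(817 / 1394 * 100, 1))             # 58.6
```

The cap at 100.0 simply reflects that a matching percentage cannot exceed 100 percent; the act imposed other eligibility conditions that are outside the scope of this sketch.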
Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways and for other surface transportation projects. States must follow the requirements for the existing programs, and in addition, the governor must certify that the state will maintain its current level of transportation spending, and the governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. Florida provided this certification, but conditioned it, noting that state funding for the transportation programs is provided from dedicated funding sources that are subject to fluctuations resulting from economic conditions. On April 15, 2009, the Florida LBC approved the Recovery Act funded projects that the FDOT had submitted. As of April 16, 2009, the U.S. Department of Transportation had not obligated any Recovery Act funds for Florida projects. The Federal Highway Administration must approve this final listing of projects before the FDOT can advertise bids or request reimbursement from the Federal Highway Administration. The state’s projects include activities such as resurfacing roads, expanding existing highways, repairing bridges, and installing sidewalks. U.S. Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF) to be administered by the U.S. Department of Education (Education). The SFSF provides funds to states to help avoid reductions in education and other essential public services. 
The initial award of SFSF funding requires each state to submit an application to Education that assures, among other things, it will take actions to meet certain educational requirements such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. Florida’s initial SFSF allocation is about $1.8 billion. However, according to Florida officials, the state will not be able to meet the maintenance-of-effort requirement to readily qualify for these funds because revenue declines led to cuts in the state’s education budget in recent years. The state will apply to Education for a waiver from this requirement; however, state officials are awaiting final instructions from Education on submission of the waiver. Florida plans to use SFSF funds to reduce the impact of any further cuts that may be needed in the state’s education budget. Florida state officials began preparing for the use of Recovery Act funds prior to the receipt of the funds. Florida officials believe that Recovery Act funds are critical to addressing the state’s budgetary crisis and to maintaining necessary services to its citizens. According to state officials, the state plans to use about $3 billion of Recovery Act funds to reduce the state’s $6 billion budget shortfall for state fiscal year 2009-2010. One reason for this shortfall is the significant decline in revenue Florida has faced in recent years—23 percent since state fiscal year 2005-2006, from about $27.1 billion to $20.9 billion in state fiscal year 2008-2009—due to such factors as the recession and housing crisis. State officials estimate that Florida will receive about $15 billion in Recovery Act funds over 3 state fiscal years. Florida estimates that approximately $14.1 billion of this amount will flow through state agencies, with at least $4.7 billion of this amount allocated to local entities.
In addition, approximately $1.2 billion in funding will be directly allocated to local entities from federal agencies. On March 3, 2009, the Governor established the Florida Office of Economic Recovery that is responsible for overseeing, tracking and providing transparency of Florida’s Recovery Act funds. The office is headed by the Special Advisor to the Governor for the Implementation of the American Recovery and Reinvestment Act (Recovery Czar) and includes three other staff members on loan from state agencies. The Florida Office of Economic Recovery also established an implementation team that meets twice a week and includes representatives from each of the state’s program agencies and administrative offices, such as the Office of Policy and Budget, the Chief Inspector General, the State Auditor General, the Department of Financial Services, as well as representatives from the Florida Association of Counties and the Florida League of Cities. On March 17, 2009, pursuant to Section 1607 of division A, title XVI of the Recovery Act, the Governor certified that the state would request and use funds provided by the act. Additional certifications for transportation, energy, and unemployment compensation have also been submitted. According to state officials, before Florida agencies can use the Recovery Act funds, the Florida legislature must authorize the use of all funds received by state agencies, including those passed-through to local governments. On April 15, 2009, the joint Legislative Budget Commission met and approved 17 amendments to the 2008-2009 state budget authorizing appropriations totaling almost $4 billion in Recovery Act funds. The Florida state legislature is still in session and developing the state’s fiscal year 2009-2010 budget. 
As explained by state officials, if the legislature does not pass the authorization for the Recovery Act funds before the end of the session (May 1, 2009), a joint legislative budget committee can later amend the Appropriation Act and authorize the use of the Recovery Act funds or the legislature can reconvene. To promote transparency, the Florida Office of Economic Recovery implemented a state Recovery Act Web site that became operational on March 19, 2009. The Web site is intended to provide information to the public on the amount and uses of Recovery Act funds the state receives and on resources being made available to citizens, such as unemployment compensation and workforce training. Officials from Florida’s Department of Financial Services said that the state’s accounting system—Florida Accounting Information Resource (FLAIR)—will be used to track Recovery Act funds that will flow through the state government. The state agencies will record the Recovery Act funds separately from other state and federal funds using selected identifiers in FLAIR such as grant number or project number. Officials in some Florida state program agencies raised concerns that local areas will not be able to provide timely data to enable state agencies to meet financial reporting deadlines for the quarterly reports required by the Recovery Act. These reports on the uses of Recovery Act funds are due 10 days after the end of each quarter. In addition, Florida officials and a group representing local school superintendents were particularly concerned about the ability of school districts to meet these deadlines after having experienced reductions in administrative staff due to recent budget cuts. Florida officials submitted feedback to OMB suggesting that OMB consider providing guidance on reconciling the information provided in the Recovery Act quarterly reports with other federal reporting requirements to avoid confusion. 
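The separate-tracking approach described above, in which Recovery Act funds are recorded in FLAIR under selected identifiers such as a grant or project number, can be sketched in miniature as follows. The entry fields, agency abbreviations, and grant identifiers are illustrative assumptions for the sketch, not the state’s actual FLAIR schema.

```python
# Minimal sketch of tracking Recovery Act funds separately within a
# ledger by tagging each entry with a grant identifier, as the appendix
# describes for FLAIR. Field names and IDs are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class LedgerEntry:
    agency: str
    grant_number: str   # identifier used to flag Recovery Act funds
    amount: int         # whole dollars, for simplicity

# Hypothetical identifiers standing in for FLAIR grant/project numbers.
RECOVERY_ACT_GRANTS = {"ARRA-FMAP-2009", "ARRA-SFSF-2009"}

def recovery_act_total(entries):
    """Sum only the entries recorded under Recovery Act identifiers."""
    return sum(e.amount for e in entries if e.grant_number in RECOVERY_ACT_GRANTS)

ledger = [
    LedgerEntry("AHCA", "ARRA-FMAP-2009", 817_000_000),  # FMAP drawdown
    LedgerEntry("FDOE", "ARRA-SFSF-2009", 0),            # awaiting waiver
    LedgerEntry("FDOT", "STATE-FUEL-TAX", 5_000_000),    # non-Recovery funds
]
print(recovery_act_total(ledger))  # 817000000
```

Because every entry carries its own identifier, Recovery Act amounts can be reported separately from other state and federal funds without a parallel set of accounts, which is the property the state officials describe.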
According to Florida officials, quarterly reports on many federal grants are due 45 days after the end of the quarter and reporting systems are currently oriented towards these requirements. Florida officials added that it is likely that meeting the Recovery Act quarterly reporting requirement will necessitate the submission of preliminary reports. Some state agencies have issued or are developing guidance to assist local areas in planning for the use of Recovery Act funds that will be passed through the state to local areas. For example, on April 1, 2009, Florida received about $580 million for Title I, Part A of ESEA and for IDEA, which will be passed through to local school districts. In anticipation of these funds, the Florida Department of Education provided guidance to school districts on strategies for using education funds, such as assigning high-performing teachers to low-performing schools, providing reading coaches to schools, and investing in intensive professional development for teachers. On March 19, 2009, Florida received almost $143 million for the Workforce Investment Act Adult, Youth, and Dislocated Worker employment and training programs and made $121 million available to regional workforce areas the next day. As of April 13, 2009, regional workforce areas had drawn down about $744,000 of these funds, according to a Florida official. Florida’s Agency for Workforce Innovation had previously established various task teams, composed of state and regional workforce officials that created action plans for implementing these funds. For example, to facilitate the rapid expansion of summer youth employment programs, the state plans to develop a local implementation checklist and a toolkit of summer youth materials. 
Florida has various oversight entities responsible for monitoring, tracking, and overseeing financial expenditures, assessing internal controls, and ensuring compliance with state and federal laws and regulations: the Office of the Chief Inspector General, Auditor General, Office of Program Policy Analysis and Government Accountability (OPPAGA), and the Department of Financial Services. Each state agency has an Office of Inspector General (OIG) that is responsible for conducting audits and investigations, providing technical assistance, and promoting accountability, integrity, and efficiency in the state government. The Auditor General has broad audit authority with respect to audits of government agencies in Florida and routinely conducts Single Audits of the State of Florida reporting entities and of the state’s district school boards. The Single Audits include determining whether federal and state expenditures are in compliance with applicable laws and regulations and assessing the effectiveness of key internal controls. Florida’s OPPAGA—the research unit of the state’s legislature—is responsible for conducting studies on the performance of state agencies and programs to identify ways to improve services and cut costs. In addition, the Florida Department of Financial Services is responsible for overseeing state expenditures and financial reporting. Independent certified public accountants also conduct annual financial audits of local governmental entities, such as counties and municipalities. According to state officials, Florida law requires that the scope of such audits encompass federal and state Single Audit requirements, as applicable. Past experience has highlighted financial management vulnerabilities in agencies that will receive Recovery Act funds. Auditor General and state OIG reports identified several high-risk areas that are vulnerable to fraud, waste, and abuse. For example, in 2008: State officials identified Medicaid as the highest risk program.
The Auditor General reported breakdowns in internal controls over the Medicaid program because state Medicaid program officials failed to properly document and verify recipients’ income, which increased the risk of ineligible individuals receiving program benefits. The Auditor General reported that, for some federal programs, the Florida Department of Education failed to provide monitoring that reasonably ensured sub-recipient adherence to program requirements. The Auditor General reported that the Florida Department of Community Affairs failed to provide information that was needed to assess the success or progress of its federal low-income housing community development block grant program. The agency OIGs continue to provide oversight through audits and investigations of contracting and grant activities associated with federal funds. For instance, the FDOT and Florida Department of Education OIGs reported on contractors’ inaccurate reporting of expenditures and inadequate oversight of sub-contractors. Moreover, in July 2008, the FDOT OIG reported that its review of contract files disclosed that differences between the state’s accounting system payments and the recipient expenditures were not adequately explained. State officials also expressed some broader concerns about other potential risks. For example, state officials identified new programs in the Recovery Act as potentially risky and noted that the state’s fiscal year 2009 Single Audit report that will cover such new programs will not be completed until spring 2010. State officials also expressed concern about potential risk in programs receiving large funding increases under the Recovery Act.
For example, Florida Department of Law Enforcement officials stated that the amount of Recovery Act funds received for the Edward Byrne Memorial Justice Assistance Grant Program, which is designed to help prevent and control crime and improve the operations of the criminal justice system, will be four to five times the amounts received in prior years. For these programs, they estimate that about $52 million will be passed through to 67 local Florida counties, which have had grants collectively totaling only $12 million to $15 million in past years. In response to the Recovery Act, Florida’s Chief Inspector General established an enterprisewide working group of agency OIGs to evaluate risk assessments and promote fraud prevention, awareness, and training. The group members are updating their annual work plans by including the Recovery Act funds in their risk assessments and will leave flexibility in their plans to address issues related to these funds. In preparing to conduct the Single Audits for 2008-2009 and subsequent fiscal years, the Auditor General is monitoring the state’s plans for accounting for and expending Recovery Act funds, tracking the expected changes in OMB’s Single Audit requirements, and participating in the National State Auditors Association’s efforts to provide input on Recovery Act accounting, reporting, and auditing issues. The Auditor General expects the number of major federal programs to increase as a result of the large infusion of Recovery Act funds into the state, thus increasing the number of federal programs that the Auditor General must audit as part of the state’s annual Single Audit. Officials from Florida’s OPPAGA expect an increase in the number of legislative requests for their studies—particularly those focused on education programs—as Recovery Act funds are disbursed to recipients. The OIGs are developing and refining strategies to ensure oversight of Recovery Act funds.
For example, the FDOT OIG is developing plans to increase its up-front monitoring activities for transportation funds to mitigate the potential risk of fraud, waste, and abuse. Some of these activities include designating a team of seven auditors to monitor Recovery Act expenditures and other related activities; developing fraud awareness training specifically for the Recovery Act; conducting risk assessments of Recovery Act transportation projects; and monitoring and providing oversight for the pre-construction, advertisement, bid, award, and contract-letting activities for Recovery Act projects. Florida officials told us that separate accounts have been established for receipt of increased FMAP grant awards. The OIG in the Agency for Health Care Administration will follow established recovery protocol and processes to prevent and detect Medicaid overpayments by conducting detection analyses and audits, imposing sanctions, and making referrals to the Medicaid Fraud Control Unit and other regulatory and investigative agencies as appropriate. According to Florida state officials, the state completed an initiative to strengthen contracting requirements several years ago. For example, the majority of state contracts greater than $1 million are required to be reviewed for certain criteria by the Department of Financial Services’ Division of Accounting and Auditing before the first payment is processed. The contract must also be negotiated by a contract manager certified by the Florida Department of Management Services, Division of State Purchasing Training and Certification Program. In light of decreased state budgets that have resulted in prior staff reductions, Florida state auditing officials expressed concern about the adequacy of staff resources to provide oversight of Recovery Act funds beyond that required under existing federal Single Audit Act requirements.
For example, the Auditor General told us that the office has not hired new staff for over a year and about 10 percent of the office’s positions remain unfilled. In addition, OPPAGA officials told us their staff has decreased by 10 percent in the past 2 years. State officials told us that the efficient use of existing and projected resource levels will require an ongoing assessment of risks and priorities and the allocation of staff resources to ensure the required oversight of state and federal funds, including Recovery Act funds. Florida state agencies were in the early stages of developing plans to assess the effects of Recovery Act spending because they were waiting for guidance from OMB on how to measure jobs retained and created with Recovery Act funds. For example, Florida Department of Law Enforcement (FDLE) officials said that they could count the number of staff hired to implement a new program, but they did not know how to count the number of jobs retained or created if Recovery Act funds are used for purchases of goods such as new police cruisers. In addition, FDLE and other state officials said they needed clear OMB guidance in order to build this information upfront into the data reporting requirements. Florida’s Department of Education has created a new form that school districts will use to report quarterly Recovery Act expenditures and the number of jobs retained and created, but it needs additional guidance from OMB to develop instructions for school districts on how to count these jobs. Florida’s Agency for Workforce Innovation is encouraging recipients of Recovery Act funds throughout the state to list jobs created with the funds in the state’s existing online job bank. By including tags in the system to identify the jobs linked to Recovery Act funds, the agency expects to be able to count specific jobs created with the funds.
A local workforce investment board official told us that the board is publicizing the use of the job bank for Recovery Act jobs through radio and town hall appearances and mailings to potential recipients of Recovery Act funds. Because Florida is only required to collect data on jobs created with Recovery Act funds for which Florida is the recipient, Florida officials plan to include data on the state Recovery Act Web site on all jobs created with Recovery Act funds in Florida. On April 3, 2009, OMB issued guidance indicating that it will be developing a comprehensive system to collect information, including jobs retained and created, from all recipients of Recovery Act funds. The state plans to ask OMB whether the state can obtain data relevant to Florida collected by the national reporting system on jobs retained and created with Recovery Act funds. According to Florida officials, this will reduce duplication and increase the efficiency of their reporting. We provided the Governor of Florida with a draft of this appendix on April 17, 2009. The Special Advisor to Governor Charlie Crist, Florida Office of Economic Recovery, responded for the Governor on April 20, 2009. In general, the Florida official concurred with the information in the appendix. The official also provided technical suggestions that were incorporated, as appropriate. In addition to the contacts named above, Fannie Bivins, Carmen Harris, Kathy Peyman, Robyn Trotter, and Cherie’ Starck made major contributions to this report.

Use of funds: An estimated 90 percent of Recovery Act funding provided to states and localities nationwide in fiscal year 2009 (through Sept. 30, 2009) will be for health, transportation, and education programs. The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage (FMAP) awards, the State Fiscal Stabilization Fund, and highways.
Medicaid Federal Medical Assistance Percentage (FMAP) Funds

As of April 3, 2009, the Centers for Medicare & Medicaid Services (CMS) had made about $521 million in increased FMAP grant awards to Georgia. As of April 1, 2009, Georgia had drawn down about $312 million, or 60 percent, of its initial increased FMAP grant awards. State officials plan to use funds made available as a result of the increased FMAP to address increased caseloads, offset general fund needs, and maintain current benefit levels and provider reimbursement rates in the state’s Medicaid program.

Georgia was apportioned about $932 million for highway infrastructure investment on March 2, 2009, by the U.S. Department of Transportation. As of April 16, 2009, the U.S. Department of Transportation had not obligated any Recovery Act funds for Georgia projects. On April 7, 2009, the Governor certified that the Georgia Department of Transportation plans to spend $208 million on 67 projects throughout the state. The department plans to award contracts for most of these projects by May 22, 2009. These projects include maintenance, bridge work, and other activities.

U.S. Department of Education State Fiscal Stabilization Fund (Initial Release)

Georgia was allocated about $1 billion from the initial release of these funds on April 2, 2009, by the U.S. Department of Education. Before receiving the funds, states are required to submit an application that provides several assurances to the Department of Education. These include assurances that they will meet maintenance-of-effort requirements (or that they will be able to comply with waiver provisions) and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. Georgia plans to submit its application in late April or early May.
The state’s fiscal year 2010 budget, which passed on April 3, 2009, included $521 million in state fiscal stabilization funds for education. Georgia also is receiving Recovery Act funds under other programs, such as Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA) (commonly known as No Child Left Behind); the Individuals with Disabilities Education Act, Part B; and the Tax Credit Assistance Program. The status of plans for using these funds is discussed throughout this appendix. Safeguarding and transparency: A small core team consisting of representatives from the Office of Planning and Budget, State Accounting Office, and Department of Administrative Services (the department responsible for procurement) is taking steps to establish safeguards for Recovery Act funds and mitigate identified areas of risk. For example, the State Accounting Office has issued guidance on tracking Recovery Act funds separately, and the Office of Planning and Budget is developing a state-level strategy to monitor high-risk agencies. The State Auditor and Inspector General will monitor the use of Recovery Act funds. Assessing the effects of spending: While waiting for additional federal guidance, the state has taken some steps to assess the impact of Recovery Act funds on the state, including adapting an automated system currently used for financial management to meet Recovery Act reporting requirements. Although Georgia is still awaiting final information from the federal government, the state estimates it will receive about $7.3 billion in funding under the Recovery Act. Of that amount, about $467 million (or 6 percent) will be awarded by federal agencies directly to localities and other nonstate entities. As shown in figure 5, the majority of Recovery Act funds will support education (36 percent), health programs (35 percent, of which 23 percent will go toward Medicaid), and transportation (15 percent). 
The Governor completed the blanket certification for Recovery Act funds on March 25, 2009, confirming that the state will use the funds to create jobs and promote economic growth. The state has begun to use or plans to use funds for the following purposes: Increased Federal Medical Assistance Percentage Funds: Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the Federal Medical Assistance Percentage (FMAP). Across states, the FMAP may range from 50 percent to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. Generally, for fiscal year 2009 through the first quarter of fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs, (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs, and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. 
However, the receipt of this increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. As of April 1, 2009, Georgia had drawn down $311.5 million in increased FMAP grant awards, which is about 59.8 percent of its awards to date. Officials noted that these funds were drawn down retroactively for the period October 1, 2008, through February 25, 2009, but funds can now be drawn down on a more frequent basis. Georgia officials reported they plan to use funds made available as a result of the increased FMAP to address increased caseloads, offset general fund deficits, and maintain current eligibility and benefit levels in the state Medicaid program. Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways and for other surface transportation projects. States must follow the requirements for the existing programs, and in addition, the governor must certify that the state will maintain its current level of transportation spending, and the governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. Georgia provided these certifications, but qualified its maintenance of effort certification, noting that the Georgia General Assembly still was considering the Georgia Department of Transportation’s (GDOT) fiscal year 2010 budget, which could impact the state’s highway spending plans for that year. Georgia has been apportioned $932 million for highway infrastructure. 
On April 7, 2009, the Governor certified the first round of projects to be funded with Recovery Act funds. As of April 16, 2009, the U.S. Department of Transportation had not obligated any Recovery Act funds for Georgia projects. Georgia plans to spend $208 million on 67 projects throughout the state. Of that amount, $97 million will be spent in economically distressed areas. The funds will be spent on maintenance (53 percent), bridges (23 percent), capacity projects (17 percent), safety projects (6 percent), and enhancements (1 percent). The Georgia Department of Transportation plans to award contracts for the majority of these projects (73 percent) by May 22, 2009. Figure 6 illustrates the implementation time line for Recovery Act highway projects. U.S. Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF) to be administered by the U.S. Department of Education (Education). The SFSF provides funds to states to help avoid reductions in education and other essential public services. The initial award of SFSF funding requires each state to submit an application to Education that assures, among other things, it will take actions to meet certain educational requirements such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. Georgia’s initial SFSF allocation was about $1 billion. According to state officials, the state’s fiscal year 2010 budget passed on April 3, 2009, and included $521 million in state fiscal stabilization funds for education and $140 million in state fiscal stabilization funds for public safety. Georgia plans to use the education funds for elementary, secondary, and public higher education. For instance, Georgia intends to use three established formulas to allocate funds to local education agencies, universities, and technical colleges.
Georgia plans to use the public safety funds to help maintain safe staffing levels at state prisons, appropriately staff the state’s forensic laboratory system, and avoid cuts in the number of state troopers. Georgia plans to submit its application for fiscal stabilization funds in late April or early May. In addition to the major programs we discussed earlier, table 6 shows how Georgia and two local entities plan to use Recovery Act funds for other selected programs. Planned uses include the following:
- Funds will be used to help with needs that were deferred as a result of budget cuts, such as bus replacement and the purchase of cleaner fuel vehicles. Funds will go to the Metropolitan Atlanta Rapid Transit Authority.
- The state will encourage local education agencies to focus on professional learning opportunities for staff and intervention programs for students who need help with math and writing.
- Among other things, the state plans to encourage local education agencies to (1) provide professional development for special education teachers, (2) expand the availability and range of inclusive placement options for preschoolers, and (3) obtain state-of-the-art assistive technology devices and provide training in their use to enhance access to the general curriculum for students with disabilities.
- The state plans to use a portion for administration, oversight of local workforce agencies, as well as rapid response during major layoffs; the majority of the funds will be allocated to the 20 local areas within the state for adult, youth, and dislocated worker programs. The Atlanta Regional Workforce Board—the local workforce board for seven counties in the Atlanta metropolitan area—is concentrating on plans for using the $3.1 million it will receive for summer youth programs. 
- The state will focus on fiscal year 2008 projects that received tax credits and those on the waiting list; for projects that received tax credits but are having difficulty using them, the state will either provide gap financing or exchange the tax credits for grants.
- The Atlanta Housing Authority will use $18.6 million to rehabilitate 13 public housing developments and an additional $8 million to complete the demolition of 3 public housing developments.
- The state plans to apply, but the competition criteria have not yet been published.
- The state is currently developing a strategy to allocate the funds that must be passed through to local governments.
The anticipated funds are based on federal agency announcements as of April 17, 2009. The recent economic downturn adversely affected Georgia in a number of ways: Higher unemployment rate—as of February 2009, the state’s unemployment rate was 9.3 percent. This rate surpassed the national unemployment rate (8.1 percent) and was almost double the state unemployment rate from a year earlier (5.4 percent). Increases in Medicaid enrollment—from January 2008 to January 2009, the state’s Medicaid enrollment increased from 1,265,136 to 1,314,689, with increased enrollment attributable to three population groups: (1) children and families, (2) disabled individuals, and (3) other populations, which includes refugees and women with breast and/or cervical cancer. Declining revenue—through March 2009, the state’s net revenue collections for fiscal year 2009 were 8 percent less than they were for the same time period in fiscal year 2008, representing a decrease of approximately $1 billion in total taxes and other revenues collected. Use of reserves—to offset shortages in revenue, the state used $200 million from its Revenue Shortfall Reserve, or “rainy day” fund, in fiscal year 2009 and will use an additional $259 million in fiscal year 2010. 
Recent budget cuts—overall, the state’s budget was cut by 8 percent from fiscal year 2008 to fiscal year 2009. As shown in table 2, some individual agencies were cut more significantly than others. Georgia officials plan to use Recovery Act funds to limit additional budget cuts. Georgia moved quickly to implement an infrastructure to manage Recovery Act funds. A small core team was in place as of December 2008 to begin planning for implementation. Within 1 day of enactment, the Governor had appointed a Recovery Act Accountability Officer, and she formed a Recovery Act implementation team shortly thereafter. The implementation team includes a senior management team, officials from 31 state agencies, a group to support accountability and transparency, and cross-agency teams (see fig. 7). The Recovery Act Accountability Officer and senior management team are responsible for analyzing and disseminating federal and state guidance to the state agencies receiving Recovery Act funds. The accountability and transparency support group comprises representatives from the Office of Planning and Budget, State Accounting Office, and Department of Administrative Services. The State Auditor will serve as the primary auditor of the funds, and the Inspector General will provide investigative support and respond to complaints of fraud. The first implementation team meeting was held on February 24, 2009. Since then, the implementation team has met almost every week. According to state officials, each year the Governor is required to present to the General Assembly a recommended state budget for the upcoming fiscal year and an amended budget for the current fiscal year. Prior to submitting the budget for the upcoming year, the Governor sets the state’s revenue estimate, which when added to surplus and reserve funds, determines the size of the forthcoming appropriations bill. 
Furthermore, state officials told us that the Governor has the authority to approve the appropriations bill in its entirety or choose individual expenditure items to veto. To approve the use of Recovery Act funds, Georgia has enhanced its existing budget process. The majority of Recovery Act funds will be added into state budgets via an amendment process through the Governor’s Office of Planning and Budget. A monthly Recovery Act budgeting and amendment process has been established to account for federal dollars. The Recovery Act approval process requires that each state agency submit an action plan to the Office of Planning and Budget that includes information on the agency, funding sources, accountability measures, and details on individual projects funded (see fig. 8). For Recovery Act funds the state government receives, the budget office also is requiring state agencies to complete a tool that assesses risk. The budget office then reviews the plans submitted by the agency, provides feedback to the agency, and, in conjunction with the agency, finalizes the plans and risk assessment tool. The Governor, the Recovery Act Accountability Officer, budget office staff, and agency officials meet to vet the action plan and make a final decision on applying for funding. As of April 17, 2009, all state agencies had submitted action plans, and the budget office had begun its review of these plans. Georgia’s most recent Single Audit Act report identified a number of material weaknesses. Recognizing the risks associated with the influx of Recovery Act funds, the state has taken a number of steps to establish internal controls and safeguards for these funds. Georgia’s most recent Single Audit Act findings indicate that the state may have difficulty accounting for the use of some Recovery Act funds. In its fiscal year 2008 Single Audit report, the State Auditor identified 28 financial material weaknesses and 7 compliance material weaknesses. 
Three state agencies that expect to receive a substantial amount of Recovery Act funds were cited for most of the financial material weaknesses—the Department of Transportation (10), Department of Labor (4), and Department of Human Resources (2). For example, the Department of Transportation’s financial accounting system was deemed unsuitable for day-to-day management. It also did not have a system in place to correctly identify fund sources, and as a result, auditors found that $138 million of federal funds were misclassified. In addition, auditors found that the Department of Labor was unable to provide detailed account balances for the Unemployment Insurance Program because it maintained an inadequate general ledger that consisted of manually updated spreadsheets. The auditors also found that the Department of Human Resources’ process of allocating indirect costs to programs had multiple deficiencies. They noted that inadequate internal controls and failure to follow established policies increase the risk of material misstatement in the financial statements, including misstatements due to fraud and noncompliance with federal regulation. In addition, the Department of Human Resources was cited for four compliance material weaknesses, such as requesting federal funds in excess of program expenditures. To ensure that the affected state agencies will address these material weaknesses, the State Accounting Office will be monitoring corrective action plans developed in response to the Single Audit report. The office plans to issue guidance on the monitoring process by the end of April 2009 and has asked agencies to start tracking actions taken to address material weaknesses. Georgia recognizes the importance of accounting for and monitoring Recovery Act funds and, despite recent budget cuts, has directed state agencies to safeguard Recovery Act funds and mitigate identified risks. 
At one of the first implementation team meetings, the Recovery Act Accountability Officer disseminated an implementation manual to agencies, which included multiple types of guidance on how to use and account for Recovery Act funds. For example, the Office of Planning and Budget provided details on the budgeting process for Recovery Act funds. New and updated guidance is disseminated at the weekly implementation team meetings. At the direction of the Recovery Act Accountability Officer, the three agencies tasked with accountability support—the Office of Planning and Budget, State Accounting Office, and Department of Administrative Services—and other state agencies have instituted the following safeguards: The Office of Planning and Budget, in collaboration with the State Accounting Office and others, is developing a state-level strategy to monitor high-risk agencies. Additional risk-mitigation strategies will be developed and implemented for these agencies. The State Accounting Office issued two accounting directives to all state agencies. The first provides guidance on accounting for Recovery Act funds separately from other funds. The state plans to use Catalog of Federal Domestic Assistance numbers to track Recovery Act funds separately. Funds will also be segregated through a set of unique Recovery Act fund sources in the state’s financial accounting system. For example, the state is tracking increased FMAP funds for Medicaid through the development of a unique identifier for each grant award. The second accounting directive supplies language that should be included in all contracts issued under the Recovery Act. In addition, the office is reviewing the current accounting internal controls and assessing how they can be enhanced for Recovery Act funds. The Georgia Department of Administrative Services plans to issue a communication alert stating that any state agency planning to award contracts with Recovery Act funds should contact the department for guidance. 
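The fund-segregation approach in the directives above, unique Recovery Act fund sources keyed to Catalog of Federal Domestic Assistance (CFDA) numbers, can be sketched roughly as follows. The ledger entries, agency abbreviations, and code format here are hypothetical illustrations, not Georgia’s actual chart of accounts:

```python
from collections import defaultdict

# Hypothetical ledger entries; the fund-source codes and amounts are
# illustrative, not actual Georgia accounting records. The "ARRA-" prefix
# plus a CFDA number stands in for a unique Recovery Act fund source.
ledger = [
    {"agency": "DOT", "fund_source": "ARRA-20.205", "amount": 1_500_000},
    {"agency": "DOT", "fund_source": "STATE-GF",    "amount": 900_000},
    {"agency": "DHR", "fund_source": "ARRA-93.778", "amount": 2_200_000},
]

def recovery_act_totals(entries):
    """Sum spending by fund source, keeping Recovery Act-coded funds
    separate from all other fund sources."""
    totals = defaultdict(int)
    for entry in entries:
        totals[entry["fund_source"]] += entry["amount"]
    return {code: amt for code, amt in totals.items() if code.startswith("ARRA-")}
```

Filtering on a fund-source prefix is what lets existing accounting and reporting processes run unchanged while still isolating Recovery Act dollars for separate tracking.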
The department has developed standard contract language that should be included in all Recovery Act contracts and plans to publicize and offer training for state agency contracting staff. Further, the department plans to continue its compliance reviews of agencies with delegated purchasing authority to ensure they are following proper policies and procedures. All of the agencies we met with that directly administer programs had monitoring processes in place that they plan to adapt or enhance for Recovery Act oversight. For example, the Georgia Department of Community Affairs’ plans for monitoring the Tax Credit Assistance Program include a front-end analysis of costs, third-party inspections prior to the release of funds, and an audit of the general contractor by a certified public accountant. The last requirement is unique to projects funded with Recovery Act tax credits. In addition, the State Auditor, Inspector General, and internal audit divisions within state agencies have taken or plan to take the following steps to mitigate risk and oversee the use of Recovery Act funds: The State Auditor issued two audit risk alerts. One urged all agency officials to include appropriate contractual provisions in Recovery Act contracts and to not rush the distribution of Recovery Act funds before adhering to proper internal control processes and understanding federal guidelines. The other alert discussed limits on the use of funds. The State Auditor also plans to provide internal control training to state agency personnel in late April. The training will discuss basic internal controls, designing and implementing internal controls for Recovery Act programs, best practices in contract monitoring, and reporting on Recovery Act funds. Currently, the State Auditor conducts routine statewide risk assessments as a means of identifying high-risk agencies and determining where to best focus audit resources. 
Officials plan to target future risk assessments on programs receiving Recovery Act funding and are awaiting additional audit guidance from the Office of Management and Budget (OMB). The Inspector General issued a directive requiring all state agencies to insert new contractual language in any contracts, subcontracts, grants, and bid solicitations financed with Recovery Act funds. The new language specifically gives her the right to inspect all records of outside vendors, subcontractors, and consultants. In conjunction with the State Accounting Office, the Inspector General plans to conduct unannounced visits to state agencies receiving Recovery Act funding. The Inspector General also developed a database to specifically track Recovery Act complaints and a public service announcement to alert the public of how to report fraud, waste, and abuse. Some state agencies, such as the Departments of Human Resources and Transportation, have internal audit divisions that plan to monitor the use of Recovery Act funds. For instance, the Department of Human Resources’ internal auditor has developed a plan to assess the risk of each program prior to receiving Recovery Act funding. As these actions and plans indicate, Georgia recognizes the importance of instituting safeguards for Recovery Act funds. However, state officials also stressed the costs of such efforts. Both the Governor’s Office and the State Auditor noted that they had not received additional funding for Recovery Act oversight. As shown in table 2, several agencies with oversight responsibilities experienced significant budget reductions in fiscal year 2009, including the State Accounting Office (43 percent), Inspector General (19 percent), Office of Planning and Budget (11 percent), and State Auditor (11 percent). 
The State Auditor noted that, if state fiscal conditions do not improve or federal funding does not become available for audit purposes, additional budget and staffing cuts may occur within the department. Directives from OMB, due by May 1, will provide guidance on the audit requirements for Recovery Act programs. Officials noted that the scope of pending audit requirements may greatly impact the State Auditor’s ability to audit Recovery Act programs on top of existing audit requirements. In addition, some state officials that directly administer programs told us that overseeing the influx of funds could be a challenge, given the state’s current budget constraints and hiring freeze. In some cases, state agencies told us that they planned to use Recovery Act funds to cover their administrative costs. Other state agencies wanted additional clarity on when they could use program funds to cover such costs. In general, Georgia is awaiting additional federal guidance on reporting requirements before making detailed plans to assess impact. However, the State Auditor is adapting an existing system (used to fulfill its Single Audit Act responsibilities) to help the state report on Recovery Act funds. The statewide Web-based system will be used to track expenditures, project status, and job creation and retention. The state will make data from this system available on its Recovery Web site. The Governor is requiring all state agencies and programs receiving Recovery Act funds to use this system. State officials do not expect to track and report on funds going directly to localities, but some said they would like to be informed of these funds so that the state can coordinate with localities. They cited broadband initiatives and health funding to nonprofit hospitals as areas where a lack of coordination could result in a duplication of services or missed opportunities to leverage resources. 
In addition, some state agencies appear to have more experience tracking jobs than others. For example, the Georgia Department of Community Affairs has experience tracking jobs for the Community Development Block Grant program; therefore, agency officials do not expect to have difficulty tracking jobs for the Neighborhood Stabilization Program. For another program it will administer, the Tax Credit Assistance Program, Community Affairs surveyed potential applicants in March 2009 to gain a better understanding of performance measures that could be tracked as a part of its monitoring efforts, including job creation. In contrast, officials from other programs, such as the Edward Byrne Memorial Justice Assistance Grant program and the Transit Capital Assistance Grant program, expressed concerns about identifying appropriate measures of job creation and retention within the purpose of their programs and were waiting for more guidance from federal agencies and OMB. We provided the Governor of Georgia with a draft of this appendix on April 17, 2009. The Recovery Act Accountability Officer responded for the Governor on April 19, 2009. In general, she noted that the report accurately and succinctly captures the implementation status of the Recovery Act process in Georgia. In addition to the contacts named above, Paige Smith, Assistant Director; Nadine Garrick, analyst-in-charge; Stephanie Gaines; Alma Laris; Marc Molino; Barbara Roesmann; Robyn Trotter; and Mark Yoder made major contributions to this report. Use of funds: An estimated 90 percent of Recovery Act funding provided to states and localities nationwide in fiscal year 2009 (through Sept. 30, 2009) will be for health, transportation, and education programs. The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage (FMAP) awards, highways, and the State Fiscal Stabilization Fund. 
Medicaid Federal Medical Assistance Percentage (FMAP) Funds: As of April 3, 2009, the Centers for Medicare & Medicaid Services (CMS) had made about $992 million in increased FMAP grant awards to Illinois. As of April 1, 2009, Illinois has drawn down about $117.1 million, or about 12 percent of its initial increased FMAP grant awards. Illinois plans to use funds made available as a result of the increased FMAP in fiscal years 2009 and 2010 to fill a Medicaid budget gap, permitting the state to move from an average 90-day payment cycle to a cycle of no more than 30 days for all of its providers, including payments to hospitals and nursing homes. Transportation—Highway Infrastructure Investment: Illinois was apportioned about $936 million for highway infrastructure investment on March 2, 2009, by the U.S. Department of Transportation. As of April 16, 2009, the U.S. Department of Transportation had obligated $606.3 million for 214 Illinois projects. Illinois Department of Transportation officials stated that they will award most contracts based on a competitive bidding process, but they will use a quality-based selection process for approximately $27 million in engineering services contracts. These projects include activities such as resurfacing highways and repairing bridge decks. Illinois will request reimbursement from the U.S. Department of Transportation as the state makes payments to contractors. U.S. Department of Education State Fiscal Stabilization Fund (Initial Release): Illinois was allocated about $1.4 billion from the initial release of these funds on April 2, 2009, by the U.S. Department of Education. On April 20, 2009, these funds became available to the state. Illinois is expecting to receive an additional $678 million by September 30, 2009. Before receiving the funds, states are required to submit an application that provides several assurances to the Department of Education. 
These include assurances that they will meet maintenance of effort requirements (or that they will be able to comply with waiver provisions) and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. The state submitted its application on April 10, 2009. Illinois plans to use all of its $2 billion in State Fiscal Stabilization funds for K-12 and higher education activities to address the layoffs and other cutbacks that many districts and public colleges and universities are facing in their fiscal year 2009 and 2010 budgets. Illinois is also receiving additional Recovery Act funds under other programs, such as programs under Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA) (commonly known as No Child Left Behind); programs under the Individuals with Disabilities Education Act (IDEA); and two programs of the U.S. Department of Agriculture—one for administration of the Temporary Food Assistance Program and one for competitive equipment grants targeted to low-income districts from the National School Lunch Program. Safeguarding and transparency: To provide accountability and transparency in how these funds are being spent, the state has established a high level Executive Committee and a separate working group to oversee Recovery Act compliance across agencies and departments. It has also developed a Web site (www.recovery.illinois.gov) that contains information about the use of Recovery Act funds. The state is in the process of performing a risk assessment of all state programs receiving Recovery Act funds to identify potential vulnerabilities. It will use the state’s Single Audit—a state-level audit of the largest programs receiving federal money—as a tool in identifying these risks. 
State agencies also reported that they are capable of tracking their Recovery Act funds separately from other program funds by tagging them with a special accounting or funding code. For the most part, these codes will permit agencies to then rely on existing processes to monitor and report on how these funds are being spent. Assessing the effects of spending: Officials at several state agencies indicated that they can track various performance measures for projects funded through the Recovery Act by utilizing existing systems. However, according to officials in the Governor’s office and other state agencies, more guidance is needed on definitions for job creation and retention measures to adequately measure their impact. Illinois has started to use some of its Recovery Act funds, and high level state officials we spoke with described several overarching priorities and goals that the state plans to achieve through use of these funds. These include averting layoffs and creating new jobs, concentrating resources on economically distressed areas, and funding infrastructure improvements, as described below. Increased Federal Medical Assistance Percentage Funds: Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the Federal Medical Assistance Percentage (FMAP). Across states, the FMAP may range from 50 percent to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. 
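The relationship sketched above (poorer states receive higher matching rates, bounded between 50 and 83 percent) comes from the statutory FMAP formula, which reduces the federal share by a fraction of the squared ratio of state to national per capita income. The sketch below is illustrative background, not part of this report; the 0.45 multiplier and squared ratio reflect the general Social Security Act formula as we understand it:

```python
def fmap_pct(income_ratio):
    """Approximate regular (pre-Recovery Act) FMAP, in percent.

    income_ratio is state per capita income divided by national per capita
    income. Formula sketch: FMAP = 1 - 0.45 * ratio**2, floored at 50 and
    capped at 83 percent, so poorer states (ratio < 1) get higher rates.
    """
    base = (1 - 0.45 * income_ratio ** 2) * 100
    return min(83.0, max(50.0, base))

# A state at exactly the national average income gets 55 percent;
# a wealthier state (ratio 1.2) hits the 50 percent floor;
# a very poor state (ratio 0.3) hits the 83 percent cap.
average_state = fmap_pct(1.0)
wealthy_state = fmap_pct(1.2)
poor_state = fmap_pct(0.3)
```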
On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. Generally, for federal fiscal year 2009 through the first quarter of federal fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs; (2) a general across-the- board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of the increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. From January 2008 to January 2009, Illinois’s Medicaid enrollment increased slightly from 2,184,963 to 2,298,802, with the highest share of the enrollment increase attributable to two population groups: (1) children and families and (2) non-disabled non-elderly adults. Illinois is estimated to receive a total of $2.9 billion in increased FMAP funding, of which $992 million has already been awarded to the state for the first three quarters of federal fiscal year 2009. For the second quarter of federal fiscal year 2009, Illinois received an FMAP of 60.48 percent—an increase of 10.48 percentage points over its fiscal year 2008 FMAP. As of April 1, 2009, Illinois has drawn down $117.1 million in Recovery Act funds, which is almost 12 percent of the amount awarded to Illinois to date. Illinois state officials indicated that the main focus in using funds made available as a result of the Recovery Act will be to meet financial obligations and to ensure compliance with the prompt payment provisions of the Recovery Act. 
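The components of the increase can be checked against the Illinois figures cited above: a 60.48 percent FMAP that is 10.48 points above the fiscal year 2008 rate implies a prior-year FMAP of 50.00 percent, plus the 6.2-point across-the-board increase, plus an implied unemployment-related component of 4.28 points (derived here from the reported numbers, not from a published tier value). A simplified arithmetic sketch, not the statutory computation:

```python
def increased_fmap(prior_year_fmap, unemployment_adjustment_pts=0.0):
    """Simplified sketch of the Recovery Act FMAP increase: hold the state
    harmless at its prior-year FMAP, add the across-the-board 6.2 points,
    then add any unemployment-related component. Illustrative arithmetic
    only; the statute computes the unemployment piece differently."""
    return prior_year_fmap + 6.2 + unemployment_adjustment_pts

# Illinois, second quarter of federal fiscal year 2009:
# 50.00 (FY2008 FMAP) + 6.2 + 4.28 (implied) = 60.48 percent
illinois_q2 = increased_fmap(50.00, unemployment_adjustment_pts=4.28)
```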
Specifically, Illinois is using funds made available as a result of the Recovery Act to fill a Medicaid budget gap, permitting the state to move from a 90-day payment cycle to a 30-day cycle for all of its providers, including payments to hospitals and nursing homes. The state has also decided to include pharmacists in its prompt payment initiative. These actions will also help avoid potential layoffs in provider organizations. Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways and for other surface transportation projects. States must follow the requirements for the existing programs, and in addition, the governor must certify that the state will maintain its current level of transportation spending. The governor or other appropriate chief executive must also certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. Illinois provided the first of these certifications but noted that the state’s level of funding was based on the best information available at the time of the state’s certification. The Illinois Department of Transportation (IDOT) is planning to spend a large share of its estimated $655 million in Recovery Act funds for highway and bridge construction and maintenance projects in economically distressed areas. Equally important criteria are that projects must be shovel-ready and can be completed by February 2012. These funds will expand the amount of money the state can invest in highway projects beyond the amounts the state had listed in its State Transportation Improvement Program. 
The projects will include resurfacing roads across the state, repairing bridge decks, replacing guardrail sections, and improving pavement markings. As of April 16, 2009, the U.S. Department of Transportation had obligated $606.3 million for 214 Illinois projects. IDOT officials stated that they will award most contracts based on a competitive bidding process, but they will use a quality-based selection process for approximately $27 million in engineering services contracts. U.S. Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF) to be administered by the U.S. Department of Education (Education). The SFSF provides funds to states to help avoid reductions in education and other essential public services. The initial award of SFSF funding requires each state to submit an application to Education that assures, among other things, it will take actions to meet certain educational requirements, such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. The Illinois Office of the Governor submitted the state’s application for these funds to Education on April 10, 2009. On April 20, 2009, these funds became available to the state. Illinois is expecting to receive an additional $678 million by September 30, 2009. The U.S. Department of Education has allocated a total of about $2 billion in SFSF monies to Illinois. Approximately $1.4 billion of this amount was allocated in an initial release on April 2, 2009. Illinois plans to use all of the $2 billion from the SFSF for K-12 and higher education activities and hopes to avert layoffs and other cutbacks that many districts and public colleges and universities are facing in their fiscal year 2009 and 2010 budgets. State Board of Education officials also noted that U.S. 
Department of Education guidance allows school districts to use stabilization funds for education reforms, such as prolonging school days and school years, where possible. However, officials said that Illinois districts will focus these funds on filling budget gaps rather than implementing projects that will require long-term resource commitments. The State of Illinois has been in a recession since December 2007 and continues to face financial difficulties. The state’s unemployment rate surged by 46 percent from 5.9 percent in February 2008 to 8.6 percent in February 2009. Major job losses are expected to continue in manufacturing, construction, and retail. On the housing front, foreclosure filings in February 2009 were up 62 percent over 2008. While state general fund revenue grew 4.5 and 5.7 percent in fiscal years 2007 and 2008, respectively, revenues declined by 0.5 percent in fiscal year 2009. The state estimates that it faces a projected $11.6 billion operating budget deficit for fiscal years 2009 and 2010. To address this deficit, the Governor has proposed a number of measures in the state’s 2010 budget proposal, including the following:
- Spending cuts, including 4 furlough days for state employees and a 2-percent spending reduction in grant programs;
- State employee pension reform, including provisions that would align the state’s eligible age for full benefits with that of Social Security, adjust benefit formulas, and increase contribution rates for current employees;
- Creation of a taxpayer board to improve accountability and efficiency across state programs; and
- Revenue increases, including income tax increases that would raise an estimated $2.8 billion from individuals and $350 million from corporations in fiscal year 2010; higher health care contributions from current and retired state employees; and higher vehicle registration, title, and license fees. 
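The unemployment change quoted above mixes two measures worth distinguishing: the rate rose 2.7 percentage points (from 5.9 to 8.6 percent), which is the roughly 46 percent relative surge reported. A minimal sketch of the two calculations:

```python
def point_change(old_rate, new_rate):
    """Absolute change, in percentage points."""
    return new_rate - old_rate

def relative_change_pct(old_rate, new_rate):
    """Relative change, expressed as a percent of the starting rate."""
    return (new_rate - old_rate) / old_rate * 100

# Illinois unemployment, February 2008 -> February 2009
points = point_change(5.9, 8.6)            # about 2.7 percentage points
relative = relative_change_pct(5.9, 8.6)   # about 46 percent
```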
Illinois officials expect that the state will receive at least $9 billion in direct Recovery Act funds and that local entities—such as public housing and transit authorities—will receive additional Recovery Act funds. State officials said they have identified about $4.3 billion of Recovery Act funds, including use of the previously mentioned SFSF, that could be utilized to address the operating budget shortfall for fiscal years 2009 and 2010. They noted that these funds would potentially reduce pressure on the state for further tax increases and spending cuts. In addition, the state plans to use some of the remaining Recovery Act funds to help launch the Governor’s proposed infrastructure building program—a $26.5 billion proposal to fund schools, roads and bridges, public transit, and energy and environmental capital projects during Illinois fiscal years 2010 through 2015. The $26.5 billion plan would be paid for with funds from the state ($10.6 billion), federal sources ($11.6 billion), local sources ($2.4 billion), and the Recovery Act ($2.0 billion). In addition to funds administered by state agencies, local entities will also receive funds through the Recovery Act for programs administered at the local level. We met with one local agency that will receive Recovery Act funds and will use its funds to address overdue capital improvements. The Chicago Transit Authority (CTA), an independent governmental agency that provides rail and bus service in the greater Chicago area, has already put plans in place to spend its $240 million. CTA has a backlog of $6.8 billion in unfunded capital projects necessary to update its infrastructure and fleet. The agency has begun work on an $87.8 million project that will replace rails, ties, and fasteners for one subway line. The agency also expects to complete hybrid bus purchases, a bus and rail car fleet overhaul, and numerous facility improvements by the end of 2009.
Finally, reconstruction of at least one rail station is expected to be completed by late 2010. While we found examples of programs that have received Recovery Act funds and have projects that are already underway, we spoke with state officials who said they needed more guidance about how they should use, track, and report on these funds at their agencies. State Board of Education officials said that understanding the reporting requirements and eligible uses for Recovery Act funds is the biggest challenge they face as they prepare to disseminate funds to the local school districts. They also expressed concern with the Recovery Act’s dual emphases on accountability and quick expenditure of funds. The Illinois Criminal Justice Information Authority expressed similar concerns about the need for federal guidance in regard to reporting time frames that may not completely align with previous reporting procedures. High-level state officials told us that efforts are underway to ensure accountability and transparency in the use of Recovery Act funds. State internal audit officials are developing a variety of internal control techniques to assure compliance with the Recovery Act’s requirements, and, to properly track funds, state agency officials explained that they plan to use unique identifiers or codes so that these funds can be separately tracked in their existing financial or grants management systems. In addition, the state has established an Executive Committee and a Recovery Act Working Group to identify concerns across state agencies and help them implement Recovery Act provisions, as well as an Illinois Recovery Web site.
The Executive Committee is composed of state executives, including the Deputy Chief of Staff for Economic Recovery, the Chief Internal Auditor, the Budget Director, and the Chief Information Officer. According to state officials we spoke with, the Executive Committee is working to identify common risks to all state agencies in the use of Recovery Act funds. To address crosscutting Recovery Act issues, such as legal matters and procurement, the committee is also establishing subcommittees with agency subject matter experts to review critical information and develop policies in these areas. The Recovery Act Working Group consists of a contact point in each state agency for Recovery Act-related matters and, according to state officials, meets to communicate requirements, guidance, and implementation related to the act. The Governor’s Office has also established an Illinois Recovery Web site at www.recovery.illinois.gov, which contains information on the programs receiving Recovery Act funds, amounts available through the act, and certifications signed by the Governor. The Web site will also include reports on Recovery Act program expenditures, and eventually users will have the ability to download raw data on project or program descriptions, budgets, spending, and job creation. Another feature of Illinois’s Web site is that it allows the public to submit suggestions for projects that the state could fund through the Recovery Act. Every state is required to have an annual Single Audit in accordance with U.S. Office of Management and Budget (OMB) requirements. This audit is required when $500,000 or more in federal funds is expended in any fiscal year. Officials from the Illinois Office of Internal Audit (OIA) stated that they will utilize the Office of the Auditor General’s (OAG) single audits to identify programs that may require additional scrutiny.
In Illinois’s fiscal year 2007 Single Audit, the OAG identified four material weaknesses in internal controls over financial reporting and classified 46 findings as significant deficiencies and material weaknesses in internal controls related to compliance. Significant agency findings classified as material weaknesses that are relevant to the Recovery Act and recipients of Recovery Act funds included the following:
- The State Board of Education not sanctioning a Local Education Agency that did not meet the comparability of services requirement under the Title I Grants to Local Educational Agencies Program;
- IDOT not obtaining certifications from subrecipients that they had not been suspended or debarred from participation in the Airport Improvement Program;
- Multiple agencies inadequately conducting or failing to conduct on-site monitoring of subrecipient awards for federal programs; and
- Multiple agencies inadequately monitoring subrecipient audit reports for federal programs.
The OAG explained that to the extent that federal programs receiving Recovery Act funds are addressed in the OMB compliance supplement, it will be performing its required audit procedures. The OAG stated that OMB guidance will be critical for planning future audits of federal funds. Furthermore, the OAG conducted an analysis of programs receiving Recovery Act funds and found that a few additional programs will likely be included in future single audits. OIA officials told us that they are using the Single Audit results to assist in conducting a risk assessment of all state-administered programs receiving Recovery Act funds. OIA officials said that they will use the results of this risk assessment to target their audit efforts to programs that demonstrate a high level of risk. OIA and OAG officials said that they plan to follow up on their respective prior audit findings to make sure that state agencies have taken appropriate corrective action.
OIA officials said that in addition to large programs, they plan to follow up on prior internal audit findings on federal Recovery Act programs under $30 million that are not covered by the statewide single audit. Most agency officials we spoke with stated that their systems are capable of tracking Recovery Act funds separately from other funds for the same programs. For example, IDOT officials stated that Recovery Act projects are being noted in different systems, typically with special funding codes. In addition, when IDOT officials access Recovery Act funds, those transactions will have special codes and notations. Similarly, officials at the Illinois Department of Human Services told us that any funds the agency receives through the Recovery Act for the Neighborhood Stabilization Program will have accounting codes separate from any previous funds received through the program. In order to track increased FMAP funds, Illinois officials said they will use the state’s existing accounting systems and will use existing processes to review and reconcile expenditures. For example, state officials will record draw downs of increased FMAP funds separately from other Medicaid funds. State officials will also use special receipt, expenditure, and contract codes for all increased FMAP funds and related Medicaid expenditures. A CTA official we spoke with stated that his agency will use its existing financial system to track Recovery Act funds by unique project numbers or descriptions. Finally, officials from the State Comptroller’s Office told us that separate appropriation codes will likely be used to track Recovery Act expenditures statewide. One agency official indicated that while funds can easily be tagged at the state level, he was concerned that this might not be the case once funds are distributed to subrecipients. 
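The tracking approach these officials describe can be illustrated with a minimal sketch, assuming a simple transaction ledger; the fund code, program names, and amounts below are hypothetical and do not reflect Illinois's actual accounting systems:

```python
from collections import defaultdict

# Hypothetical fund code used to tag Recovery Act transactions; in
# practice, each state chose its own identifiers for its systems.
RECOVERY_CODE = "ARRA"

# A simplified ledger: each transaction carries a fund code alongside
# its program and amount, so Recovery Act dollars stay distinguishable
# from regular program funds within the same system.
transactions = [
    {"program": "Medicaid", "fund_code": "ARRA", "amount": 5_000_000},
    {"program": "Medicaid", "fund_code": "REG",  "amount": 12_000_000},
    {"program": "Highways", "fund_code": "ARRA", "amount": 3_500_000},
]

# Sum Recovery Act spending separately, by program.
totals = defaultdict(int)
for t in transactions:
    if t["fund_code"] == RECOVERY_CODE:
        totals[t["program"]] += t["amount"]

print(dict(totals))  # → {'Medicaid': 5000000, 'Highways': 3500000}
```

Tagging each transaction with a special code lets agencies report Recovery Act expenditures separately without building a parallel accounting system, which is the design the officials describe.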
Officials at several state agencies we spoke with indicated that they can use various performance measures for projects funded through the Recovery Act by utilizing existing systems. For example, IDOT officials stated that they will track and monitor data for Recovery Act projects in the same manner as they do for regular program reporting, and should be able to report on and provide evidence regarding the status of project goals and objectives. Officials with the Illinois Housing Development Authority stated that they also track performance and goals for each project through current systems and should be able to build on these systems to customize reports as necessary for the Recovery Act. On the other hand, several state officials said that additional guidance is needed for measuring the potential impact of Recovery Act funds. According to officials from the Governor’s office and state agencies we spoke with, additional guidance is needed on definitions of “jobs saved,” “jobs created,” “jobs sustained,” and other similar terms included in the Recovery Act. Illinois Department of Commerce and Economic Opportunity officials stated that they had concerns regarding the evaluation of job retention as it relates to the Workforce Investment Act program. Specifically, they said OMB Recovery Act guidance focuses on quick job placement, but jobs created through the act may have lower retention than those under past program grants. Furthermore, while officials at most agencies we visited stated that they are considering plans to track the impact of Recovery Act funds, none of these plans have been finalized. Officials at two state agencies said that their systems do not track such specific performance measures, and they may need to develop additional mechanisms to link Recovery Act funds with their performance results. We provided the Governor of Illinois with a draft of this appendix on April 17, 2009. The Deputy Chief of Staff responded for the Governor on April 20, 2009.
In general, the state concurred with our statements and observations. The official also provided technical suggestions that were incorporated, as appropriate. In addition to the contacts named above, Paul Schmidt, Assistant Director; Tarek Mahmassani, Analyst-in-Charge; Rick Calhoon; Katherine Iritani; David Lehrer; Lisa Reynolds; and Mark Ryan made major contributions to this report.

Use of funds: An estimated 90 percent of Recovery Act funding provided to states and localities nationwide in fiscal year 2009 (through Sept. 30, 2009) will be for health, transportation, and education programs. The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage (FMAP) awards, the State Fiscal Stabilization Fund, and highways.

Medicaid Federal Medical Assistance Percentage Funds: As of April 3, 2009, the Centers for Medicare and Medicaid Services (CMS) had made about $84 million in increased FMAP grant awards to Iowa. From January 2008 to January 2009, Iowa’s Medicaid enrollment increased from 358,112 to 392,813, with the highest enrollment increase attributable to two population groups: (1) children and families and (2) nondisabled nonelderly individuals. As of April 15, 2009, Iowa had drawn down about $86 million, or 63 percent, of its increased FMAP grant awards. Officials plan to use funds made available as a result of the increased FMAP to cover increased caseloads, maintain existing populations of recipients, and avoid reductions to benefits for Medicaid recipients. Iowa was apportioned about $358 million for highway infrastructure investment on March 2, 2009, by the U.S. Department of Transportation. As of April 16, 2009, the U.S. Department of Transportation had obligated $221.2 million for 107 Iowa projects. As of April 15, 2009, the Iowa Department of Transportation had competitively awarded 25 contracts valued at $168 million, or 47 percent of the Recovery Act funds apportioned.
Contracts were awarded for projects such as bridge replacements and highway resurfacing—“shovel-ready” projects that could be initiated and completed quickly.

U.S. Department of Education State Fiscal Stabilization Fund (Initial Release): Iowa was allocated about $316 million from the initial release of these funds on April 2, 2009, by the U.S. Department of Education. Before receiving the funds, states are required to submit an application that provides several assurances to the Department of Education. These include assurances that they will meet maintenance of effort requirements (or that they will be able to comply with waiver provisions) and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. Iowa plans to submit its application as soon as it can be accurately completed. Iowa’s Department of Education plans to use these funds to maintain spending for grades K-12 and postsecondary education at fiscal year 2009 levels for fiscal years 2010 and 2011. In addition, Iowa estimates that other funding will be provided to the state under the Recovery Act for the following program areas:
- Education—$214 million (includes programs such as those to provide grants to local education agencies and assist individuals with disabilities).
- Housing and infrastructure—$252 million (includes programs such as the Weatherization Assistance Program).
- Agriculture/natural resources—$152 million (includes programs such as the clean water state revolving fund).
- Economic development—$94 million (includes programs such as the unemployment insurance program).
The status of plans for using these funds is discussed throughout this appendix.

Safeguarding and transparency: Iowa has a foundation of safeguards and controls that could help assure proper spending of Recovery Act funds.
For example, the State Auditor is responsible for audits of state and local entities, such as counties, cities, and school districts, and must provide guidelines to public accounting firms that perform such audits. In addition, many state agencies have internal audit groups that focus on programmatic and financial issues. Furthermore, according to state officials, administrative and statutory mechanisms are in place that could oversee Recovery Act funds and provide information to the public on how these funds are being spent. For example, while previous audits have shown few financial weaknesses, the State Auditor is updating its 2009 audit plan risk assessment to reflect the increased risk associated with Recovery Act funding. Iowa is also enhancing its accounting systems to track all Recovery Act funds that will flow through the state government to ensure that the state can adjust its spending plans as needed. Furthermore, Iowa is developing or planning systems to track funds provided to cities, counties, local governments, and other entities. Finally, Iowa is working to establish a framework that will provide transparency on the use of Recovery Act funds. This framework includes the state’s Recovery Act Web site, which is designed to provide up-to-date information on the use of Recovery Act funds by program; a state board to recommend improvements to existing practices to prevent fraud, waste, and abuse and to oversee the spending of Recovery Act funds; and mechanisms provided through the state’s Accountable Government Act. Assessing the effects of spending: State agencies have begun to consider how to measure outcomes and assess the effect of the Recovery Act. Some agencies have mechanisms in place to collect data in order to calculate outcomes. Other state agencies are awaiting guidance such as a consistent approach to quantifying the number of jobs created and sustained.
In the meantime, Iowa’s Legislative Services Agency plans to work closely with the Iowa Department of Management to create outcome measures for the Recovery Act and report results. Most Iowa state officials said they plan to follow established allocation formulas while waiting for federal guidance on the use and tracking of Recovery Act funds. For example, the Iowa Department of Economic Development, which manages the state’s Community Development Block Grants and Neighborhood Stabilization Program, intends to follow the state-established allocation formula for the Community Development Block Grants program. This formula allocates funding in thirds: one-third to affordable housing, one-third to economic development, and one-third to infrastructure. Some agencies have gone even further in their spending of Recovery Act funds. For example, the Iowa Department of Transportation has funded some “shovel ready” projects within 3 days of the enactment of the Recovery Act. Additionally, the Iowa Department of Economic Development has already established guidance for allocating Neighborhood Stabilization Program funding to eligible entities, should the state be awarded competitive grant funds. As of April 15, 2009, Iowa had drawn down about $86 million of its increased FMAP grant awards for the Medicaid program, which is 63 percent of its awards to date. The state plans to use funds made available as a result of the increased FMAP to cover increased caseloads and maintain current levels of benefits, noting that without these funds, the program would have faced budget shortfalls. Additionally, the state plans to use $110 million of funds made available as a result of the increased FMAP to fully fund Medicaid in the current fiscal year and $145 million of these funds to fully fund Medicaid in fiscal year 2010. Iowa has begun to use some of its Recovery Act funds, as follows. 
Increased Federal Medical Assistance Percentage Funds: Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the Federal Medical Assistance Percentage (FMAP). Across states, the FMAP may range from 50 percent to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, CMS made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. Generally, for fiscal year 2009 through the first quarter of fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of this increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. For the first two quarters of 2009, Iowa’s FMAP rate was 68.82 percent, a 7.09 percentage point increase over fiscal year 2008. 
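The three-part calculation described above can be sketched as follows. This is a simplified illustration, not the statutory formula: the unemployment-related component is modeled as a caller-supplied add-on rather than the act's actual tiered adjustment, and the rates in the example are hypothetical.

```python
def increased_fmap(prior_year_fmap, current_year_fmap, unemployment_addon=0.0):
    """Simplified sketch of the Recovery Act's increased FMAP, in percent.

    (1) Hold harmless: keep the higher of the prior-year or current-year
        regular FMAP (maintenance of states' prior year FMAPs).
    (2) Apply the general across-the-board increase of 6.2 percentage points.
    (3) Add a further increase for states with a qualifying rise in
        unemployment (modeled here as a simple add-on, standing in for
        the act's more detailed adjustment).
    """
    base = max(prior_year_fmap, current_year_fmap)      # hold harmless
    return min(base + 6.2 + unemployment_addon, 100.0)  # FMAP cannot exceed 100%

# Hypothetical example: a state whose regular FMAP fell from 62.0 to 61.5
# percent keeps the 62.0 base and then gains the 6.2-point increase.
print(round(increased_fmap(62.0, 61.5), 2))  # → 68.2
```

Under this structure, an eligible state's matching rate never falls below its prior-year rate even if its regular FMAP formula would have reduced it, which is the "maintenance" element the text describes.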
Iowa has received increased FMAP grant awards of $136 million for fiscal year 2009, and, as of April 15, 2009, Iowa had drawn down $86 million in increased FMAP grant awards, which is about 63 percent of its awards to date. Iowa officials indicated they will use funds made available as a result of the increased FMAP to cover increased caseloads, maintain existing populations of recipients, avoid cuts to eligibility, and maintain current levels of benefits. In addition, such funds will provide Iowa officials with the means to offset budget shortfalls, including shortfalls for the state’s Medicaid program. Iowa officials indicated that they expect the recession to continue longer for the state than for the nation as a whole, and if the increased FMAP funds are not available for all of federal fiscal year 2011, the resulting deficit will likely be addressed through the use of reserve funds or cuts in program funding. According to state officials, the use of FMAP funds requires an appropriation from the state legislature. Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways and for other surface transportation projects. States must follow the requirements for existing programs. In addition, the Governor must certify that the state will maintain its current level of transportation spending, and the Governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. Iowa’s Governor certified that the state would “maintain its efforts” for Department of Transportation programs funded under the Recovery Act.
However, Iowa noted in its certification that transportation spending would be influenced by the difference in the definition of the word “expend” for different covered programs; the uncertainty of the amount collected from state user fees to fund the programs; and variables (such as weather) that may affect the state’s timeline for spending Recovery Act transportation funds. Within 3 days of the enactment of the Recovery Act, the Iowa Department of Transportation competitively awarded contracts for 19 highway and bridge projects valued at about $56 million. Contracts were awarded for projects such as bridge replacements and highway resurfacing—shovel-ready projects that could be initiated and completed quickly. As of April 15, 2009, Iowa had competitively awarded a total of 25 contracts valued at $168 million, or 47 percent of the Recovery Act funds apportioned. As of April 16, 2009, the U.S. Department of Transportation had obligated $221.2 million for 107 Iowa projects. According to Iowa transportation officials, the agency could begin spending Recovery Act funds quickly because it maintained an inventory of shovel-ready projects and its accounting system needed few changes to track the projects. U.S. Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF), to be administered by the U.S. Department of Education. The SFSF provides funds to states to help avoid reductions in education and other essential public services. The initial award of SFSF funding requires each state to submit an application to the U.S. Department of Education that assures, among other things, that it will take actions to meet certain educational requirements, such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. On April 2, 2009, Iowa was allocated $316 million for the education portion of the SFSF.
Overall, Iowa expects that the state’s total SFSF allocation will be $472 million. In April, the Governor proposed using almost 82 percent of this amount, or $386 million, to support elementary, secondary, and higher education, as required. These funds will be used for activities such as updating standards and implementing a new data system. For the remaining 18 percent of the SFSF allocation, or $86 million, the Governor proposed funding universities and community colleges, law enforcement, and corrections in fiscal year 2010. The Governor also proposed using $600,000 of the $86 million to oversee Recovery Act funds. Iowa plans to submit its application as soon as the application can be accurately completed. Unemployment began to rise in April 2008, and state revenues began to slow in October 2008. As of February 2009, Iowa’s unemployment rate was 4.9 percent, up from 3.9 percent in February 2008. According to a March 27, 2009, report by the Rural Policy Research Institute, the nation’s rural economy is losing jobs at a rate faster than the rest of the United States. Iowa state budget officials estimated that the state’s unemployment rate could increase to 7 percent by December 2009. Despite this economic downturn, Iowa’s Governor and General Assembly have statutory responsibility to balance the budget and meet expenditure limitations and are required to use the revenue estimates agreed to by Iowa’s Revenue Estimating Conference, which convenes quarterly, as the basis for determining the budget for the general fund, according to state officials. If revenue estimates are revised downward for the current fiscal year, state officials explained that the law still requires the budget to be balanced. In the current fiscal year, and for the first time since fiscal year 2003, Iowa’s general fund revenues of almost $6 billion are expected to be lower than in the previous fiscal year, a decrease of 1.9 percent from fiscal year 2008 to fiscal year 2009.
In response to this downturn, in December 2008, the Governor directed an across-the-board 1.5 percent reduction in the state’s general fund appropriations, effective December 22, 2008. On April 3, 2009, the Governor released a revised budget for fiscal year 2010 of $5.9 billion for the state’s general fund, representing a 7.9 percent reduction for many state programs, even with the addition of more than $535 million in Recovery Act funds. According to state officials, decisions regarding the use of Recovery Act funds require approval by the General Assembly. Since the Iowa General Assembly is scheduled to adjourn on or around May 1, 2009, it may have to develop strategies if funding decisions are necessary after adjournment. For example, the Governor may request that the General Assembly return for a special session. In March 2009, the Governor established a Recovery Act implementation working group to provide a coordinated process for (1) reporting on Recovery Act funds available to Iowa through various federal grants and (2) tracking the federal requirements and deadlines associated with those grants. The implementation working group comprises representatives from nearly two dozen state agencies, led by an executive-level working group, and assisted by groups that will focus on implementation issues such as budget and tracking, intergovernmental coordination, and communications. The implementation working group includes several issue-specific small groups focusing on key program areas: education, energy, environment, health care, housing, information technology, public safety, transportation and infrastructure, and workforce. On April 14, 2009, the working group issued a progress report on Recovery Act funds in Iowa. 
For example, the working group reported on the planned and spent funding of the state’s energy program to reduce per capita energy consumption, loans for wastewater infrastructure projects, and neighborhood stabilization programs to provide emergency assistance to acquire and redevelop foreclosed properties. In addition to FMAP, Transportation, and the State Fiscal Stabilization Fund programs, the Governor’s office estimates that the state will receive Recovery Act funding as follows:
- Education: Of $214 million, a large majority involves two formula grant programs—grants to local education agencies ($52 million) and special education grants to assist individuals with disabilities ($122 million).
- Housing and infrastructure: Of $252 million, 32 percent ($81 million) is for the Weatherization Assistance Program to provide energy-related improvements to homes and educate residents about energy conservation.
- Agriculture/natural resources: Of $152 million, more than one-third (36 percent or $54 million) is for the clean water state revolving fund.
- Economic development: Of $94 million, more than three-quarters (76 percent or $71 million) is to modernize the unemployment insurance program.
To supplement Recovery Act funds, Iowa is considering other stimulus proposals, such as the Iowa Infrastructure Investment Initiative, or I-JOBS, and another bonding initiative. I-JOBS is designed to create jobs, strengthen the state’s economy, and rebuild the state’s infrastructure over 3 years. If approved by the General Assembly, I-JOBS, as described by state officials, is expected to provide funding for various infrastructure projects, such as transportation, public buildings, and wastewater improvements, and will be funded through 20-year tax-exempt bonds paid for by gaming revenue, current tax revenue, or both. The General Assembly is also considering another bonding initiative to provide economic stimulus.
As of April 17, 2009, the Iowa General Assembly had not authorized the issuance of bonds for either of these initiatives. In the absence of OMB and program-specific guidance, associations and organizations have provided guidance and assistance to Iowa on the use and reporting of Recovery Act funds. Among these associations are the National Association of Crime Victim Compensation Boards, the National Association of Victims of Crime Act Assistance Administrators, and the Association for Stop Violence Against Women Administrators. For example, justice associations have helped the Iowa Attorney General’s Office complete grant applications. Many Iowa agencies expect that they will be able to track the Recovery Act funds they use through the state’s central accounting system. The state is also evaluating options for reporting Recovery Act funds provided to cities, counties, local governments, and other entities that will help satisfy reporting requirements for these funds. Specifically, state accounting officials are developing special codes to track Recovery Act funds and have begun to train state agencies’ accounting officials in the use of these new codes. However, Iowa’s central accounting system does not track Recovery Act funds provided directly to some agencies because they are not part of the system. For example, the central accounting system does not track Recovery Act funding provided to the Iowa Department of Transportation. In this case, Iowa transportation officials said the agency is establishing separate accounting codes to track Recovery Act funds by project. Similarly, the central accounting system does not track Recovery Act funds provided to state-funded universities. The state and Board of Regents are discussing how to track these funds. While local governing authorities are not required to report through the state, the Iowa Department of Management is in discussions with these entities to report Recovery Act spending on the state’s Web site. 
At the local level, some agencies can track these funds, while others are developing guidance to require such tracking, according to state officials. In order to track increased FMAP funds, Iowa is adapting its existing systems. In addition, Iowa’s state Medicaid agency uses a data warehouse for Medicaid payments made to counties, subcontractors, and medical facilities, and the U.S. Department of Health and Human Services’ Office of Inspector General has audited the state’s data warehouse. The General Assembly may also track Recovery Act spending. In particular, the assembly’s Legislative Services Agency—a nonpartisan analysis and research agency serving the Iowa General Assembly—assisted members in interpreting the Recovery Act and provided preliminary estimates of funds provided to the state. Furthermore, the Legislative Services Agency will be able to access Iowa’s central accounting system to monitor agencies’ spending in real time. Even as Iowa plans for tracking Recovery Act funds, state officials said that they continue to have some questions about how to report Recovery Act funds. For example, Iowa officials noted that they need additional guidance on reporting increased FMAP funds to CMS. Specifically, Iowa officials said that they need guidance on the timing for drawing down increased FMAP grant awards, reporting receipts and expenditures, and submitting claims for expenditures made retroactively to October 2008. There are various entities in Iowa that are responsible for monitoring, tracking, and overseeing financial expenditures, including the Iowa State Accounting Enterprise (collects and reports state financial information and processes financial transactions); the State Auditor (audits state and local entities, such as counties, cities, and school districts, and provides guidelines to public accounting firms that perform such audits); and the Attorney General (prevents and prosecutes fraud). 
Finally, many state agencies have internal audit groups that focus on programmatic and financial issues. Prior years’ audits indicate few weaknesses in Iowa’s financial management systems and controls. Iowa’s fiscal year 2007 single audit found one material weakness in internal controls related to a public assistance grant provided to the Iowa Department of Transportation: a computer program error resulted in a $3.6 million overpayment to the agency by the Federal Emergency Management Agency for materials related to disaster recovery. In 2009, Iowa refunded the $3.6 million. Iowa’s fiscal year 2008 single audit did not identify any material weaknesses. While prior audits indicate few financial weaknesses, the Office of the State Auditor is updating its 2009 audit plan risk assessment to reflect the increased risk associated with Recovery Act funding. Of great concern to officials of the State Auditor’s office are possible limits on the ability to charge fees for audit services. According to state officials, these limits would significantly reduce the effectiveness of the State Auditor to audit federal funds received, including those under the Recovery Act, as required by the Single Audit Act. If limits on audit fees were enacted, officials said that the state’s comprehensive annual financial report and the single audit report would likely receive qualified opinions. The Iowa state government is working to establish a framework to provide transparency on the use of Recovery Act funds. In March 2009, the Governor’s office launched an economic Recovery Act Web site—recovery.iowa.gov—to provide information on Recovery Act funding by program. Iowa plans to add a “dashboard” feature to the Web site—a user-friendly search capability that will provide detailed information on how and where Recovery Act funds are spent. 
The Governor’s office expects OMB to provide guidance on how to report information on Iowa’s Recovery Act Web site, including the dashboard feature, and how to forward that information to the national Recovery Act Web site. In addition, the state is developing a system that will allow information on Recovery Act funding that does not come through the state government, such as grants federal agencies provide directly to localities, to be available on the state’s Web site. On April 14, the Governor created the Iowa Accountability and Transparency Board—which has similarities to the federal Recovery Accountability and Transparency Board—to, among other duties, assess existing practices to prevent fraud, waste, and abuse; recommend opportunities for improvement in these areas; and oversee real-time audits and reporting. The board will be made up of 14 members. Voting members include the Governor or his designee, the State Auditor or his designee, the State Treasurer or his designee, three local government members, and three citizens. Nonvoting members of the board include the Director of Iowa’s Department of Management or his designee and four members of the state’s General Assembly. The Iowa Accountability and Transparency Board will recommend improvements and oversee the spending of Recovery Act funds. Iowa’s Accountable Government Act could serve as a mechanism to safeguard Recovery Act funding. Under this act, Iowa is required to provide for the efficient and effective use of state funds. Among other things, Iowa’s Accountable Government Act requires grant recipients to certify that information on internal controls relating to processes is available for inspection by the state agency and the Legislative Services Agency if the recipients provide a service of more than $500,000 that is paid for with local, state, or federal funds. 
In addition, recipients must report on financial information, reportable conditions in internal control or material noncompliance, and corrective actions taken or planned in response to these reportable conditions. State agencies can enforce this monitoring by terminating payments and recovering any expended government funds. Furthermore, the Legislative Services Agency tracks personnel services contracts—that is, contracts for consulting services or temporary hires—within all state agencies (except the Iowa Department of Transportation and the Iowa Board of Regents) regardless of the value of the contract. State officials could require a similar certification and monitoring of Recovery Act funds. Iowa officials said that they recognize the need for greater oversight and proper management of programs in light of the infusion of significant funds under the Recovery Act. According to state officials, the Recovery Act did not provide funds for oversight. For example, one state agency official in the Iowa Department of Education expressed concern about the adequacy of resources available for ensuring the appropriate use of the Recovery Act funds—an estimated $386 million from the state fiscal stabilization program for education—particularly because the agency anticipates further state-imposed staff reductions. Recognizing that the Recovery Act did not specifically provide funds for state oversight, the Governor proposed in his 2010 budget to use $600,000 of the $86 million in fiscal stabilization funds available for general government services to oversee Recovery Act funds. Iowa officials indicated that they are identifying ways to use the state’s internal audit functions to address Recovery Act-related issues. 
Iowa state audit officials indicated that state programs that receive significant Recovery Act funds while maintaining a high level of discretion over use of those funds—such as the state’s Medicaid program—present an increased risk to the state and will receive greater scrutiny during internal state audits. Iowa has just begun to consider how to measure outcomes and assess the effect of Recovery Act funding while it awaits federal guidance on a consistent approach to measuring the number of jobs created and sustained. State officials identified Iowa’s Accountable Government Act as a mechanism that has familiarized state agencies with results-oriented management and could help them assess the impact of Recovery Act funds. The Iowa Accountable Government Act requires each state agency to measure and monitor progress toward achieving program goals and report the progress toward those goals. In addition, the Iowa Department of Management, in consultation with the Legislative Services Agency, the State Auditor, and agencies, must periodically conduct performance reviews to assess the effectiveness of programs and make recommendations to improve agency performance. Some state agency officials said that they expect to be able to track information on the number of jobs created, while others said they need further guidance. For example, the Iowa Department of Transportation tracks the number of worker hours by highway project on the basis of contractor reports. An Iowa Transportation official said that this information may be used to calculate the number of jobs created. Iowa education officials, in contrast, may need more guidance. Iowa teachers are notified by school districts in mid-March whether their jobs are guaranteed for the next school year, pending passage of school budgets. Once the budgets are passed, teachers are asked to return for the following school year. 
Officials said that they believed that federal guidance would help them determine how to characterize whether these jobs would be created or sustained. According to Iowa’s Department of Management, once it receives federal guidance on how to assess the impact of Recovery Act funding, it plans to disseminate the information across state agencies. It intends to measure and report the impact of Recovery Act funds through the state’s Recovery Act Web site and current tracking software. The Legislative Services Agency plans to work closely with the Department of Management to create outcome measures for the Recovery Act and report the results. Additionally, the Iowa Department of Economic Development has already established output and outcome measures for the Neighborhood Stabilization Program. Although most state agencies are waiting for federal guidance on how to assess results from Recovery Act funding, officials from some state agencies told us that they have accounting systems in place to measure programmatic outcomes. For example, the Iowa Department of Economic Development will monitor its Recovery Act funds by using systems adopted for tracking federal disaster recovery funds, including systems that the federal Department of Housing and Urban Development uses to monitor and report on funding spent to recover from natural disasters. The Iowa Department of Economic Development plans to put in place procedures for working with the State Auditor to leverage oversight of stimulus funds. Similar procedures have been established to oversee funding the state expects to receive to recover from disastrous floods in 2008. The Department of Economic Development expects a 20-fold increase in Community Development Block Grants in 2009 to help the recovery effort from these floods. Officials noted the potential difficulty of measuring Recovery Act outcomes separately from other recovery initiatives, such as Iowa’s proposed I-JOBS program. 
While state officials said that they believe there are benefits to supplementing federal efforts, the state may find it difficult to separate outcomes among the recovery programs. We provided the Governor of Iowa with a draft of this appendix on April 17, 2009. The Director, Iowa Office of State-Federal Relations and the Director for Performance Results, Department of Management responded for the Governor on April 20, 2009. In general, officials agreed with our findings and conclusions. The officials also offered several technical suggestions that we have incorporated, as appropriate. In addition to the individuals named above, Thomas Cook, Assistant Director; Christine Frye, Analyst-in-Charge; Alisa Beyninson; Gary Brown; Daniel Egan; Nancy Glover; Marietta Mayfield; Mark Ryan; and Carol Herrnstadt Shulman made key contributions to this appendix.

Use of funds: An estimated 90 percent of fiscal year 2009 Recovery Act funding provided to states and localities will be for health, transportation, and education programs. The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage (FMAP) awards, the State Fiscal Stabilization Fund, and highways.

Medicaid Federal Medical Assistance Percentage Funds: As of April 1, 2009, the Centers for Medicare & Medicaid Services (CMS) had made about $1.2 billion in increased FMAP grant awards to Massachusetts. As of April 1, 2009, the state had drawn down about $273 million, or 23 percent, of its initial increased FMAP grant awards. Officials plan to use funds made available as a result of the increased FMAP to avoid additional cuts in health care and social service programs, restore certain provider rates, and provide caseload mitigation for Medicaid and Commonwealth Care (an expansion of its Medicaid program).

Massachusetts was apportioned about $425 million for highway infrastructure investment as of April 16, 2009, by the U.S. Department of Transportation. As of April 16, 2009, the U.S. 
Department of Transportation had obligated about $63.9 million for 19 projects. As of April 4, 2009, the Massachusetts Executive Office of Transportation had advertised 19 projects for competitive bids totaling more than $62 million; the earliest announcements were scheduled to close on April 14, 2009, and work on the projects is expected to begin this spring. These projects include activities such as road repaving and sign replacement. Massachusetts will request reimbursement from the U.S. Department of Transportation as project phases are completed by contractors.

U.S. Department of Education State Fiscal Stabilization Fund (Initial Release): Massachusetts was allocated about $666 million from the initial release of these funds on April 2, 2009, by the U.S. Department of Education. Before receiving the funds, states are required to submit an application that provides several assurances to the Department of Education. These include assurances that they will meet maintenance of effort requirements (or that they will be able to comply with waiver provisions) and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. In early April 2009, state officials reported that the commonwealth would file its application for this money around April 15, 2009, when it would better understand the state fiscal year 2010 budget situation. The Governor has announced that he intends to provide funds to 166 school districts to help them increase spending to prior levels and avoid program cuts and teacher layoffs in fiscal year 2010. He also intends to use some of these funds at public colleges and universities to reduce layoffs, program cuts, and student fee hikes. 
The commonwealth of Massachusetts is also receiving additional Recovery Act funds under programs, such as Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA, commonly known as No Child Left Behind); the Individuals with Disabilities Education Act, Part B (IDEA); and two programs of the U.S. Department of Agriculture—one for administration of the Temporary Food Assistance Program and one for competitive equipment grants targeted to low-income districts from the National School Lunch Program. The status of plans for using Recovery Act funds is discussed throughout this appendix. Safeguarding and transparency: Task forces, established by the Governor, encouraged the state to adopt accountability and transparency measures. Further, Massachusetts is expanding its accounting system to track funds flowing through the state government. Although Massachusetts has plans to publicly report its Recovery Act spending, officials have said that the state may not be aware of all funds sent directly to other entities, such as municipalities and independent authorities. The commonwealth’s oversight community has identified situations that raise concerns about the adequacy of safeguards, such as funding for larger projects and new programs, but is waiting for further information on what specific programs will receive funding before developing plans to address those concerns. Assessing the effects of spending: Massachusetts agencies are in the early stages of developing plans to assess the effects of Recovery Act spending. According to state officials, they are awaiting further guidance from the federal government, particularly related to measuring job creation. Massachusetts has begun to use some of its Recovery Act funds, as follows. 
Increased Federal Medical Assistance Percentage Funds: Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the Federal Medical Assistance Percentage (FMAP). Across states, the FMAP may range from 50 percent to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. Generally, for fiscal year 2009 through the first quarter of fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of the increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. Under the Recovery Act, the commonwealth’s FMAP will increase to at least 56.2 percent, up from 50 percent. 
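The three-part quarterly calculation described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the statutory formula: the hold-harmless component is modeled as a simple maximum, and the unemployment-related increase is taken as a given input (the `unemployment_bonus` parameter) because its computation is not described here. The function name and signature are illustrative.

```python
def increased_fmap(regular_fmap, prior_year_fmap, unemployment_bonus=0.0):
    """Sketch of a state's increased FMAP for a quarter, in percentage points.

    Components, per the description above:
      (1) maintenance of the prior year's FMAP (modeled here as a maximum),
      (2) a general across-the-board increase of 6.2 percentage points, and
      (3) a further increase for states with a qualifying rise in
          unemployment (supplied by the caller; not computed here).
    """
    base = max(regular_fmap, prior_year_fmap)   # (1) hold harmless
    result = base + 6.2 + unemployment_bonus    # (2) and (3)
    return min(result, 100.0)                   # federal share cannot exceed 100%

# Massachusetts: a regular FMAP of 50 percent rises to
# 50 + 6.2 = 56.2 percent, matching the figure cited in the text.
massachusetts = increased_fmap(50.0, 50.0)
```

With any qualifying unemployment-based increase added on, the result would exceed 56.2 percent, which is why the text says the commonwealth’s FMAP will rise to “at least” 56.2 percent.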
As of April 1, 2009, Massachusetts had drawn down $272.6 million, or 23 percent, of its increased FMAP grant awards. In fiscal years 2009 and 2010, officials plan to use a significant portion of funds made available as a result of the increased FMAP to avoid additional cuts in health care and social service programs, restore certain provider rates, and provide caseload mitigation for Medicaid and Commonwealth Care.

Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways, and for other surface transportation projects. States must follow the requirements for the existing programs, and in addition, the governor must certify that the state will maintain its current level of transportation spending, and the governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. Massachusetts provided these certifications but conditioned its certification of the state’s level of funding for these programs, noting that this spending will be financed through issuing bonds and may need to be decreased, depending on the state of the economy. The commonwealth’s debt affordability policy will determine the amount of debt that can be issued. As of April 4, 2009, the Massachusetts Executive Office of Transportation had advertised 19 projects for competitive bid totaling more than $62 million. These projects included, for example, replacing traffic and guide signs along sections of Route I-95 and paving Route 6 in southeastern Massachusetts. As of April 16, 2009, the U.S. Department of Transportation had obligated about $63.9 million for 19 projects in Massachusetts. 
Massachusetts will request reimbursement from the U.S. Department of Transportation as project phases are completed by contractors.

U.S. Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF) to be administered by the U.S. Department of Education (Education). The SFSF is intended to help states avoid reductions in education and other essential public services. The initial award of SFSF funding requires each state to submit an application to Education that assures, among other things, that it will take actions to meet certain educational requirements, such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. Massachusetts’ initial SFSF allocation is $666,152,997. In early April 2009, state officials reported that the state would file its application for this money around April 15, 2009, when it would better understand the state’s revenue projections and after the Massachusetts House issues its fiscal year 2010 budget proposal. In March 2009, the Governor of Massachusetts announced that he intended to provide $168 million in SFSF funds to 166 school districts to help them increase funding and avoid program cuts and teacher layoffs in fiscal year 2010. He also announced that he intended to provide $162 million in SFSF funds to public university and college campus budgets to reduce layoffs, program cuts, and student fee hikes. Massachusetts officials began preparing for receipt of federal Recovery Act funds prior to enactment of the act. Faced with deteriorating revenue projections, the potential for expanding caseloads in some safety net programs, such as Medicaid, and a requirement to balance the budget, Massachusetts officials believe that funds made available as a result of the Recovery Act are critical to addressing the commonwealth’s immediate fiscal pressures. 
State officials envision a sizable portion of the state-projected $8.7 billion in Recovery Act funds (over 2 years) going directly toward budget stabilization. According to state officials, as of April 2009, the state is addressing a budget gap of approximately $3.0 billion. This gap is driven largely by lower-than-anticipated revenues. State fiscal year 2009 revenue is significantly lower than budgeted and has left the state unable to support previously approved spending levels, and revenues are expected to fall short of planned expenditures for 2010, as well. In December 2008, anticipating a major infusion of federal funding, especially for infrastructure projects, the Governor established task forces to identify “shovel-ready” projects and address obstacles to project implementation. Ten task forces were created—seven focused on specific types of infrastructure investment, such as transportation, energy, and information technology, and three focused on crosscutting issues like workforce mobilization and procurement. In conducting their work, the task forces were guided by several principles, including investing for the long term and limiting investments to those that would not add to the state’s operating budget. Although other program areas, such as Medicaid and education, likely will receive more funding than will infrastructure, the work of the task forces was influential. The task forces developed work plans for projects that could be implemented using the anticipated funding and were instrumental in the appointment of a director of infrastructure investment (a “recovery czar”) to coordinate and monitor state agencies’ and municipalities’ implementation of projects. 
The task forces also encouraged the creation of a central Web site to enhance transparency, called for the involvement of the oversight community in contract oversight to ensure accountability, and prompted the introduction of legislation (now being considered by the legislature) intended to ease some of the procurement and contracting processes that might delay quick implementation of construction projects. The task force efforts helped prepare the state to submit several certifications required under the Recovery Act to the federal government. In late February, the Governor certified that the state would request and use all funds provided by the act. Additional certifications for transportation and energy have also been submitted. Money from the state’s “rainy-day” fund, a reserve fund built up during more favorable economic conditions to be used during difficult economic times, will give the commonwealth additional flexibility to avoid some cuts in fiscal year 2010. The commonwealth’s budget already calls for using about $925 million from the rainy-day fund in fiscal year 2009, and the Governor’s proposed 2010 budget calls for using about $489 million of the rainy-day funds. According to budget documents, the combination of funds made available as a result of the increased FMAP and rainy-day funds will help the state avoid cuts in several areas, including health care, education, and public safety. State documents suggest that officials are concerned about using one-time federal and rainy-day funds to make longer-term operational and program commitments that could require additional revenue in the future to avoid job and service cuts. State officials note that using temporary funds, such as Recovery Act and rainy-day funds, makes budgeting uncertain and requires strategic fiscal management. The commonwealth is expanding the use of its existing accounting system to track all Recovery Act funds that will flow through the state government. 
New codes are being added to the existing system in order to segregate and track the Recovery Act funds. The Office of the Comptroller has issued guidance on the required use of these newly created account codes for all Recovery Act transactions and has stipulated that all Recovery Act-funded contracts include provisions to segregate Recovery Act money. While these changes have been made, officials were still testing the system and developing reporting capabilities as of April 13, 2009. The portion of Recovery Act funds going directly to recipients other than Massachusetts government agencies, such as independent state authorities, local governments, or other entities, will not be tracked through the state comptroller’s office. State officials acknowledged that the commonwealth lacks authority to ensure adequate tracking of these funds, and they are concerned about the ability of smaller entities to manage Recovery Act funds—particularly municipalities that traditionally do not receive federal funds and that are not familiar with Massachusetts’ tracking and procurement procedures, as well as recipients receiving significant increases in federal funds. In order to address this weakness, the administration introduced emergency legislation that, according to state officials, includes a provision requiring all entities within Massachusetts that receive Recovery Act money to provide information to the state on their use of Recovery Act funds. In contrast, the two large nonstate government entities we spoke with to date—the city of Boston and the Massachusetts Bay Transportation Authority (MBTA, a quasi-independent authority responsible for metropolitan Boston’s transit system)—believe that their current systems, with some modifications, will allow them to meet Recovery Act requirements. 
For example, the city of Boston hosted the Democratic National Convention in 2004, and city officials said that their system was then capable of segregating and tracking a sudden influx of one-time funds. Some state programs have received actual allocations of federal Recovery Act funds, while for other state programs, officials have developed spending plans based on preliminary figures provided by federal departments. The U.S. Department of Transportation, through the Federal Transit Administration, published apportionment amounts for the Transit Capital Assistance and the Fixed Guideway Infrastructure Investment programs on March 5, 2009. The Massachusetts Executive Office of Transportation (EOT) and the MBTA have been able to develop spending plans with a degree of certainty, and EOT has advertised requests for bids on 19 projects totaling about $62 million. Other program officials have had to develop plans with preliminary estimates. For example, as of mid-March 2009, state officials from the Department of Elementary and Secondary Education said that local education officials reported that one of their biggest challenges was a lack of reliable information on federal Recovery Act allocations that they could use to plan their budgets. However, on April 1, 2009, Education announced the release of state allocations of ESEA Title I and IDEA funds, along with more detailed guidance for these programs. Some state and local officials said that while clear, specific guidance takes time to develop, the lack of guidance from federal agencies had limited their ability to make spending decisions. Officials from some of the entities we spoke with, including the state Department of Elementary and Secondary Education, the Department of Housing and Community Development, and the city of Boston, said they are comfortable making spending decisions with money slated to flow through pre-existing grant programs. 
However, the lack of specific guidance for federal Recovery Act funds for some programs has presented challenges, according to some state officials. An area of significant challenge for education officials concerns how to use federal Recovery Act funding to supplement state and local revenues for existing educational programs, rather than use these funds to supplant state and local revenue. State education officials said they anticipate that proving funds have not been used to supplant state and local revenue will be very challenging for local school districts, and they have requested additional guidance from the U.S. Department of Education to help them make better decisions about spending priorities. Similarly, state housing officials are seeking clarification from the U.S. Department of Housing and Urban Development (HUD) on whether the Tax Credit Assistance Program can be used to provide loans rather than grants to subrecipients, and state transportation officials are waiting for guidance on whether competitive grants can be used for “signature projects.” Some state agencies told us they anticipate they will be able to manage additional Recovery Act funding coming through well-established grant programs with existing agency resources but, in some cases, will hire additional staff to manage Recovery Act programs. For example, the state’s Department of Housing and Community Development (DHCD) reported it is expecting to receive significant Recovery Act funds and has plans to hire staff to help manage the programs. DHCD has well-established methods for managing expenditures and accomplishments, so agency officials believe they can effectively administer Recovery Act funds using existing structures. MBTA officials told us that given the enhanced transparency and reporting requirements associated with an additional $230 million in project spending, they anticipate that managing these Recovery Act projects will present some new challenges and will require that they hire a project management firm. 
Finally, a Department of Elementary and Secondary Education official told us they anticipate a need to hire additional staff, for a limited term, to manage competitive grant programs funded under the Recovery Act. The commonwealth has entities responsible for monitoring, tracking, and overseeing financial expenditures. The comptroller, who is responsible for implementing accounting policies and practices, oversees fiscal management functions, including internal controls. The State Auditor audits the administration and expenditure of state funds, and partners with an accounting firm to perform the state’s annual Single Audit—a comprehensive review of all state agencies’ accounts and activities. The state Inspector General, with a broad mandate to prevent fraud, waste, and abuse, conducts operational and management reviews and has authority to examine independent authorities and municipalities. The Attorney General also plays a role, including preventing and prosecuting fraud. Further, according to state officials, some state departments have internal audit groups that focus on programmatic issues. In addition to these entities, the commonwealth has laws that provide further safeguards. Past experience has shown financial management vulnerability involving organizations that will receive funds under the Recovery Act. The Office of the Attorney General has documented improper Medicaid payments and has concerns about Recovery Act funds going to the Medicaid program. The office plans to take a risk-based approach but is waiting for firm information on which programs and recipients will receive Recovery Act funds. The Inspector General stated that his office will need to emphasize oversight of larger procurement projects, which may be vulnerable. 
In addition, officials pointed to the multibillion-dollar cost overruns on a federally funded highway project in Boston (the “Big Dig”) as an example of what can go wrong when a large project lacks sufficient oversight. The Massachusetts fiscal year 2007 Single Audit report identified vulnerabilities that included insufficient monitoring of subrecipients of federal grants to the state. For example, the Massachusetts Department of Early Education and Care, whose programs will receive Recovery Act funds, did not conduct any on-site monitoring of the Child Care Resource and Referral Agencies (subrecipients), which received approximately $11 million in child care development funds and $122 million in Temporary Assistance for Needy Families funds. Since that audit, the department has implemented numerous improvements and controls to address these issues. The State Auditor has also identified financial management concerns with nonprofit entities that receive federal funds and will receive additional funds under the Recovery Act. In addition, oversight officials noted some more general situations raising concerns. For example, some oversight officials identified new programs as potentially risky; however, new programs would have little impact on the fiscal year 2009 Single Audit report. New programs would probably be included on the fiscal year 2010 Single Audit report, which typically comes out some months after the end of the state’s fiscal year. Oversight officials also expressed concern about programs receiving large increases under the Recovery Act; recipients that do not typically receive federal funds—and therefore may not have systems in place to track them—are also at risk. In order to better understand areas of potential vulnerability, the Governor asked all commonwealth agencies in late January 2009 to conduct self-assessments identifying existing oversight and accountability mechanisms. 
Most agencies submitted reports, which included varying levels of detail. The reports we reviewed showed that the agencies are generally comfortable with the mechanisms currently in place. One report expressed a need for additional resources to oversee any new funding. The self-assessments were shared with the State Auditor, Inspector General, and Comptroller’s offices. The State Auditor has provided comments to the Governor’s office, noting that while the self-assessments indicated existing control mechanisms are in place to manage, account for, and monitor the spending of Recovery Act funds, he had two areas of concern. He was concerned about tracking funds that bypass the state government and, based on past audits, about subgrantee monitoring. The Inspector General plans to provide comments on the needs assessments to the Governor’s office by the end of April. The Comptroller is using the assessments to monitor agencies’ controls over Recovery Act funds on an ongoing basis. While the commonwealth’s oversight community has come together to discuss issues such as avoiding areas of duplication and preventing oversight gaps, it has yet to develop, as a whole, a coordinated plan describing which programs and departments it will focus on or how it will conduct critically needed oversight. Both the Inspector General and Attorney General recognize the need for training for local officials, specifically related to procurement. The Inspector General stated that his department would continue its training of local procurement officials and announced in its March 2009 Procurement Bulletin that his office should be contacted regarding any questions on procurement or Recovery Act expenditures. While the Inspector General identified the need for increased oversight, particularly related to procurements, oversight officials generally stated that once they determine the total distribution of Recovery Act money, they would then begin selecting areas for review. 
The Attorney General has convened a task force to coordinate on oversight issues with the federal and state oversight community. The state legislature will also provide oversight of the Recovery Act funds through the newly created Joint Committee on Federal Stimulus Oversight. This committee has already held three hearings on the oversight of Recovery Act spending and plans to hold more. According to committee members, the impetus for creating this committee was Massachusetts’ failure to control fraud, waste, and abuse in the federally funded “Big Dig” construction project. The purpose of the joint committee is to ensure compliance with federal regulations and to review current state laws, regulations, and policies to ensure they allow the commonwealth to access Recovery Act funding and streamline processes to quickly stimulate the economy. A co-chairman said that, in addition to the co-chairmen’s authority to subpoena individuals, the Joint Committee has broad authority and its jurisdiction extends to wherever federal, state, and local public money is spent. Massachusetts’ administration has emphasized transparency of Recovery Act spending and identified the state recovery Web site as a transparency tool. In addition, the Web site has links to planning documents, guidance, and intended uses of Recovery Act money, and officials are planning to enhance the Web site with a goal of making it the central portal for all Recovery Act information and reporting. Their goal is to include the ability to track Recovery Act money by town and by project, as well as to include each project’s budget, schedule, awarded contracts (with contract details), and its on-time status. In addition, the public can send e-mails regarding stimulus issues to this site and the Recovery czar’s staff is responsible for replying. 
Several Massachusetts officials expressed concern that the Recovery Act did not provide funding specifically for state oversight activities, despite the importance of ensuring that Recovery Act funds are used appropriately and effectively. In addition, the task forces the Governor convened in December 2008 concluded that it is critical that the Inspector General and State Auditor have resources to audit Recovery Act contracts and management of Recovery Act funds, and recommended that the Attorney General’s office be provided with the resources to promptly and effectively pursue fraud and abuse. However, due to the present economic conditions, state officials said the Massachusetts oversight community is facing budget cuts of about 10 percent at a time when increased oversight and accountability are critically needed. To illustrate the impact of the impending budget situation, the Inspector General told us that his department does not have the resources to conduct any additional oversight related to Recovery Act funds. This significantly impacts the Inspector General’s capacity to conduct oversight since the budget of the Inspector General’s office is almost entirely composed of salaries, and any cuts in funding would result in fewer staff available to conduct oversight. In addition, the State Auditor described how his office has already furloughed staff for 6 days and anticipates further layoffs before the end of fiscal year 2009. Similar to the Inspector General’s office, 94 percent of his department’s budget is for labor and any cuts in funding generally result in cuts in staff. Some of these vulnerabilities may be mitigated by emergency legislation that the Governor recently filed, which included a provision to allow the pooling of administrative costs. This new legislation may make some Recovery Act funds available to the audit community for oversight, as long as federal law permits. 
Meanwhile, officials stated they are moving forward with developing and implementing enhancements to the Massachusetts recovery Web site, yet they are doing so without any Recovery Act funds. One senior state official stated she did not believe the Recovery Act provided funding for any state-level centralized information technology planning or development but noted that the Recovery Act provided a considerable level of funding for information technology development at the program level. Although they are awaiting federal guidance on how to assess the impact of the Recovery Act, Massachusetts agencies are in the process of considering how to assess the number of jobs that will be created. For example, officials from DHCD are examining different methodologies for identifying job creation, while the city of Boston is using an economic forecasting model to evaluate job creation and other economic effects of projects. In addition, DHCD officials told us that they asked Tax Credit Assistance Program project managers to report estimates on the number of jobs, by trade, that will be needed to complete projects and are also looking for a reliable economic forecasting model to use for this reporting objective. DHCD officials also said they are waiting for guidance from HUD on how to calculate and document job creation for programs funded under the Neighborhood Stabilization Program. DHCD officials said they plan to use a pre-existing process developed for community action programs to collect information on job creation for projects funded by the Weatherization Program. MBTA officials said they feel confident they can estimate the number of new jobs created using Recovery Act funds; however, they are waiting for specific guidance from the U.S. 
Federal Transit Administration or the Office of Management and Budget on what to include in job creation calculations, as well as how to track indirect (jobs created to manufacture goods used in the project) and leveraged jobs (jobs created by new building projects that result from transportation improvements). MBTA officials also said they are looking to outsource some of the required oversight, including documenting job creation. Finally, state transportation officials are concerned that incentives may encourage contractors to overinflate the number of jobs created by their projects. They told us that, in the absence of specific guidance on how to account for job creation, some smaller contractors might overreport the number of jobs created. Furthermore, the cold weather conditions in the commonwealth can prevent construction from continuing during the winter months. Officials suggested the pressure to show that the projects are contributing to the recovery may encourage some contractors to inflate the number of jobs created in some months when weather conditions decrease employment. We provided the Governor of Massachusetts and representatives of oversight agencies with a draft of this appendix on April 17, 2009, and representatives from the Governor’s office and the oversight agencies responded that day. In general, they agreed with our draft and provided some clarifying information, which we incorporated. The officials also provided technical suggestions that were incorporated, as appropriate. In addition to the contacts named above, Carol L. Patey, Assistant Director; Ramona L. Burton, analyst-in-charge; Kathleen M. Drennan; Salvatore F. Sorbello, Jr.; and Robert D. Yetvin made major contributions to this report. Use of funds: An estimated 90 percent of fiscal year 2009 Recovery Act funding provided to states and localities nationwide will be for health, transportation, and education programs. 
The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage (FMAP) awards, the State Fiscal Stabilization Fund, and highways. Medicaid Federal Medical Assistance Percentage (FMAP) Funds: As of April 3, 2009, the Centers for Medicare & Medicaid Services (CMS) had made about $701 million in increased FMAP grant awards to Michigan. From January 2008 to January 2009, the state’s Medicaid enrollment increased from 1,547,259 to 1,624,245, with the highest share of increased enrollment attributable to two population groups: (1) children and families and (2) disabled individuals. As of April 1, 2009, Michigan has drawn down about $463 million—which represents funds drawn down for two quarters—or 66.1 percent of its initial increased FMAP grant awards. Officials plan to use funds made available as a result of the increased FMAP to cover increased caseloads, offset general fund shortfalls, ensure compliance with prompt payment provisions, maintain existing populations of Medicaid recipients, avoid eligibility restrictions, increase provider payments, maintain current levels of benefits, and avoid benefit cuts. Michigan was apportioned about $847 million for highway infrastructure investment on March 2, 2009, by the U.S. Department of Transportation. As of April 16, 2009, the U.S. Department of Transportation had obligated $110.8 million for 27 Michigan projects. As of April 13, 2009, the Michigan Department of Transportation had advertised 16 projects for competitive bid totaling more than $41 million. These projects included resurfacing I-196 in Grand Rapids and M-13 in Genesee County. U.S. Department of Education State Fiscal Stabilization Fund (Initial Release): Michigan was allocated about $1.1 billion from the U.S. Department of Education’s initial release of these funds on April 2, 2009. Before receiving the funds, states are required to submit an application that provides several assurances to the Department of Education. 
These include assurances that they will meet maintenance of effort requirements (or that they will be able to comply with waiver provisions) and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. Michigan plans to submit its application on or after May 15, 2009, once it completes its review of all program priorities for which it intends to use stabilization funds. Michigan Department of Education officials told us they consulted with local education agencies to develop plans and establish priorities for the use of stabilization funds that were consistent with the state’s priorities, policies, and programs, such as increasing support for the lowest performing schools. Michigan is receiving additional Recovery Act funds under other programs, such as programs under Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA) (commonly known as No Child Left Behind); the Individuals with Disabilities Education Act (IDEA), Part B; Federal Transit Administration Transit Grants; and the Edward Byrne Justice Assistance Grants. These are described in this appendix. Safeguarding and transparency: All of the state and local agency officials we interviewed indicated they plan to use existing systems to separately identify and track Recovery Act funding. State officials were confident that their existing processes, modified to incorporate specific Recovery Act codes, would be sufficient to allow them to separately account for funds as required by the act. However, officials were uncertain whether local entities have the capacity to similarly track federal funds that go directly to local entities rather than through the state. 
Michigan also plans to continue using existing internal controls and processes to provide assurances over Recovery Act spending. Michigan has established a new Recovery Office to, among other things, provide oversight and enhance transparency over the availability and use of funds and maintain a Web site on Michigan’s Recovery and Reinvestment Plan (www.michigan.gov/recovery). Michigan’s existing processes also include ongoing risk-based self-assessments of controls by major state agencies that are next due on May 1, 2009. However, these assessments are limited to state agencies. In addition, the state Auditor General has identified material weaknesses in two key departments that have received Recovery Act funds—Michigan’s Department of Human Services and Department of Community Health. The state Auditor General plans to continue working on a biennial basis, reviewing and reporting on about one-half of the state agencies each year. The state Auditor General’s oversight responsibilities do not include efforts to ensure accountability over federal funds going directly to localities. For example, the U.S. Department of Education’s Inspector General identified weak internal controls that resulted in problems in how the city of Detroit school district used federal funds for programs under Title I of ESEA. Specifically, its July 2008 report found that Detroit Public Schools, among other things, did not always properly support compensation charges against ESEA Title I funds. Detroit Public Schools officials told us that in the spring of 2009 they hired new staff to develop corrective action plans for addressing existing internal control weaknesses. Assessing the effects of spending: Michigan officials have some experience in measuring the impact of funds in creating jobs and promoting economic growth. The state plans to rely on experts in economic modeling. 
The state’s financial management system, however, is old and does not have the capability to track impacts, so the state will have to rely upon its agencies for this. State officials also told us that the state information technology group will implement a database system at the end of April 2009 that will support its financial management system in recording the impact of Recovery Act funds. Faced with the highest unemployment rate of all the states (as of February 2009), heavy reliance on the deteriorating car manufacturing sector, and declining tax revenue, Michigan officials plan to use Recovery Act funds to address the state’s immediate fiscal needs as well as to help develop long-term capacity. From an employment peak in June 2000, Michigan had lost about 520,000 jobs as of December 2008. Unemployment sharply increased from 7.4 percent in February 2008 to 12 percent in February 2009, and several local communities had even higher rates. For example, since domestic auto manufacturing dominates Detroit’s economy, the unemployment levels in the city have been consistently higher than in the rest of the state. As of December 2008, the city’s jobless rate was 18.6 percent and, according to Detroit officials, reached nearly 22.8 percent in March 2009. To help address these issues, prior to the enactment of the Recovery Act on February 17, 2009, the federal government provided $23.7 billion to two auto companies and two financing companies operating in Michigan as part of the Troubled Asset Relief Program. Michigan has been experiencing declines in state revenues. In January 2009, Michigan reported an expected budget gap of approximately $1.4 billion for fiscal year 2010. 
In response, the Governor has proposed budget cuts for fiscal year 2010 of $670 million in key state programs such as public education, corrections, and community health; $232 million in revenue enhancements, such as tax increases and elimination of tax exemptions; and using $500 million in funds made available as a result of the increased FMAP to offset the budget gap. In March 2009, Michigan’s legislature estimated that the state would receive approximately $7 billion in Recovery Act funding. These estimates show that the majority of Recovery Act funds would support education (36 percent), Medicaid (32 percent), and transportation (14 percent), with smaller amounts of funding available for other programs (18 percent). Michigan has begun to use some of its Recovery Act funds, as follows. Increased Federal Medical Assistance Percentage Funds: Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the Federal Medical Assistance Percentage (FMAP). Across states, the FMAP may range from 50 percent to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. 
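The general structure of the increased FMAP can be illustrated with a simple sketch. This is an illustration only, not the statutory calculation: the actual computation is quarterly and the unemployment-based tiers are more detailed than shown here, so the unemployment adjustment is treated as a plain input rather than derived from the statute.

```python
def increased_fmap(regular_fmap, prior_year_fmap, unemployment_increase=0.0):
    """Illustrative sketch of the Recovery Act's increased FMAP (rates in
    percent), following its general three-part structure:
      1. hold harmless at the prior year's FMAP,
      2. add the across-the-board increase of 6.2 percentage points,
      3. add any further state-specific increase tied to a qualifying rise
         in unemployment (passed in here as a hypothetical input).
    """
    base = max(regular_fmap, prior_year_fmap)  # maintenance of the prior-year FMAP
    return base + 6.2 + unemployment_increase

# Michigan's regular FMAP of about 58 percent would rise to 64.2 percent from
# the across-the-board increase alone; the state's rate reaches at least
# 69 percent only once the unemployment-based increase is included.
print(increased_fmap(58.0, 58.0))
```

Because the hold-harmless step takes the higher of the current and prior-year rates, a state whose regular FMAP declined (for example, because its relative per capita income rose) still receives the 6.2 point increase on top of its higher prior-year rate.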
Generally, for federal fiscal year 2009 through the first quarter of federal fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of this increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. Under the Recovery Act, Michigan’s FMAP will increase to at least 69 percent, up from 58 percent in 2008. From January 2008 to January 2009, the state’s Medicaid enrollment increased from 1,547,259 to 1,624,245, with the highest share of increased enrollment attributable to two population groups: (1) children and families and (2) disabled individuals. As of April 1, 2009, Michigan has drawn down $463 million (66.1 percent) of its awards to date. Michigan officials indicated that they will use funds made available as a result of the increased FMAP to cover increased caseloads, offset general fund shortfalls, ensure compliance with prompt payment provisions, maintain existing populations of Medicaid recipients, avoid eligibility restrictions, increase provider payments, maintain current levels of benefits, and avoid benefit cuts. Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways and for other surface transportation projects. 
States must follow the requirements for the existing programs, and in addition, the governor must certify that the state will maintain its current level of transportation funding, and the governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. Michigan has submitted these certifications. As of April 16, 2009, the U.S. Department of Transportation had obligated $110.8 million for 27 Michigan projects. On March 31, 2009, the Governor signed state legislation authorizing the use of federal Recovery Act funds for transportation projects that are expected to create about 25,000 jobs. As of April 13, 2009, the Michigan Department of Transportation (MDOT) had advertised 16 projects totaling more than $41 million for competitive bidding. These projects included resurfacing I-196 in Grand Rapids and M-13 in Genesee County. Michigan was apportioned about $982 million for transportation projects, including $847 million for highway infrastructure investment projects and $135 million for urban and rural transit projects. MDOT was apportioned about 75 percent of Recovery Act highway infrastructure investment funds, and the remaining funds will be suballocated to metropolitan, regional, and local organizations. MDOT identified 178 road and bridge projects that would, among other things, improve road pavement conditions on 1,300 lane miles of roadways, add lanes to four major roads to reduce congestion, and perform work on 112 bridges, of which 41 are structurally deficient. According to MDOT officials, the priority was to select shovel-ready projects that could be initiated and completed quickly. 
In Michigan, Recovery Act funds are being used primarily to fund transportation projects in fiscal year 2009 that were originally scheduled to begin in fiscal year 2010 or beyond, as well as some projects that had been identified but had no source of funding. MDOT officials told us they intend to complete selecting and approving specific road and bridge projects to be funded with Recovery Act money by May 1, 2009. U.S. Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF) to be administered by the U.S. Department of Education (Education). The SFSF provides funds to states to help avoid reductions in education and other essential public services. The initial award of SFSF funding requires each state to submit an application to Education that assures, among other things, it will take action to meet certain educational requirements, such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. Michigan’s initial SFSF allocation is $1.1 billion. The Recovery Act provided State Fiscal Stabilization Funds to increase funding for education over the next several years and avoid program cuts and teacher layoffs in fiscal year 2010. The amount of funding for each of the initiatives has not yet been determined. Michigan plans to submit its application for SFSF funds on or after May 15, 2009, once the state completes its review of all program priorities for which it intends to use stabilization funds. Michigan Department of Education officials told us they consulted with local education agencies to develop plans and establish priorities for the use of SFSF funds that were consistent with the state’s priorities, policies, and programs, such as increasing support for the lowest performing schools. 
U.S. Department of Education ESEA Title I and Individuals with Disabilities Education Act (IDEA) Funds: Michigan Department of Education officials told us that although the amount of funding for each of these two initiatives has not yet been determined, they anticipate that Recovery Act funds for ESEA Title I ($390 million) and IDEA ($426 million) will generally be used to support the same priorities that are funded in part by U.S. Department of Education funds that the state now receives. The state plans to use Recovery Act funds to support specified educational outcomes—reading, mathematics, and other learning proficiencies—and foster enhanced access to education programs for special needs students. Michigan’s Department of Education also intends to use Recovery Act funds to support professional development among teachers that can help sustain achievement of educational outcomes beyond the time limits of Recovery Act funding. U.S. Department of Justice Edward Byrne Memorial Justice Assistance Grant Program: Michigan plans to apply for $67 million in Recovery Act funds for crime control and prevention activities. Michigan Department of Community Health officials told us that about $41 million of these funds will support, among other things, state efforts to reduce the crime lab backlog, funding for multi-jurisdictional courts, and localities’ efforts regarding law enforcement programs, community policing, and local correctional resources. An additional $26 million in Recovery Act funds will go directly to localities to support efforts against drug-related and violent crime. On April 13, 2009, Michigan began accepting grant applications from local Michigan jurisdictions for Byrne Justice Assistance Grant funding administered by the state and will continue to accept them until May 11, 2009. All state and local agency officials we interviewed indicated that they plan to use their existing systems to tag and track Recovery Act funding, including increased FMAP funds. 
State officials were confident that their existing processes for receiving, coding, and monitoring federal funds could be used to separately account for the use of Recovery Act funds as required by the act. For example, Michigan’s Department of Education has used the Michigan Electronic Grants System since 2001 to generate recipient reports on the use of ESEA Title I, IDEA, and State Fiscal Stabilization Funds. According to its officials, the Michigan Department of Education plans to continue to use the grants system for reporting on recipients’ use of Recovery Act funds by creating new accounting codes for Recovery Act funds. Although state government officials told us they believed that their departments have sufficient capabilities to segregate Recovery Act funds, many expressed less confidence in the capabilities of subrecipients to separately account for the use of Recovery Act funds. State officials expressed concerns about the capacity of smaller agencies and organizations to separately track and monitor Recovery Act funds. For example, Detroit Public Schools officials told us that the school district has not had a clearly specified process for segregating funds from different funding streams or for how it intends to use Recovery Act funds. According to the officials, in the last several years, the district has commingled ESEA Title I funds with its general funds, making it difficult to track the use of ESEA Title I funds and show that they were used only for allowable expenditures. In addition, according to Detroit Public Schools officials, without improvements to its oversight of these funds, Detroit Public Schools may continue experiencing oversight challenges with respect to Recovery Act funds provided through ESEA Title I and IDEA funding streams. For example, according to a July 2008 report from the U.S. 
Department of Education’s Office of Inspector General, the Detroit Public Schools district, among other things, did not always properly support compensation expenses charged to ESEA Title I funds. District officials told us that in April 2009 they hired new staff to develop corrective action plans for addressing existing internal control weaknesses. In anticipation of the opportunity to receive additional federal funding and the need to act quickly, Michigan began preparations before the Recovery Act was enacted. For example, the Governor established a working group of executive branch officials from Michigan state agencies and departments, known as Economic Recovery Coordinators (ERC), to plan for the use of anticipated Recovery Act funds. On February 13, 2009, the Governor established a Recovery Office for coordination of all Recovery Act activities, including communication with stakeholders within and outside the state. The Recovery Office is responsible for helping develop state priorities for the use of Recovery Act funds, consistent with the objectives of the Recovery Act and with the state’s identified priorities, to fully maximize the impact of these federal funds. Similarly, Detroit officials told us that they began planning in November 2008 for the receipt of Recovery Act funds and, by working closely with city departments and community action organizations, identified over 160 city projects that could be funded. Lansing School District officials told us that they began planning early for use of Recovery Act funding for the district’s 34 schools. The Recovery Office has also been working with state agencies to develop strategies for overseeing and tracking the use of Recovery Act funds to comply with requirements of the act, minimize fraud, waste, and abuse of funds, and help ensure consistent, timely, and accurate compliance with all reporting and certification requirements under the Recovery Act. 
Michigan is also maintaining a Web site on Michigan’s Recovery and Reinvestment Plan (www.michigan.gov/recovery). According to state officials, Recovery Act funds must be appropriated by the state legislature before the state is authorized to spend the money. In addition, the Michigan Senate created a special committee, known as the Senate Federal Stimulus Oversight subcommittee, to oversee Recovery Act funds. Michigan Department of Management and Budget officials told us that they are prepared to manage Recovery Act funds because they plan to use existing processes for purchasing goods and services. For example, Michigan will use existing processes to obtain competitive bids for contracts awarded by state agencies under the Recovery Act in accordance with state law, which state officials described as requiring competitive bids, with certain exceptions such as emergencies or situations requiring imminent protection. In January 2009, Michigan created a prequalification program for vendors to provide an inventory of prequalified vendors ready to respond quickly to bids for work that will spend Recovery Act funds. As part of preparing to spend Recovery Act funds, Michigan Department of Management and Budget officials also told us they have been looking at ways to further streamline the contract award process. Michigan also allows local units of government to join state contracts to leverage the state’s negotiating and purchasing power. Michigan will continue to use existing internal controls to provide assurance over Recovery Act spending, including ongoing self-assessments of controls by major state departments that are next due to the state Auditor General on May 1, 2009. The self-assessments include identifying internal control and programmatic weaknesses and developing and tracking actions taken in response to corrective action plans. 
The state Auditor General told us his office will include specific audit procedures to address Recovery Act funding as part of the planned procedures for its ongoing federal Single Audits of state departments, which will start again in July 2009. However, the state Auditor General does not yet have specific plans to audit Recovery Act funds. The state Auditor General’s Single Audit approach is to audit and report on individual state departments. Approximately one-half of Michigan’s 18 departments are audited each year, with the audits covering 2 fiscal years of departmental activity. Recent state Auditor General Single Audit Act reports identified numerous material weaknesses in key state operations that are slated to receive significant amounts of Recovery Act funds. For example, the state Auditor General reported in August 2007 that, for fiscal years 2005 and 2006, Michigan’s Department of Human Services did not materially comply with federal program requirements regarding allowed or unallowed costs, subrecipient monitoring, and eligibility. The October 2008 Single Audit report on Michigan’s Department of Community Health stated that internal controls were not sufficient to ensure the accuracy of financial accounting and reporting and compliance with federal requirements for 10 of 11 major programs. The Michigan Auditor General’s oversight responsibilities do not include most subrecipients that receive federal funding, so any upfront safeguards to track or ensure accountability over Recovery Act funds going directly to localities have not been determined. Officials from Detroit’s Office of the Auditor General told us that they intend to audit the use of Recovery Act funds. The superintendent of the Lansing School District told us that the district, along with the state’s other 840 local school districts, contracts with independent public accountants to perform annual financial statement audits. 
A lack of staff, and uncertainty about whether funding available under the Recovery Act can be used to oversee the use of federal funds, may pose challenges for Michigan. Michigan officials reported that a hiring freeze may prevent some state agencies from hiring staff to increase their Recovery Act oversight efforts. Officials with the state’s Departments of Community Health and Education and the Lansing School District are concerned about whether administrative resources will be available to cover increased oversight of the use of Recovery Act funds. For example, the state Department of Community Health said that because it has been downsizing for several years through attrition and early retirement, it does not have sufficient staff to cover its current responsibilities and that further reductions are planned for fiscal year 2010. However, state officials told us that they will take the actions necessary to ensure that state departments have the capacity to provide proper oversight and accountability for Recovery Act funds. Michigan officials we spoke with in March 2009 wanted additional federal guidance related to state responsibilities and reporting requirements under the Recovery Act and expressed concern about spending funds before they had received such guidance. For example, officials were unclear about the state’s responsibilities for tracking or reporting on funds that go directly to local entities, such as transportation funding going directly to localities for urban transit. In addition, Michigan Department of Education officials expressed concern about the lack of guidance from the U.S. Department of Education regarding several aspects of how to manage the receipt, allocation, use, and reporting of Recovery Act funds. In particular, state officials said they had not yet received guidance on tracking funds under IDEA, Part C and were concerned that recipients of grant funds might report information inconsistently. On April 1, 2009, the U.S. 
Department of Education issued additional guidance on the use of Recovery Act funds. Michigan may face challenges in assessing the impact of Recovery Act funds because the state’s financial management system is old and does not have the capability to track such impacts; as a result, the state will have to rely upon its agencies to do so. Furthermore, state officials said they are aware of the requirement that the state measure the extent to which certain Recovery Act funds create jobs and promote economic growth and have identified prospective participants to estimate the impact of Recovery Act funds. State officials also told us that the state information technology group will implement a database system at the end of April 2009 that will support its financial management system in recording the impact of Recovery Act funds. They told us that the Michigan Economic Development Corporation, universities in the state, and other experts in economic modeling are expected to participate in prospective analysis of the potential impact of Recovery Act funds on a project basis. Additionally, the Department of Energy, Labor and Economic Growth and the state Treasurer will also be involved in analysis related to the impact of Recovery Act funds. We provided the Governor of Michigan with a draft of this appendix on April 17, 2009. Michigan’s Recovery Czar responded for the Governor on April 20, 2009, stating that staff in the Michigan Governor’s office and the Michigan Economic Recovery Office have reviewed the draft appendix and, in general, agree with its overview of the state’s preparations for receiving and spending Recovery Act funding. These officials provided technical comments on the draft, which were incorporated as appropriate. In addition to the contacts named above, Robert Owens, Assistant Director; Jeffrey Isaacs, Analyst-in-Charge; Manuel Buentello; Leland Cogliani; Anthony Patterson; and Mark Ward made major contributions to this report. 
Use of funds: An estimated 90 percent of Recovery Act funding provided to states and localities nationwide in fiscal year 2009 (through Sept. 30, 2009) will be for health, transportation, and education programs. The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage (FMAP) awards, the State Fiscal Stabilization Fund, and highways. Medicaid Federal Medical Assistance Percentage (FMAP) Funds: As of April 1, 2009, the Centers for Medicare & Medicaid Services (CMS) had made about $225.5 million in increased FMAP grant awards to Mississippi. As of April 1, 2009, the state had drawn down $114.1 million, or just more than 50 percent, of its initial increased FMAP grant awards. State officials reported that they plan to use funds made available as a result of the increased FMAP to cover their increased Medicaid caseload and to offset expected state budget deficits due to lower general fund revenue collections. On March 2, 2009, the U.S. Department of Transportation apportioned Mississippi about $355 million for highway infrastructure investment. As of April 16, 2009, the U.S. Department of Transportation had obligated approximately $137 million for 32 projects. As of April 1, 2009, Mississippi had signed contracts for 10 projects totaling approximately $77 million. The Mississippi Department of Transportation (MDOT) used a competitive and transparent process to select projects. These projects include activities such as road construction and road maintenance. U.S. Department of Education State Fiscal Stabilization Fund (Initial Release): On April 2, 2009, the U.S. Department of Education allocated Mississippi about $321 million from the initial release of these funds. Before receiving the funds, states are required to submit an application that provides several assurances to the Department of Education. 
These include assurances that they will meet maintenance of effort requirements or will be able to comply with waiver provisions and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. Mississippi plans to submit its application for state fiscal stabilization funds after it receives and reviews the final program guidance. Mississippi expects to use these funds to help restore funding for elementary, secondary, and public higher education to prior levels in order to minimize reductions in education services in fiscal years 2009, 2010, and 2011. The state does not foresee having leftover funds for additional subgrants to local education agencies. Mississippi is receiving additional Recovery Act dollars to fund other programs, including employment and training programs under the Workforce Investment Act, capital and management activities under the Public Housing Capital Fund, and gap financing for low-income housing tax credit projects under the Taxpayer Credit Assistance Program. The status of Mississippi’s plans for using these funds is described throughout this appendix. Safeguarding and transparency: The State Auditor’s office has taken steps to ensure accountability. For example, the office hosted a meeting with state agency heads to discuss accountability requirements and expectations, and the office plans to conduct training seminars on accounting for and controlling the use of Recovery Act funds. In addition, officials with the auditor’s office said Mississippi plans to add special accounting codes to the statewide accounting system in order to track the expenditure of Recovery Act funds. The state also plans to publicly report Recovery Act spending that state agencies receive directly. 
State officials noted that the statewide accounting system would not capture those funds that the federal government allocates directly to local and regional governmental organizations, nonprofit organizations, or higher education entities. According to the Governor’s office, the state is developing a framework that would require these entities to report Recovery Act revenues and expenses to a central website. Assessing the effects of spending: According to state officials, they are waiting for the federal government to provide more specific guidance for measuring job creation and retention. For example, the officials noted that the federal government’s Office of Management and Budget (OMB) should provide more guidance for estimating job creation and retention. Mississippi has begun to use some of its Recovery Act funds, as follows. Increased Federal Medical Assistance Percentage Funds: Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the Federal Medical Assistance Percentage (FMAP). Across states, the FMAP may range from 50 percent to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. 
Generally, for federal fiscal year 2009 through the first quarter of federal fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of the increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. Under the Recovery Act, Mississippi’s FMAP will increase to 83.62 percent, an increase of 7.33 percentage points over its fiscal year 2008 FMAP. As of April 1, 2009, Mississippi had drawn down $114.1 million, or just more than 50 percent, of its initial increased FMAP grant awards. Mississippi officials plan to use funds made available as a result of the increased FMAP to cover their increased Medicaid caseload and to offset expected state budget deficits due to lower general fund revenue collections, thereby avoiding cuts in services. Mississippi officials indicated that simplifications to CMS expenditure reporting systems are needed to automatically generate the increased FMAP applicable to qualifying expenditures. Officials also reported a need for CMS guidance regarding programmatic changes that have been made to the state’s Family Planning Waiver since July 1, 2008, and whether these changes affect the state’s ability to draw down the increased FMAP. 
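The Mississippi FMAP figures cited above are internally consistent, as the following back-of-the-envelope check shows. The split of the 7.33-percentage-point increase into the 6.2-point across-the-board component and a roughly 1.13-point remainder attributable to the unemployment-related adjustment is an inference from the cited numbers, not a reproduction of the actual quarterly formula, which is more involved:

```python
# Back-of-the-envelope check of the Mississippi FMAP figures cited in the text.
# The actual quarterly calculation (including the unemployment adjustment) is
# more involved; this only confirms the cited numbers add up.
fy2008_fmap = 83.62 - 7.33           # implied prior-year base: 76.29 percent
across_the_board = 6.2               # general increase applied to all states
remainder = 7.33 - across_the_board  # ~1.13 points beyond the general increase
assert round(fy2008_fmap + across_the_board + remainder, 2) == 83.62

# Share of the initial increased FMAP grant awards drawn down as of April 1, 2009
# ($ millions): "just more than 50 percent."
share_drawn = 114.1 / 225.5
assert 0.50 < share_drawn < 0.51
```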
Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways and for other surface transportation projects that could affect highways. States must follow the requirements for the existing programs, and in addition, the governor must certify that the state will maintain its current level of transportation spending, and the governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. Mississippi’s Governor provided this certification in a letter dated March 17, 2009. The Governor noted that transportation spending authority in Mississippi is granted annually by the state Legislature to the Mississippi Department of Transportation (MDOT), which operates under the guidance of independently elected transportation commissioners. As such, MDOT’s Executive Director also provided this certification. As of April 1, 2009, MDOT had signed contracts for 10 projects totaling approximately $77 million. The agency used a transparent and competitive process for awarding contracts for these projects. MDOT issued an advance notice on its Web site to inform contractors of the opportunity to bid on the projects. Furthermore, MDOT used cost as a key criterion for awarding contracts: MDOT awarded each contract to the lowest bidder, provided that the lowest bid did not exceed the state’s cost estimate for the project by more than 10 percent. These projects include the expansion of State Route 19 in eastern Mississippi into a four-lane highway. 
This project fulfills part of MDOT’s 1987 Four-Lane Highway Program, which seeks to link every Mississippian to a four-lane highway within 30 miles or 30 minutes. In addition, MDOT plans to upgrade a section of a major road, US-78, which runs across northern Mississippi. An MDOT official anticipated the project would have major economic benefits for Mississippi. U.S. Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF), to be administered by the U.S. Department of Education (Education). The SFSF provides funds to states to help avoid reductions in education and other essential public services. The initial award of SFSF funding requires each state to submit an application to Education that assures, among other things, that the state will take actions to meet certain educational requirements, such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. Mississippi’s initial SFSF allocation is about $321 million. The Recovery Act specifies that 81.8 percent is to be used for support of elementary, secondary, and postsecondary education, and early childhood programs. The Recovery Act also authorizes the Governor to use 18.2 percent of these funds for “public safety and other government services,” which may include education. Mississippi’s Governor has not yet announced specific plans for the use of these other government services funds. According to state education officials, Mississippi will file its application for these funds after receiving and reviewing sufficient guidance. The funds will be appropriated to the state education agencies by the Mississippi State Legislature when it returns to session later this spring. The funding is expected to be used to stabilize education budgets in fiscal years 2009, 2010, and 2011 to help avoid reductions in education services. 
Restoring funding in those years to required levels is expected to consume all of the stabilization funds to be received by the state. Mississippi began planning in February 2009 for how the state would provide oversight of Recovery Act funding. Officials from the Governor’s Office said that the state did not establish a new office to provide statewide oversight of Recovery Act funding, in part because they did not believe that the act provided states with funds for administrative expenses—including additional staff. The Governor’s Director of Federal Policy is serving as the stimulus coordinator for the state, with support from a loaned executive from a statewide business development association. The stimulus coordinator told us she met individually with state agency heads to discuss their plans for spending funds allocated under the Recovery Act. In late March 2009, the Governor submitted a letter certifying that Mississippi would request funds available under the Recovery Act and that such funds would be used to create jobs and promote economic growth. The Governor added in the certification letter that the state would continue to examine the various guidelines and fund-specific requirements associated with the Recovery Act funds. In April 2009, the Governor hosted a Mississippi Stimulus Summit where state agency heads provided information on the detailed steps that were already being taken or were planned regarding the use of Recovery Act funds. Finally, the Governor established a state stimulus Web site (www.stimulus.ms.gov) to provide information to the public on the Recovery Act funding received by the state. Mississippi officials plan to use the anticipated $2.8 billion in Recovery Act funding to address fiscal challenges the state has experienced due to a weakened economy. State officials reported that Mississippi entered a recession in late 2008. 
One indicator of Mississippi’s weakened economy is the state’s unemployment rate, which was 8.7 percent in January 2009 compared with 6.9 percent in June 2008. The state’s weakened economy has also resulted in lower-than-expected tax revenues for the state’s current fiscal year. According to the Governor, Mississippi’s Revenue Estimating Committee projected that the state’s fiscal year 2009 general fund revenue will fall $301 million, or 5.9 percent, short of expectations. In response to anticipated budget shortfalls, the Governor made two cuts to most state agency budgets. In November 2008, the Governor cut most agency budgets by 2 percent, or $42 million. In January 2009, the Governor cut state agencies’ budgets by an additional $158.3 million, bringing the total cuts to date for the fiscal year to $200 million. Each agency or department received a budget cut of up to 5 percent (see table 8). Although the Governor anticipated that Congress would pass a stimulus package, he ordered the cuts in agency budgets to comply with state law that requires a balanced budget for the fiscal year, which ends on June 30. To mitigate the impact of economic fluctuations on state revenues, Mississippi has historically set aside 2 percent of projected revenues into a budget stabilization fund. In 2008, however, the state did not set aside any revenues for this fund, which made available an additional $100 million for Mississippi’s 2009 fiscal year budget. Going forward, Mississippi faces budgetary challenges for fiscal year 2010. According to the Governor, the state’s Revenue Estimating Committee projects that Mississippi’s revenues will be $402.7 million, or 7.9 percent, short of expectations. State officials anticipate that the recession will increase the demand for certain government services, including unemployment benefits, Medicaid, food stamps, and rental assistance. Some Mississippi officials believe that the state’s recession could continue through fiscal year 2012. 
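As a quick consistency check on the Mississippi budget figures cited above (all dollar amounts in millions, taken from the text; the implied general fund estimate is an inference from the cited shortfall, not a number stated in the report):

```python
# Consistency check of the budget-cut and shortfall figures cited in the text.
nov_2008_cut = 42.0   # ~2 percent cut to most agency budgets, November 2008
jan_2009_cut = 158.3  # additional cut, January 2009
total_cuts = nov_2008_cut + jan_2009_cut
assert round(total_cuts, 1) == 200.3  # reported as roughly $200 million to date

# A $301 million shortfall described as 5.9 percent of expectations implies a
# fiscal year 2009 general fund estimate of roughly $5.1 billion (inferred):
implied_fy2009_estimate = 301 / 0.059
assert 5000 < implied_fy2009_estimate < 5200
```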
Most of the Recovery Act funds that Mississippi will receive are directed toward education, Medicaid, and transportation programs (see fig. 9). According to the Governor’s office, state law provides for state agencies to escalate their spending plans to account for federal funds received under the Recovery Act. State officials also told us that the Legislature was considering adding further escalation language to the current fiscal year’s appropriations bills that would authorize state agencies to spend any Recovery Act funds received. The Legislature normally conducts its regular session between the beginning of January and the end of March. However, the Legislature recessed early during the 2009 regular session in part because of uncertainty regarding how the state’s portion of Recovery Act funds should be spent. The Legislature plans to reconvene in early May 2009 to complete its work on the state’s fiscal year 2010 budget. Officials with the State Auditor’s office told us that special accounting codes will be added to the Statewide Automated Accounting System (SAAS) in order to track the expenditure of Recovery Act funds. The state also plans to publicly report Recovery Act spending that state agencies receive directly. However, state officials noted that SAAS would not track Recovery Act funds allocated directly to local and regional governmental organizations, nonprofit organizations, or higher education entities. For example, cities with a population of more than 50,000 residents can apply directly to federal agencies for certain programs, such as Community Development Block Grants. In addition, Mississippi has 10 regional planning and development districts that may receive funding directly from federal agencies. 
Finally, Mississippi localities may receive Recovery Act funds directly from the Appalachian Regional Commission or Delta Regional Authority, federally chartered regional commissions charged with promoting economic development in certain parts of the state. According to the Governor’s office, the state is developing a framework that would require these entities to report Recovery Act revenues and expenses to a central website. A few state agencies have made spending decisions for Recovery Act fund apportionments received: The Mississippi Department of Employment Security (MDES) received about $40.7 million in Recovery Act funding for adult, dislocated worker, and youth activity programs under the Workforce Investment Act. MDES officials told us they planned to use the youth activity funding to provide summer youth programs across the state. The Jackson Public Housing Authority received a $1.1 million allocation to its Public Housing Capital Fund from the Department of Housing and Urban Development (HUD) for capital and management activities, including modernization and development of public housing projects. The officials told us they planned to use the Recovery Act allocation to fund projects already included in their 5-year Capital Fund Plan—for instance, one project will redevelop housing in Jackson’s North Midtown Community. The Mississippi Home Corporation (MHC) was allocated approximately $21.9 million to provide additional gap financing to Low Income Housing Tax Credit (LIHTC) projects under the Taxpayer Credit Assistance Program (TCAP). MHC officials told us they had provided an initial notice to developers of LIHTC projects in the state about the additional funding provided under the Recovery Act for the TCAP but were waiting for HUD to issue final guidance before releasing details on their plans for administering the Recovery Act funding. The State Auditor’s office has taken and plans to take a number of steps to establish accountability. 
For example, in March 2009 the office hosted a meeting with staff from state agencies that are expected to receive Recovery Act funds to discuss accountability requirements and expectations. The office is also planning to conduct training seminars for local officials and others on accounting for and controlling the use of Recovery Act funds. Overall, the State Auditor believes the state has adequate controls for the use of Recovery Act funds but is concerned that the funding of new programs and the significant increase in funding of current programs will stress the control system. In addition to the State Auditor, a legislative oversight committee and internal audit offices within each agency may provide oversight of Recovery Act funds. For example, in March 2009, legislative committee staff said they had begun tracking the Recovery Act, the state’s Recovery Act-related legislation, and the funding provided to Mississippi. Mississippi’s most recent Single Audit Act findings highlight two material weaknesses in internal control over financial reporting at one state agency that will receive Recovery Act funds. In its Single Audit report for fiscal year 2008, the State Auditor found that the Mississippi Department of Employment Security did not record the tax liens receivable account and corresponding Unemployment Insurance Premiums revenue account on the department’s financial statements in accordance with generally accepted accounting principles. As a result, the State Auditor proposed, and management made, an audit adjustment of approximately $35.5 million to properly state the department’s current year financial statements. In addition, the State Auditor found that the department’s internal controls over its tax lien receivable system were inadequate, and management proposed audit adjustments totaling approximately $6.4 million to properly state the department’s tax lien receivables. 
The State Auditor also identified one material weakness in internal control over compliance at the Mississippi Department of Human Services: the department failed to verify and document compliance with Davis-Bacon Act requirements for the Social Services Block Grant, which could result in questioned costs and funds due back to the federal granting agency. State officials stated that the Recovery Act does not provide funding to oversight entities, even though the federal government expects states to ensure accountability and transparency over expenditures. For example, officials from the State Auditor’s office told us they had experienced significant staff turnover in recent years and relied on less-experienced staff to conduct audit work. In addition, the Lieutenant Governor expressed concern about whether the State Auditor could be funded to conduct additional Recovery Act-related auditing responsibilities, as was done for Hurricane Katrina-related oversight. Officials from the State Auditor’s office added that they normally charged the audited agency for the cost of audit services provided, but they were not sure whether Recovery Act funds could be used for this purpose. The State Auditor noted that the office would like to hire certified public accounting firms to conduct Recovery Act oversight work rather than increase staff. Further, the officials noted that OMB should provide guidance regarding state-level oversight, auditing, and administrative costs—such as how these costs should be paid for and with what funds. The legislative oversight committee also expressed concerns about the capabilities of the State Auditor’s office and some state agency internal audit functions. 
For example, in a recent report, the committee noted that low staffing levels and high turnover in the Financial and Compliance Audit Division of the office’s Department of Audit had resulted in a decreased experience level among audit staff and reduced institutional knowledge to draw on in forming auditor judgment. In addition, the committee noted that there were limitations in the internal audit functions of some state agencies—for instance, state law required 19 agencies to establish an internal audit function, but only 13 had done so as of December 2008. Further, the committee reviewed the internal audit functions of 8 agencies and found that most focused on reviewing agency programs rather than testing internal controls. Finally, the committee found that the Executive Director of each of these agencies reviewed and approved the plans for the agency’s internal audit function, which could limit the internal auditor’s freedom to determine the internal controls tested and programs reviewed. Officials from the State Auditor’s office recommended that the federal government provide specific guidance for reporting on the use of Recovery Act funds to support job creation or retention because the reliability of such estimates depends critically on using a solid methodology. Furthermore, the officials recommended that OMB provide clear definitions of time-limited, part-time, full-time, and permanent jobs. Another concern was how to report on jobs created from the use of funds for programs such as unemployment benefits, food stamps, and Medicaid. These funds make up a large portion of the Recovery Act funding, but, according to state officials, the purpose of these programs is not job creation and retention. The State Auditor’s office also expressed concerns about data reliability. 
For example, staff noted that standardization of data was lacking and that the various decentralized reporting mechanisms, while certainly cheaper and less burdensome on state agencies, will not likely provide meaningful data on the impact of Recovery Act funds. Additionally, the staff noted that, if state agencies require their subrecipients to provide nonstandardized and nonuniform data, it will be difficult to identify trends at the state level. They also expressed concern that decentralized reporting would bypass state-level accountability efforts. Ultimately, they said state-level, centralized reporting using standardized and uniform data collection elements would be beneficial for state and federal oversight and would raise both the actual and perceived level of accountability. As an example of state efforts to assess the impact of Recovery Act funds, MDOT hired a contractor to conduct an economic impact analysis of projects MDOT had preselected to receive Recovery Act funding. According to one of the contractor’s staff, these projects were preselected on the basis that they were “shovel ready” during the first 90 days of the state’s receiving stimulus funds. The contractor used a forecasting model to measure the impact that an estimated $726 million in transportation stimulus funding would have on the state of Mississippi with regard to increased economic spending and the number of jobs from 2009 through 2011. We provided the Governor of Mississippi with a draft of this appendix on April 17, 2009. The Director of Federal Policy, who serves as the stimulus coordinator, responded for the Governor on April 20, 2009. The official provided technical suggestions that were incorporated, as appropriate. In addition to the contacts named above, Chris Keisling, Assistant Director; Marshall Hamlett, Analyst-in-Charge; David Adams; Michael O’Neill; Carrie Rogers; and Erin Stockdale made major contributions to this report. 
Use of funds: An estimated 90 percent of Recovery Act funding provided to states and localities nationwide in fiscal year 2009 (through Sept. 30, 2009) will be for health, transportation, and education programs. The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage (FMAP) awards, the State Fiscal Stabilization Fund, and highways. Medicaid Federal Medical Assistance Percentage (FMAP) Funds: As of April 3, 2009, the Centers for Medicare & Medicaid Services (CMS) had made about $550 million in increased FMAP grant awards to New Jersey. As of April 1, 2009, the state had drawn down $362.2 million, which is almost 66 percent of its awards to date. Officials stated that the funds made available as a result of the increased FMAP allow the state to cover the increase in caseload and maintain current populations and benefits. In addition, these funds will help balance the state’s budget and allow the state to eliminate premiums for children in families with incomes less than 200 percent of the federal poverty level in New Jersey’s State Children’s Health Insurance Program. New Jersey has begun to use some of its Recovery Act funds, as follows. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, CMS made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. Generally, for fiscal year 2009 through the first quarter of fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services.
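The three-part quarterly calculation described above can be sketched in a few lines. This is a simplified illustration; the function and parameter names are assumptions for the example, and the authoritative calculation is performed by CMS:

```python
# Minimal sketch of the Recovery Act increased-FMAP computation: hold
# harmless at the prior-year rate, add 6.2 percentage points across the
# board, then add any unemployment-related increase. All inputs are in
# percentage points.

def recovery_act_fmap(prior_year_fmap, regular_fmap, unemployment_addon=0.0):
    """Return a state's Recovery Act FMAP for a quarter.

    (1) Hold harmless: the state keeps at least its prior-year FMAP.
    (2) Across-the-board increase of 6.2 percentage points.
    (3) Further add-on for a qualifying rise in unemployment, if any.
    """
    held_harmless = max(prior_year_fmap, regular_fmap)  # maintenance of prior-year FMAP
    return held_harmless + 6.2 + unemployment_addon

# Illustrative: a state whose regular FMAP slipped from 50.0 to 49.5 percent
# keeps the 50.0 base and receives at least a 56.2 percent federal match.
print(recovery_act_fmap(50.0, 49.5))
```

The hold-harmless step matters because a state's regular FMAP can fall as its relative per capita income rises; under the Recovery Act such a state still builds the 6.2-point increase on its higher prior-year rate.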
However, the receipt of the increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. One such use is the elimination of premiums in New Jersey’s State Children’s Health Insurance Program (SCHIP). This will help the state retain children in SCHIP who would otherwise be terminated from the program for nonpayment of premiums. Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways and for other surface transportation projects that could affect highways. States must follow the requirements for the existing programs, and in addition, the governor must certify that the state will maintain its current level of highway spending, and the governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. New Jersey provided these certifications but noted that the state’s level of funding was based on the best information available at the time of the state’s certification. NJDOT has directed some of its Recovery Act transportation funding to 3 large projects, including one in an economically distressed area. As of April 16, 2009, 10 projects totaling about $269.5 million had been put out for bid through a competitive process. NJDOT officials estimate that Recovery Act funds will save the state about $100 million in interest charges over 12 years for one of the selected projects, as the state will not have to borrow to start and complete it. Not all of the selected projects were on the State Transportation Improvement Plan (STIP), but New Jersey, in consultation with the Federal Highway Administration, amended its STIP to include all of the selected projects. U.S.
Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF), to be administered by the U.S. Department of Education (Education). The SFSF provides funds to states to help avoid reductions in education and other essential public services. The initial award of SFSF funding requires each state to submit an application to Education that assures, among other things, it will take actions to meet certain educational requirements, such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. The state expects to receive $891.4 million in SFSF funds, about 82 percent of which is for education and about 18 percent of which is for the state to use for “public safety and other government services.” State officials said that, pending a New Jersey Supreme Court decision on the state’s new education funding formula, the SFSF funds for primary education would follow that formula. The state’s use of SFSF funds for higher education is unclear. The Governor’s Chief of Staff stated that New Jersey is currently trying to determine what portion of the SFSF education and other government services funds will be used for higher education and will not submit its application for SFSF funding until it completes this determination. New Jersey expects those determinations to be made sometime in April. The state expects that the receipt of stabilization funds will help balance its fiscal year 2009 budget and avoid layoffs or tax increases. Local school districts are currently planning their upcoming fiscal year budgets and would like to know how the Recovery Act funds would complement their upcoming school spending. On April 1, 2009, Education issued guidance to the states on how Recovery Act funds could be used for education.
State officials are continuing to review the guidance and on April 16, 2009, issued guidance to local school districts outlining each district’s allocation of additional funds made available under the Recovery Act for programs authorized under Title I of the Elementary and Secondary Education Act and the Individuals with Disabilities Education Act. Transportation—Urban/Rural Transit Capital Assistance and Fixed Guideway Modernization Grants: New Jersey Transit (NJT), the primary public operator of bus and commuter rail transit lines in the state, was apportioned all of the Recovery Act funds for transit for New Jersey, which amounted to about $425 million in three pre-existing federal transit programs. NJT has selected 15 projects that will use Recovery Act funds, all of which were on its 20-year capital plan. About 70 percent of the funds are allocated to capacity expansion and improvement projects, with the remainder allocated to maintenance projects, as NJT’s regular funds are concentrated on safety, security, and maintenance needs. According to NJT officials, NJT can move quickly to use these funds because the Federal Transit Administration (FTA), through its preaward authority, will reimburse the agency for funds expended for the selected projects, even though the funding for those projects has not yet been obligated by the FTA. The largest allocation of NJT’s Recovery Act funds ($130 million) will be used toward designing and undertaking some construction activity for new train tunnels under the Hudson River. The tunnels are expected to double the number of NJT trains going into and out of New York City. Housing and Urban Development—Housing Capital Assistance: HUD allocated approximately $104 million to 86 public housing authorities in New Jersey for capital and management activities, including modernization and development of public housing developments.
Officials from the Newark Housing Authority (NHA), which is receiving an allocation of about $27.4 million, told us they planned to use the allocation to fund projects already included in their 5-year capital plan, including rehabilitating 700 vacant units and 300 occupied units, which will generate income and additional HUD subsidies to NHA and provide new and improved affordable units for additional families. Justice—Edward Byrne Memorial Justice Assistance Grants: State officials expect to receive a Recovery Act allocation of $48 million from the Byrne Justice Assistance Grant Program. Local law enforcement officials stated that this program may provide for some additional facilities and other law enforcement equipment. For example, the Trenton Police Department is planning to use its Byrne Justice Assistance Grant funds on projects that will enhance its crime reduction efforts by sharing information with the Mercer County Prosecutor’s Office and enhancing the department’s forensic crime analysis capabilities. In contrast, according to Newark’s Chief of Police, the amount of Byrne Justice Assistance Grant funds allocated to the Newark Police Department may be sufficient to provide some new equipment but not fund a major capital program. Anticipating even less revenue in fiscal year 2010, which begins on July 1, 2009, the Governor has proposed a $29.8 billion budget. According to the Governor, if New Jersey did nothing to curtail growth in state spending or adjust its mandatory obligations, the fiscal year 2010 budget would be about $36 billion, or $7 billion above anticipated revenues. In response to declining revenue, the Governor has proposed about $4 billion in cuts to programs, rebates, pension payments, and state worker personnel costs. In all, more than 850 line items in the budget have been cut. The largest cuts will come from scaling back state rebates of local property taxes by $500 million and reducing state payments to the pension fund by $895 million.
The Governor is also proposing to save $400 million in personnel costs through a wage freeze and furloughs for state employees, avoiding an otherwise anticipated layoff of up to 7,000 state workers. Some New Jersey officials began preparing for receipt of Recovery Act funds prior to passage of the Recovery Act. In November 2008, anticipating federal stimulus spending for infrastructure, the Governor asked NJDOT to identify projects that could be ready for federal funding and quick implementation. NJDOT officials identified about $1.4 billion in potential eligible projects but had to scale this list back to meet New Jersey’s eventual apportionment of Recovery Act transportation funds. The city of Newark also prepared a process with evaluative criteria for selecting local projects for Recovery Act funds before the Recovery Act was enacted. Recovery Act funds are also reflected in the Governor’s proposed fiscal year 2010 budget, which is currently being debated by the state legislature. New Jersey officials have been and are planning to continue submitting certifications for the state’s use of Recovery Act funds. The Governor issued a certification memo to the Secretary of the U.S. Department of Transportation that the state would maintain its efforts with regard to state funding for the types of projects funded under the Recovery Act. Other local officials told us they would issue or had issued similar certifications for Recovery Act funds for which they are directly responsible. For example, NHA staff told us their Executive Director signed a certification letter for the Recovery Act funds that the NHA was responsible for. Entities with oversight roles include the state Inspector General, who is responsible for investigations of fraud related to state government, and the internal audit offices that exist within most agencies, including the state Medicaid Inspector General and the contract compliance audit units within the Division of Purchase and Property (DPP) and the Division of Property Management and Construction (DPMC).
According to the state’s Comptroller, the legislature’s State Commission on Investigation, which is concerned with investigations on enforcement of state law, particularly regarding racketeering and organized crime, will also be among the agencies helping to ensure that the state’s public employees who administer Recovery Act funds do so effectively and in compliance with federal or state requirements. In addition, the state legislature, state agencies, and many local entities (e.g., housing authorities, school districts, and metropolitan planning organizations) also have a role in overseeing these funds. As described by state officials, Recovery Act funds must be used by state agencies pursuant to appropriation by the state legislature, and Recovery Act funds were appropriated in legislation enacted in March 2009. Under that legislation, the specific programs and activities conducted by those agencies with Recovery Act funds are also subject to approval by the legislature’s Joint Budget Oversight Committee. However, according to state officials, any Recovery Act funds directly received by local governments or other entities in the state would not be budgeted or appropriated by the state legislature. State officials describe New Jersey as a strong “home rule” state and its constitution as giving localities many rights and responsibilities for providing local services. Accordingly, New Jersey has more than 1,900 cities, counties, towns, townships, and local authorities or taxing districts, including 86 housing authorities, 566 municipal governments, and 616 school districts that can apply for, use, and potentially be held accountable for Recovery Act funds. Concerns about mandated limitations on compensation practices and proficiency targets for state assessments have been raised. Additionally, the state exercises significant oversight over the three school districts that are under state control, including reviewing and controlling their budgets. The U.S.
Department of Education and the county superintendent have the authority to review these school districts’ budgets as well. Further, according to the Governor’s Chief of Staff, because the state already funds local school districts with $8.8 billion in state funds, ensuring accountability for the use of state funds by school districts is not a new challenge to the state oversight agencies. Many of the state and local agencies interviewed stated that their current accounting systems can track Recovery Act funds by program and project and can generate reports showing the use of those funds: Both the Newark and Trenton Housing Authorities stated that they use the Line of Credit Control System (LOCCS) accounting system, which HUD uses to provide funds to public housing authorities. LOCCS includes special accounting codes under which housing authorities can track Recovery Act funds by program and by type of use. Housing authorities can also use LOCCS to generate the required reports back to HUD showing how they have used Recovery Act funds. Both NJDOT and NJT stated that their accounting systems can track Recovery Act funds separately from their regular funds because they have created separate accounting codes to track these funds. Furthermore, most of the selected projects will be funded primarily with Recovery Act funds, making the process of tracking them easier. DPP officials told us they were confident no special enhancements were needed to their accounting software, although they would monitor the accounting system to ensure it was functioning properly. DPP will also publicly advertise bids for projects funded with Recovery Act funds, include terms and conditions in each request for proposals and contract for these projects specifying the detailed reporting required by the act, and post contract award notices for Recovery Act-funded projects. To track increased FMAP funds, New Jersey has established a discrete identifier in the state accounting system.
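The separate-accounting-code approach these agencies describe can be illustrated with a minimal sketch. The codes, programs, and amounts below are hypothetical and are not drawn from the state's actual systems; the point is only that tagging each ledger entry with a discrete fund identifier lets Recovery Act spending be reported separately from regular funds:

```python
# Sketch of fund tracking via discrete accounting codes: entries tagged
# with a Recovery Act code can be filtered out of the general ledger and
# aggregated by program for reporting.

from collections import defaultdict

ARRA_CODES = {"ARRA-HWY", "ARRA-FMAP"}  # hypothetical Recovery Act fund codes

ledger = [
    {"code": "ARRA-HWY",  "program": "Highways", "amount": 1_200_000},
    {"code": "STATE-HWY", "program": "Highways", "amount":   900_000},  # regular funds, excluded
    {"code": "ARRA-FMAP", "program": "Medicaid", "amount": 5_000_000},
]

def recovery_act_report(entries):
    """Total Recovery Act spending by program, ignoring regular-fund entries."""
    totals = defaultdict(int)
    for entry in entries:
        if entry["code"] in ARRA_CODES:
            totals[entry["program"]] += entry["amount"]
    return dict(totals)

print(recovery_act_report(ledger))
# {'Highways': 1200000, 'Medicaid': 5000000}
```

Because the filter runs on the code alone, regular state funds in the same program (the STATE-HWY entry above) never contaminate the Recovery Act totals, which is the property the agencies rely on when most of a project is funded from a single source.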
The state has begun the process of adjusting systems so that the additional FMAP funds can be tracked and monitored by specific service category. Despite these adjustments, tracking of these funds will not be dramatically different from how the state tracks funds for its overall budget. Additionally, the state is monitoring increased FMAP funds and comparing them against actual expenditures. According to New Jersey officials, the state is also monitoring unemployment levels to anticipate and project future FMAP levels. New Jersey has not increased its number of state auditors or investigators, and there has not been an increase in funding specifically for Recovery Act oversight. Additionally, the state hiring freeze has not allowed many state agencies to increase their Recovery Act oversight efforts. For example, despite an increase of $469 million in Recovery Act funds for state highway projects, no additional staff will be hired to help administer those projects or to handle tasks directly associated with the act, such as reporting on the number of jobs that the Recovery Act funds created. While NJDOT has committed to shift resources to meet any expanded need for internal Recovery Act oversight, currently one person is responsible for reviewing contractor-reported payroll information for disadvantaged business enterprises, ensuring compliance with Davis-Bacon wage requirements, and verifying job creation figures. In 2005, the state Inspector General’s review of the now-dissolved School Construction Corporation, which was responsible for more than $8.66 billion in school construction funds, found the authority had “weak financial controls, glaring internal control deficiencies and lax or non-existent oversight and accountability” after it had disbursed $4.3 billion in contracts and approved approximately $540 million in changes to those contracts. In its place, in 2007, the state created a Schools Development Authority with a completely different management and accountability structure.
State officials noted that certain towns and cities, as well as regional planning organizations, can apply for and directly receive federal recovery funds under the terms of the Recovery Act. According to the state Inspector General, the risk for waste, fraud, and abuse increases the farther removed an organization is from state government controls. While some state officials said they have statewide investigative authority, they would not be able to readily track the funding going directly to local and regional government organizations and nonprofits as a result of the funding delivery and reporting requirements set up in the Recovery Act. In addition, staff from the state Auditor’s office noted that some smaller cities and towns in New Jersey are not used to implementing guidance from the state or federal government on how they are using program funds, which could result in the localities using funds for ineligible purposes. However, state Department of Education officials stated that although the sheer number of school districts in the state raises concerns, sufficient internal controls (state audits, Single Audits, state oversight, etc.) exist to prevent most instances of fraud and other illegal uses of funds. State agencies are also paying attention to how programs are being designed and how they are using the funds. For example, state officials are emulating the federal oversight effort, in part by trying to build internal controls at the outset of the process and to use merit-based selection criteria for Recovery Act projects. The state Inspector General, in coordination with the New Jersey Recovery Accountability Task Force, will be conducting training at New Jersey government agencies concerning Recovery Act related internal control issues. As of April 17, 2009, the Inspector General hoped to hold the first training sessions by mid-May. The Governor’s Chief of Staff stated that different state agencies are planning to evaluate the impact of Recovery Act funds.
Assessing the impact of the increased FMAP funds will involve determining the extent to which the Medicaid program is able to accommodate additional applicants as a result of these funds. A New Jersey official noted that the state will have benchmark numbers on how many additional people are served and that this approach is no different from how the state would currently report impact. The state Auditor and the state Comptroller have also committed to carrying out audits and assessments of the impact of Recovery Act funds. Officials we interviewed at New Jersey state agencies have different ways of either collecting or estimating data on the number of jobs created or retained as a result of Recovery Act funds. For example, the NHA will use payroll data to keep track of the exact number of union tradesmen and housing authority residents employed to turn damaged vacant units into rentable ones. In contrast, NJT is using an academic study that examined job creation from transportation investment to estimate the number of jobs created by contractors on its Recovery Act-funded construction projects. Finally, officials stated that DPP and DPMC both have methodologies and mechanisms in place to track jobs created and maintained for goods and services procured under Recovery Act contracts. We provided the Governor of New Jersey with a draft of this appendix on April 17, 2009. The Governor’s Chief of Staff responded for the Governor on April 20, 2009. In general, the Chief of Staff substantially agreed with the draft and provided technical comments that were incorporated, as appropriate. In addition to the contacts named above, Raymond Sendejas, Assistant Director; Greg Hanna, analyst-in-charge; Jeremy Cox; Colin Fallon; Tarunkant Mithani; and Cheri Truett made major contributions to this report. Use of funds: An estimated 90 percent of fiscal year 2009 Recovery Act funding provided to states and localities will be for health, transportation, and education programs.
The three largest funding categories are the Medicaid increased Federal Medical Assistance Percentage (FMAP) grant awards, the State Fiscal Stabilization Fund, and highways. Medicaid Federal Medical Assistance Percentage (FMAP) Funds: As of April 13, 2009, the Centers for Medicare & Medicaid Services (CMS) had made about $3.14 billion in increased FMAP grant awards to New York. As of April 13, 2009, New York had drawn down about $1.74 billion, or 55 percent of its initial increased FMAP grant awards. Nearly $1.3 billion of the funds made available as a result of the increased FMAP were used to close the state’s budget deficit for the fiscal year ending on March 31, 2009, or applied to lower the deficit for the current fiscal year. In addition, $440 million was returned to the counties for their contributions towards the nonfederal share of Medicaid expenditures that qualified for the increased FMAP. New York was apportioned about $1.12 billion for highway infrastructure investment on March 2, 2009, by the U.S. Department of Transportation. As of April 16, 2009, the U.S. Department of Transportation had obligated about $276.5 million for 108 projects to the New York State Department of Transportation. New York will request reimbursement from the U.S. Department of Transportation as the state makes payments to contractors. As of April 13, 2009, the New York State Department of Transportation had advertised for bids on 38 projects. Work on all of these projects is expected to begin this spring. The state will target Recovery Act transportation funds to infrastructure rehabilitation, including preventive maintenance and reconstruction, such as bridge repairs and replacement, drainage improvements, repaving, and roadway construction. State officials emphasized that these projects extend the life of infrastructure and can be contracted for and completed relatively easily in the 3-year time frame required by the act.
Some Recovery Act funds will go to more typical “shovel-ready” highway construction projects for which there were insufficient funds. By the end of April 2009, New York expects to have a complete list of transportation projects that Recovery Act funds will support. U.S. Department of Education State Fiscal Stabilization Fund (Initial Release): As of April 13, 2009, New York had been allocated about $2.0 billion from the initial release of these funds by the U.S. Department of Education. Before receiving the funds, states are required to submit an application that provides several assurances to the Department of Education. These include assurances that the states will meet maintenance-of-effort requirements (or that they will be able to comply with waiver provisions) and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. As of April 13, 2009, New York had not submitted its application for these funds. New York plans to use the majority of Fiscal Stabilization funding to support K-12 education costs for the 2009-2010 and 2010-2011 school years beginning July 1, 2009. New York education officials told us that most of the funds will be used to offset expected budget cuts throughout the school system that were caused by the downturn in the economy and in state revenues. New York is also receiving additional Recovery Act funds under other programs, such as programs under Title I, Part A, of the Elementary and Secondary Education Act (ESEA) (commonly known as No Child Left Behind), and the Individuals with Disabilities Education Act, Part B (IDEA). These are described throughout this appendix.
Overall, New York expects to receive about $26.5 billion in Recovery Act funds plus possible additional discretionary program funds over the next 3 years (fiscal years 2009-2011). Safeguarding and transparency: New York plans to track and monitor Recovery Act funds mostly through its existing internal control, audit, and accounting systems, although the new Recovery Cabinet and other state institutions have initiated several steps to coordinate the oversight of Recovery Act projects. For example, the Office of the State Comptroller (OSC) is using its accounting system to tag and track these funds, while the New York State Department of Transportation (NYSDOT) is conducting a federal-aid risk assessment to focus its internal and contract audit resources on projects and contracts that may be most vulnerable to fraud, waste, and abuse. New York officials, however, expressed concerns about monitoring Recovery Act funds that do not pass through state offices but flow directly from federal agencies to local agencies or authorities. For example, the Metropolitan Transportation Authority, which provides transportation services for the New York City metropolitan area, expects to receive directly about $1 billion in federal transit funds under the Recovery Act. Assessing the effects of spending: Officials have taken some initial steps to meet the Recovery Act’s reporting requirements, but generally they are awaiting further federal guidance. Officials throughout the state government expressed concerns about how to consistently report on the impact of Recovery Act funds. Medicaid Federal Medical Assistance Percentage (FMAP): Across states, the FMAP may range from 50 percent to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010.
On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. Generally, for federal fiscal year 2009 through the first quarter of federal fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of the increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. For the second quarter of fiscal year 2009, New York’s FMAP was 58.78 percent, an increase of 8.78 percentage points over its fiscal year 2008 FMAP. New York expects to receive about $11 billion in federal Medicaid funds as a result of the increase in its FMAP. As of April 13, 2009, CMS had made about $3.14 billion in increased FMAP grant awards to New York and the state had drawn down about $1.74 billion of its grant awards. Nearly $1.3 billion of the funds made available as a result of the increased FMAP was used to close the state’s budget deficit for the state fiscal year ending on March 31, 2009, while $440 million was returned to the counties for their contributions towards the nonfederal share of Medicaid expenditures eligible for the increased FMAP. In New York, localities contribute towards the nonfederal share of Medicaid expenditures. The state’s counties provide this local share.
According to state officials, in 2006, in order to control Medicaid spending at the local level, the state instituted a cap on local Medicaid expenditures, which constituted about 33 percent of the nonfederal share of expenditures at the time. This cap, unique to New York, basically limits the annual increase in a locality’s Medicaid expenditures to 3 percent of what it spent in 2005. The result has been that the localities’ percentage share of Medicaid expenditures has slightly declined each year since 2006. The 2009-2010 enacted state budget plans to use nearly half of the enhanced FMAP funding expected to be received through March 31, 2010, on (1) health care, to avoid certain difficult provider reimbursement cuts, and (2) other savings actions proposed by the Governor in his initial budget proposal in December 2008. These funds will also help pay for unanticipated rising Medicaid costs, primarily driven by rising caseloads resulting from the current economic downturn. In addition, the FMAP funds (1) helped avoid proposed cuts to important human services and mental hygiene programs, (2) were used to maintain revenue sharing funding for New York City, and (3) helped avoid several proposed tax increases that would have affected middle-class families and small businesses. Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing federal-aid highway Surface Transportation Program, through which money is apportioned to states for the construction and maintenance of eligible highways and for other surface transportation projects.
States must follow the requirements for the existing programs, and in addition, the governor must certify that the state will maintain its current level of transportation spending, and the governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. As of April 16, 2009, the Federal Highway Administration had obligated about $276.5 million to New York State for 108 transportation projects. The state has been able to move quickly on these projects largely because NYSDOT, as required by federal surface transportation legislation, has a planning mechanism that routinely identifies needed transportation projects and performs preconstruction activities, such as obtaining required environmental permits. A NYSDOT official told us that as of April 13, 2009, 38 projects approved in March 2009 had been advertised for bids for contracts. In late 2008, NYSDOT began preparing to manage potential stimulus funding in transportation programs. NYSDOT, which oversees over 113,000 miles of highway, 16,000 bridges, and more than 130 transit operators, initially established a working group that began reviewing or “scrubbing” core projects in the state’s transportation improvement plan (STIP) in late 2008 to make sure projects would be fully permitted and “shovel ready,” should funding be made available. Because of an approximately 8 percent per year increase in construction costs during the last 3 years and the state’s declining fiscal position, New York has a large backlog of planned transportation projects. As of April 16, 2009, the Governor had certified that 108 projects met the objectives of the act and that the state would maintain its planned level of effort within its transportation program.
NYSDOT officials noted that the list of projects would be fluid depending on bid results, budget overruns, and the ability of localities to start and complete planned projects within expected time frames. Consistent with the Governor’s goal of leveraging the impact of Recovery Act funds, NYSDOT has also begun working with rural public transportation systems to identify eligible Federal Transit Administration activities. Recovery Act transit funds will be used to replace a significant number of vehicles that currently exceed their federally rated service life with new cleaner-fuel buses that comply with the Americans with Disabilities Act. NYSDOT will use a statewide bus contract to procure the majority of these new vehicles. This cooperative effort would also allow the communities to take advantage of the state’s procurement expertise and presumably lower overall procurement costs. U.S. Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF) to be administered by the U.S. Department of Education (Education). SFSF is intended to help avoid reductions in education and other essential public services. The initial award of SFSF funding requires each state to submit an application to Education that assures it will take actions to meet certain educational requirements, such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. As of April 13, 2009, New York’s SFSF allocation was about $2.0 billion; however, the state had not drawn down any of this amount. The state has not applied for these funds and they will not be allocated to public entities such as K-12 school districts and public higher education institutions until the school year begins on July 1, 2009. The Governor’s office said that this application is expected to be submitted soon. 
The New York State Education Department (NYSED), which has an annual budget of about $30 billion, expects to receive about $5 billion in Recovery Act funds. About half of the amount—approximately $2.5 billion—is expected to be provided through SFSF. These funds can be used to help avert elementary, secondary, and higher education reductions, such as the loss of teachers. NYSED officials told us that they believe most of these funds will be used to offset expected budget cuts throughout school systems that were caused by the downturn in the economy and in state revenues. State officials also have discretion over an additional 18 percent of the stabilization funds (approximately $549 million) and can use this portion for a wide range of government services, including school modernization. As of April 13, 2009, New York had also been allocated an additional $1.7 billion in Recovery Act funds for programs under Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA), as amended by the No Child Left Behind Act, and the Individuals with Disabilities Education Act (IDEA). The key New York institutions involved in managing Recovery Act funds are the governor's office, the state program departments and agencies, and OSC. In addition, localities, transit, or housing authorities will play a role in managing some Recovery Act funds that do not pass through state offices. Because of the timing of New York's annual fiscal year and the February 17, 2009, enactment of the Recovery Act, the state had to quickly incorporate Recovery Act funding into the budget for the fiscal year beginning April 1, 2009. New York's Governor, in anticipation of the Recovery Act, established a Recovery Cabinet in February 2009. The Recovery Cabinet is led by the Governor's Senior Advisor for Transportation and Infrastructure. 
All state agencies and many state authorities are represented on the cabinet, which is charged with coordinating and managing Recovery Act funding throughout the state. Similarly, New York City officials developed a City Hall Working Group comprising city management and individuals from the relevant agencies that are planning to receive Recovery Act funding to coordinate and manage the funding. The Recovery Cabinet also serves as a focal point of contact for counties and other localities throughout the state—informing them of the types of projects that could be eligible for stimulus funding and soliciting ideas and proposals for such funding. In addition, New York established an economic recovery Web site in February 2009—www.recovery.ny.gov. By using the Web site, New Yorkers have been able to enter their project ideas directly into a project database and track Recovery Act funding and its impact. This database currently contains over 16,000 project ideas. Other key players in New York's management of Recovery Act funds include OSC, an independently elected office that is charged with issuing the state's internal control standards, managing the central accounting system, and directing internal audits throughout the state's departments and agencies, among other responsibilities. OSC will be responsible for tracking and monitoring the progress of Recovery Act funding and ensuring that the funding meets established internal controls. State authorities and metropolitan planning organizations that are not directly managed by the Governor are also key players in the delivery of New York State services and are therefore central to the management of some Recovery Act funds. For example, the Metropolitan Transportation Authority will manage about $1 billion of Recovery Act funds. 
Several New York government entities are responsible for the management, implementation, and oversight of internal controls and for safeguarding taxpayers' money. These entities include OSC, individual state agencies, and the governor's office. For example, OSC is responsible for the state's Central Accounting System, disburses funds, and audits state agencies and authorities, among other responsibilities. Each large state agency, such as NYSDOT, has a director of internal audit, as well as an internal control officer who reports to the head of the agency, coordinates internal control activities, and helps ensure internal control program compliance. The head of each state agency and public authority must annually certify compliance with the State's Internal Control Act. Each state agency operates its own financial management and reporting system and has its own procurement officer. However, OSC must review and approve all contracts over $50,000. Recent audit reports have nonetheless identified control weaknesses. For example, OSC identified about $17 million in potentially overpaid Medicaid claims in 2007. State officials told us, however, that many of the instances of potential Medicaid overpayments were without basis and were, in fact, made consistent with federal requirements. NYSDOT did not adequately document audit extensions that it granted subrecipients. Furthermore, the department did not have a sanction policy in effect for subrecipients that were not in compliance with audit requirements. Effective August 2008, NYSDOT established a formal sanctioning policy. The Housing Trust Fund Corporation did not have procedures in place to adequately monitor the compliance requirements of the Single Audit Act, as amended, and OMB's implementing guidance in OMB Circular No. A-133, for grant subrecipients. 
Several programs, including Temporary Assistance for Needy Families, the Child Care and Development Block Grant, and the Office of Children and Family Services, did not adequately complete forms documenting the transfer of funds awarded by the federal government. The Department of Education's Vocational Rehabilitation Services program had not determined individuals' eligibility for the program services within a reasonable period of time. The Single Audit did not provide an unqualified opinion for 10 federal programs, including the Medical Assistance, Low-Income Home Energy Assistance, and Food Stamp Cluster programs, because of various findings, including cost allocation plans that were not approved by the federal government. New York also received an unqualified opinion on OSC's comprehensive annual financial statements for the state fiscal year that ended March 31, 2008. The audit reported control deficiencies but disclosed no instances of noncompliance that would be material to the basic financial statements. According to a state official, agencies may rely on multiple databases for handling transactional and performance data, making data reliability difficult to ascertain. This official added that state agencies vary in their capabilities, and the independent financial management systems that operate distinctly from the Central Accounting System have varying degrees of sophistication and accessibility. In addition to existing control systems, the Governor's office has planned several new initiatives for ensuring accountability of Recovery Act funds. First, drawing on past efforts of New York state agencies and the New York State Internal Control Association to improve the state's internal controls, transparency, and data integrity, the Recovery Cabinet plans to establish a working group on internal controls. 
This working group will be made up of internal control officers from major agencies in the cabinet and will meet regularly to provide additional guidance to those agencies receiving and/or administering Recovery Act funds. Second, the Governor's office plans to hire a consultant to review the state's management infrastructure and capabilities to achieve accountability, effective internal controls, compliance, and reliable reporting under the Recovery Act. Third, the Director of State Operations provided initial guidance to the state agencies and authorities on the Recovery Act accountability and transparency requirements. According to state officials, all agencies and departments that expect to receive Recovery Act funds have been asked to review and report on their practices for fraud prevention, contract management, and grants accountability to assess their current vulnerabilities and to ensure that the state is prepared to meet the Recovery Act requirements. Finally, the state plans to coordinate fraud prevention training sessions. OSC says that it will continue to advise and provide technical assistance to local governments as the requirements of the Recovery Act become clearer. Guided by the Recovery Cabinet working groups, state agencies are planning to implement various types of oversight and reporting mechanisms to comply with the Recovery Act. For example: NYSDOT is relying heavily on existing program oversight controls, such as normal highway project procurement requirements, to manage and control Recovery Act spending. In addition to those oversight controls, as described above, NYSDOT is conducting a risk assessment of federal-aid projects to direct future internal audit and contract reviews. 
NYSDOT officials said that special emphasis will be placed on high-risk areas, such as projects developed by local public agencies, and that a formal plan for overseeing Recovery Act subrecipients will include training, technical assistance, and regular reviews of subrecipients' documents and processes. With regard to reporting, NYSDOT is developing a dataset that is expected to contain all data elements required to fully meet state reporting requirements. NYSDOT is also putting a reporting requirement in existing recovery project contracts alerting contractors that they are responsible for meeting all Recovery Act reporting requirements. NYSED officials said that they have been meeting with OSC to ensure proper accounting codes are used in tracking and reporting Recovery Act funds. However, officials are concerned that once the funds reach localities, the funds may lose their accounting codes and get rolled up with other state and federal funds. In addition, state education officials said that they have established a waste, fraud, and abuse work team to examine risks and identify areas of concern associated with Recovery Act funds. The officials said that the biggest challenge that they foresee is district reporting at the school level. According to the officials, risk assessments for schools with higher spending per student will need to be developed. State housing officials said that they will need to hire subcontractors, many of which will be new to the weatherization program. Specifically, New York State expects to receive $395 million in additional weatherization funds from the Recovery Act, compared with a little over $60 million allocated to the program in the previous state fiscal year. In addition, the Recovery Act increased the maximum amount that can be spent for each housing unit qualifying for the program from $2,500 to $6,500. 
Officials said they are concerned about their ability to effectively manage the program, given the major funding and program changes caused by the Recovery Act, when their existing staff is already stretched. Housing officials said that they are assessing the risk to the weatherization program. According to New York officials, increased FMAP grant awards are segregated from other Medicaid funds received by the state. These funds have received a distinct code to identify them as part of the funding received from the Recovery Act in OSC's Central Accounting System. Additionally, the increased FMAP grant awards received by the state and local governments are tracked separately in the accounting system. OSC has instructed localities to maintain a separate account for FMAP funds. As of April 13, 2009, the comptroller had not disclosed plans for auditing the increased FMAP funds. State transportation, education, and housing agency officials are just beginning to consider plans to assess the impact of Recovery Act funds. They are generally waiting for the Office of Management and Budget to provide guidance or methods to help in assessing impact, such as job retention and creation, increases in tax revenues, and savings from weatherization or other energy projects. For instance, state housing officials said that they typically track dollars and that they will require additional guidance from the Department of Housing and Urban Development on how to track job creation. State education officials said that it would be difficult to isolate the impact of Recovery Act funds on student achievement from the impact of other initiatives the state is undertaking. State officials also expressed concerns about how to consistently measure the impact of funding, such as how to count job creation and how to track the ripple effect of funding. 
New York City officials said that the purpose of the city's database is to provide transparency for New York City residents and to fulfill future reporting requirements. The database is expected to provide such details on a Recovery Act-funded program as the number of additional beds at a homeless shelter. However, New York City officials said that it is difficult to begin planning how to assess impact until they know what measures will be called for by federal reporting guidelines. Furthermore, New York City officials recommended relaxing the reporting deadlines and requirements for the first quarter after Recovery Act funds are received so states and localities have more time to understand new guidance. We provided the Governor of New York with a draft of this appendix on April 17, 2009. The Senior Advisor for Transportation and Infrastructure responded for the Governor on April 20, 2009, by providing technical suggestions that were incorporated, as appropriate. In addition to the contacts named above, Ronald Stouffer, Assistant Director; Barbara Shields, analyst-in-charge; Jeremiah Donoghue, Colin Fallon, Summer Pachman, Frank Putallaz, Jeremy Rothgerber, and Cheri Truett made major contributions to this report. Use of funds: An estimated 90 percent of Recovery Act funding provided to states and localities nationwide in fiscal year 2009 (through Sept. 30, 2009) will be for health, transportation, and education programs. The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage (FMAP) awards, the State Fiscal Stabilization Fund, and highways. Medicaid Federal Medical Assistance Percentage (FMAP) Funds: As of April 3, 2009, the Centers for Medicare & Medicaid Services (CMS) had made about $657 million in increased FMAP grant awards to North Carolina. As of April 1, 2009, North Carolina had drawn down $414.6 million in increased FMAP grant awards, or 63 percent of its awards to date. 
North Carolina officials reported that they plan to use funds made available as a result of the increased FMAP to maintain current populations and benefits and to offset the state's general fund deficit. North Carolina was apportioned about $736 million for infrastructure investment on March 2, 2009, by the U.S. Department of Transportation. As of April 16, 2009, the U.S. Department of Transportation had obligated about $165 million for 53 projects in North Carolina. As of April 16, 2009, the North Carolina Department of Transportation had selected 138 projects estimated to utilize about 90 percent of its allocated Recovery Act funds. These projects include activities such as repaving highways and replacing bridges. North Carolina Department of Transportation officials told us they identified these projects based on Recovery Act criteria that priority is to be given to projects that are anticipated for completion within a 3-year time frame and that are located in economically distressed areas. U.S. Department of Education State Fiscal Stabilization Fund (Initial Release): North Carolina was allocated about $952 million from the initial release of these funds on April 2, 2009, by the U.S. Department of Education. Before receiving the funds, states are required to submit an application that provides several assurances to the Department of Education. These include assurances that they will meet maintenance of effort requirements (or that they will be able to comply with waiver provisions) and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. North Carolina officials said that they would apply for fiscal stabilization funds by the end of April 2009. The state had not yet determined how fiscal stabilization funds will be used. 
North Carolina is also receiving additional Recovery Act funds under other programs, such as the Edward Byrne Memorial Justice Assistance Grant Program to improve the functioning of the criminal justice system; the Tax Credit Assistance Program for low-income housing; and Workforce Investment Act Youth, Adult, and Dislocated Worker Programs that provide employment and training services. The status of state plans for using these funds is described throughout this appendix. Safeguarding and transparency: The state has set up the Office of Economic Recovery and Investment (OERI) to help agencies track, monitor, and report on Recovery Act funds, and the North Carolina Senate and House of Representatives have established committees to provide legislative oversight of these funds. In addition, the state has a number of initiatives under way that will improve accountability and transparency for Recovery Act funds, and the state will track Recovery Act funds separately to ensure accountability for those funds. North Carolina officials identified several potential concerns about the safeguarding of funds. For example, several officials said that they were concerned about whether there were enough staff members to meet additional management and oversight responsibilities under the Recovery Act. Assessing the effects of spending: North Carolina agencies are in the early stages of developing plans to assess the impact of Recovery Act expenditures. According to state officials, they have been awaiting guidance from the federal government, particularly related to measuring job creation. On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. 
Generally, for federal fiscal year 2009 through the first quarter of federal fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states' prior year FMAPs, (2) a general across-the-board increase of 6.2 percentage points in states' FMAPs, and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of the increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. As of April 1, 2009, North Carolina had drawn down $414.6 million in increased FMAP grant awards, or 63 percent of its awards to date. North Carolina officials reported that they plan to use funds made available as a result of the increased FMAP to maintain current populations and benefits and to offset the state's general fund deficit. The state has received guidance on the requirements for reporting Medicaid expenditures under the Recovery Act. However, the state would like additional guidance on other types of reporting requirements, such as performance information. Before highway funds can be obligated, the governor must certify that the state will maintain its current level of transportation spending, and the governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. North Carolina provided these certifications, but conditioned the level of funding from state sources for the Recovery Act covered programs on future revenue collections in the state. The North Carolina Department of Transportation (NCDOT) was apportioned about $736 million in Recovery Act funds for highways and bridges. As of April 16, 2009, the U.S. Department of Transportation had obligated about $165 million for 53 projects in North Carolina. The department has plans to award 70 contracts for Recovery Act projects between March and June, which are estimated to cost $466 million. 
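The quarterly FMAP increase described above combines three components. A simplified Python sketch follows; the rates in the example are hypothetical, and the actual statutory computation applies more detailed, state-specific rules each quarter:

```python
def recovery_act_fmap(regular_fmap, prior_year_fmap, unemployment_addon=0.0):
    """Simplified sketch of the Recovery Act's increased FMAP.

    (1) hold harmless: a state keeps at least its prior-year FMAP;
    (2) a general across-the-board increase of 6.2 percentage points;
    (3) a further increase for states with a qualifying rise in
        unemployment, passed in here as precomputed percentage points.
    All figures are percentages; the inputs below are hypothetical.
    """
    base = max(regular_fmap, prior_year_fmap)  # component (1)
    return base + 6.2 + unemployment_addon     # components (2) and (3)

# Hypothetical state: regular FMAP of 64.6 percent, prior-year FMAP of
# 64.05 percent, and a 1.5-point unemployment-related increase.
print(round(recovery_act_fmap(64.6, 64.05, 1.5), 1))  # 72.3
```

This is an illustration of the three-part structure only; the statute's quarterly calculation (including how the unemployment adjustment is derived) is more involved than this sketch.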
NCDOT officials told us that they identified these projects based on Recovery Act direction that priority is to be given to projects that are anticipated to be completed within a 3-year time frame and that are located in economically distressed areas. Projects were also evaluated based on several criteria, including alignment with long-range investment plans and considerations about geographical diversity and economic impact. Based on the estimated costs of the initially selected projects, about one-third of costs are for projects not located in economically distressed areas, according to state officials. North Carolina officials said that they would apply for fiscal stabilization funds by the end of April 2009. The state has been allocated $952 million under the SFSF program. Officials from the state education agency, the North Carolina Department of Public Instruction, said that 81.8 percent of the SFSF would be distributed to school districts and institutions of higher education in accordance with Recovery Act requirements. State officials are in the process of determining how to calculate the relative amount of funding that school districts and public institutions of higher education would receive. Regarding the other 18.2 percent of SFSF, state officials said that a decision had not yet been made about how these funds would be allocated. State officials have emphasized in their communications with school districts that funds should be used for short-term investments with potential for long-term programmatic gains, echoing federal guidance. U.S. Department of Justice Edward Byrne Memorial Justice Assistance Grant Program: The Edward Byrne Memorial Justice Assistance Grant Program (Byrne Grant Program) was established to streamline justice funding and grant administration, and allows states, tribes, and local governments to support a broad range of activities to prevent and control crime based on their own local needs and conditions. 
According to officials of the North Carolina Governor's Crime Commission, the office expects to receive an allocation of $34.5 million through the Byrne Grant Program. The Governor's Crime Commission is allowed to use 10 percent of that total, or about $3.5 million, for administrative purposes. This leaves a balance of $31 million. Of this amount, 42.4 percent, or $13.2 million, must be passed through by formula to local governments, and the remainder of $17.9 million will go to other state agencies and institutions. North Carolina officials for the Byrne Grant Program are planning to fund programs based on the state's list of program priorities, which include programs such as the Criminal Justice System Improvement, Crime Victims' Services, Juvenile Justice Planning, and North Carolina Gang Prevention Initiative. Also, the localities within the state will receive $21.9 million, which will be awarded by the U.S. Department of Justice. Tax Credit Assistance Program: Housing credit agencies in each state distribute these funds competitively and according to their qualified allocation plan. According to officials with the North Carolina Housing Finance Agency (NCHFA), the state has identified potential projects for the Tax Credit Assistance Program (TCAP), focusing initially on 40 to 50 tax credit projects that were stalled due to a lack of financing from other sources. NCHFA officials said they are waiting on guidance from the U.S. Department of Housing and Urban Development and Department of the Treasury before they begin the application process for developers. NCHFA officials said that environmental review requirements could pose a challenge to meeting federal timelines for making awards, but that they would not know for certain until final federal guidance has been issued. Workforce Investment Act Youth, Adult, and Dislocated Worker Programs: The Workforce Investment Act (WIA) provides funds for employment and training services to youth, adults, and dislocated workers. 
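The Byrne Grant allocation arithmetic above (a 10 percent administrative set-aside, then a 42.4 percent formula pass-through of the balance) can be checked with a short script. The $34.5 million total and the percentages come from the text; the report's dollar figures are these amounts rounded to the nearest $0.1 million:

```python
total = 34.5  # Byrne Grant Program allocation, in millions of dollars

admin = 0.10 * total             # administrative set-aside ("about $3.5 million")
balance = total - admin          # remaining funds ("a balance of $31 million")
to_localities = 0.424 * balance  # formula pass-through to local governments
to_state = balance - to_localities

# admin ~ 3.45, balance ~ 31.05, local share ~ 13.17, state share ~ 17.88
print(round(admin, 2), round(balance, 2), round(to_localities, 2), round(to_state, 2))
# 3.45 31.05 13.17 17.88
```

Rounded to one decimal place, these match the report's $3.5 million, $31 million, $13.2 million, and $17.9 million figures.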
North Carolina was allocated nearly $80 million through these WIA programs under the Recovery Act. The North Carolina Department of Commerce (DOC) has been working with local workforce development boards since January to help them plan and prioritize the use of these Recovery Act funds. The state has communicated these priorities to the local workforce development boards: (1) increasing the number of people served and trained, (2) targeting programs toward underserved populations, including those receiving public assistance, (3) implementing a statewide summer youth employment program, and (4) increasing support services, such as child care and transportation. As necessary, the department has worked with other state departments to coordinate efforts. For example, DOC has coordinated with the state community college system to create short-term course offerings in 12 high-growth occupations that lead to certificates at each of the 58 state community colleges. DOC officials are also developing plans to use state-level funds received under the Recovery Act, and anticipate using those funds to help conduct outreach to inform the public of available programs and services funded through the Recovery Act. North Carolina is expecting to receive an estimated $6.1 billion of the Recovery Act funding going to states. North Carolina's fiscal situation is not unlike that of many other states. In the midst of its economic crisis, the Governor's proposed biennial budget contains $2.6 billion in spending reductions and $1.3 billion in revenue increases, and proposes to use $2.9 billion of federal recovery funds to support education and other mission-critical services over the biennium. 
The Governor’s budget proposal indicated that most programs face reduced or level funding, but recommended continued focus on growing North Carolina’s economy, improving public education, keeping higher education accessible and affordable, and protecting the state’s most vulnerable citizens. North Carolina, after 3 consecutive years of growth, suffered a significant economic decline in 2008. As reported in the Governor’s budget proposal, the state lost over 120,000 jobs—a nearly 3 percent decline—in 2008, pushing its unemployment rate up to about 10 percent. Job losses were particularly steep in the manufacturing sector, but the state reported that its housing sector, while also suffering a decline, was less affected by the housing downturn than other states. The Governor’s budget proposal projects the economy to continue its decline, but to stabilize in 2010 and begin to grow in 2011. In general, the state projects economic performance to outpace the U.S. average. The North Carolina state government operates on a biennial budget cycle, which begins on July 1 of odd-numbered years. The North Carolina constitution requires the Governor to submit a balanced budget, and state statute requires the General Assembly to pass a balanced budget, according to the National Association of State Budget Officers. North Carolina’s General Assembly must pass an appropriations bill in order for state agencies to disburse federal funds, according to state officials, according to NASBO. Treasurer and the Superintendent of Public Instruction, are selected through statewide elections. Also, the State Auditor, who is responsible for providing independent evaluations and audits of state agencies and programs, is selected by statewide election. North Carolina has a bicameral General Assembly, with members of both the House and Senate being elected to 2-year terms. The General Assembly typically meets for a full session in odd-numbered years and a shorter session in even- numbered years. 
There is no concluding date for either session, according to state officials. On February 17, 2009—the same day the Recovery Act was enacted—Governor Perdue created the OERI to oversee North Carolina's handling of federal stimulus funds as well as state-level economic recovery initiatives. OERI's responsibilities include, among other things, coordinating state efforts to track and report on Recovery Act funds and maximizing the state's use of Recovery Act funds. Another of OERI's major responsibilities is to provide guidance to state departments and localities on how to monitor, track, and report the use of Recovery Act funds. On March 30, 2009, the state issued a memorandum on budgeting and accounting for Recovery Act funds. This memorandum is the first of what is anticipated to be a continuing series of information and directives to ensure that state agencies and subrecipients comply with federal and state requirements. Specifically, this memorandum provides guidance requiring that Recovery Act funds may not be commingled with other funds and that Recovery Act expenditures will require review and approval by the Office of State Budget and Management (OSBM). In addition, OERI has established two management directives requiring agencies to make weekly reports on Recovery Act funds and to submit grant applications to OERI for review. OERI has also established a management team with representatives from state agencies to exchange information and facilitate Recovery Act implementation. Governor Perdue's budget proposal, which according to state documents incorporated an anticipated $6.1 billion in Recovery Act formula funds, is currently being considered and reviewed by the General Assembly. In an effort to monitor and oversee these Recovery Act funds, the North Carolina Senate established the Select Committee on Economic Recovery. 
According to the committee’s Chairman, the new committee was established to provide legislative review of how the Recovery Act funds will be used and the effect the funds may have on the state’s budget. The North Carolina House of Representatives has established a similar committee. As North Carolina prepares for the receipt, tracking, monitoring, and reporting of Recovery Act funds, it currently faces a number of known financial management challenges and other risks. For example, North Carolina’s 2007 Single Audit report had 18 findings of material weaknesses and material noncompliance related to issues with federal program compliance at the North Carolina Departments of Health and Human Services (16) and Crime Control and Public Safety (2). Five of the 18 findings were related to insufficient subrecipient monitoring. The state auditor’s office also noted that single audits have consistently found issues related to subrecipient monitoring by state agencies. Insufficient subrecipient monitoring and other deficiencies such as these may leave Recovery Act funds vulnerable to fraud, waste, and abuse. Officials in some agencies expressed concern about their capacity to administer increased funding, such as funds under Title I of the Elementary and Secondary Education Act of 1965 (ESEA, commonly known as No Child Left Behind). However, officials in other agencies, such as the North Carolina Department of Commerce, which administers Workforce Investment Act funds, felt that they would be able to absorb additional responsibilities with current staff and resources. State officials also identified as a risk programs that are receiving a significant increase in funding. For example, several officials noted that the weatherization program is receiving a substantial increase in funding. Finally, state officials told us that state agency guidance and communications with local governments are areas that will bear watching, as ensuring that local governments understand how to properly account for and segregate federal and state funds will be critical. 
Within the state of North Carolina, a variety of efforts are under way to establish new safeguards over Recovery Act funds, including some that will build on current systems and recent initiatives. For example, officials at North Carolina’s Office of the State Controller (OSC) and OSBM told us that several state agency accounting systems will need to be modified to track Recovery Act funds as required by the Recovery Act. OSBM officials told us that they had been waiting for Office of Management and Budget (OMB) guidance on the reporting requirements, which was released by OMB on April 3, 2009. These officials have not identified any state agency accounting systems that are incapable of adding a unique identifier code to separately track Recovery Act funds, but said that nearly all systems will need some modifications. A bigger concern is that Recovery Act reporting time frames may not be aligned with the state departments’ normal accounting cycles, which may delay the departments’ ability to provide monthly or quarterly reports to OSBM and OERI. Planned features of the state’s Recovery Act Web site include the ability to provide additional information about how funds will be distributed, information on how to apply for funds or contracts, a mechanism to track spending on individual projects, and estimates of the economic impact and jobs created. Additionally, OSBM, in consultation with the state Department of Administration, Division of Purchase and Contract, is reviewing the statewide procurement process to streamline it and identify any areas that need to be improved. The results of this review may indicate either systemic statewide or individual agency needs related to the Recovery Act. Finally, the OSC is phasing in a statewide internal control program called EAGLE (Enhancing Accountability in Government through Leadership and Education), which is intended to establish adequate internal controls and increase fiscal accountability. 
Under the EAGLE program, each agency will be required to perform an annual assessment of internal controls over financial reporting and identify risks. North Carolina’s State Auditor told us that, given current staffing levels, her office will conduct as many oversight reviews and audits of Recovery Act funds as it can. In order to handle the new Recovery Act work, the office will need to cut back on some of its other fiscal control audits. The State Auditor told us that she uses a risk-based approach to auditing and plans to focus her office’s Recovery Act work on subrecipient monitoring and on how Recovery Act funds are being segregated from other federal funds coming through traditional funding streams. The State Auditor’s office also noted that OMB and other federal agency guidance may identify areas that merit closer scrutiny. State officials across agencies told us that the state Office of Economic Recovery and Investment was developing guidance on the Recovery Act reporting requirements, but that the state has not yet begun assessing the effects of Recovery Act funds. The state provided localities with guidance on a number of Recovery Act-related topics on March 30, 2009, but the guidance has not yet specifically addressed Recovery Act reporting requirements. State officials told us that they needed federal guidance about how to assess the effects of Recovery Act funds before they could release state guidance. For example, the state’s Chief Procurement Officer said that the state needs guidance about how to measure specific reporting requirements such as jobs created and jobs saved. We provided the Governor of North Carolina with a draft of this appendix on April 17, 2009. The Director of OERI responded for the Governor on April 20, 2009. In general, the comments were either technical or status updates. The official also provided technical suggestions, which we incorporated as appropriate. 
In addition to the contacts named above, Bryon Gordon, Assistant Director; Scott Spicer, Analyst-in-Charge; Carleen Bennett; George Depaoli; Bonnie Derby; Leslie Locke; Stephanie Moriarty; and Anthony Patterson made major contributions to this report. Use of funds: An estimated 90 percent of Recovery Act funding provided to states and localities nationwide in fiscal year 2009 (through Sept. 30, 2009) will be for health, transportation, and education programs. The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage (FMAP) awards, the State Fiscal Stabilization Fund, and highways. Medicaid Federal Medical Assistance Percentage (FMAP) Funds: As of April 3, 2009, the Centers for Medicare & Medicaid Services (CMS) had made about $760 million in increased FMAP grant awards to Ohio. As of April 1, 2009, Ohio has drawn down about $420.6 million, or 55.3 percent of its initial increased FMAP grant awards. Ohio officials indicated that they will use Recovery Act funds made available as a result of the increased FMAP to cover increased caseloads, offset general fund shortfalls due to state budget deficits, ensure compliance with prompt payment provisions, maintain existing populations, avoid eligibility restrictions, increase provider payments, and maintain and increase current levels of benefits. Ohio was apportioned about $935.7 million for highway infrastructure investment on March 2, 2009, by the U.S. Department of Transportation. Of the $935.7 million, about $774.2 million was apportioned to the Ohio Department of Transportation (ODOT). On March 26, 2009, ODOT announced that it will fund 149 projects with $774.2 million in Recovery Act funding. According to ODOT officials, they are currently meeting with all project sponsors and performing detailed reviews of project documentation, confirming federal eligibility, assessing project delivery, and establishing project schedules. As of April 16, 2009, the U.S. 
Department of Transportation had not obligated any Recovery Act funds for Ohio projects. ODOT expects to begin advertising for bids during the week of April 20, 2009. U.S. Department of Education State Fiscal Stabilization Fund (Initial Release): Ohio was allocated $1,198,882 from the initial release of these funds on April 2, 2009, by the U.S. Department of Education. As of April 17, 2009, Recovery Act funds for education and some child care programs had not been appropriated by the legislature. Officials with the Governor’s office and Ohio’s Office of Budget and Management (OBM) said these funds would be included in the budget for state fiscal years 2010-2011, which must pass by June 30, 2009. Before receiving the funds, states are required to submit an application that provides several assurances to the Department of Education. These include assurances that they will meet maintenance of effort requirements (or that they will be able to comply with waiver provisions) and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. State officials said that they intend to apply for State Fiscal Stabilization Funds sometime in the future. The state of Ohio expects to receive a total of $8.2 billion from the Recovery Act over the next 3 years (fiscal years 2009-2011). In addition to the funding described above, Ohio is also receiving Recovery Act funds under other programs, such as programs under Title I, Part A of the Elementary and Secondary Education Act (ESEA) (commonly known as No Child Left Behind); programs under the Individuals with Disabilities Education Act (IDEA); and two programs of the U.S. 
Department of Agriculture—one for administration of the Emergency Food Assistance Program and one for competitive grants for equipment under the National School Lunch Program, targeted to low-income districts. The status of plans for using some of these funds is described in this appendix. Before passage of the Recovery Act, Ohio created a Web site at Recovery.Ohio.gov, which represents the state’s effort to create an open, transparent, and equitable process for allocating Recovery Act funds. Through the Web site, the state has encouraged proposals for uses of Recovery Act funds, and as of April 8, 2009, individuals and organizations from across Ohio had submitted over 23,000 proposals. While the site is still receiving proposals, new submissions have dropped in number dramatically as guidance from federal agencies has clarified details about funding opportunities. By mid-April, approximately 26 state agencies with programmatic expertise had sorted the 23,000 submissions for response. Ohio regularly updates its Web site to provide timetables and information on applying for funds from state and federal agencies. State agencies are beginning to identify specific projects to fund. On April 1, 2009, the Governor signed House Bill 2. As described by state officials, the bill appropriates $1.9 billion in Recovery Act resources for 11 state agencies. According to state officials, additional appropriations are needed to spend Recovery Act funds for education and some child care programs, including Ohio’s share of the State Fiscal Stabilization Fund. According to state officials, these appropriations are included in House Bill 1, which is part of the state’s biennial budget and must be approved by June 30. As of April 1, 2009, the Ohio Department of Public Safety had received about 730 proposals for Edward Byrne Memorial Justice Assistance Grant projects through the Ohio Recovery Web site. 
Applications for the state-administered funds are due on May 1, 2009; the department issued its request for proposals with caveats that specific reporting requirements are forthcoming from OMB and the U.S. Department of Justice. The Ohio Department of Job and Family Services (ODJFS) plans to allocate Workforce Investment Act (WIA) funds directly to local area workforce boards, and ODJFS provided these boards with estimates early so they could begin the planning process. Before funds were appropriated, some local areas began their efforts to procure providers for youth programs, particularly for work sites. Safeguarding and transparency: Ohio is planning to use existing systems and safeguards to track Recovery Act funds, but reliance on subrecipients to provide data for enhanced reporting requirements may present challenges. For example, the fiscal year 2007 single state audit identified material weaknesses with a number of the systems that ODJFS uses to record and process eligibility and financial information for all of its major federal programs. Moreover, officials with the Columbus Metropolitan Housing Authority (CMHA) noted limitations in how far they could reasonably be expected to track Recovery Act funds. They said they could track Recovery Act dollars to specific projects but could not systematically track funds spent by subcontractors on materials and labor. Assessing the effects of spending: Ohio continues to explore ways to assess the impact of Recovery Act funds, but officials anticipate challenges. Specifically, in the absence of guidance on the types of data to collect, funding could be released before state officials have determined reporting requirements. 
Moreover, Ohio officials are concerned that, without uniform reporting requirements, each state will develop its own methodology for assessing the impact of the federal stimulus, eliminating any possibility of making assessments that are comparable nationwide. Ohio has begun to use some of its Recovery Act funds, as follows: Increased Federal Medical Assistance Percentage (FMAP) Funds: Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the Federal Medical Assistance Percentage (FMAP). Across states, the FMAP may range from 50 percent to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. Generally, for federal fiscal year 2009 through the first quarter of federal fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. 
However, the receipt of this increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. Ohio began the planning process for spending these funds before the enactment of the Recovery Act. In December 2008, to mitigate a budget revision resulting from a 3.3 percent drop in estimated state tax revenues, the Governor’s office assumed that additional federal assistance would be forthcoming. By including funds made available as a result of the increased FMAP in the assumptions used to revise the budget, the state made cuts to agency budgets and services less severe. As of April 1, 2009, Ohio has drawn down $420.6 million in Medicaid Recovery Act funds, or 55.3 percent of its initial increased FMAP grant awards. Ohio officials indicated as of March 31, 2009, that they will use Recovery Act funds to cover increased caseloads, offset general fund shortfalls due to state budget deficits, ensure compliance with prompt payment provisions, maintain existing populations, avoid eligibility restrictions, increase provider payments, and maintain and increase current levels of benefits. Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways and for other surface transportation projects. States must follow the requirements of the existing programs and, in addition, the governor must certify that the state will maintain its current level of transportation spending, and the governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. 
Ohio provided this certification but conditioned it, noting that future highway spending would depend on the state’s collection of transportation revenues, state budgeting levels, the state’s ability to sell bonds, construction inflation, pending state legislation, and the solvency of the federal highway trust fund. On March 26, 2009, the Governor announced that Ohio will fund 149 projects with $774.2 million in Recovery Act funding. At least 113 of these projects, costing $605.5 million, involve roadway repaving and bridge repair. Specific roadway projects range from $200 million, for the Cleveland Innerbelt Bridge in Cuyahoga County, to $50,000, for pavement markings in Belmont County. The remaining transportation funds, nearly $170.0 million, are to be spent on railroad, maritime, intermodal, and engineering projects. ODOT officials told us that they are currently meeting with all project sponsors and performing detailed reviews of project documentation, confirming federal eligibility, assessing project delivery, and establishing project schedules. ODOT expects to begin advertising for bids during the week of April 20, 2009. In addition to the more than $774 million apportioned to ODOT, another $161.5 million was suballocated directly to Ohio’s eight major metropolitan planning organizations in Akron, Canton, Cincinnati, Cleveland, Columbus, Dayton, Toledo, and Youngstown. As of April 16, 2009, the U.S. Department of Transportation had not obligated any Recovery Act funds for Ohio projects. U.S. Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF) to be administered by the U.S. Department of Education (Education). The SFSF provides funds to states to help avoid reductions in education and other essential public services. 
The initial award of SFSF funding requires each state to submit an application to Education that assures, among other things, that it will take actions to meet certain educational requirements, such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. Ohio’s initial SFSF allocation is $1,198,882. According to state officials, the Ohio legislature has not passed the appropriations bills for Recovery Act-funded education programs and some child care programs. Those funds are expected to be appropriated, along with the rest of the state budget, by June 30, 2009. State officials said that they intend to apply for the State Fiscal Stabilization Funds sometime in the future. To provide guidance on key Recovery Act requirements and assure that the state is maximizing its access to and use of Recovery Act funds, a number of statewide teams have been formed to aid the planning process. The Governor’s office organized a team of policy advisors, information technology specialists, and agency program staff to work on the application, program administration, reporting, and accountability related to Recovery Act funds. This team is to ensure coordination with other offices, state agencies, and federal government entities and will work to ensure that Ohio appropriately applies for Recovery Act funding for which the state is eligible. In addition to the Governor’s teams, Ohio’s Office of Budget and Management (OBM) mandated that state agencies establish Recovery Act teams and recommended including fiscal, program, and compliance staff. The Governor also appointed an Infrastructure Czar to advise on the creation of an open, transparent process and to assist the state’s leaders in the strategic use of infrastructure dollars. As the infrastructure awards moved toward completion, state officials said, he turned his attention to assisting with competitive grant opportunities for entities in Ohio, including state agencies. 
The Czar will head a process for determining the most efficient and effective distribution of Recovery Act funds for competitive projects. The Ohio OBM will have primary responsibility for collecting and presenting financial data from state agencies through its Ohio Administrative Knowledge System (OAKS). OBM has issued guidance to state agencies on Recovery Act reporting requirements and risk management and accountability responsibilities. To ensure that Recovery Act funds are segregated from other program funds and accounted for separately, OBM will create a centralized system to report all accounting data through OAKS. To facilitate tracking, OBM assigns an OAKS program number (for both revenues and expenses) unique to Recovery Act funds. OBM plans to develop a series of program reports that state agencies can use to regularly monitor Recovery Act revenues and spending metrics to ensure that each agency is in compliance. Although OAKS will allow the state to tag Recovery Act funding, in many cases state agencies will rely on grantees and contractors to track the funds to their end use. Because the state intends to code each Recovery Act funding stream separately, and because these recipients typically manage more than one funding stream at a time, state officials said that the recipients should be able to track Recovery Act funds and other funding sources separately. However, some state departments may not be able to rely on data from a number of the complex information systems they use. For example, the fiscal year 2007 single state audit identified material weaknesses with a number of the systems that ODJFS uses to record and process eligibility and financial information for all of its major federal programs. 
Auditors found that without sufficient, experienced internal personnel possessing the appropriate technical skills to independently analyze, evaluate, and test these complex information systems, ODJFS management may not be reasonably assured that these systems are processing transactions accurately. In its response, ODJFS replied that it did not have the resources to create a separate independent office, but said that it had protocols in place to provide some assurance that its systems were processing transactions accurately. State officials said they are aware of the weakness identified and are taking action to remedy it. Further, OBM has instructed its own internal audit office to provide additional resources to assist the agency. Moreover, state and local officials we talked to raised concerns about the ability of some localities to track Recovery Act funds to their end use. Specifically, they raised concerns about the capacity of grantees and contractors to track funds spent by subrecipients. For example, officials with the Ohio Department of Education said that they can track Recovery Act funds to school districts and charter schools, but they have to rely on the recipients’ financial systems to track funds beyond that. An official with the Columbus City Schools said that its accounting system might be challenged to meet enhanced reporting requirements. While district officials could provide assurances that Recovery Act funds were spent in accordance with program requirements, they could not report systemwide how each federal Recovery Act dollar was spent. Officials with the Columbus Metropolitan Housing Authority (CMHA) also noted limitations in how far they could reasonably be expected to track Recovery Act funds. They said they could track Recovery Act dollars to specific projects but could not systematically track funds spent by subcontractors on materials and labor. 
These officials added, however, that if they required the contractors to collect this information from their subcontractors, they would be able to report back in great detail. Still, without guidance from the federal government on specific reporting requirements, they were hesitant to burden their contractors with collecting the data. On March 27, 2009, OBM directed state agencies to put in place risk management strategies for programs receiving Recovery Act funds. The guidance stresses the importance of having risk mitigation strategies in place that assure that (1) management controls are operating to identify and prevent wasteful spending and minimize fraud, waste, and abuse; (2) adequate program monitoring by qualified personnel occurs; (3) awards are competed; (4) revenues and expenses are accurately reported; and (5) cost overruns and improper payments are minimized. To ensure that existing safeguards are followed, OBM’s Office of Internal Audit (OIA) plans to (1) provide training and education to state agency personnel, (2) assess the adequacy and effectiveness of the current internal control framework, (3) test whether state agencies adhere to the current framework, and (4) coordinate multiagency reviews with both federal and state officials. According to OIA officials, pursuant to its statutory implementation plans, OIA will increase its internal audit staff from the current 9 to 33 by transferring internal audit personnel from other state agencies and hiring new staff by July 2009. OBM officials said that the increase in OIA staff will help provide the resources needed to implement its objectives and ensure that current safeguards are in place and followed as the state manages its Recovery Act-funded programs. Separately, both the Ohio State Auditor’s office and the Ohio Office of Inspector General are to provide independent reviews of the use of Recovery Act funds. 
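The fund-segregation approach described above, in which each revenue and expense record carries a program code unique to Recovery Act funds so that Recovery Act activity can be reported separately from other funding streams, can be sketched in a few lines. This is an illustration only: the report describes OAKS's approach no further than "a program number unique to Recovery Act funds," so the field names and codes below are hypothetical.

```python
from collections import defaultdict

# Hypothetical program codes; the actual OAKS coding scheme is not
# described in this report beyond a unique Recovery Act program number.
RECOVERY_ACT_CODE = "ARRA-2009"

def summarize_by_program(transactions):
    """Sum revenues and expenses per program code, keeping Recovery Act
    activity separate from other funding streams in the same ledger."""
    totals = defaultdict(lambda: {"revenue": 0.0, "expense": 0.0})
    for txn in transactions:
        totals[txn["program"]][txn["kind"]] += txn["amount"]
    return dict(totals)

# Example: one agency ledger mixing Recovery Act and regular federal funds.
ledger = [
    {"program": RECOVERY_ACT_CODE, "kind": "revenue", "amount": 1_000_000.0},
    {"program": RECOVERY_ACT_CODE, "kind": "expense", "amount": 250_000.0},
    {"program": "FED-TITLE1", "kind": "revenue", "amount": 500_000.0},
    {"program": "FED-TITLE1", "kind": "expense", "amount": 100_000.0},
]

totals = summarize_by_program(ledger)
print(totals[RECOVERY_ACT_CODE])  # {'revenue': 1000000.0, 'expense': 250000.0}
```

Because the program code travels with every entry, a report restricted to the Recovery Act code cannot inadvertently commingle other funding streams, which is the property the state's guidance requires.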
The Ohio State Auditor’s office has created a Web-based database for all state agencies and local governments to report on Recovery Act funding and project expenditure activity. This database will also allow for public viewing of Recovery Act fund activity in the future. The State Auditor plans to use this information in helping assess risks and determine which programs to test as part of its single audit requirements. In addition, the State Auditor’s office plans to conduct interim audit work on controls and compliance at various state agencies and local governments. According to state officials, as part of House Bill 2, the Ohio General Assembly created in the Office of Inspector General the position of Deputy Inspector General for funds received through the Recovery Act. The Deputy Inspector General is charged with monitoring state agency distribution of Recovery Act funds, conducting a program of random reviews of the processing of contracts associated with Recovery Act projects, and investigating wrongful acts or omissions committed by officers, employees, or contractors. OBM officials said that the emphasis on measuring the impact of certain Recovery Act funding has focused, thus far, on job creation; however, they noted that there are other goals of the Recovery Act. They argued that without comprehensive reporting guidance, states will struggle to assess impact on some of these other outcomes. States will not be able to go back later in the process to assess the impact of the Recovery Act on these other outcomes if they do not have guidance on what data to collect. While some state agencies have identified options for reporting on job creation, there are concerns about the soundness of some of the methodologies. The Ohio Department of Transportation, for example, identified a study from 1979 that projects how many jobs will be created by a given expenditure. 
Other models have also been identified; however, in the absence of uniform guidance from the federal government, Ohio officials are concerned that states and localities will use a variety of methods that will ultimately not be comparable and will make nationwide assessment of the Recovery Act difficult. We provided the Governor of Ohio with a draft of this appendix on April 17, 2009. The Chief Legal Counsel for OBM responded for the Governor on April 20, 2009. In general, the comments were either technical or status updates. The Auditor of State also reviewed the draft and provided technical suggestions. We incorporated these comments, as appropriate. In addition to the contacts named above, Bill J. Keller, Assistant Director; Sanford F. Reigle, Analyst-in-Charge; Matthew Drerup; Laura Jezewski; Myra Watts-Butler; Lindsay Welter; Charles Willson; and Doris Yanger made major contributions to this report. Use of funds: An estimated 90 percent of Recovery Act funding provided to states and localities nationwide in fiscal year 2009 (through Sept. 30, 2009) will be for health, transportation, and education programs. The three largest funding categories are the Medicaid increased Federal Medical Assistance Percentage (FMAP) grant awards, the State Fiscal Stabilization Fund, and highways. Medicaid Federal Medical Assistance Percentage (FMAP) Funds: As of April 3, 2009, the Centers for Medicare & Medicaid Services (CMS) had made about $1 billion in increased FMAP grant awards to Pennsylvania. As of April 3, 2009, Pennsylvania has drawn down about $330.8 million, or nearly 32 percent of its initial increased FMAP grant awards. Officials plan to use funds made available as a result of the increased FMAP grant awards to help cover the state’s increased Medicaid caseload, ensure prompt claims payments, and offset Pennsylvania’s general fund budget deficit. Pennsylvania was apportioned about $1.0 billion for highway infrastructure investment on March 2, 2009, by the U.S. 
Department of Transportation. As of April 16, 2009, the U.S. Department of Transportation had obligated $308.6 million for 108 Pennsylvania projects. As of April 16, 2009, the Pennsylvania Department of Transportation had advertised competitive bids on 97 projects totaling about $260 million, and the earliest contract was awarded on March 20, 2009. These projects include activities such as highway repaving as well as bridge replacement and painting. Pennsylvania will request reimbursement from the U.S. Department of Transportation as the state makes payments to contractors. U.S. Department of Education State Fiscal Stabilization Fund (Initial Release): Pennsylvania was allocated about $1.3 billion from the initial release of these funds on April 2, 2009, by the U.S. Department of Education. Before receiving the funds, states are required to submit an application that provides several assurances to the Department of Education. These include assurances that they will meet maintenance of effort requirements (or that they will be able to comply with waiver provisions) and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. Pennsylvania plans to submit its application by April 25, 2009. The Governor plans to use the funds to increase state funding for school districts and restore state funding for public colleges. The Governor also plans to use some funds to pay operating costs for the Department of Corrections. 
Pennsylvania is receiving additional Recovery Act funds under other programs, such as programs under Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA) (commonly known as No Child Left Behind); programs under the Individuals with Disabilities Education Act (IDEA); the Transit Capital Assistance and Fixed Guideway Infrastructure Investment programs; the Workforce Investment Act; the U.S. Department of Housing and Urban Development Neighborhood Stabilization Program; the U.S. Department of Justice Edward Byrne Memorial Justice Assistance Grants; and the U.S. Department of Energy Weatherization Assistance Program. Plans to use these funds are described throughout this appendix. Safeguarding and transparency: On March 4, 2009, the Governor named the Secretary of General Services as the state’s Chief Implementation Officer, responsible for the effective and efficient delivery of all Recovery Act-funded initiatives and projects. Additionally, the Governor set up a Recovery Management Committee to report to him on the progress of recovery efforts. According to the Chief Implementation Officer, this body meets regularly to discuss the status of the program, troubleshoot areas of concern, and report to the Governor on progress. In addition, Pennsylvania officials said they would use their existing integrated accounting system to track Recovery Act funds flowing through the state government. Although Pennsylvania plans to publicly report its Recovery Act spending through a Web site (www.recovery.pa.gov), officials have said that the state may not be aware of all Recovery Act funds sent directly by federal agencies to municipalities and independent authorities. In late March 2009, the Governor appointed a Chief Accountability Officer who will be responsible for reporting on Pennsylvania’s use of Recovery Act funds. Pennsylvania plans to conduct several risk assessments for Recovery Act programs by June 2009. 
Pennsylvania’s Auditor General also anticipates auditing and investigating Recovery Act funds received by state and local agencies. Assessing the effects of spending: Pennsylvania state departments are in the early stages of developing plans to assess the effects of Recovery Act spending. According to state officials, they are awaiting further guidance from the federal government, particularly related to measuring job creation. Pennsylvania has begun to use some of its Recovery Act funds, as follows: Increased Federal Medical Assistance Percentage Funds: Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the Federal Medical Assistance Percentage (FMAP). Across states, the FMAP may range from 50 percent to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. 
Generally, for federal fiscal year 2009 through the first quarter of federal fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of this increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. As of April 1, 2009, Pennsylvania had drawn down $330.8 million in increased FMAP grant awards, which is almost 32 percent of its awards to date. Pennsylvania officials reported that they plan to use funds made available as a result of the increased FMAP to cover the state’s increased Medicaid caseload and maintain current populations and benefits. State officials also noted that such funds are allowing them to forgo reductions that they otherwise would have had to make because state funding streams are smaller this year. For example, Pennsylvania officials indicated that the state's share of Medicaid expenditures is 20 percent of its state revenues; thus this funding fluctuates as the economy rises and falls. Funding made available as a result of the increased FMAP will also be used to offset the state’s general fund deficit and to help ensure that the Medicaid prompt payment requirements are met. Pennsylvania officials noted that early notification from CMS regarding any reporting forms that the state will be required to complete would be beneficial to ensure that the state’s accounting systems are properly aligned to produce needed reports. 
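The three-part increased FMAP described above can be sketched as a simple calculation. This is an illustrative simplification, not the actual CMS computation: the statutory unemployment adjustment involves tiered thresholds that are not reproduced here, so the unemployment add-on is treated as a hypothetical input.

```python
def increased_fmap(regular_fmap, prior_year_fmap, unemployment_addon=0.0):
    """Illustrative sketch of the Recovery Act's increased FMAP.

    All values are percentages. The act provides (1) maintenance of the
    prior year's FMAP (a "hold harmless"), (2) an across-the-board
    increase of 6.2 percentage points, and (3) a further increase for
    states with a qualifying rise in unemployment. The unemployment
    add-on is a hypothetical input here; its actual tiers are set by
    statute and are not modeled in this sketch.
    """
    # (1) Hold harmless: start from the higher of the current and prior year FMAPs.
    base = max(regular_fmap, prior_year_fmap)
    # (2) and (3): apply the 6.2-point increase plus any unemployment add-on.
    return base + 6.2 + unemployment_addon

# Illustration: a state at the 50 percent floor, with no hold-harmless
# benefit and no unemployment add-on, would see its FMAP rise to 56.2 percent.
example = increased_fmap(regular_fmap=50.0, prior_year_fmap=50.0)
print(round(example, 2))  # 56.2
```

Because the quarterly calculation, the hold-harmless baseline, and the unemployment tiers all interact in the statute, actual state FMAPs would need to be taken from CMS's published quarterly figures rather than recomputed this way.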
Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways and other surface transportation projects. States must follow the requirements for the existing program, and in addition, the governor must certify that the state will maintain its current level of transportation spending, and the governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. Pennsylvania provided the first of these certifications but noted that the state’s level of funding was based on “planned non-bound state expenditures” (sic) and represented the best information available at the time of the state’s certification. As of April 16, 2009, the U.S. Department of Transportation had obligated $308.6 million for 108 Pennsylvania projects. As of April 16, 2009, the Pennsylvania Department of Transportation (PennDOT) had advertised 97 projects for competitive bid totaling about $260 million. These projects included highway repaving as well as bridge replacement and painting. Pennsylvania will request reimbursement from the U.S. Department of Transportation as the state makes payments to contractors. U.S. Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF) to be administered by the U.S. Department of Education (Education). The SFSF provides funds to states to help avoid reductions in education and other essential public services. 
The initial award of SFSF funding requires each state to submit an application to Education that assures, among other things, it will take actions to meet certain educational requirements such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. Pennsylvania’s initial SFSF allocation is $1.3 billion. According to the Chief Implementation Officer, Pennsylvania plans to file its application for these monies by April 25, 2009. According to the Governor’s proposal, $418 million in SFSF will support state funding to elementary and secondary schools and $317 million to improve basic programs operated by local educational agencies in state fiscal year 2010. Similarly, $44 million will help restore state funding for higher education. The Governor proposes to spend $173 million on Department of Corrections operations in state fiscal year 2009 and reserve $324 million for appropriation in fiscal year 2010. Faced with declining revenue projections since fiscal year 2008, Pennsylvania officials believe that federal funds are critical to help alleviate the immediate fiscal pressure and help balance the state budget. Based on February 2009 projections, Pennsylvania faces a $2.3 billion shortfall in fiscal year 2009, largely because of lower-than-expected revenues. Since September 2008, the Governor has cut state spending by more than $500 million, imposed a state hiring freeze, and banned out-of-state travel and new vehicle purchases. Pennsylvania plans to draw $250 million from the state rainy day fund—one-third of the current balance—to help avoid further cuts in fiscal year 2009. According to Pennsylvania’s Secretary of the Budget, state revenues continue to decline and this may necessitate using even more rainy day funds during the current fiscal year. For fiscal year 2010, the Governor proposes to draw $375 million from the rainy day fund. 
The Governor’s budget proposal for fiscal year 2010, among other things, includes program cuts, layoffs, and reduced contributions for employees’ health care. According to budget documents, federal fiscal relief would be used to prevent even deeper cuts throughout the budget. As part of the budget process, the Pennsylvania General Assembly generally must appropriate federal funds, including Recovery Act amounts. The Governor’s office and state agencies have begun planning for the use of Recovery Act funds in Pennsylvania. As noted previously, in March 2009, the Governor named a Chief Implementation Officer who is responsible for the effective and efficient delivery of all Recovery Act-funded initiatives and projects. According to the Chief Implementation Officer, the Recovery Management Committee meets regularly to discuss the status of the program, troubleshoot areas of concern, and report to the Governor on the progress of recovery efforts. Pennsylvania plans to apply for competitive grants available under the Recovery Act, and the Governor’s Secretary for Planning and Policy is coordinating this strategy. Some state programs have received federal Recovery Act funds, and in some cases they have made funding decisions. For example, the U.S. Department of Transportation, through the Federal Highway Administration and the Federal Transit Administration, published final apportionments for the federal-aid highway program and the Transit Capital Assistance and Fixed Guideway Infrastructure Investment Programs on March 2 and March 5, 2009, respectively. PennDOT officials said that they have been working closely with metropolitan and rural transportation planning organizations to develop spending plans. On March 17, 2009, PennDOT released its final list of 241 highway and bridge projects to be funded by the $1.0 billion Recovery Act investment in highways. 
Youth activities under the Workforce Investment Act have also received a funding allocation, and local Workforce Investment Boards must quickly establish summer youth programs with the Recovery Act funding. According to local officials in the Harrisburg region, planning challenges include identifying eligible youth (some of whom are out of school and difficult to locate), identifying employment opportunities that fit the requirements of the Recovery Act and the Workforce Investment Act, and performing required background checks on staff before the summer program begins. The Pennsylvania Department of Education estimated allocations for school districts while awaiting its final Recovery Act allocations. The Recovery Act funding will not be available to schools until the state General Assembly appropriates the funds. Program officials with whom we spoke expressed varying levels of satisfaction with the guidance they had received from federal agencies, but some agencies were waiting for federal guidance to make spending and programmatic decisions. Officials from PennDOT stated that they have received guidance and have been able to administer Recovery Act funds. For the two new low-income housing tax credit financing programs created under the Recovery Act, the Pennsylvania Housing Finance Authority received initial information from the U.S. Department of Housing and Urban Development but no information from the U.S. Department of the Treasury; the housing finance agency is waiting for formal guidance before releasing implementation plans. Pennsylvania Department of Education officials also stated that although they received guidance on April 1, 2009, from the U.S. Department of Education on Recovery Act funds, they are concerned about certain provisions, such as the maintenance of effort provision, and are anticipating additional guidance. Some agency officials were unclear about whether Recovery Act funds could be used to fund administrative costs. 
Even though a good portion of the Recovery Act funds is flowing through established grant programs, some state agency officials were concerned about paying for the increased administrative costs associated with program implementation, including increased reporting and tracking requirements. For example, Pennsylvania Department of Education officials were unclear whether Recovery Act funds could be spent on state administrative costs and anticipated applying to the U.S. Department of Education for a waiver for these costs. State department officials were specifically concerned that they might need to build an entirely new reporting system to evaluate teachers and principals to meet Recovery Act requirements. Pennsylvania Department of Community and Economic Development officials said they had not received guidance from the U.S. Department of Housing and Urban Development about implementation of the Recovery Act portion of the Neighborhood Stabilization Program, and were unsure how much of the Recovery Act funds could be used for administrative purposes. PennDOT officials told us that, in some instances, non-Recovery Act funds were used to pay administrative costs for Recovery Act initiatives. This was the case in hiring two consultants to assess potential transit projects for Recovery Act funding. Pennsylvania has entities responsible for tracking, monitoring, and overseeing financial expenditures. The Office of the Budget oversees the state’s uniform accounting, payroll, and financial reporting systems. Pennsylvania is reorganizing and centralizing its internal audit and comptroller functions within the Governor’s Office of the Budget. The state’s elected Treasurer has a pre-audit function to review disbursements to be paid out by state agencies prior to payment. The state Inspector General—who works for the Governor—is charged with investigating fraud, waste, abuse, and mismanagement. 
The state’s elected Auditor General, who is responsible for ensuring that all state money is spent legally and properly, conducts performance audits, financial audits, and investigations of state and local government entities. The Auditor General also partners with an accounting firm to perform Pennsylvania’s annual single audit of the federal money that Pennsylvania receives to ensure the funds are spent according to federal laws and guidelines. Pennsylvania will use its existing accounting system to track Recovery Act funds, and state officials are confident that it will adequately identify Recovery Act funds received and how they are used. Pennsylvania has an enterprise resource planning (ERP) system that is used by all state agencies to account for federal and state funding. The integrated accounting system will be used to track Recovery Act funds. To accommodate the Recovery Act, on March 10, 2009, Pennsylvania’s Office of the Budget issued an administrative circular to all agencies under the Governor’s jurisdiction describing the specific accounting codes they must use to separately identify the expenditure of Recovery Act funds. Individual agencies are also taking action to ensure that Recovery Act funds are tracked separately. For example, PennDOT issued an administrative circular in March 2009 that established specific Recovery Act program codes to track highway and bridge construction spending. The department also established four new funds to account for Recovery Act fund reimbursements to local governments. Pennsylvania officials said that the state will rely on subrecipients to meet reporting requirements at the local level. Recipients and subrecipients can be local governments or other entities such as transit agencies. For example, about $367 million in Recovery Act money for transit capital assistance and fixed guideway infrastructure investment was apportioned directly to areas such as Philadelphia, Pittsburgh, and Allentown. 
State officials also told us that the state would not track or report Recovery Act funds that go straight from the federal government to localities and other entities, such as public housing authorities. Past audits have identified vulnerabilities in Pennsylvania’s financial reporting and noncompliance with requirements for federal money. Pennsylvania’s fiscal year 2007 single audit report had an unqualified opinion on financial reporting, but auditors found material weaknesses in the accounting controls. For example, auditors found weaknesses in segregating duties among staff and monitoring user activities to reduce the risk of inappropriate changes to accounting data or misappropriation of assets. Pennsylvania’s Secretary of the Budget told us that to mitigate this risk, internal auditors now are to work closely with the Office of Administration and the Office of Information Technology on all new system changes to ensure internal controls are built into the application. The single audit scope was limited in that auditors could not obtain key documentation needed to check compliance with procurement regulations for competitively bid contracts for goods and services. The Secretary of the Budget told us that, beginning in January 2009 under Pennsylvania’s Right to Know law, information related to losing bids and scoring by participants of the procurement committees will now be available for audit purposes. In 2007, Pennsylvania had a qualified opinion due to noncompliance with requirements of major federal programs. For example, auditors identified 13 instances in which state agencies, such as the Department of Community and Economic Development, did not adequately monitor subrecipients or failed to document procedures for performing on-site monitoring for subrecipients or subgrantees. It is important to correct these weaknesses for Pennsylvania to be able to provide reasonable assurance that its subrecipients comply with requirements for Recovery Act funding, when appropriate. 
Pennsylvania’s Secretary of the Budget told us that the Office of the Budget monitors the agencies’ corrective action plans and provides additional program monitoring and training for agency program staff as appropriate. As of April 2009, the Office of the Budget’s auditors were reviewing the status of implementing corrective action plans for past single audit findings. Based on experience with existing structures, Pennsylvania officials also cited potential risks with programs receiving Recovery Act funding. Pennsylvania’s Governor told us that he is concerned that school districts may use Recovery Act funds to start or expand education programs that are fiscally unsustainable when the federal funds expire. Several Pennsylvania officials, including the Governor, were specifically concerned about the Weatherization Assistance Program. Under the Recovery Act, the program is receiving a significant increase in funding and will make substantial use of contractors to weatherize properties. A 2007 Pennsylvania Auditor General report found that the program had, among other things, weak internal controls, weaknesses in contracting, and inconsistent verification and inspection of subcontractor work. According to the Chief Implementation Officer, Pennsylvania plans to conduct several risk assessments by June 2009, including assessments of potential contractor capacity challenges for transportation projects and the capacity of current weatherization providers and contractors. The Office of Chief Counsel is reviewing all construction contracts and grants to ensure compliance with the Recovery Act requirements as well as guidance issued by the U.S. Office of Management and Budget (OMB) and federal agencies. 
According to Pennsylvania’s Secretary of the Budget, the new Bureau of Audits within the Office of the Budget will develop a risk-based approach for Recovery Act audits with measurable criteria and develop a matrix of risks for each Recovery Act program by the end of June 2009. Pennsylvania has established structures to oversee Recovery Act funds and provide transparency to the public. On March 31, 2009, the Governor appointed a Chief Accountability Officer who will be responsible for reporting on Pennsylvania’s use of Recovery Act funds and working with the Office of the Budget to ensure funds are spent in accordance with Recovery Act requirements. To serve as a portal for transparency of state Recovery Act spending, Pennsylvania also established a Web site (www.recovery.pa.gov) that makes available updates on funding and solicits public input on funding use. The Chief Accountability Officer will be responsible for identifying ways to present visual evidence, such as photographs and mapping, to help citizens track Recovery Act projects in Pennsylvania. A new Pennsylvania Stimulus Oversight Commission was created by the Governor—by executive order on March 27, 2009—after outreach to the Pennsylvania congressional delegation, the state legislature, and others. In addition to the Chief Accountability Officer, the commission is composed of the Governor, the Recovery Act Chief Implementation Officer, four representatives selected by Pennsylvania’s congressional delegation, members of each of the four caucuses in Pennsylvania’s General Assembly, and representatives from the Pennsylvania Chamber of Business and Industry, United Way of Pennsylvania, and Pennsylvania AFL-CIO. The commission was established to, among other things, monitor Pennsylvania’s efforts to ensure compliance with the Recovery Act and to review the state’s approach to allocating and disbursing funds, tracking funds, transparency, performance, and grants management and oversight. 
The commission met for the first time on March 31, 2009, and has not announced its oversight plans; the next commission meeting will be on April 23, 2009. Other state offices are generally not expecting new staff or resources for Recovery Act oversight. The Auditor General anticipates auditing and investigating Recovery Act funds received by state and local agencies. For example, the Auditor General will audit Recovery Act funds during the annual single audit and will initiate additional compliance audits for Recovery Act programs. The Auditor General observed that the Recovery Act did not provide funding for his office to undertake work related to the act. In addition, officials of the Auditor General's office have different views about what authority they have to audit federal money that flows directly to localities, such as housing authorities and municipalities. Pennsylvania is also in the process of reorganizing and centralizing its internal audit and comptroller functions within the Governor’s Office of the Budget. According to the Secretary of the Budget, the Bureau of Audits is not expected to dramatically change audit responsibilities in the state but rather provide a more focused, risk-based approach, particularly for Recovery Act funding. This office is expected to employ 95 people, about 70 of whom will be field auditors. The remaining staff will be responsible, among other things, for subrecipient desk reviews and agency risk assessments. The number of staff devoted to program oversight and implementation in some state agencies has been affected by the state’s hiring freeze. For example, Workforce Investment Act program officials said monitoring efforts will need to increase under the Recovery Act, and they have applied to the Governor for a waiver to hire additional staff. 
Department of Community and Economic Development officials told us that they have requested to hire 12 people, 3 or 4 of whom will be devoted to Recovery Act work related to the Neighborhood Stabilization Program. The Pennsylvania Commission on Crime and Delinquency, which administers the Edward Byrne Memorial Justice Assistance Grants, is trying to maximize the use of its existing staff and sought advice from the U.S. Department of Justice Inspector General; the latter will give a presentation, share checklists, and train program staff in monitoring subrecipients. PennDOT officials told us that they meet weekly to oversee the highway and bridge program funded through the Recovery Act. These meetings cover such things as the status of obligating program funds and potential problems. The department also has a special “war room” that tracks each project in each state district. Agency officials stated that, although they are emphasizing the planning and allocating of Recovery Act funds quickly, they are aware of requirements to assess the economic and other impacts of these funds. The new Chief Accountability Officer will be responsible for developing and using performance measures to demonstrate outcomes associated with Recovery Act spending and projects. Some agency officials with whom we met—at the Pennsylvania Department of Education and the Department of Community and Economic Development—are generally waiting for additional guidance from the federal government on performance measures, especially on how to measure and report jobs created and sustained. We provided the Governor of Pennsylvania with a draft of this appendix on April 17, 2009. The Chief Implementation Officer and the Secretary of the Budget responded for the Governor on April 20, 2009. These officials provided clarifying and technical comments that we incorporated where appropriate. 
We also provided the Auditor General's staff with portions of the draft that addressed the Auditor General's past work and plans related to Recovery Act funding. We incorporated those technical comments as appropriate. In addition to the contacts named above, MaryLynn Sergent, Assistant Director; Richard Jorgenson, Analyst-in-Charge; Andrea E. Richardson; George A. Taylor, Jr.; Laurie F. Thurber; and Lindsay Welter made major contributions to this report. Use of funds: An estimated 90 percent of fiscal year 2009 Recovery Act funding provided to states and localities will be for health, transportation, and education programs. The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage (FMAP) awards, the State Fiscal Stabilization Fund, and highways. Medicaid Federal Medical Assistance Percentage (FMAP) Funds: As of April 3, 2009, the Centers for Medicare & Medicaid Services (CMS) had made approximately $1.45 billion in increased FMAP grant awards to Texas. As of April 1, 2009, the state had drawn down about $665.7 million, or 46 percent, of its initial increased FMAP grant awards. Texas officials noted that the funds made available as a result of the increased FMAP will allow the state to maintain the program’s level of service and eligibility standards in fiscal year 2009. Texas was apportioned about $2.25 billion for highway infrastructure investments on March 2, 2009, by the U.S. Department of Transportation. As of April 16, 2009, the U.S. Department of Transportation had obligated $533.7 million for 159 projects in Texas. According to Texas Department of Transportation officials, the department is scheduled to receive bids in April 2009 on 137 contracts that would total approximately $400 million in Recovery Act funds. Texas will request reimbursement from the U.S. Department of Transportation as the state makes payments to contractors. U.S. 
Department of Education State Fiscal Stabilization Fund: Texas was allocated about $2.66 billion from the initial release of these funds on April 2, 2009, by the U.S. Department of Education. Before receiving the funds, states are required to submit an application that provides several assurances to the Department of Education. These include assurances that they will meet maintenance of effort requirements (or that they will be able to comply with waiver provisions) and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. According to Texas officials, the state’s application likely would not be submitted before the state legislature (which is in session until June 1, 2009) has finalized an appropriation for public and higher education. Texas officials indicated that the state plans to use its allocated federal funds to assist in continuing the historical levels of support for elementary, secondary, and higher education in the state. Texas Education Agency officials said funds could be used, for example, to support efforts related to assessing school performance, teacher incentives, and teacher equity. Higher education officials anticipate using the funds to mitigate tuition and fee increases; support modernization, repair, and renovation of facilities; and provide incentive funding based on degrees awarded. Texas is receiving additional Recovery Act funds under other programs, such as programs under Title I, Part A of the Elementary and Secondary Education Act (ESEA), commonly known as No Child Left Behind; programs under the Individuals with Disabilities Education Act (IDEA); two programs of the U.S. 
Department of Agriculture—one for the administration of the Temporary Food Assistance Program and one for competitive equipment grants targeted to low-income districts from the National School Lunch program; housing programs, including weatherization assistance; and justice assistance grants. The status of plans for using selected funds is discussed throughout this appendix. Safeguarding and transparency: To help ensure accountability and transparency, the Texas legislature’s forthcoming general appropriations act—expected to be passed by June 2009 to function as the state’s fiscal 2010-2011 biennium budget—will have a provision for tracking Recovery Act funds allocated to the state, according to the executive and legislative branch officials we contacted in Texas. To provide additional accountability and transparency, the Comptroller of Public Accounts has established a centralized budget account (with a unique funding code) for Recovery Act funds and has also established a Web page, www.window.state.tx.us/recovery, with links to www.recovery.gov/. To further help ensure accountability and transparency, Texas officials suggested that federal authorities provide concurrent notification to the state’s key stakeholders—particularly the Office of the Governor, the Comptroller of Public Accounts, the State Auditor’s Office, and the Legislative Budget Board—when Recovery Act funds are periodically distributed to Texas agencies and/or localities. Also, Texas officials told us that despite U.S. Office of Management and Budget (OMB) guidance, the increased FMAP funds the state has received through the Recovery Act, to date, have not been separately identified by the federal government. Assessing the effects of spending: Texas officials commented that—under the state’s performance-based budgeting process—agencies already have measures in place for assessing the performance of programs. 
Officials also believe that the state’s current monitoring and control processes and procedures are adequate to administer initiatives funded under the Recovery Act. The officials recognized, however, that some adjustments to performance measures may be needed for assessing the impact of Recovery Act funds. Texas has begun to use some of its Recovery Act funds, as follows: Increased Federal Medical Assistance Percentage Funds: Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the Federal Medical Assistance Percentage (FMAP). Across states, the FMAP may range from 50 percent to no more than 83 percent, with poorer states receiving a higher federal matching rate than wealthier states. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. Generally, for federal fiscal year 2009 through the first quarter of federal fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for (1) the maintenance of states’ prior year FMAPs, (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs, and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. 
The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of the increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. As of April 1, 2009, Texas had drawn down $665,665,000, or 46 percent, of its initial increased FMAP grant awards of $1,448,824,000. Texas officials commented that the funds made available as a result of the increased FMAP will allow the state to maintain the program’s level of service and eligibility standards and cover increased caseloads, among other uses. Texas officials indicated that guidance from CMS is needed regarding whether certain programmatic changes being considered by Texas, such as a possible extension of the program’s eligibility period, would affect the state’s eligibility for increased FMAP funds. Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways and for other surface transportation projects. States must follow the requirements for the existing programs, and in addition, the governor must certify that the state will maintain its current level of transportation spending, and the governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. Texas provided this certification but noted that the state’s level of funding was based on the best information available at the time of the state’s certification. 
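As a quick arithmetic check, the 46 percent drawdown share reported above for Texas follows directly from the two dollar amounts in the text:

```python
# Texas's increased FMAP drawdown as of April 1, 2009 (figures from the text).
drawn_down = 665_665_000       # amount drawn down
initial_awards = 1_448_824_000  # initial increased FMAP grant awards

share = drawn_down / initial_awards
print(f"{share:.0%}")  # prints "46%"
```

The unrounded ratio is about 45.9 percent, which rounds to the 46 percent reported in this appendix.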
Texas was apportioned about $2.25 billion of Recovery Act funds for highway infrastructure investments on March 2, 2009, by the U.S. Department of Transportation. As of April 16, 2009, the U.S. Department of Transportation had obligated $533.7 million of Recovery Act funds for 159 projects in Texas. According to Texas Department of Transportation officials, the department is scheduled to receive bids in April 2009 on 137 contracts that would total approximately $400 million in Recovery Act funds. Texas will request reimbursement from the U.S. Department of Transportation as the state makes payments to contractors. U.S. Department of Education State Fiscal Stabilization Fund: The Recovery Act created a State Fiscal Stabilization Fund (SFSF) to be administered by the U.S. Department of Education (Education). The SFSF provides funds to states to help avoid reductions in education and other essential public services. The initial award of SFSF funding requires each state to submit an application to Education that assures, among other things, it will take actions to meet certain educational requirements such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. Texas’ initial SFSF allocation is $2,662,203,000. According to Texas officials, the state generally plans to use its SFSF allocation to assist in continuing the historical levels of support for elementary, secondary, and higher education in the state. In April 2009, officials from the Office of the Governor informed us that the state was in the process of preparing its application for submission to the U.S. Department of Education—and that the application would reflect the fact that providing funding for public education is a priority in the state. 
The officials noted that the state’s application likely would not be submitted until the state legislature (which is in session until June 1, 2009) has finalized an appropriation for elementary, secondary, and higher education. Also, the officials commented that the state was awaiting further federal guidance on the appropriate use of Recovery Act funds. Generally, however, Texas Education Agency officials said that the federal funds could be used, for example, to support efforts related to high-quality assessment performance in schools, teacher incentives, and teacher equity. Also, according to the Texas Higher Education Coordinating Board, public institutions of higher education in Texas anticipate expending Recovery Act funds for three purposes—mitigating tuition and fee increases; supporting modernization, repair, and renovation of facilities; and providing incentive funding based on degrees awarded. To provide tracking and oversight of the Recovery Act funds, board officials commented that existing systems for implementing policies for accountability, internal controls, compliance, and reporting would be leveraged to the maximum extent possible to avoid the administrative burden associated with creating a completely new system. These officials explained that the proposed uses of the Recovery Act funds are similar to those of other well-established programs within the agency. Overall, throughout the multiyear time frame covered by the Recovery Act, Texas’ share of the total federal funds is estimated to be more than $15 billion for supporting a variety of program areas, such as health and human services, state fiscal stabilization, transportation, and education. (See table 9.) 
In his letter certifying acceptance of federal Recovery Act funds, the Texas Governor voiced opposition to “using these funds to expand existing government programs, burdening the state with ongoing expenditures long after the funding has dried up.” Similarly, during our review in Texas, legislative branch officials generally acknowledged that most of the federal Recovery Act funds appear to be one-time in nature and that the state must avoid spending the funds for ongoing projects that would result in unsustainable future costs to the state’s budget. An illustration of such avoidance involves unemployment insurance. While the Texas Governor accepted some Recovery Act funds for unemployment insurance, he did not request Unemployment Insurance Modernization funds because the Governor believed that receiving those funds would place additional tax burdens on businesses, which would impede job creation and hamper the economy. Even though Texas generally continues to fare better economically than most states, nearly all available data suggest that the Texas economy is in recession, according to the Federal Reserve Bank in Dallas. In January 2009, the Office of the Comptroller of Public Accounts reported that the state’s fiscal 2010-2011 biennium budget will have $9 billion less in revenue than the current biennium budget. For perspective, officials with the Governor’s office told us that the $9 billion represents a 5 percent adjustment to the budget. In January 2009, anticipating that Texas faced a likely budget shortfall, the co-chairs of the state’s Legislative Budget Board requested that state agencies look for ways to reduce fiscal year 2009 expenditures by 2.5 percent. The co-chairs further noted that the state legislature should prudently plan on having a reasonable reserve in the state’s economic stabilization fund so that the state does not face a large deficit in the next biennium, ending August 31, 2011. 
In response to the co-chairs’ request for ways to reduce spending in fiscal year 2009, state agencies identified approximately $396 million in potential budget reductions based on hiring freezes, reduced services, delayed capital purchases, and other cost-cutting efforts. At the time of their request, the co-chairs noted that the Recovery Act—which was being debated in Washington, D.C.—could not responsibly be factored into the state’s budget process because many details were not known. In discussions with our review team in March 2009, representatives of the Office of the Lieutenant Governor commented that because of Recovery Act funds, state agencies were not required to implement the 2.5 percent spending reductions anticipated for state fiscal year 2009 and, further, the state did not have to tap into its rainy day fund. The representatives told us that absent the availability of Recovery Act funds, state agencies likely would have been asked to make cuts of about 10 percent for the fiscal 2010-2011 biennium budget, in addition to the state drawing upon the rainy day fund. On the other hand, officials representing the Office of the Governor commented that budget deficit situations do not necessarily result in the state using its rainy day fund. The officials stressed that—to meet the requirement to pass a balanced budget—a variety of other solutions could be considered, such as budget reallocations among state agencies and programs, as well as spending cuts. As an example, these officials noted that even though the state’s overall budget was reduced in 2003, the state raised education spending by $1 billion that year. Additionally, the officials explained that use of the rainy day fund is not an option readily available because it requires approval by two-thirds of the state legislature. Texas is taking various steps to help ensure accountability and transparency and address areas of vulnerability potentially associated with Recovery Act spending. 
Texas officials noted that Recovery Act funding will flow generally through existing federal-state agency partnerships or programs. Thus, to the extent possible, the state plans to use existing systems, processes, or mechanisms to provide Recovery Act funding accountability and transparency, according to the executive and legislative branch officials we contacted in Texas. In further reference to accountability and transparency, oversight of federal Recovery Act funds in Texas involves various stakeholders, including the Office of the Governor, the State Auditor’s Office, and the Office of the Comptroller of Public Accounts as well as two entities established within the Texas legislature specifically for this purpose—the House Select Committee on Federal Economic Stabilization Funding and the House Appropriations’ Subcommittee on Stimulus. Also, according to executive and legislative branch officials in Texas, the state plans to ensure that the forthcoming biennial general appropriations bill has a provision designed to specifically facilitate the tracking of federal Recovery Act funds distributed to Texas—that is, the act will have a separate section (“article”) that identifies, by applicable state agency, Recovery Act funds allocated to Texas. At the time of our study in April 2009, the Texas legislature was in session (81st regular session) and had not finished its work to complete and submit to the Governor a general appropriations bill for the state’s fiscal 2010-2011 biennium (Sept. 1, 2009, through Aug. 31, 2011). To further facilitate tracking, in March 2009, the Office of the Comptroller of Public Accounts established a centralized budget account for federal Recovery Act funds, with a unique funding code (0369). In turn, according to Texas officials, state agencies are modifying their financial systems to enable tracking of Recovery Act funds. 
Also, after the Recovery Act passed, the Office of the Governor began hosting regularly scheduled meetings (twice weekly) of a Stimulus Working Group comprising representatives of major state agencies to help ensure statewide communication of the need for accountability and transparency regarding Recovery Act funds. Similarly, a periodic forum of the internal audit staff of Texas state agencies serves as another means of statewide communication. Also, in March 2009, the Office of the Comptroller of Public Accounts scheduled training regarding federal awards and financial statements—training that included representatives from the Office of the Governor to discuss Recovery Act funds. Further, the Comptroller’s Office plans to hire 5 to 10 additional staff to help account for Recovery Act funds, according to office officials. In April 2009, the Comptroller’s Office issued policies and procedures to state agencies related to use and subsequent reporting on Recovery Act funds. The State Auditor’s Office is taking additional steps to ensure accountability. Anticipating that federal Recovery Act funding will increase its scope of responsibilities, the State Auditor’s Office plans to hire 10 additional staff (9 auditors and 1 investigator). The office intends to audit Recovery Act funds through the Single Audit of the State of Texas’ expenditures of federal awards—that is, the audit required by the Single Audit Act and to which OMB Circular A-133, Audits of States, Local Governments, and Non-Profit Organizations, relates. Also, the State Auditor’s Office may conduct discretionary audits based, for example, on (1) discussions with internal auditors at state agencies or (2) risk assessments that consider previously reported material weaknesses in program compliance and internal controls, as well as risk assessments of programs that have not been tested before. 
Furthermore, the State Auditor’s Office noted that, as warranted, it pursues leads generated by complaint letters, hotline calls, and other information received from the public. In this regard, the State Auditor’s Office has Web-based and telephone “hotline” contacts for the general public to use in reporting possible fraud, waste, and abuse. In March 2009, the State Auditor told us that he was preparing a letter to send to state agencies regarding their general fraud responsibilities related to state funds. Moreover, in April 2009, the State Auditor’s Office informed us that a provision for reporting Recovery Act-related fraud is being added to the state’s fiscal 2010-2011 biennium appropriations bill. Among other requirements, this legislative provision, according to the State Auditor’s Office, will require that state agencies’ Web sites provide information on how to report suspected fraud, waste, and abuse directly to the State Auditor’s Office. According to state officials, in March 2009, a bill was filed in the Texas legislature that proposed creating a new office—the Texas Fiscal Responsibility Office—to oversee or monitor the spending of federal Recovery Act funds in Texas. As of early April 2009, the bill’s status had not been determined by the state legislature, which was scheduled to be in regular session until June 1, 2009. In response to our inquiry, the State Auditor’s Office provided us its views regarding accountability risks and other challenges potentially associated with the expenditure of federal Recovery Act funds in Texas. 
Based on its experience in auditing Texas’ use of previous federal awards and reporting internal control deficiencies or material weaknesses, the State Auditor’s Office noted that relatively high risks generally can be anticipated with certain types of programs—such as (1) new programs with completely new processes and internal controls, (2) programs that lack clear guidance on allowable uses of Recovery Act funds, (3) programs that distribute significant amounts of funds to local governments or boards, and (4) programs that rely on subrecipients for internal controls and monitoring. The State Auditor’s Office also noted that general economic stability and public education programs are considered to be high risk because they are new programs and federal guidance regarding the state’s appropriate use of the funds is uncertain. The State Auditor’s Office further noted that highway construction and workforce programs are also high risk because funds flow through contractors or to local entities, respectively. Officials from the Office of the Governor acknowledged that there are inherent risks associated with large, complex programs as well as programs that involve a large number of contracts and rely on subrecipients. However, the officials emphasized that Texas has experience in monitoring these types of programs, and officials noted that state agencies have controls in place to mitigate these risks. Regarding the Medicaid program, for example, the officials noted that in 2003, the Governor appointed an Inspector General for the Texas Health and Human Services Commission and charged the Inspector General with monitoring and preventing fraud, waste, and abuse. Also, the officials noted that the state’s Attorney General’s Office has a Medicaid Fraud Investigation Unit. The Texas State Auditor’s Office made a recommendation regarding the monitoring of subrecipients for risk in its most recent audit of the Texas Education Agency. 
The audit report did not find that subrecipients were improperly spending federal funds or were not meeting federal requirements; however, the report did note that the agency had “a limited number of resources available to monitor fiscal compliance.” The audit report recommended that the Texas Education Agency continue to add resources, within its budget constraints, to increase its monitoring of federal fiscal compliance. According to the State Auditor’s Office, following the audit in February 2009, the Texas Education Agency created a comprehensive correction plan, which the agency is implementing to address this resource issue. After the Recovery Act was enacted, the Texas Education Agency announced in March 2009 that it was creating a task force on federal stimulus and stabilization to coordinate the agency’s plans. Also in March 2009, the agency reported that it had established new accounting codes for tracking Recovery Act funds. Furthermore, the agency indicated that its application guidance for the temporary funding would specify that (1) grantees are expected to expend funds in ways that do not result in unsustainable continuing commitments after the funding expires and (2) the funds must be separately tracked and monitored. Generally, state officials recognized that a potential vulnerability can be associated with significant increases in funding levels. An example is the weatherization assistance program. As noted in table 1, of the estimated $1.2 billion in Recovery Act funds to be used for housing and infrastructure programs in Texas, weatherization assistance is the largest component program in terms of funding ($327 million). This funding level represents about a 25-fold increase over the estimated annual amount ($13 million) that existed before the Recovery Act, according to Texas Department of Housing and Community Affairs data. 
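The roughly 25-fold weatherization increase cited above follows directly from the two funding figures in the text (an illustrative check, not part of the report):

```python
# Texas weatherization assistance funding (amounts from the text).
recovery_funds = 327_000_000  # Recovery Act weatherization funding
prior_annual = 13_000_000     # estimated annual amount before the Recovery Act

print(round(recovery_funds / prior_annual))  # → 25
```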
Tentatively, the department indicated that its program implementation plan will include using an existing network of 34 weatherization assistance program providers (e.g., various community action entities) as well as awarding other contracts to cities with populations over 75,000. Under the program, subrecipients have 2 years to fully expend the weatherization funding. The Texas Department of Housing and Community Affairs noted that it intends to periodically assess progress and determine if unobligated funds need to be moved to high-performing entities. More broadly, a particular challenge or difficulty cited by the executive and legislative branch officials we contacted in Texas is the need for more guidance from OMB and other applicable federal agencies. Regarding quarterly recipient reports, for example, the officials said that there is a lack of clarity regarding whether all agencies in the state must submit reports to OMB or whether each state must submit a consolidated report. The officials also noted that it would be useful to have a reporting “template” that specifies the data fields or information to be reported. Furthermore, the officials commented that rather than simply being directed to a Web site, it would be helpful to have a centralized point of contact in Washington, D.C., for receiving and addressing questions. In April 2009, the Governor’s Office and State Comptroller of Public Accounts officials continued to express concerns to us about unclear guidance from federal agencies on allowable uses and reporting requirements. Also in April 2009, the officials informed us that the Office of the Governor had hired a consulting company, and six consultants had been staffed to track deadlines and work with state agencies to assist Texas in meeting Recovery Act reporting requirements. 
Regarding other opportunities for enhancing Recovery Act funding accountability, the executive and legislative branch officials we contacted in Texas advocated that various oversight entities in the state be concurrently notified when funds are distributed. As mentioned previously, in Texas, the state-level decision-making process regarding use (and accountability and transparency) of federal Recovery Act funds involves several entities or key stakeholders, particularly the Office of the Governor, the Office of the Comptroller of Public Accounts, the State Auditor’s Office, and the Legislative Budget Board. Generally, in our meetings with representatives of these entities, a common theme expressed has been a desire to be notified by federal authorities when Recovery Act funds are distributed to Texas state agencies and/or localities. The representatives stated that concurrent notification to the state’s key stakeholders would help to further ensure accountability and transparency. In April 2009, officials from the Office of the Governor and the State Comptroller’s Office told us that, in its disbursement of Recovery Act funds to the state, the federal government was not identifying these funds separately from other federal funds. The Texas officials cited increased FMAP funding as an example. Absent separate coding from the disbursing federal agency, the Texas officials said that the state relies on the Texas Health and Human Services Commission to inform the State Comptroller’s Office of what portion of the combined funds consists of Recovery Act funds. The Texas officials commented that it would be helpful if the federal government put in place the coding structure to identify Recovery Act funds separately from other federal funds—as they believe the Act requires—before Recovery Act funds are disbursed to Texas. 
The executive and legislative branch officials we contacted in Texas— including officials from the Office of the Governor, the Office of the Comptroller of Public Accounts, the State Auditor’s Office, the Legislative Budget Board, and various program agencies—recognized the importance of the state taking steps to assess the impact of Recovery Act funds in terms of preserving and creating jobs, assisting those individuals most impacted by the recession, and so forth. In late January 2009, for example, in preparing to implement the transportation components of the anticipated national economic recovery program, the Texas Transportation Commission recognized that a primary purpose of the recovery program is to “create and sustain jobs.” Texas officials commented that agencies in Texas—a state that has a performance-based budgeting process—already have performance measures in place for their respective programs and operations, although some Recovery Act-related adjustments or modifications may be needed. Texas Department of Transportation officials noted, for example, that contracts involving the use of Recovery Act funds will have special provisions requiring contractors to report on jobs created. These officials also cited potential difficulties in measuring the impact of Recovery Act funds used for programs that commingle these funds with other federal or state funds. Finally, Texas officials told us that the Governor’s Office has taken the lead in administering the state’s responsibilities under the Recovery Act. As mentioned previously, the Governor’s Office chairs a Stimulus Working Group with representatives from the state agencies that have a role under the Recovery Act. Texas officials were uncertain as to whether a specific agency would be designated to be responsible for compiling an overall assessment of the impact of Recovery Act funds in the state. 
The officials added, however, that the state’s legislature was still in session and that the forthcoming biennial general appropriations bill—which will have a separate section specifically for Recovery Act funds—could perhaps assign such responsibility to an agency. We provided the Governor of Texas with a draft of this appendix on April 17, 2009. A Senior Advisor, designated as the state's point of contact for the Recovery Act, responded for the Governor on April 20, 2009. In general, the Senior Advisor agreed with the information in this appendix but wanted us to provide more context for the views of the State Auditor regarding potential areas of vulnerability with Recovery Act funds. We added contextual perspectives to address this concern and the Senior Advisor’s belief that Texas is equipped to meet its responsibilities under the Recovery Act. The Senior Advisor also provided technical suggestions that we incorporated where appropriate. In addition to the contacts named above, Danny Burton, Assistant Director; K. Eric Essig, auditor-in-charge; Yecenia Camarillo; Camille Chaires; Sharhonda Deloach; Michael O’Neill; Daniel Silva; Gabriele Tonsil; and Christy Tyson made major contributions to this report.

Appendix XIX: Washington, D.C.

Use of funds: An estimated 90 percent of Recovery Act funding provided to states and localities nationwide in fiscal year 2009 (through Sept. 30, 2009) will be for health, transportation, and education programs. The three largest programs in these categories are the Medicaid Federal Medical Assistance Percentage (FMAP) awards, highways, and the State Fiscal Stabilization Fund. Medicaid Federal Medical Assistance Percentage (FMAP) Funds As of April 3, 2009, the Centers for Medicare and Medicaid Services (CMS) had made about $87.8 million in increased FMAP grant awards to the District of Columbia. As of April 1, 2009, the District had drawn down about $49.9 million, or about 57 percent of its initial increased FMAP grant awards. 
District officials plan to use funds made available as a result of the increased FMAP to cover an increased caseload, offset general fund deficits, and maintain current Medicaid eligibility and benefit levels. The District of Columbia was apportioned $123.5 million for highway infrastructure investment on March 2, 2009, by the U.S. Department of Transportation. As of April 16, 2009, the U.S. Department of Transportation had obligated $36.6 million for one project in the District of Columbia. The District of Columbia plans to use these funds for reviewed and vetted “shovel ready” projects, such as pavement restoration and resurfacing work on federal roadways, once the appropriate contracting processes have been completed. U.S. Department of Education State Fiscal Stabilization Fund The District of Columbia was allocated $89.4 million from the initial release of these funds on April 2, 2009, by the U.S. Department of Education. District officials intend to use these funds to increase aid across all schools in the District. As of April 2, 2009, about $59.9 million of this allocation was available for the District to draw down. Before receiving the funds, states are required to submit an application that provides several assurances to the U.S. Department of Education. These include assurances that they will meet maintenance of effort requirements or that they will be able to comply with waiver provisions, and that they will implement strategies to meet certain educational requirements, including increasing teacher effectiveness, addressing inequities in the distribution of highly qualified teachers, and improving the quality of state academic standards and assessments. As of April 15, 2009, the District was awaiting a response from the U.S. Department of Education on the District’s proposed plan for using the funds before submitting an application. 
In addition to the funding for these three programs, the District of Columbia is receiving Recovery Act funds under other programs, such as programs under Title I, Part A, of the Elementary and Secondary Education Act (ESEA), commonly known as the No Child Left Behind Act; programs under the Individuals with Disabilities Education Act (IDEA); and two programs of the U.S. Department of Agriculture—one for administration of the Temporary Food Assistance Program and one for competitive equipment grants targeted at low-income districts from the National School Lunch Program. The District’s plans for using these and other Recovery Act funds are discussed throughout this appendix. Safeguarding and transparency: The District plans to use its existing financial systems to track the use of Recovery Act funds, and plans to use an ongoing accountability program to monitor District agency efforts to ensure that funds are used as intended. District officials are working to correct 89 material weaknesses in internal controls over both financial reporting and compliance with requirements applicable to major federal programs that were identified in the Fiscal Year 2007 Single Audit Report for the District of Columbia. The major federal programs in which these weaknesses were identified include programs that will be receiving Recovery Act funds, such as Medicaid’s FMAP, ESEA Title I Education grants, and Workforce Investment Act programs. At present, it is not clear whether corrective actions will be completed before the Recovery Act funds are received by the District. This could increase the risk that Recovery Act funds may not be used properly. The District’s Inspector General has also identified a number of District agencies with internal control and management issues that place them at risk for misusing Recovery Act funds. The District has initiated a Recovery Act Web site to help ensure that its Recovery Act efforts are transparent to the public. 
Assessing the effects of spending: The District plans to assess the impact of Recovery Act funds by using the information in reports required by federal agencies under the Recovery Act, including information on the economic impact of the funds, such as on job creation. The District has provided initial guidance to city agencies on the tracking and use of Recovery Act funds and is awaiting further guidance from the federal government, particularly information related to measuring jobs. District officials stated that the Office of Management and Budget (OMB) should provide a common definition of “job” and a metric to measure the number of jobs that are created by Recovery Act funds. District officials are also concerned about the lack of guidance for the methodology of tracking the new jobs created.

The Mayor of the District of Columbia has established 13 work groups to oversee the use of Recovery Act funds in each program area. Each work group is led by the head of a District agency or department, or their designee, who reports to the City Administrator through his Recovery Act coordinator. The work groups will collaborate to make decisions on the use of Recovery Act funds. As of April 3, 2009, the District had been allocated about $240 million in Recovery Act funds. The City Administrator stated that the District is committed to taking full advantage of the opportunities provided by the Recovery Act, and is committed to doing so in a manner that is fiscally responsible, efficient, effective, and transparent, while addressing the goals of the statute and the needs of District residents. The District has begun to use the Recovery Act funds as follows.

The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of the increased FMAP may reduce the funds that states must use for their Medicaid programs, and states have reported using these available funds for a variety of purposes. 
As of April 1, 2009, the District of Columbia had drawn down $49.9 million in increased FMAP grant awards, which was 56.8 percent of its awards to date. District of Columbia officials reported that they plan to use funds made available as a result of the increased FMAP to cover an increased caseload, offset general fund deficits, and maintain current eligibility and benefit levels in the District’s Medicaid program. Transportation—Highway Infrastructure Investment: The Recovery Act provides additional funds for highway infrastructure investment using the rules and structure of the existing Federal-Aid Highway Surface Transportation Program, which apportions money to states to construct and maintain eligible highways and for other surface transportation projects. States must follow the requirements for the existing programs, and in addition, the governor or other appropriate chief executive must certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. As of March 2, 2009, the District’s Department of Transportation was apportioned $123.5 million in Recovery Act funds for highway infrastructure and has identified “shovel ready” projects for these funds. According to the District of Columbia’s certification, approximately $56 million in projects have been fully reviewed and vetted. As of April 16, 2009, the U.S. Department of Transportation had obligated $36.6 million for one District project—the demolition and reconstruction of the existing New York Avenue Bridge over the railroad. U.S. Department of Education State Fiscal Stabilization Fund: The initial award of SFSF funding requires each state to submit an application to Education that assures, among other things, it will take actions to meet certain educational requirements, such as increasing teacher effectiveness and addressing inequities in the distribution of highly qualified teachers. 
As of April 15, 2009, the District was awaiting a response from Education on the District’s proposed plan for using the funds to increase funding for education on a per student basis. Once this response is received, the District will submit an application to the federal government and expects to receive about $89.4 million in fiscal stabilization funds. The District is home to about 220 schools in 60 local education agencies (LEAs). The District’s 60 LEAs include one large public school system (District of Columbia Public Schools, or DCPS) and 59 smaller LEAs that are mostly single public charter schools. For the 2008-2009 school year, about 64 percent of District students were enrolled in DCPS, while about 36 percent were in public charter schools. District officials stated that they intend to distribute stabilization funds across all 60 LEAs. Other Education Funds: The District expects to receive about $37 million in Recovery Act funds for its ESEA Title I program. Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA), as amended by the No Child Left Behind Act, provides funds to LEAs for schools that have high concentrations of students from families living in poverty in order to help improve teaching and learning. District officials told us that it may be a challenge to disburse funds rapidly while also meeting programmatic requirements. They also told us they did not yet know how the LEAs were planning on using these funds. The District also expects to receive about $18.8 million in stimulus funds for Individuals with Disabilities Education Act (IDEA) programs. About $16.4 million will be used for Part B grants to states, and about $260,000 for Part B grants for preschool children. The other $2.1 million will be used for Part C (state grants for infants and families). 
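The three IDEA components cited above sum, after rounding, to the $18.8 million total (an illustrative check using the dollar figures from the text):

```python
# District of Columbia IDEA Recovery Act funds, in millions (from the text).
part_b_states = 16.4     # Part B grants to states
part_b_preschool = 0.26  # Part B grants for preschool children ($260,000)
part_c = 2.1             # Part C (state grants for infants and families)

total = part_b_states + part_b_preschool + part_c
print(round(total, 1))  # → 18.8
```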
Officials told us that they were unsure of how IDEA funds would be used, but they anticipate being able to serve more children under each program, improve methods for assessing the performance of students with disabilities, and improve services to children and compliance with IDEA’s requirements.

Transit: The Washington Metropolitan Area Transit Authority (WMATA) expects to receive approximately $202 million in Recovery Act funding from FTA. WMATA plans to use these funds for 29 projects, including improving information technology and operating systems; maintaining, repairing, and replacing buses; and renovating passenger facilities in disrepair. According to its Web site, WMATA expects to make its first Recovery Act purchase of 45 hybrid-electric buses at the end of April 2009. Workforce Investment Act (WIA): As of April 3, 2009, the District’s Department of Employment Services had been allocated about $1.5 million for adult Workforce Investment Act (WIA) programs, about $3.8 million for dislocated workers programs, and almost $4 million for youth programs. The District plans to use these Recovery Act funds in accordance with the U.S. Department of Labor’s Guidance Letter Number 14-08. This guidance states that it is the intent of the Recovery Act that WIA adult funds be used to provide the necessary services to substantially increase the number of adults served and to support their entry or reentry into the job market, and that WIA dislocated worker funds be used to provide the necessary services to dislocated workers to support their reentry into the recovering job market. The guidance also emphasizes Congress’s interest in using WIA youth funds to create summer employment opportunities for youth. The District has also developed a plan that includes an increase in weekly benefits for the unemployed and an expansion of city services to help those filing unemployment claims and looking for work.
The new benefits for the unemployed include additional compensation in the form of a supplemental $25 weekly benefit outlined in the Recovery Act. In addition, the District announced an extension for those who have exhausted their unemployment benefits and are actively seeking work. According to District officials, the Mayor plans to forward legislation to the D.C. City Council that will enable those who will exhaust their unemployment benefits by late spring to extend them until December 2009. Both the new supplemental compensation and the extension of benefits are 100 percent federally funded as part of the Recovery Act.

DHCD officials said they have questions about how the program will be implemented and that the answers to their questions could require revisions to state qualified allocation plans and procedures. As a result, further guidance from IRS will be needed to understand whether DHCD would use the program and, if so, what management changes, if any, will be needed for its implementation. As required by the Recovery Act, HUD allocated about $27 million to the District of Columbia Housing Authority (DCHA) for capital and management activities, including modernization and rehabilitation of public housing projects. DCHA officials told us that they planned to use the allocation to fund improvements at ongoing projects included in their 5-year construction plan. Homeland Security and Justice Programs: District officials expect to receive an additional allocation of about $11.7 million through the Department of Justice’s Edward Byrne Memorial Justice Assistance Grant Formula Program, which nearly doubles the total amount of grant funding awarded by the District’s Justice Grants Administration in the last fiscal year. The District plans to use these funds in several areas, including prisoner reentry, detention and incarceration diversion initiatives, and court diversion services for at-risk youth.
The District plans to change its funding priority targets by phasing out small, discrete grants and instead focusing on grants that invest in long-term projects. According to District officials, they have collaborated with local criminal justice stakeholders and community groups to identify funding priorities. District officials plan to track Recovery Act funds using existing financial systems. According to District officials, the financial system already has the infrastructure to track, monitor, and report the source of funds distributed to recipients to ensure strict compliance with the requirements of the Recovery Act and to monitor the flow of Recovery Act funds from the federal government to District agencies. District officials plan to account for Recovery Act funds in a manner similar to the way they track and manage grant funds, using a unique four-digit code. Officials from the District’s Office of the Chief Financial Officer told us that they had notified District agency officials of the need to closely monitor Recovery Act funds. The District has not provided guidance to recipients regarding the tracking and use of Recovery Act funds. The District will determine what guidance needs to be provided to recipients once the District receives guidance from OMB.

The District has developed a Recovery Act Web site (www.recovery.dc.gov) that is intended to allow the public to track Recovery Act efforts. The Web site contains information on the management process the District plans to use to oversee Recovery Act spending, and provides the public a way to track Recovery Act spending and get information on grants and contracts that are available. The Web site also offers the public a means to submit ideas and to identify any waste or fraud.
Further, the Mayor’s certification of the use of the funds is also posted on the Web site, as is the testimony of the City Administrator and the Chief Procurement Officer on Recovery Act efforts before the D.C. Council—the District’s legislative body. The District will continue to use CapStat, a performance-based accountability program designed to make the District government run more efficiently and to ensure accountability, effectiveness of internal controls, compliance with reporting requirements, and reliable reporting about uses of Recovery Act funds. The CapStat process takes the form of weekly accountability sessions where the Mayor and City Administrator bring into one room all the executives responsible for improving performance on an issue to examine performance data and explore ways to improve government services, as well as to make commitments for follow-up actions. Each District agency participates in the program. Agency directors prepare for a session by examining their agency’s performance measures and analyzing how they can improve their results.

The District’s most recent single audit report identified weaknesses in compliance with requirements applicable to major federal programs, including Medicaid’s FMAP, ESEA Title I Education grants, and Workforce Investment Act programs, all of which will be receiving Recovery Act funds. The findings were significant enough to result in a qualified opinion for that section of the report. In addition, Education designated the District as a high-risk grantee in April 2006 because of its poor management of federal grants. If the District continues to be designated as a high-risk grantee, Education could respond by taking several actions, such as discontinuing one or more federal grants made to the District or having a third party take control over the administration of federal grants.
OCFO officials told us that they are in the process of working with the federal agencies to address these material weaknesses, but it is unlikely the corrective actions will be completed before the District programs with these weaknesses begin receiving Recovery Act funds. This could increase the risk that Recovery Act funds may not be used properly. No additional funds or resources were provided to carry out specific Recovery Act reviews. The Office of the District of Columbia Auditor is the legislative auditor for the District. The office exists to support the District City Council in meeting its legislative oversight responsibilities and to help improve the performance and accountability of the District government. The Auditor has the authority to conduct audits of District funds, including those used by the D.C. charter schools, but is not set up to provide comprehensive services regarding federal funds except in instances of D.C. Council requests and pre-existing mandates. The D.C. Auditor’s main body of work is developed on a rotating basis, in which the Auditor selects specific activities or accounts to review every 3 years, concentrating on financial accounting and reporting. According to the D.C. Auditor, due to limited resources, the office plans to conduct audits only on the basis of scheduled rotations and requests and has no plans to audit Recovery Act funds. If, however, a planned audit concerns a program receiving Recovery Act funds, then the Auditor may adjust audit plans accordingly. District officials suggested that OMB provide a template for the format and required information for Recovery Act Web sites as well. District officials also plan to use the CapStat performance-based accountability program to examine the impact of the use of Recovery Act funds on District agencies and programs. We provided the Office of the Mayor of the District of Columbia with a draft of this appendix on April 15, 2009.
On April 17, 2009, the City Administrator’s office provided technical suggestions on the appendix that were incorporated, as appropriate. In addition to the contacts named above, John Hansen, Assistant Director; Mark Tremba, analyst-in-charge; Maria Strudwick; Shawn Arbogast; Marisol Cruz; Nagla’a El-Hodiri; Sunny Chang; Nancy Glover; Justin Monroe; Ellen Phelps Ranen; and Melissa Schermerhorn made major contributions to this report. The names of GAO staff who served on the teams for the selected states and the District are listed at the end of each respective appendix. In addition, the following staff contributed to this report: Stanley J. Czerwinski, Denise Fantone, and Yvonne Jones (Directors); Thomas James, James McTigue, and Michelle Sager (Assistant Directors); and Allison Abrams, David Alexander, Peter Anderson, Thomas Beall, Joanna Berry, Sandra Beattie, Bonnie Beckett, Pedro Briones, Kimberly Brooks, Kay Brown, Marcia Buchanan, Ted Burik, Steven Cohen, Nancy Cosentino, Robert Cramer, Michael Derr, Kevin Dooley, Heather Dowey, Colin Fallon, Alice Feldesman, Andy Finkel, Shannon Finnegan, Jim Fuquay, Vicky Green, Brandon Haller, Anita Hamilton, Tracy Harris, Laura Heald, Michael Hrapsky, Mary Catherine Hult, Susan Irving, Shirley Jones, Stuart Kaufman, Karen Keegan, Martha Kelly, Ba Lin, Edward Leslie, Leslie Locke, Steve Martin, JoAnn Martinez, Kim McGatlin, John McGrail, Donna Miller, Sheila Miller, Clarita Mrena, Elizabeth Morrison, Andy O’Connell, Lisa Pearson, Janice Poling, Brenda Rabinowitz, Carl Ramirez, Mathew Scire, Thomas Short, Michael Springer, George Stalcup, Andrew Stephens, Hemi Tewarson, Patrick Tobo, Gabriele Tonsil, Cheri Truett, Susan Wallace, Lindsay Welter, Michelle Woods, and Carolyn Yocom.
The American Recovery and Reinvestment Act of 2009 (Recovery Act) is estimated to cost about $787 billion over the next several years, of which about $280 billion will be administered through states and localities. The Recovery Act requires GAO to conduct bimonthly reviews of the use of funds by selected states and localities. In this first report, GAO describes selected states' and localities' (1) uses of and planning of Recovery Act funds, (2) accountability approaches, and (3) plans to evaluate the impact of funds received. GAO's work is focused on 16 states and the District of Columbia--representing about 65 percent of the U.S. population and two-thirds of the intergovernmental federal assistance available through the Recovery Act. GAO collected documents from and interviewed state and local officials, including Governors, "Recovery Czars," State Auditors, Controllers, and Treasurers. GAO also reviewed guidance from the Office of Management and Budget (OMB) and other federal agencies. About 90 percent of the estimated $49 billion in Recovery Act funding to be provided to states and localities in fiscal year 2009 will be through health, transportation, and education programs. Within these categories, the three largest programs are increased Medicaid Federal Medical Assistance Percentage (FMAP) grant awards, funds for highway infrastructure investment, and the State Fiscal Stabilization Fund (SFSF). The funding notifications for Recovery Act funds for the 16 selected states and the District of Columbia (the District) have been approximately $24.2 billion for Medicaid FMAP on April 3, $26.7 billion for highways on March 2, and $32.6 billion for SFSF on April 2. Fifteen of the 16 states and the District have drawn down approximately $7.96 billion in increased FMAP grant awards for the period October 1, 2008 through April 1, 2009. The increased FMAP is for state expenditures for Medicaid services.
The receipt of this increased FMAP may reduce the share states must contribute to their Medicaid programs. States have reported using funds made available as a result of the increased FMAP for a variety of purposes. For example, states and the District reported using these funds to maintain their current level of Medicaid eligibility and benefits, to cover their increased Medicaid caseloads (primarily populations that are sensitive to economic downturns, including children and families), and to offset their state general fund deficits, thereby avoiding layoffs and other measures detrimental to economic recovery. States are undertaking planning activities to identify projects, obtain approval at the state and federal levels, and move projects to contracting and implementation. For the most part, states were focusing on construction and maintenance projects, such as road and bridge repairs. Before they can expend Recovery Act funds, states must reach agreement with the Department of Transportation on the specific projects; as of April 16, two of the 16 states had agreements covering more than 50 percent of their apportioned funds, and three states did not have agreement on any projects. While a few, including Mississippi and Iowa, had already executed contracts, most of the 16 states were planning to solicit bids in April or May. Thus, states generally had not yet expended significant amounts of Recovery Act funds. The states and the District must apply to the Department of Education for SFSF funds. Education will award funds once it determines that an application contains key assurances and information on how the state will use the funds. As of April 20, applications from three states had met that determination: South Dakota and two of GAO's sample states, California and Illinois. The applications from the other states are being developed and submitted and have not yet been awarded.
The states and the District report that SFSF funds will be used to hire and retain teachers, reduce the potential for layoffs, cover budget shortfalls, and restore funding cuts to programs. Planning continues for the use of Recovery Act funds. State activities include appointing Recovery Czars, establishing task forces and other entities, and developing public Web sites to solicit input and publicize selected projects. GAO found that the selected states and the District are taking various approaches to ensuring that internal controls manage risk up front; they are assessing known risks and developing plans to address those risks. State auditors are also planning their work, including conducting required single audits and testing compliance with federal requirements. Nearly half of the estimated spending programs in the Recovery Act will be administered by nonfederal entities. State officials suggested opportunities to improve communication in several areas. Officials in nine of the 16 states and the District expressed concern about determining the number of jobs created and retained under the Recovery Act, as well as the methodologies that can be used to estimate each.
States and localities play the principal role in educating all students, including those with limited English proficiency, with most states providing supplemental aid specifically to address the special needs of these students. According to a November 1997 report (the latest available) by the Institute for Research in English Acquisition and Development, 39 states have some form of regulations targeting these students, ranging from a mandate in Texas that school districts provide bilingual instruction in at least some grades to a mandate in California that school districts provide instruction only in English. For the past 30 years, the federal government has served students with limited English proficiency primarily through title I of the Elementary and Secondary Education Act. The Bilingual Education Act, enacted in 1968, also serves a small percentage of these students under a supplemental grant program that assists local school districts in teaching students who do not know English. Other programs that may address, at least in part, the educational needs of children with limited English proficiency include the Emergency Immigrant Education Program, the Migrant Education Program, the Carl D. Perkins Vocational and Applied Technology Education Act programs, and the Individuals With Disabilities Education Act programs (see table 1). The only programs that serve primarily children with limited English proficiency are those associated with the Bilingual Education Act. Federal policy for ensuring equal educational opportunity for children with limited English proficiency has been largely shaped by title VI of the Civil Rights Act of 1964, the Equal Educational Opportunities Act (EEOA), and related court decisions. Title VI bans discrimination on the basis of race, color, or national origin in any program or activity receiving federal financial assistance. In Lau v. 
Nichols, the Supreme Court held that a school district’s failure to provide English-language instruction to non-English-speakers violated title VI. Like title VI, the EEOA also protects the civil rights of students with limited English proficiency. Under the EEOA, it is unlawful for an educational agency to fail to take “appropriate action to overcome language barriers that impede equal participation by its students in instructional programs.” In 1981, a federal court of appeals decision, Castaneda v. Pickard, created a test for evaluating the adequacy of a school district’s approach to addressing the needs of its non-English-speaking and limited-English-speaking students. The Department of Education uses the test set forth in the Castaneda decision as the basis for determining whether a school district program for serving students with limited English proficiency is complying with title VI. Headquartered in Washington, D.C., Education’s OCR has 12 regional offices that enforce title VI and other civil rights statutes. In the five cases we reviewed, OCR initiated investigations independently or after deciding that a complaint brought by an individual or group met certain criteria. To determine which school districts had potential problems with their programs and therefore warranted a compliance review, OCR gathered and analyzed statistical data and other information from state education agencies, advocacy groups, parents, and OCR surveys. Once OCR selected a school district for review, it requested data from the school district and, if necessary, conducted on-site visits to schools in the district. If OCR found a school district was not in compliance with civil rights laws, it worked with the district to negotiate an agreement on the problems and the steps required to address those problems (the corrective action plan).
During the period in which OCR monitored the implementation of the corrective action plan, school districts periodically submitted information to OCR regarding their programs for children with limited English proficiency. Figure 1 shows the title VI investigative process used by OCR in the five cases we reviewed in depth. In November 1994, OCR changed the procedural guidance it followed from the Investigation Procedures Manual to the Case Resolution Manual. OCR officials told us that since about 1995 they have implemented a more cooperative approach to their reviews. Under this approach, OCR has focused on finding early resolutions to problems and working cooperatively throughout the process with school district and state officials. Also, under this approach, a letter of findings is issued only when problems remain unresolved. No clear consensus exists among researchers and educators on the length of time needed for children with limited English proficiency to become proficient in English. Four factors make generalizations difficult: (1) differences in instructional approaches used to teach children English and the quality of that instruction, (2) differences in the ways states measure proficiency, (3) differences in student characteristics, and (4) the lack of definitive research on this issue. Two basic approaches are used to instruct students with limited English skills. One uses English and makes little use of a student’s native language (English-based approach), while the other makes much more extensive use of a student’s native language, often for a number of years (bilingual approach). Proponents of an English-based approach expect children to learn English fairly quickly, in 2 to 3 years. For example, in Monroe County, Florida, one of the districts we visited, elementary school children with limited English proficiency receive all formal content area instruction in English, alongside their English-fluent peers. 
District officials told us they chose this English-based approach in part because they believe children learn English more quickly when they are immersed in it. On average, elementary school students enrolled in the district’s English-language acquisition programs receive services for 3 years. The bilingual approach is designed to take much longer—often 5 years or more. While bilingual programs vary in both their goals and length, those programs that promote native-language literacy as well as English-language literacy may take 5 to 7 years to complete. For example, the San Antonio School District develops early literacy in Spanish, beginning with prekindergarten instruction. The program is designed to simultaneously develop English literacy, with a full transition to English-only instruction by the sixth grade. District officials said they believe it is important to develop bilingual citizens in a city that has a long bilingual tradition. Most of the city’s population is Hispanic, and a large proportion of the city’s residents speak both Spanish and English. The National Research Council has determined that there is “little value in conducting evaluations to conclude which type of program is best. The key issue is not finding a program that works for all children and all localities, but rather finding a set of program components that works for the children in the community of interest, given that community’s goals, demographics, and resources.” Whether a school district chooses an English-based or bilingual approach to teaching students with limited English proficiency, instructional quality will ultimately affect children’s academic achievement. Characteristics that contribute to high-quality programs, according to some educators, include adequately trained teachers, clearly articulated goals, systematic assessments, and opportunities for children to practice their English. 
In our site visits, for example, we visited one classroom in Cicero, Illinois, in which a bilingual education teacher who had been recruited from a Spanish-speaking country was using audiotapes to teach students English during the daily period dedicated to learning English. The students listened and followed along in their workbooks as a speaker on the tape read them a children’s story in English. There was no interaction between the teacher and the students. In contrast, in a Key West, Florida, classroom we visited, the bilingual education classroom teacher did not use audiotapes but instead read aloud a children’s story to his students. This teacher paused frequently to quiz the students on what they had heard. This activity not only gave the teacher an opportunity to see what his students understood of the story but also gave the students an opportunity to speak and practice English. No clear consensus exists about how proficiency should be defined or measured. Educators and researchers have observed that children who speak little or no English may develop “verbal proficiency”—that is, conversational skills on a par with those of their English-speaking peers—in 2 years or less. Broader “academic proficiency,” such as the reading and communicating of abstract ideas required for grade-level academic performance, can take several more years to acquire. Little agreement exists on an appropriate standard against which English proficiency should be measured. Some educators and language experts believe that a child should perform at age- or grade-appropriate levels in reading and other core academic subjects on standardized tests performed in English before the child can be considered English-proficient. This means that the child should score at or above the 50th percentile on a standardized achievement test. In contrast, some states consider students English-proficient when they score at the 40th percentile or even at the 32nd.
Some critics question the validity of using these types of standardized achievement tests to measure whether a student’s achievement in English is better than, the same as, or worse than that of other children in his or her age group. These critics argue that a student’s performance on these tests does not necessarily reflect mastery or lack of mastery of certain English skills because the tests are designed to assess a student’s mastery of other subjects. Performance on standardized achievement tests is just one of several criteria states and districts may use to determine if a child is proficient in English. We found that in Rockford, Illinois, officials combined the results of an academic achievement test, English proficiency tests, and an academic review conducted by school and district officials to determine a child’s English proficiency level. In contrast, we found that in Texas students could be considered proficient by scoring at or above the 40th percentile on both the English reading and language arts sections of a state-approved norm-referenced academic assessment. Research indicates that the length of time needed to become proficient in English can vary from child to child. It can be affected by such factors as the child’s age, socioeconomic background, and amount of formal schooling already received in another language. For example, a 1997 study concluded that the most striking feature about learning a second language is the variability in outcomes. A frequently cited factor is a child’s age. Older children generally make faster initial progress than very young children do. For example, a study of students with limited English proficiency attending school in Fairfax County, Virginia, found that students who arrived in this country between ages 8 and 11 needed 5 to 7 years to compete with native speakers in all subject areas, while children who arrived when they were aged 4 to 7 needed 7 to 10 years.
Researchers have proposed that this difference perhaps reflects the fact that older learners have developed more sophisticated language and thinking skills before beginning to learn English. Educators have also observed that students with prior formal schooling and higher socioeconomic backgrounds tend to learn a second language more easily. Other characteristics tied to differences in success rates include the amount of exposure students have already had to English; the level of parental support they have at home; and their classroom, school, and community environments. Any of these factors could affect how long students need to catch up with native speakers. While many evaluations of programs serving children with limited English proficiency have been conducted, we identified very few that focused specifically on the length of time students need to become proficient in English. Our review of existing research yielded three studies that met the following criteria: (1) they addressed the acquisition of English rather than other languages, (2) they focused specifically on the length of time required to become proficient, (3) they reached a specific conclusion about the length of time needed to become proficient in English (as described in app. I), and (4) they had been published. Two of these studies were carried out in Canada and one in the United States (see table 2). The students in each of these studies were schooled primarily in English. In general, the studies concluded that children with limited English proficiency need 4 years or more to develop the language skills needed to perform in academic subject areas on a par with native English-speakers. However, with so few studies available, the results should not be viewed as definitive, and other researchers in the field have challenged some of the results. The three studies we identified examined students’ progress in English with respect to two different sets of skills. 
The two Canadian studies focused on language skills alone, examining the point at which students’ scores on tests of vocabulary, auditory perception, and other language skills approached those of native English-speakers. The Fairfax County study focused on students’ academic achievement in English, measuring the point at which students’ performance on tests in reading, mathematics, and other subjects, given in English, began to approach that of native-English-speaking students. The Fairfax study showed that children took longer to reach grade norms in reading than in other subjects. For example, even among the highest performing subgroup of children (those who arrived in this country between ages 8 and 11), the performance in different subject areas varied widely, averaging 2 years to reach national norms in mathematics, 3 years in language arts, and 5 years or more in reading. English-based instruction is more commonly found in the nation’s public schools than bilingual instruction is. However, most students with limited English proficiency attend schools in which both approaches are used. In the six states we reviewed, most children received services for 4 years or less. More children with limited English proficiency receive instruction through an English-based approach than through an approach that makes use of their native language, according to data from the Department of Education’s most recent survey on the subject. About 76 percent of students with limited proficiency in English receive English-based instruction (such as English as a second language, or ESL); 40 percent receive bilingual instruction aimed at teaching subject matter in the student’s home language (such as teaching math in Spanish); and slightly fewer, 37 percent, receive instruction aimed at maintaining or improving fluency in their home language (such as Spanish language lessons for Spanish speakers).
The Education survey, which covered the 1993-94 school year, also asked schools about the types of instructional programs they offer and found that more schools offer English-based programs than bilingual programs. For example, about 85 percent of schools enrolling students with limited English proficiency offer ESL programs, and about 36 percent offer bilingual programs in which the student’s native language is used to varying degrees. Nearly three-fourths of all children with limited English proficiency attend schools with both types of programs. We visited 10 school districts in Arizona, Florida, Illinois, North Carolina, and Texas and found that 6 of the 10 used both English-based and bilingual instruction. The survey also found that students often receive more than one type of instruction during a school day. For example, ESL is often a component of programs classified as bilingual education programs—that is, although explanations and some content areas may be taught in the student’s native language, ESL techniques may be used to teach English. However, the study’s data were not collected in a way that would allow accurate estimates of the proportion of students who received a combination of services. Determining the type of instruction students actually receive is more complicated than these results would indicate for two reasons. First, the instructional approaches used to teach children with limited English proficiency are far more varied than the categories typically used to capture this information. For example, a program model called “structured immersion” uses simplified English to teach subject matter and sometimes allows for the teacher’s use of students’ native language for clarification. While clearly not a bilingual approach, some might classify this approach with English-based approaches, such as ESL; others might classify it as a distinct third approach that makes limited use of students’ native language. 
Second, the broad program labels used by educators may not reflect actual classroom practices. For example, in the Monroe School District, Florida, we observed a language arts class designed to teach ESL to Spanish-speaking students. Normally, such an approach would involve little or no use of Spanish. In this case, however, the teacher was not only specially trained to teach English language arts to speakers of other languages, but also fluent in Spanish. She provided instruction first in English and then translated much of that instruction into Spanish. We found no national data on the length of time children with limited English proficiency actually spend in programs aimed at helping them become proficient in English. Thus, we contacted education agencies in 12 states with substantial concentrations of students with limited English proficiency to collect any available state-level data on this issue. Of the 12 states contacted, 6 had information on the length of time children with limited English proficiency spent in language assistance programs. Data from these six states—Arizona, Florida, Illinois, New Jersey, Texas, and Washington—indicate that in 1998-99 (the latest year for which data are available), the majority of children with limited English proficiency who made the transition out of language assistance programs did so within 4 years. As table 3 shows, at least two-thirds of the children in Florida, Illinois, New Jersey, and Washington made the transition out of programs within 4 years. In Arizona and Texas, the portion that made the transition within 4 years was lower: closer to one-half. In five states, 12 percent or fewer of the children were out within 1 year. In the sixth state—New Jersey—about one-third exited within 1 year. 
At the other end of the scale, 10 percent of the students with limited English proficiency in New Jersey spent 5 years or more in programs, while 41 percent of such students in Arizona spent more than 5 years. California, with about 40 percent of the nation’s students with limited English proficiency in 1996-97, did not have statewide data that could be used to determine how long children were spending in its programs. To provide an indication of what was happening there, we obtained data from four large school districts with large numbers of students with limited English proficiency: Los Angeles, San Francisco, Santa Ana, and San Diego (see table 4). Because of the limited number of states and school districts from which the data were drawn, these results should be interpreted cautiously. Differences in the way these states and school districts define proficiency for exiting such programs, as well as the types of tests used to measure proficiency, make direct comparisons across states and districts nearly impossible. In addition, districts may also decide on their own whether to apply additional criteria beyond the requirements set by their states. Moreover, in June 1998, California passed Proposition 227, mandating English-based instruction in California public schools (although waivers have been granted under this system, and bilingual programs still operate in some California public schools). This new requirement may have an impact on future data coming from these districts. As school districts address the various challenges associated with meeting the educational needs of children with limited English proficiency, districts are also required to provide these children equal educational opportunities under title VI of the Civil Rights Act. We now focus on the requirements that Education’s OCR expects school districts to meet and how OCR interacted with school districts whose language assistance programs it investigated from 1992 to 1998. 
During the 6 years covered by our review, OCR relied on the three policy documents regarding children with limited English proficiency discussed below. These documents incorporate the Castaneda decision’s three-pronged test for assessing the adequacy of programs for students with limited English proficiency to determine whether school districts are in compliance with title VI. OCR did not promulgate Castaneda’s requirements as regulations, instead setting them forth in policy documents. OCR used compliance reviews to monitor school districts’ compliance with these requirements. School districts that were found out of compliance with the title VI requirements were required to enter into negotiated agreements with OCR to correct their programs for students with limited English proficiency. Our survey and case reviews of school districts involved in negotiated agreements resulting from OCR’s compliance reviews between 1992 and 1998 revealed that the interaction between OCR and school districts has been generally positive. A majority of districts indicated that OCR regional staff did not favor, or pressure them to adopt, a particular language approach, and almost all of the 245 respondents indicated that OCR was courteous and minimized disruption of daily activities when visiting school districts. However, some school officials reported problems in their interactions with OCR, most frequently related to feeling pressured to change aspects of their programs not related to the language approach used and to OCR’s untimely or inadequate communication with school districts. Castaneda set forth a three-part test for determining whether a school district has adopted a satisfactory method for teaching children with limited English proficiency. The federal courts and OCR now generally accept this test as a threshold for determining compliance with title VI. 
The test is based on a combination of education theory, practice, and results and requires that school district programs (1) be based on sound educational principles, (2) effectively implement the educational principles, and (3) have succeeded in alleviating language barriers. OCR requirements for title VI compliance are articulated through three policy documents known as the May 1970 memorandum, the December 1985 memorandum, and the September 1991 policy update. The May 1970 memorandum required school districts to meet four basic criteria for title VI compliance: districts must take “affirmative steps” to rectify the language deficiency of students with limited English proficiency; students may not be designated as academically deficient on the basis of criteria that essentially measure English-language skills; the school system’s tracking system for students with limited English proficiency must be designed to meet their needs as soon as possible, and it must not work to lock students into a particular curriculum; and schools must notify parents of school activities in a language they can understand. The second document, the December 1985 memorandum, stipulates that OCR does not require schools to adopt any particular educational or language-teaching approach and that OCR will determine title VI compliance on a case-by-case basis. Any sound educational approach that ensures the effective participation of students with limited English proficiency is acceptable. The December memorandum also outlines steps OCR staff should take to determine whether there is a need for an alternative language program for students with limited English proficiency and whether the district’s program is adequate for meeting the needs of these students. The September 1991 policy update provides additional guidance for applying the May 1970 and December 1985 memorandums. 
The 1991 document describes the legal standard set forth by the court in Castaneda and accordingly contains more specific standards for staffing requirements, criteria for student completion of language assistance programs, and program evaluation. Policy issues related to access to special education programs and gifted/talented programs, as well as OCR’s policy with regard to segregation of students with limited English proficiency, are also highlighted in this update. Over three-fourths of the school districts responding to our survey (77 percent) reported that when investigating cases OCR staff did not appear to favor bilingual instruction over English-based instruction. For example, one school district noted that OCR staff made no mention of bilingual instruction as a recommendation but rather emphasized meeting the needs of students with limited English proficiency. While most school districts indicated that OCR appeared to be neutral regarding instructional approach, about 18 percent reported that OCR favored the bilingual approach and about 4 percent reported that OCR favored English-based instruction (see fig. 2); three districts specifically commented that they felt pressure to increase emphasis on bilingual instruction. The 38 districts that reported that OCR favored bilingual education were located in every OCR region except Region 6 (the District of Columbia regional office). More than half of these districts had cases that were handled by either the San Francisco or Denver regional office, two regions that together serve almost half the students with limited English proficiency. (See app. III for more detailed information on the cases related to students with limited English proficiency by district, the percentage of students in each of the regions, and the districts’ views about whether OCR favored a particular approach.) In addition, in the school districts investigated by OCR, the kind of program offered after the corrective action plan had been implemented changed little. 
Further, some school district officials indicated that OCR did not influence the type of language assistance program implemented. Figure 3 shows the distribution of the instructional approaches school districts offered before and after OCR investigation. (See app. IV for further details.) Overall, school districts reported that their interactions with OCR staff during investigations were positive in three areas: courtesy, minimization of disruption of daily activities, and consideration of the rationale for the school district’s existing program (see fig. 4). In comments written on their questionnaires, 13 school districts reported that services to students with limited English proficiency had improved as a result of OCR’s investigation. For example, one respondent indicated that OCR had pointed out identification and assessment procedures that the school district had not previously implemented, and that, as a result of the OCR investigation, improved procedures were adopted. In addition, some respondents called OCR’s approach “collaborative” or “professional.” Similarly, during our site visits, officials in two school districts noted that their interactions with OCR staff were positive. For example, one superintendent said that OCR staff were very professional, the goal of both OCR staff and school officials during the investigation was to meet the needs of students with limited English proficiency, and the students had benefited from OCR’s assistance. In another school district, officials told us that OCR staff were pleasant and cordial and that they showed an interest in how the district was delivering alternative language services to children with limited English proficiency. As part of our survey, we gave school district officials the opportunity to make suggestions on how OCR could improve its investigation procedures and to offer any additional comments about OCR’s investigation of their school district. 
Of the 245 questionnaires returned by school districts, almost half (47 percent) contained comments on what OCR could do to be more effective or improve its investigative process, and over half (53 percent) made additional comments about OCR’s investigation of their school district. Although district officials generally reported positive interactions between their school district and OCR, some respondents commented on the types of problems they encountered during OCR’s investigation process. We sorted these problems into seven categories and have listed them in table 5 in descending order of the frequency of the comments. Several of the problems reported in the survey comments also surfaced in our case investigations. Some districts suggested that OCR could address some of these issues by ensuring that communications were timely, providing more feedback in response to submitted reports, understanding the constraints within which districts have to operate, attempting to minimize paperwork requirements, including educators on OCR’s investigative teams, and being clear about when the monitoring period would end and the case would be closed. In addition, some districts suggested that OCR should work more closely with state education agencies and involve the state in the early stages of the investigations to deal with situations in which state guidance differs from federal guidance on meeting the needs of students with limited English proficiency. We asked OCR headquarters officials to respond to the problems school districts identified. In doing so, OCR headquarters officials indicated that OCR had also identified some of the issues and that it, in conjunction with regional office staff, was already taking the following steps to address them (see table 6). Policymakers are faced with particularly difficult decisions with regard to students with limited English proficiency because their needs are varied and experts disagree about the best methods to teach them. 
Moreover, there is no clear time line for acquiring English proficiency. Even though different approaches to English language instruction may be effective, many variables may influence the choice of program used by a school, such as the percentage of students with limited English proficiency, the number of languages spoken by students, and students’ family backgrounds. As a result, local decisions about the amount of time needed to attain proficiency and the amount of language support that should be provided may differ. Available research does not definitively indicate the best teaching methods to use or the amount of time support should be provided. However, guidance from OCR provides the framework and standards that school districts must meet to ensure that students with limited English proficiency have a meaningful opportunity to participate in public education. School districts have the flexibility to select the methods of instruction that they believe will produce the best results for their students, so long as they meet OCR requirements. We found that when OCR followed up on complaints or engaged in compliance reviews, for the most part, it worked effectively with districts. Moreover, few districts changed their approach to teaching students with limited English proficiency after OCR investigations. There have been some problems, however, with OCR’s working relationships with districts, which OCR acknowledges and is taking steps to improve. In commenting on a draft of this report, the Department of Education generally agreed with its findings and said it was particularly gratified by the survey results (see app. V). Education also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Honorable Roderick R. Paige, Secretary of Education; appropriate congressional committees; and other interested parties. We will also make copies available to others upon request. 
If you or your staff have any questions about this report, please call me at (202) 512-7215. Other GAO contacts and staff acknowledgments are listed in appendix VI. To determine how long students with limited English proficiency need to become proficient in English, we identified potential studies for review and selected studies that met four criteria. To ensure quality and relevance, the study had to (1) focus on the length of time children need to become proficient in English, (2) reach a specific conclusion about the length of time, (3) have English as the second language learned by the students, and (4) involve original research supported by published data. We identified potential studies for review by searching two national databases for information on second-language learning—the National Clearinghouse for Bilingual Education (Department of Education) and the National Educational Resources Information Center (ERIC)—and by contacting experts to obtain both their recommendations on research regarding second-language learning and information on any research they might have conducted on second-language learning. We contacted the following:

Mr. Jorge Amselle, Executive Director, Center for Equal Opportunity, Washington, D.C.
Dr. Keith Baker, Education Consultant
Dr. James Cummins, Ontario Institute for Studies in Education
Dr. Russell Gersten, University of Oregon
Dr. Kenji Hakuta, Stanford University
Dr. Stephen Krashen, University of Southern California
Dr. Rosalie Porter, Editor, READ Perspectives
Dr. Christine Rossell, Boston University
Dr. J. David Ramirez, California State University Long Beach

We also reviewed research summaries, including Improving Schooling for Language-Minority Children: A Research Agenda, by the National Research Council, National Academy of Sciences (1997). We also used the bibliographies of all the studies we identified and reviewed to obtain additional relevant research. 
From these efforts, we obtained over 70 published articles and other reports that appeared relevant and reviewed each of them. Only three met all four of our selection criteria. To determine what approaches are used to teach children with limited English proficiency, we reviewed the literature, spoke with experts, and reviewed the results of survey data collected by the Department of Education. We also obtained information on the approaches used in 10 school districts we visited in Arizona, Florida, Illinois, North Carolina, and Texas—states with large or growing populations of students with limited English proficiency. To determine how long students remained in language assistance programs, we contacted 12 states in spring 2000 because national data are not available. Each of these states had over 40,000 students with limited English proficiency or a population of such students constituting over 9 percent of the student population (that is, states with substantial concentrations of students with limited English proficiency): Alaska, Arizona, California, Florida, Illinois, Massachusetts, Nevada, New Jersey, New Mexico, New York, Texas, and Washington. We obtained state-level data from the six states that had such data: Arizona, Florida, Illinois, New Jersey, Texas, and Washington. Although no state data were available for California, we did obtain data from four districts in that state: Los Angeles, San Francisco, and San Diego for school year 1998-99 and Santa Ana for school year 1999-2000 (the only data available). To determine the requirements for children with limited English proficiency that the Department of Education's Office for Civil Rights (OCR) expects school districts to meet and how they are set forth, we interviewed OCR officials, searched the Education Web site, and reviewed OCR policy documents and case law regarding students with limited English proficiency. 
To determine the nature of the interactions between OCR and school districts in those instances in which OCR has entered into an agreement with the school district concerning language assistance programs, we investigated 5 of the 15 cases suggested by your staff in California, Colorado, Massachusetts, Michigan, and Texas. We also surveyed 293 school districts listed by OCR as having entered into corrective action agreements with OCR for providing services to students with limited English proficiency from 1992 through 1998. Of the 293, 245 responded (84 percent). We also reviewed the transcripts of three congressional hearings before the Subcommittee on Early Childhood, Youth, and Families of the Committee on Education and the Workforce:

Bilingual Education Reform, San Diego, Calif., February 18, 1998;
Reforming Bilingual Education, Washington, D.C., April 30, 1998; and
The Review and Oversight of the Department of Education's Office for Civil Rights, Washington, D.C., June 22, 1999 (Serial No. 106-49).

We also contacted Mr. James M. Littlejohn of Jim Littlejohn Consulting, The Sea Ranch, California. Mr. Littlejohn worked for OCR for 27 years. From 1981 to 1993, he was policy director of OCR in Washington and, according to the director of the Denver Regional Office, during the years covered by our study, Mr. Littlejohn trained most of the OCR investigators in how to properly conduct a Lau investigation (those title VI investigations related to children with limited English proficiency). He retired from OCR in 1996 and now works as a consultant to school systems around the country and on several federal court cases involving bilingual education. Mr. Littlejohn was an important information source for the Committee, testifying and providing analyses. 
Arizona was the only state we reviewed that had detailed year-by-year breakdowns of how long students received bilingual or English-as-a-second-language (ESL) services before making the transition out of these services (see table 7). Illinois was the only state that had data broken down by type of program (ESL or bilingual) (see table 8). We asked school district officials to answer the following question: “Did OCR staff, as a whole, convey the impression that they favored English-only instruction, they favored bilingual education, they favored another language program, or they were neutral on the question?” Of the 225 districts responding, 77 percent replied that OCR did not convey an impression that it favored any particular type of instruction. However, 23 percent indicated that OCR did convey a preference: 18 percent indicated that, in their opinion, OCR favored bilingual education; 4 percent indicated that, in their opinion, OCR favored English-only instruction; and 1 percent indicated that, in their opinion, OCR favored another type of language program. (See table 9.) We asked school districts a number of questions about the type of program they had that was specifically designed to meet the English-language needs of students with limited English proficiency (solely bilingual education, English-only instruction, both bilingual and English-only instruction, or another type of language program) before and after the OCR investigation. We also asked about any changes in the type of program used by the district as a result of OCR actions. Ten school districts added bilingual instruction to their English-language learning program after OCR intervention. 
Of these, six indicated that before OCR's investigation they had not planned to change the type of language program they used; three indicated that before the OCR investigation they had planned to change the type of program they used and that the changes that resulted from OCR's investigation were consistent with the changes they had planned to make; and one district did not indicate whether or not it had planned to change the type of language program used before the OCR investigation. One of the 10 school districts indicated that it felt pressured by OCR to change the type of language program it was using. Our analysis indicated that of the 89 school districts that indicated they had English-only programs before OCR's investigation, 10 added bilingual education to their English-only programs and no school district changed from English-only to solely bilingual. Table 10 lists these 10 school districts and their corresponding OCR regional offices and provides details about the changes made in the districts' English-language acquisition programs. In addition to those named above, Malcolm Drewery, Behn Miller, Ellen Soltow, and Virginia Vanderlinde made key contributions to this report.

The first copy of each GAO report is free. Additional copies of reports are $2 each. A check or money order should be made out to the Superintendent of Documents. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Orders by visiting:
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders by phone: (202) 512-6000
fax: (202) 512-6061
TDD: (202) 512-2537

Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. 
A recorded menu will provide information on how to obtain these lists.

Web site: http://www.gao.gov/fraudnet/fraudnet.htm
e-mail: [email protected]
1-800-424-5454 (automated answering system)
Experts disagree about the best methods to teach students who speak little English. Even though different approaches to English language instruction may be effective, many variables may influence a given school's program choices. Moreover, there is no clear time line for acquiring English proficiency. Local decisions about the amount of time needed to attain proficiency and the amount of language support that should be provided may differ. Of the two main instructional approaches, English-based instruction is more common than instruction in a student's native language. Most students spent 4 years or less in these programs. School districts are required to ensure that English-language instruction is adequate and to provide these children with equal educational opportunities. The Office for Civil Rights (OCR) has adopted procedural requirements and criteria for judging the adequacy of local English-language instruction programs in meeting those needs. In three policy documents, OCR set forth requirements that school districts must meet to pass a three-pronged test established by the courts. When the adequacy of local English-language instruction programs is questioned, OCR investigates and, if problems are found, enters into an agreement with the district specifying how the district will address the issues.
The sale or transfer of U.S. defense items to friendly nations and allies is an integral component of both U.S. national security and foreign policy. The U.S. government authorizes the sale or transfer of military equipment, including spare parts, to foreign countries either through government-to-government agreements or through direct sales from U.S. manufacturers. The Arms Export Control Act and the Foreign Assistance Act of 1961, as amended, authorize the DOD foreign military sales program. The Department of State sets overall policy concerning which countries are eligible to participate in the DOD foreign military sales program. DOD identifies military technology that requires control when its transfer to potential adversaries could significantly enhance a foreign country’s military or war-making capability. The transfer or release of military technology to foreign countries involves various agencies, such as the Department of State and DOD, which are responsible in part for controlling the transfer of such technology. The Defense Security Cooperation Agency, under the direction of the Under Secretary of Defense for Policy, has overall responsibility for administering the foreign military sales program, and the military services generally execute the sales agreements with the individual countries. A foreign country representative initiates a request by sending a letter to DOD asking for such information as the price and availability of goods and services, training, technical assistance, and follow-on support. Once the foreign customer decides to proceed with the purchase, DOD prepares a Letter of Offer and Acceptance stating the terms of the sale for the items and services to be provided. After this letter has been accepted, the foreign customer is generally required to pay, in advance, the amounts necessary to cover costs associated with the services or items to be purchased from DOD and is then allowed to request spare parts through DOD’s supply system. 
The foreign military sales policy and oversight for the Department of the Army are the responsibility of the Deputy Assistant Secretary of the Army for Defense Exports and Cooperation. The Commander, U.S. Army Materiel Command, is the Army’s executive agent for implementing, administering, and managing the foreign military sales program. The U.S. Army Security Assistance Command performs the executive agent’s functions for the U.S. Army Materiel Command. The U.S. Army Security Assistance Command’s responsibilities start with the initial negotiation of a foreign military sale and end with the transfer of items and completion of all financial aspects of the sales agreement. The command uses an automated system called the Centralized Integrated System for International Logistics to support the U.S. Army’s management of the foreign military sales program. The command originally developed the system in 1976, and in October 1997, the Defense Security Cooperation Agency transferred the Army’s system to the Defense Security Assistance Development Center. The command retained responsibility for defining system-user requirements, designing new processes, and directing programming modifications to the system’s applications. However, the overall responsibility for providing system information technology maintenance support, such as writing and testing the programs and coordinating infrastructure support, was transferred to the Defense Security Assistance Development Center. Foreign military sales requisitions for Army spare parts and other items are initially processed through the system. For blanket orders, the system uses the security classification code to restrict the spare parts available to foreign military sales customers. Once the system validates a requisition, the requisition is sent to a supply center to be filled and shipped. The Army’s requisition process for foreign military sales of parts and other items is shown in figure 1. 
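The control edits discussed in the findings that follow are, in essence, automated validation rules the system applies before releasing a requisition. As a purely hypothetical sketch (the classification codes, function names, and logic below are invented for illustration and do not describe the actual Centralized Integrated System for International Logistics), an edit that also checks substituted parts might look like:

```python
from typing import Optional

# Hypothetical one-letter classification codes for illustration only.
UNCLASSIFIED = "U"

def validate_blanket_order(item_classification: str,
                           substituted_classification: Optional[str] = None) -> bool:
    """Approve a blanket-order requisition only if the requested part,
    and any part substituted for it, is unclassified."""
    if item_classification != UNCLASSIFIED:
        return False
    # The gap described in this report: a part substituted by an item
    # manager must be re-checked rather than released under the original
    # (unclassified) requisition.
    if substituted_classification is not None and substituted_classification != UNCLASSIFIED:
        return False
    return True

print(validate_blanket_order("U"))       # unclassified part: released
print(validate_blanket_order("U", "C"))  # classified substitute: blocked
```

The point of the sketch is simply that the validation must run again at substitution time, which is the modification the Army made to the system after our review.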
The Army’s internal controls over foreign military sales using blanket orders are not adequate, placing classified spare parts, as well as unclassified items containing military technology, at risk of being shipped to foreign countries that are not eligible to receive them. We found that the Army (1) lacked control edits in its system and allowed the substitution and release of classified spare parts under blanket orders for shipment to foreign countries, and has no written policy on the actions needed to recover these items; (2) lacks adequate control edits in its system to prevent the release of some unclassified spare parts and other items containing military technology, and likewise has no written policy on the actions needed to recover these items; and (3) has not conducted periodic tests to validate that its system is accurately reviewing and approving blanket orders. As a result of these inadequate internal controls, classified spare parts, as well as unclassified items containing military technology, were shipped under blanket orders to foreign countries that may not be eligible to receive them. The Army lacked control edits in its system and allowed the substitution and release of classified spare parts under blanket orders for shipment to foreign countries. Army and DOD policies prohibit the release of classified spare parts to foreign countries under blanket orders. We identified 3 of the 40 requisitions in our review for the period between October 1, 1997, and April 30, 2003, in which the Army item manager had released classified parts under 3 separate blanket orders. For these 3 requisitions, the original parts requested were unclassified but not in stock. The item manager substituted 11 classified digital processors for the unavailable parts and then released these parts under blanket orders for shipment to a foreign country. 
According to Army officials, the foreign countries were not entitled to receive these items under blanket orders. However, the officials stated that the countries would be entitled to the items under a different process, such as a defined order, because they have the equipment that these classified spare parts support. Therefore, according to the officials, in this particular case there is no need to retrieve the items, and we agree with their decision. Until we identified the problem, Army officials at the United States Army Security Assistance Command, who are responsible for implementing, administering, and managing the Army’s foreign military sales program, were not aware that these classified parts had been substituted for the originally requisitioned unclassified parts. Based on our review, the Army has modified the system to validate substituted parts selected by item managers. According to United States Army Security Assistance Command officials, they have no written policy to determine the actions the Army needs to take to recover classified spare parts or unclassified items containing military technology that were shipped to foreign countries that are not eligible to receive them. Army officials indicated that they have procedures to recover items shipped in lieu of the items ordered; however, the procedures do not address the recovery of items shipped that the foreign country was not eligible to receive. During our review, the officials did not agree with us that they should have written procedures in place to recover these items, indicating that this responsibility belongs to the foreign military sales end-use monitoring program. They suggested we contact the Department of State and the Defense Security Cooperation Agency for additional information on recovering these items. 
While the Army may not be responsible for recovering these items, the Army would initially be aware that these items were shipped to foreign countries that may not be eligible to receive them and could initiate recovery of these items. However, in discussions on a draft of this report, officials indicated that their current policies and procedures to recover items shipped in lieu of items ordered need to be modified to include items shipped to foreign countries that may not be eligible to receive them. The Army lacks control edits in its system to prevent the release of some unclassified items containing military technology to foreign countries under blanket orders. As a result, the Army has shipped some unclassified items containing military technology to foreign countries that may not be eligible to receive them. Officials from DOD’s Office of the Deputy Under Secretary of Defense for Technology Security Policy and Counterproliferation indicated that the Army should have control over unclassified items containing military technology. In addition, the Defense Security Cooperation Agency indicated that criteria for releasing these items should be considered on a country-by-country basis prior to releasing any items to a foreign country. The agency also stated that the military departments should use the applicable codes available as a means to help identify spare parts that contain military technology to ensure that the appropriate means are taken and adequate controls are in place to prevent unauthorized releases. 
Within the 21,663 requisitions for unclassified items containing military technology that were shipped, we found that the following requisitions were not identified and reviewed before they were released: (1) 17,175 requisitions were for 381,245 items such as circuit card assemblies, fire control units, and electron tubes that require their inherent military capability to be destroyed or demilitarized prior to their release to the public; and (2) 387 requisitions were for 2,267 items that foreign countries are prohibited from requesting using blanket orders because the spare parts require release authority from inventory control points. Based on our review, the Army had initiated action to modify its system to cancel blanket orders for parts that require release authority from inventory control points. With such a modification, these 387 requests would be canceled. However, the action to modify the system is pending an official interpretation of the Army regulation on spare parts that require release authority from inventory control points. In addition, as previously mentioned, according to United States Army Security Assistance Command officials, the Army has no written policy for recovering classified spare parts and unclassified items containing military technology that were shipped to foreign countries not eligible to receive them. According to Army officials, the foreign countries were entitled to receive these items. Therefore, according to the officials, in these particular cases there is no need to retrieve the items. Based on the Army officials’ response, we agree with their decision. In 1991, the Army had a control edit installed in its system that identified requisitions for parts containing military technology for manual review. This control edit caused thousands of requisitions to be referred for manual review. Army documents indicate that the Army removed the control edit because, according to guidance from the U.S. 
Army Defense Systems Command and System Integration and Management Activity, the parts containing military technology do not require protected storage. Army documents also indicate that removing the control edit that identified requisitions for unclassified items containing military technology would eliminate an enormous number of labor hours required to research these parts. The system does not refer for review those requisitions for items containing military technology because Army officials stated that DOD has determined that these items are not classified, sensitive, or pilferable; consequently, the items should not be subjected to controlled physical inventory requirements. In 1992, DOD changed selected stock numbers from unclassified to a classification indicating unclassified stock containing military technology to ensure that parts requiring demilitarization could be researched if shortages were reported during depot inventory reviews; such parts do not require protected storage. In our earlier review of the Air Force, we reported that the Air Force did not use control edits to prevent spare parts containing sensitive military technology from being released to foreign countries. The Air Force plans to develop criteria for identifying spare parts containing sensitive military technology and establish appropriate control edits in its automated system so that requisitions for spare parts containing sensitive military technology are identified and referred for review. Also, the Air Force uses criteria, such as federal supply class, to restrict the parts available to foreign military sales customers. For example, we reported that the Air Force restricts countries from requisitioning parts belonging to the 1377 federal supply class (cartridge and propellant actuated devices and components) using blanket orders. There are three codes the Army could use to identify spare parts that contain military technology. 
These codes are (1) the controlled inventory item code, which indicates the security classification and security risk for storage and transportation of DOD assets; (2) the demilitarization codes assigned by the item manager, which indicate how items are to be disposed of; and (3) the federal supply class code. Demilitarization codes are assigned to spare parts for new aircraft, ships, weapons, supplies, and other equipment. The demilitarization codes also determine whether the items contain military technology and establish what must be done to the items before they are sold. The Army has not conducted periodic tests to validate that its system is accurately reviewing and approving blanket order requisitions and operating in accordance with the Army’s foreign military sales policies. GAO’s and the Office of Management and Budget’s internal control standards require that a system such as the Army’s be periodically validated and tested to ensure that it is working as intended and that the ability to accurately review and approve requisitions is not compromised. The Federal Information Systems Controls Audit Manual, which lists control activities for information systems, includes among those activities the testing of new and revised software to ensure that it is working correctly. Also, the Management of Federal Information Resources requires that each agency establish an information system management oversight mechanism that provides for periodic reviews to determine how mission requirements might have changed and whether the information system continues to fulfill ongoing and anticipated mission requirements. Furthermore, the Internal Control Management and Evaluation Tool — a tool that assists managers and evaluators in determining how well an agency’s internal control is designed and functioning — lists monitoring as one of five standards of internal controls. 
Internal control monitoring should assess the quality of performance over time and ensure findings from reviews are promptly resolved. Ongoing monitoring occurs during normal operations and includes regular management and supervisory activities, comparisons, reconciliations, and other actions people take in performing their duties. In our review, we found that a foreign country had requested unclassified parts using blanket orders for which the item manager substituted and shipped classified spare parts. According to DOD officials, had the system validated the substituted classified spare parts, the system would have canceled the orders. United States Army Security Assistance Command officials were unaware of this situation until we identified the problem. Also, we found spare parts where the security classification had been changed from unclassified to classified without Army officials being notified of the change. Based on our review, the Army initiated actions to add control edits to its system to (1) validate substituted spare parts before they are released to foreign countries and (2) review monthly supply catalog updates and cancel open blanket orders when spare parts’ security classification changes from unclassified to classified. Defense Security Assistance Development Center officials indicated that periodic tests of the Army’s system have not been conducted because, in October 1998, the Defense Security Cooperation Agency directed that no additional funds be used to expand the current system. However, Defense Security Cooperation Agency officials stated that this directive does not preclude the Army from periodically testing the system and its logic. According to DOD and Army officials, they have not tested the system’s logic for restricting requisitions since 1999 when they initially modified the system to cancel requisitions for classified spare parts under blanket orders. 
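The control edits discussed above are, at bottom, simple rule checks on requisition data. The schematic sketch below (in Python, with invented names and placeholder code values; the report does not describe the system's internals) illustrates the two control edits the Army added based on this review, plus one possible referral check built from the three codes identified earlier:

```python
# Schematic of blanket-order control edits; data structures, names, and
# code values are invented for illustration.

# Hypothetical demilitarization codes whose items require their military
# capability to be destroyed before public release.
DEMIL_REVIEW_CODES = {"C", "D", "F"}

# Federal supply classes restricted from blanket orders; the report cites
# the Air Force's restriction of class 1377 as an example.
RESTRICTED_SUPPLY_CLASSES = {"1377"}

def validate_substitution(substitute_security_code: str) -> bool:
    """Added control edit 1: a part substituted by an item manager is
    validated like the original, so an unclassified blanket order cannot
    be filled with a classified substitute."""
    return substitute_security_code == "U"

def recheck_open_orders(open_orders: dict, catalog_update: dict) -> list:
    """Added control edit 2: after a monthly supply catalog update, cancel
    any open blanket order whose part changed from unclassified to
    classified. `open_orders` maps order id -> stock number;
    `catalog_update` maps stock number -> new security code."""
    return [order_id for order_id, stock in open_orders.items()
            if catalog_update.get(stock, "U") != "U"]

def needs_manual_review(controlled_item_code: str, demil_code: str,
                        supply_class: str) -> bool:
    """Possible referral check using the three codes: flag a blanket-order
    requisition for review if any code signals classified material or
    military technology."""
    return (controlled_item_code != "U"
            or demil_code in DEMIL_REVIEW_CODES
            or supply_class in RESTRICTED_SUPPLY_CLASSES)
```

For example, `recheck_open_orders({"A1": "123", "A2": "456"}, {"456": "S"})` would cancel order "A2" after a catalog update reclassified stock number "456" as classified.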
As part of our review, we tested the system by reviewing Army restrictions applied to historical requisitions on classified spare parts and unclassified items containing military technology and found that the system did not always perform as intended. According to Army officials, there have not been any reviews to assess whether foreign military sales requisitions for items ordered are processed correctly. The Centralized Integrated System for International Logistics system creates daily reports that identify problems with requisitions, which are then reviewed by Army case managers before continuing through the system. While officials indicated that several external audits by GAO and the Army Audit Agency have recently been completed, these audits focused on the overall foreign military sales program and not the requisition process. Based on our observations, these audits do not replace a system test to determine whether the current system is in compliance with existing requisitioning policies and procedures. The Army has not maintained effective internal controls over foreign military sales made under blanket orders. Specifically, the Army lacked control edits in its system and allowed the substitution and release of classified spare parts under blanket orders for shipment to foreign countries that may not be eligible to receive them. Also, the Army lacks control edits in its system to prevent the release of some unclassified items containing military technology to foreign countries. Moreover, the Army has no written policies to determine the actions needed to recover classified spare parts and unclassified items containing military technology that have been shipped to foreign countries not eligible to receive them. Further, the Army failed to periodically test the Centralized Integrated System for International Logistics system. 
If the Army had conducted tests to determine whether its system was in compliance with requisitioning policies and procedures, some classified spare parts—as well as unclassified items containing military technology—might not have been released to foreign countries under blanket orders. Without adequate internal controls, classified spare parts and unclassified items containing military technology may be released to foreign countries under blanket orders, thereby providing military technology to countries that might use it against U.S. interests. Recommendations for Executive Action: To improve internal controls over the Army’s foreign military sales program and to prevent foreign countries from being able to obtain classified spare parts or unclassified items containing military technology that they are not eligible to receive under blanket orders, we are recommending that the Secretary of Defense instruct the Secretary of the Army to take the following two actions: Modify existing policies and procedures, after consultation with the appropriate government officials, to cover items shipped in lieu of items ordered and to ensure the recovery of classified spare parts that have been shipped to foreign countries that may not be eligible to receive them under blanket orders. Modify existing policies and procedures, after consultation with the appropriate government officials, to cover items shipped in lieu of items ordered and to ensure the recovery of unclassified items containing military technology that have been shipped to foreign countries that may not be eligible to receive them under blanket orders. 
To improve the Army system’s internal controls aimed at preventing foreign countries from obtaining classified spare parts or unclassified items containing military technology under blanket orders, we are recommending that the Secretary of Defense direct the Under Secretary of Defense for Policy to require the appropriate officials to take the following two actions: Modify the system so that it identifies blanket order requisitions for unclassified items containing military technology that should be reviewed before they are released. Periodically test the system and its logic for restricting requisitions to ensure that the system is accurately reviewing and approving blanket order requisitions. In commenting on a draft of this report, DOD concurred with two of our recommendations and did not concur with the two other recommendations. First, with regard to our recommendation to modify the system so that it identifies blanket order requisitions for unclassified items containing military technology that should be reviewed before they are released, the department concurred. DOD’s comments indicated that the Army will comply with making the specific changes to the system that the Defense Security Cooperation Agency identified as required or that the Army would conduct its own study, given the funding and guidance necessary, to identify items that should be reviewed before they are released. Second, with regard to our recommendation to periodically test the Centralized Integrated System for International Logistics, the department stated that the Army will conduct periodic testing of the system and its logic for restricting requisitions, given the funding and guidance necessary to do so. We also received technical comments and we incorporated them wherever appropriate. 
With regard to our two recommendations to consult with the appropriate agencies to determine what actions the Army needs to initiate in order to recover (1) classified spare parts and (2) unclassified items containing military technology that have been shipped in error, i.e., shipped in lieu of items ordered, under blanket orders, DOD did not concur. The department said that the Army already has procedures in place to recover classified spare parts and unclassified items containing military technology that have been shipped in error, i.e., shipped in lieu of items ordered, under blanket orders. The procedures include (1) systemic status codes that advise the case manager that an incorrect item is being shipped by the supply center, at which time the error can be corrected; (2) retrieval actions that the case manager can begin, if the item is still shipped, by contacting the Security Assistance Office in country; and (3) a Supply Discrepancy Report that the customer can initiate upon receipt of the incorrect item to return it. We acknowledge that these procedures might address wrong items shipped. However, they do not address the intent of our recommendations to recover classified spare parts and unclassified items containing military technology shipped to foreign countries that are not eligible to receive them. If a country requested classified spare parts or unclassified items containing military technology that it is not eligible to receive under blanket orders, it is unlikely to submit a Supply Discrepancy Report if it had intended to order the items. In addition, we interviewed Defense Security Cooperation Agency and Army officials to determine whether the procedures cited in the agency comments refer to items shipped in lieu of items ordered rather than shipment of items that foreign countries are not eligible to receive. 
According to the officials, the procedures are for items shipped in lieu of items ordered and not for the recovery of items that the foreign countries are not eligible to receive. As stated in our report, Army officials told us that they had no written procedures in place to recover classified spare parts or unclassified items containing military technology because it is not within their responsibility to recover these items. These officials stated that this responsibility belongs to the foreign military sales end-use monitoring program, which includes the Department of State and the Defense Security Cooperation Agency. In following up with officials on their written comments on the draft of this report, they agreed that they need to modify existing policies and procedures, after consultation with the appropriate government officials, to cover items shipped in lieu of items ordered and to ensure the recovery of classified spare parts and unclassified items containing military technology that have been shipped to foreign countries that may not be eligible to receive them. As a result, we have modified our two recommendations accordingly. To assess and test whether the Army’s internal controls adequately restricted blanket orders for classified spare parts sold to foreign countries, we obtained current DOD and Army guidance on the foreign military sales programs. We also held discussions with key officials from the United States Army Security Assistance Command, New Cumberland, Pennsylvania, to discuss the officials’ roles and responsibilities, as well as the criteria and guidance they used in performing their duties to restrict foreign countries from requisitioning classified spare parts and other items containing military technology under blanket orders. Also, we interviewed the officials on the requisitioning and approval processes applicable to classified spare parts. 
In addition, we obtained written responses from officials at the Defense Security Cooperation Agency, Washington, D.C., to identify the agency’s roles and responsibilities regarding the policies and procedures relevant to the foreign military sales programs. We also interviewed officials from the Defense Security Assistance Development Center, Mechanicsburg, Pennsylvania, to discuss their roles and responsibilities, as well as the criteria and the guidance they used to maintain and oversee the Army’s Centralized Integrated System for International Logistics system to restrict foreign countries from requisitioning classified spare parts and other items containing military technology under blanket orders. Furthermore, we interviewed officials to determine the functional and operational controls that are used to validate requisitions entered into the system. To test the adequacy of the Army’s internal controls to restrict access to certain unclassified items containing military technology, we obtained DOD and Army guidance on the foreign military sales program. We also reviewed requisitions for unclassified items containing military technology for which the system had approved the shipments under blanket orders. In addition, we interviewed Army officials to obtain their reasons for releasing these items. Also, we obtained records from the United States Army Security Assistance Command on all classified spare parts and unclassified items containing military technology that were purchased using blanket orders and approved for shipment to foreign countries from October 1, 1997, through April 30, 2003. We limited our review to blanket orders because defined orders and Cooperative Logistics Supply Support Agreements specified the parts that countries were entitled to requisition by the national stock number. The records covered 21,703 requisitions for classified spare parts and unclassified spare parts and other items that contain military technology. 
We tested the system by identifying the 40 requisitions for classified spare parts that were shipped under blanket orders and reviewed the restrictions applied to determine if the system was operating as intended. To assess the Army’s internal controls on the release of unclassified items containing military technology, we reviewed 21,663 requisitions for which the system had approved the shipments under blanket orders. Further, we obtained written responses from DOD officials concerning whether unclassified items containing military technology should be reviewed prior to being released to foreign countries. While we identified some issues concerning the appropriate procedures for such items, in all the cases we reviewed, we found that the items had been ordered and shipped from the Army’s system. To determine whether the Army periodically conducted tests to validate the system to ensure that it accurately identified for review and approval blanket order requisitions to support foreign military sales, we obtained and reviewed documentation identifying the system tests to determine how often they were conducted. Also, we interviewed Army and DOD officials to determine how periodic reviews and tests were performed on the system. We conducted our review from May 2003 through December 2003 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies of this report to the Secretary of Defense; the Secretary of the Army; the Director, Office of Management and Budget; and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me on (202) 512-8365 if you or your staff have any questions concerning this report. 
Key contributors to this report were Lawson (Rick) Gist, Jr.; Carleen Bennett; Latrealle Lee; Elisah Matvay; Arthur James, Jr.; and Ann DuBois.
From 1993 through 2002, the Department of Defense (DOD) delivered over $150 billion in services and defense articles--including classified spare parts and unclassified items containing military technology--to countries through foreign military sales programs. GAO was asked to review whether the Army's key internal controls adequately restricted blanket orders for (1) classified spare parts and (2) unclassified items containing military technology. GAO was also asked to determine if periodic tests were conducted to validate the Army's system and its logic. The Army's internal controls over foreign military sales are not adequate, placing classified spare parts and unclassified items containing military technology at risk of being shipped to foreign countries that may not be entitled to receive such items under blanket orders. Foreign countries may request items using blanket orders, which are for a specific dollar value and are used to simplify supply actions on certain categories of items. The Army lacked control edits in its system and allowed the substitution and release of classified spare parts under blanket orders for shipment to foreign countries. Army and DOD policies prohibit the release of classified items, under blanket orders, to foreign countries. GAO identified 3 requisitions in its review in which the item manager released 11 classified digital processors to foreign countries under blanket orders. Because the Army's system did not have control edits in place to validate the substituted parts, classified items were released to foreign countries. Also, the Army has no written policy to determine the actions needed to recover classified items that have been shipped to countries not eligible to receive them. Army officials indicated that the countries were not entitled to receive these items under blanket orders but could obtain them under a different process, so there is no need to retrieve them; GAO agreed with their decision. 
Also, the Army has modified the system to validate substituted parts selected by item managers. The Army lacks control edits in its system to prevent the release of some unclassified items containing military technology requisitioned under blanket orders. Within the 21,663 requisitions that were shipped without a review, GAO found that 387 requisitions were for 2,267 restricted items that foreign countries are prohibited from requesting using blanket orders because the parts require release authority from inventory control points. Also, the Army has no written policies to recover items that have been shipped to countries not eligible to receive them. Army officials said the countries were entitled to request these items, so there is no need to recover the items. The Army has not conducted periodic tests, as required, to validate that its system is accurately reviewing and approving blanket order requisitions. GAO's and the Office of Management and Budget's internal control standards require that a system such as the Army's be periodically tested to ensure that it is working as intended. According to DOD and Army officials, they have not tested the system's logic for restricting requisitions since 1999. Also, the officials stated that the Defense Security Cooperation Agency, in October 1998, directed that no additional funds be used to expand the current system. However, according to the agency, the Army is not prohibited from periodically testing the system.
Freight railroads are an important component of the nation’s transportation system, operating over 700 million train-miles in 2010. The freight railroad industry is primarily composed of 7 large railroads (called class I railroads) and about 570 smaller class II and III railroads. Within the industry, class I railroads predominate, representing about 93 percent of total freight revenue and about 68 percent of total rail mileage operated in the United States in 2009. Class II and III railroads include regional and short line railroads. Regional railroads typically operate 400 to 650 miles of track spanning several states, while short line railroads typically perform point-to-point service over short distances. According to the American Short Line and Regional Railroad Association (ASLRRA), the average length of short line service is 90 miles, and over 58 percent of short line carriers connect with more than 1 class I railroad. As the association points out, short line railroads generally operate the first mile and last mile of U.S. freight rail commerce. Because railroads operate across millions of train-miles every year, safety is an important concern. In general, railroad safety has improved over the last 10 years. For example, the approximately 1,800 freight train accidents reported to FRA in 2010 represents a decrease of nearly 40 percent from the approximately 3,000 train accidents reported in 2001. Similarly, the number of accidents per million train-miles for all railroads reported to FRA decreased to 2.6 in 2010 from 4.2 in 2001 (see fig. 1). Yet this decline is not equal for railroads of all sizes: In 2010, the rate reported for class III railroads, 7.1 accidents per million train-miles, was more than twice the rate reported for all railroads. 
FRA attributed the difference in accident rates to differences in operations between larger railroads (which generally operate over longer distances and perform little switching) and smaller ones (which generally operate over shorter distances and perform frequent switching). Because RSIA was passed by Congress in late 2008, FRA officials told us it is too early to tell what effect, if any, requirements contained in the law may have had on railroad accident rates. They said it may well take years to identify any particular effects. In this report, we do not attempt to draw any correlations between safety outcomes (such as changes in accident rates) and changes to hours of service requirements contained in RSIA. Human factors are among the leading causes of train accidents (see fig. 2). As for train accidents overall, the rate for accidents caused by human factors has generally decreased over the last 10 years, and again, the rate is higher for class III railroads than for either class I or class II railroads (see fig. 3). Although there is a general downward trend, FRA attributed the decrease since 2008 to changes it made that year to certain safety regulations to increase railroads’ accountability for implementing and complying with sound operating procedures. For example, as of January 1, 2009, every railroad was to have a written program of operational tests and inspections in effect, and the programs were to emphasize those operating rules that cause or are likely to cause the most accidents and incidents. RSIA also changed the hours of service limits for train employees. Before RSIA, an employee was limited to 12 consecutive hours of time on duty, or 12 nonconsecutive hours on duty if broken by an interim release of at least 4 consecutive hours, in a 24-hour period that begins at the beginning of the duty tour. Under RSIA, the limit is 12 consecutive hours of time on duty, or 12 nonconsecutive hours on duty if broken by an interim release of at least 4 consecutive hours uninterrupted by communication from the railroad likely to disturb rest, in a 24-hour period that begins at the beginning of the duty tour. 
Under RSIA, an employee may not be on duty as a train employee after initiating an on-duty period on 6 consecutive days without 48 consecutive hours off duty, free from any service for any rail carrier, at the employee’s home terminal. Employees are permitted to initiate a 7th consecutive day when the employee ends the 6th consecutive day at the away-from-home terminal, as part of a pilot project, or as part of a grandfathered collectively bargained arrangement; if 7 consecutive days are permitted, the mandatory off-duty period is extended to 72 consecutive hours. The rules on communication during rest time also changed. Before RSIA, railroads were permitted to communicate with covered employees during rest time, though some communications could be considered service for the railroad. Under RSIA, a railroad may not communicate with covered employees during the statutory minimum off-duty period of 10 consecutive hours, except in cases of emergency; if an employee’s rest is disturbed, the statutory minimum off-duty period begins again from the point of interruption. RSIA also limits covered employees to 276 hours of time on duty, in deadhead transportation to a point of final release, or in any other mandatory activity for the railroad carrier during a calendar month. Finally, the minimum off-duty time changed: before RSIA, employees were entitled to 8 consecutive hours off duty (10 consecutive hours if time on duty reached 12 consecutive hours); under RSIA, employees must receive 10 consecutive hours of time off duty free from communication from the railroad likely to disturb rest, with additional time off duty if on-duty time plus time in or awaiting deadhead transportation to final release exceeds 12 hours. Individual railroads are primarily responsible for their own safe operation. 
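The statutory limits above are, in principle, checkable mechanically. The sketch below is illustrative only (Python, with invented function and parameter names); it ignores interim releases, deadhead time, and the pilot-project and collectively bargained exceptions, and captures only the core RSIA limits for a proposed duty tour:

```python
# Simplified hours-of-service check for a train employee's duty tour;
# real compliance logic has many exceptions this sketch omits.

MAX_ON_DUTY_HOURS = 12       # consecutive hours on duty per tour
MIN_OFF_DUTY_HOURS = 10      # consecutive, undisturbed, before the next tour
MAX_MONTHLY_HOURS = 276      # on-duty and related time per calendar month
MAX_CONSECUTIVE_DAYS = 6     # before 48 consecutive hours off is required

def tour_violations(on_duty_hours, prior_off_duty_hours,
                    monthly_hours, consecutive_duty_days):
    """Return a list of limits a proposed duty tour would exceed."""
    violations = []
    if on_duty_hours > MAX_ON_DUTY_HOURS:
        violations.append("exceeds 12-hour on-duty limit")
    if prior_off_duty_hours < MIN_OFF_DUTY_HOURS:
        violations.append("less than 10 consecutive hours off duty")
    if monthly_hours + on_duty_hours > MAX_MONTHLY_HOURS:
        violations.append("exceeds 276-hour monthly limit")
    if consecutive_duty_days >= MAX_CONSECUTIVE_DAYS:
        violations.append("7th consecutive day requires an exception")
    return violations
```

A railroad checking a proposed 12-hour tour after 10 hours of undisturbed rest, with 270 hours already accrued this month, would see the monthly limit flagged under this sketch, since 270 plus 12 exceeds 276.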
However, FRA is the primary federal agency responsible for formulating railroad safety policies and regulations and for monitoring and enforcing railroads' compliance with hours of service and other requirements. FRA has issued statutory interpretations related to covered freight railroad employees' duty and rest time, as well as regulations governing hours of service recordkeeping. FRA has also adopted what it views as a data-driven, risk-based approach to monitoring and enforcement. Under the National Rail Safety Action plan, implemented between 2005 and 2008, FRA used accident, incident, and other safety data to establish a framework to direct its regulatory and compliance efforts at the highest priority risks. The plan outlines a number of initiatives aimed at reducing the main types of train accidents, including those caused by human factors or track defects. One of these initiatives, the National Inspection Plan (NIP), uses accident and inspection data to focus inspections on areas that, according to the data, are likely to have safety problems before a serious accident occurs. The NIP provides guidance to each of FRA's eight regional offices on how its inspection resources should be allocated. Additionally, the Office of Railroad Safety issues the National Safety Program Plan, which provides a means of planning special-emphasis activities, such as inspection activities and initiatives that cross regional boundaries and are directed at issues of concern for railroads operating in multiple regions. To provide oversight, FRA conducts periodic inspections and takes enforcement action.  FRA inspections address five areas, called disciplines—operating practices, track, hazardous materials, signal and train control, and motive power and equipment (such as locomotives and freight rail cars). Each inspection discipline includes a number of activities related to specific requirements.
For example, inspectors in the operating practices discipline—who perform about 80 percent of hours of service inspections—assess railroads' compliance with hours of service requirements for train and dispatching service employees. Typically, inspections are conducted at railroads' operating sites. For example, inspections of hours of service recordkeeping and inspections for compliance with hours of service limitations take place at duty stations or facilities where records are maintained.  To take enforcement action, FRA inspectors may cite violations and recommend assessment of civil penalties. FRA's enforcement policy, which is designed to concentrate enforcement efforts on the areas with the greatest potential safety benefits, specifies that before assessing penalties, inspectors should consider the seriousness of the condition or act, the potential safety hazards, and the current level of compliance of the railroad, among other things. FRA has statutory authority to assess civil penalties in the range of $650 (minimum) to $25,000 (ordinary maximum) for ordinary violations of its regulations. FRA may assess the statutory aggravated maximum penalty of $100,000 "when a grossly negligent violation or a pattern of repeated violations has caused an imminent hazard of death or injury to individuals, or has caused death or injury." In addition to these activities, FRA conducts other types of safety oversight aimed at reducing train accidents, such as monitoring railroad safety data, investigating accidents, and reviewing and investigating complaints, as well as providing training for small railroads. Furthermore, FRA funds research and development to support its safety oversight by, for example, assisting in the development of new regulations and the revision of existing ones.
FRA also has authority to review and approve petitions for waivers of compliance with safety requirements, including exemptions from the hours of service laws for railroads with 15 or fewer covered service employees and waivers of one requirement of the hours of service law, the consecutive day work limits. Finally, FRA is authorized to approve pilot projects that may be conducted to demonstrate the potential safety benefits of alternatives to current safety requirements. As of July 2011, FRA had 592 rail safety positions, including about 400 inspectors. In addition, about 170 state inspectors work with FRA as part of the State Rail Safety Participation Program. As of 2009, the railroad industry had about 170,000 employees, 140,000 miles of track in operation, and over 1.3 million freight rail cars. Overall, FRA inspects about 0.2 percent of railroad operations each year. Its goal is to inspect all railroads at least once a year, but it does not always assess a railroad's compliance with all activities related to the requirements in each discipline during each inspection. When the major hours of service changes in RSIA took effect in July 2009, the nation was in the midst of a serious economic recession, and the railroad industry was experiencing decreases in revenues, traffic, and staffing levels. For example, operating revenues for class I railroads decreased from $61.2 billion in 2008 to $47.8 billion in 2009 before recovering to $58.4 billion in 2010. Revenue ton-miles for class I railroads followed a similar pattern, decreasing from 1.8 trillion in 2008 to 1.5 trillion in 2009 before increasing to 1.7 trillion in 2010. In addition, the number of class I railroad T&E employees decreased from about 65,000 in December 2008 to just under 57,000 in December 2009 before increasing again to about 62,000 in December 2010. RSIA's new hours of service requirements have led to changes in covered T&E employees' work schedules.
Both the limits on consecutive work days without required rest (referred to hereafter as consecutive work day limits) and the new requirements for rest, including the requirements for increasing the minimum rest at the end of a shift from 8 to 10 hours, and for this rest to occur during the 24 hours before the start of a new shift and be undisturbed (referred to hereafter as the increased rest requirements), have contributed to the schedule changes. Factors other than RSIA, such as the economic recession, could have played a role in these changes. We attempted to mitigate the effects of the economic conditions by avoiding months during which the demand for rail service was rapidly declining, and we analyzed only employees who worked in both May 2008 and May 2010, on the assumption that these employees would likely be performing similar work. Class I and II railroad officials we spoke with said RSIA's consecutive work day limits have led some railroads to substitute a schedule with 5 consecutive work days followed by 48 hours of rest, known as a "5 by 2" schedule, for the previously more common schedule with 6 consecutive work days followed by 24 hours of rest, known as a "6 by 1" schedule. Now, use of the 6 by 1 schedule requires an FRA-approved waiver of compliance with hours of service requirements. For affected employees, this schedule change means that during the course of a 7-day period, a day of rest has taken the place of a day of work. RSIA's requirements for increased rest also contributed to the schedule changes. Although RSIA made 10 hours of rest mandatory, some railroad officials we spoke with said they had instituted 10-hour rest periods for covered T&E employees before RSIA took effect. However, this policy generally applied only at home terminals, not at away-from-home terminals. Railroad officials told us the work schedule changes responded to RSIA provisions and also addressed economic factors.
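As an illustration of how the limits discussed above interact, the sketch below encodes the core daily, rest, consecutive-day, and monthly checks as a simple function. It is our own simplification, not FRA's methodology: the function name and schedule representation are invented, and the sketch ignores interim releases, deadhead time, emergencies, and the 7th-day, pilot-project, and waiver exceptions described in this report.

```python
from dataclasses import dataclass

@dataclass
class DutyTour:
    on_duty_hours: float         # consecutive hours on duty in this tour
    prior_off_duty_hours: float  # uninterrupted rest immediately before the tour

def tour_violations(tour, consecutive_duty_days, monthly_on_duty_hours):
    """Flag a freight T&E duty tour against the core RSIA limits.

    Simplified illustration only: ignores interim releases, deadhead
    time, and the 7th-day / pilot-project exceptions noted above.
    """
    problems = []
    if tour.on_duty_hours > 12:
        problems.append("more than 12 hours on duty in the tour")
    if tour.prior_off_duty_hours < 10:
        problems.append("less than 10 consecutive hours of undisturbed rest")
    if consecutive_duty_days > 6:
        problems.append("7th consecutive duty day without 48 hours off")
    if monthly_on_duty_hours > 276:
        problems.append("276-hour monthly cap exceeded")
    return problems

# A 7th consecutive duty day, as under a pre-RSIA "6 by 1" pattern,
# is flagged even when the daily and rest limits are met.
print(tour_violations(DutyTour(10, 10), consecutive_duty_days=7,
                      monthly_on_duty_hours=200))
```

Under this simplification, a compliant "5 by 2" tour returns an empty list, while the pre-RSIA "6 by 1" pattern fails only when a 7th consecutive day is initiated without the required 48 hours off.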
Our analysis of hours worked by the same class I covered T&E employees and covered T&E employees of the participating class II railroads showed a per-employee increase of about 10 hours in the time available for rest for the class I employees in May 2010 compared with May 2008 and a per-employee increase of about 17 hours for the class II employees in May 2010 compared with May 2008. These increases are statistically significant. The extent to which covered employees used the additional time to rest is unknown. Railroad officials told us that some employees have used the extra rest time to work a second job or to do other activities that may not involve rest. For example, an official with a class III railroad told us many of its covered T&E employees have farms that they work when they are not working on the railroad. The increased time available for rest under RSIA also led covered T&E employees to work fewer hours. The same analysis that we used to determine the increase in available rest time showed an equivalent per-employee decrease in hours worked in May 2010 compared with May 2008—about 10 hours for class I employees and about 17 hours for the selected class II employees—both of which are statistically significant changes. For both the class I and class II covered T&E employees included in our analysis, the total hours worked per employee decreased from 156 in May 2008 to about 146 in May 2010 for class I employees and from 169 in May 2008 to about 153 in May 2010 for class II employees. In addition, the total number of work shifts (which includes covered and noncovered service for the railroad) per employee decreased from 18 in May 2008 to 17 in May 2010 for class I covered T&E employees and from 19 in May 2008 to 18 in May 2010 for class II covered T&E employees. Still another effect of RSIA's increase in rest time may be an increase in the amount of time some covered T&E employees spent at terminals other than their home terminal.
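The matched-employee comparison described above can be sketched numerically. The individual hours below are invented for illustration; only the roughly 10-hour average decrease for class I employees (about 156 to about 146 hours) is drawn from our analysis, and the statistical significance testing itself is not reproduced here.

```python
import statistics

# Hypothetical monthly work hours for the same five employees in both
# months (invented values; the report gives only the averages).
hours_may_2008 = [158, 150, 162, 155, 155]
hours_may_2010 = [147, 141, 151, 146, 145]

# Matched (paired) differences: the same employee appears in both
# months, so each difference reflects a within-employee change rather
# than workforce turnover.
diffs = [h10 - h08 for h08, h10 in zip(hours_may_2008, hours_may_2010)]
mean_change = statistics.mean(diffs)
print(f"mean per-employee change: {mean_change:.1f} hours")
```

Restricting the comparison to employees present in both months, as described above, is what allows the average change to be read as a change in individual workloads.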
In responding to our rail industry survey, 6 out of 7 class I railroads reported an increase in the time affected employees spent at away-from-home terminals. In addition, 7 of 14 class II railroads reported that they had away-from-home operations, and of these, 4 reported an increase in the time spent away from home. For the most part, increased time spent away from home was not an issue for the 153 class III railroads that responded to this question in our survey. Of these 153 class III railroads, 10 reported an increase in time at away-from-home terminals for their covered employees. According to both class I railroad and rail labor officials we spoke with, some of the affected covered employees are not happy with the increased time away from home, and the officials suggested that the undisturbed rest requirement be reduced from 10 hours to 8 hours at away-from-home terminals to allow covered employees to return home sooner. Initial indications are that RSIA's changes generally reduced the fatigue potential for covered T&E employees. According to our analysis of covered T&E employee work schedules, the potential for covered employees to work at high risk of fatigue—a level associated with reduced alertness and an increased risk of errors and accidents—decreased after RSIA took effect. More specifically, our analysis of the May 2008 and May 2010 work schedules for class I and class II covered T&E employees using an FRA-validated fatigue model showed that the percentage of total time worked at high risk of fatigue decreased by 29 percent (3 percentage points) for the class I employees and 36 percent (5 percentage points) for the class II employees (see fig. 4). Further information on fatigue science and our use of fatigue models appears in appendix II. RSIA's consecutive work day limits and requirement for 10 hours of undisturbed rest may both have contributed to the reductions in work-related fatigue indicated by our analysis.
 Effects of consecutive work day limits on fatigue. In its March 10, 2011, Notice of Proposed Rulemaking Regulatory Impact Analysis on commuter and intercity passenger rail hours of service requirements, FRA stated that working an increasing number of consecutive days tends to result in reduced sleep as an employee sacrifices time for sleep to attend to personal activities. This tendency would apply to both freight and passenger railroads. FRA's proposed requirements for limiting consecutive days of work for commuter and intercity passenger rail covered T&E employees are based on research from other industries that shows some evidence of increased fatigue risk over successive workdays. In FRA's view, the proposed consecutive work day limits for commuter and intercity passenger rail covered employees were reasonable and necessary because of the increased fatigue risk from working a high number of consecutive days without rest. Rail labor representatives we spoke with also told us they see RSIA's consecutive work day limits as beneficial because they provide a break for employees in their work schedules.  Effects on fatigue of RSIA's increased rest requirements. As noted above, the total time available for rest has increased, offering more opportunity for employees to rest. In addition, RSIA's requirement that rest be undisturbed may further benefit employees' rest. According to rail labor representatives we spoke with, crew calls from railroads during employees' rest periods were a concern before RSIA took effect. The representatives told us that covered employees had complained to them about unnecessary contact by railroads during their rest periods and said that this contact had been disruptive to their rest. Since RSIA has taken effect, they said, such complaints have virtually ceased.
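The fatigue reductions reported earlier mix relative and absolute measures: 29 percent (3 percentage points) for class I employees and 36 percent (5 percentage points) for class II employees. The arithmetic relating the two is worth making explicit. The baseline shares below are our own assumptions, chosen only so the rounded results match the report's figures; figure 4 itself is the authoritative source.

```python
# Relative vs. absolute change in the share of work time spent at high
# risk of fatigue. "before"/"after" values are assumed baselines, not
# figures taken from the report.
class_i  = {"before": 10.3, "after": 7.3}   # percent of work time
class_ii = {"before": 13.9, "after": 8.9}

for name, share in [("class I", class_i), ("class II", class_ii)]:
    pp_drop = share["before"] - share["after"]    # absolute: percentage points
    rel_drop = 100 * pp_drop / share["before"]    # relative: percent of baseline
    print(f"{name}: -{pp_drop:.0f} percentage points, -{rel_drop:.0f} percent")
```

A small absolute change can thus correspond to a large relative change when the baseline share is itself small.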
It is still early in the implementation process, yet class I railroad officials and rail labor officials we spoke with said that RSIA's requirements, including longer, undisturbed rest time, should contribute to a better rested workforce. The effects of RSIA's hours of service changes on fatigue levels for class III railroads may depend on their operations. We did not analyze class III covered T&E employee work schedules, since their hours of service and employee records were largely paper-based. However, interviews with class III railroads indicated that for some class III railroads, particularly those that had scheduled daytime operations, fatigue may not have been an issue prior to RSIA. In interviews, officials at 2 class III railroads said that fatigue was not an issue for their employees, because they offered service Monday through Friday during the daytime, with occasional Saturday service, depending on customer needs. Both of these railroads had FRA-approved waivers of compliance with hours of service requirements that permitted 6 by 1 work schedules, so that periodically scheduling a sixth day of service was not a concern. In addition, according to these officials, their covered T&E employees generally travel a maximum of 25 to 50 miles, and their work schedules always begin and end at the home terminal. In responding to our rail industry survey, 98 out of 153 (64 percent) of the class III railroads reported that they had changed crew schedules as a direct result of RSIA. This change may indicate that previous crew schedules did not comply with RSIA's provisions and that the new schedules could have reduced fatigue. Again, any improvements would likely be due in part to RSIA's consecutive work day limits and increased rest requirements. Fatigue science has shown that the risk of fatigue is greater for nighttime work than for daytime work.
For example, research on human circadian rhythms (the natural wake and sleep patterns of the human body) has shown that people by nature get tired at night and are more likely to have higher quality, more restorative sleep at night than they are during the day. Working at night can upset these circadian rhythms and result in sleep disruption and potential health problems. Fatigue research has also shown that fatigue increases, or alertness and performance decrease, during night work and that fatigue risk is substantially greater for successive night shifts than for successive day shifts. Eliminating nighttime work in the freight railroad industry would not be practicable, and RSIA's requirements had little effect on the amount of time covered T&E employees work at night. According to our analysis of the May 2008 and May 2010 work schedules for covered class I and class II T&E employees, the number of work hours occurring at night decreased by 4 hours per employee for class I employees and by 2 hours per employee for class II employees after RSIA took effect. In general, the freight railroad industry operates every day of the year, 24 hours a day. FRA noted in its Regulatory Impact Analysis for the proposed commuter and intercity passenger rail hours of service rules that, unlike freight service, passenger service may be less affected by night work fatigue factors because most scheduled commuter and intercity passenger rail service does not operate during night hours. Additionally, our analysis of the work schedules for these 2 months showed little change in the percentage of covered class I T&E employees whose schedules involved night work—47 percent in May 2008 and 45 percent in May 2010. For the covered class II T&E employees, that percentage was the same in both months—34 percent.
Although fatigue science has shown that the risk of fatigue is higher for night work than for day work, RSIA does not differentiate between the two in its hours of service requirements for freight railroads. FRA, however, has differentiated between the two when approving petitions for waivers of compliance with hours of service requirements. For example, when railroads have petitioned for a waiver of compliance with hours of service requirements to allow their employees to work a 6 by 1 schedule with both day and night shifts, FRA has approved such a schedule for daytime shifts, but not for shifts that include the hours between midnight and 6 a.m. FRA does not approve these shifts because of the higher risk of fatigue associated with them. FRA has also differentiated between day and night work in the final hours of service rules for covered train employees providing commuter and intercity passenger rail transportation. Specifically, the rule does not require FRA review and approval, including an assessment of fatigue risk, for work schedules that fall within the parameters of preapproved daytime work schedule templates (generally between 4 a.m. and 8 p.m.). Schedules that include work between 8 p.m. and 4 a.m. must generally be analyzed using an FRA-approved fatigue model to assess the potential fatigue risk, and FRA review and approval are required for schedules where the fatigue risk is deemed too great. For such schedules, railroads must generally take mitigating action to bring the risk from fatigue to an acceptable level. Additionally, limitations are placed on the number of consecutive days that a covered commuter or intercity passenger railroad T&E employee may work, with the limitations depending on the time of day of the assignments within the series of consecutive days. In making this distinction between nighttime and daytime work assignments, FRA has taken into account the fact that work at night presents a greater risk of fatigue.
Our analysis of the class I and selected class II covered T&E employee work schedules for May 2008 and May 2010 shows that the extent to which employees worked at night was highly correlated with their spending 20 percent or more of their work time at high risk of fatigue. In our analysis, the proportion of employees with 20 percent or more of their work time at high risk of fatigue decreased from 14 percent in May 2008 to 10 percent in May 2010 for class I covered T&E employees, and from 18 percent in May 2008 to 12 percent in May 2010 for participating class II railroad covered T&E employees. Even though fatigue risk was reduced after the new hours of service requirements took effect, our findings on the correlation between night work and work hours spent at high risk of fatigue—along with the fatigue model results discussed previously, which showed the decline in high risk of fatigue based on total hours worked—indicate that because RSIA did not directly limit the hours worked at night or incorporate night work into the freight requirements, fatigue might not be addressed under the new requirements to the fullest extent possible. Taking hours worked at night into consideration in freight hours of service requirements could hold promise for mitigating the risk of fatigue. In addition to analyzing actual work schedules for the class I and class II railroad T&E employees, we analyzed three consecutive sets of two hypothetical 6 by 2 work schedules—6 consecutive work days being the maximum allowed under RSIA when not returning home from an away-from-home terminal—using a fatigue model to further assess the effects of night work on fatigue. One schedule included only daytime hours, with 10-hour shifts from 8 a.m. to 6 p.m., and the other included nighttime hours, with 10-hour shifts from 8 p.m. to 6 a.m.
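The two hypothetical "6 by 2" schedules just described can be constructed programmatically, which also makes it easy to count how much of the scheduled work falls at night. The sketch below is our own construction, not the fatigue model itself; the midnight-to-6 a.m. night window is borrowed, as an assumption, from the FRA waiver practice noted earlier.

```python
# Build a 6-on/2-off pattern repeated three times (24 calendar days)
# for the two hypothetical shifts described above, then count work
# hours falling between midnight and 6 a.m.

def schedule(start_hour, end_hour):
    """Hourly work slots (day, hour-of-day) for a 6-on/2-off pattern."""
    slots = []
    for day in range(24):
        if day % 8 in (6, 7):          # days 7 and 8 of each cycle are rest
            continue
        h = start_hour
        while h % 24 != end_hour:      # handles shifts that cross midnight
            slots.append((day, h % 24))
            h += 1
    return slots

day_shift = schedule(8, 18)    # 8 a.m. - 6 p.m.
night_shift = schedule(20, 6)  # 8 p.m. - 6 a.m.

def night_hours(slots):
    return sum(1 for _, hour in slots if 0 <= hour < 6)

print(len(day_shift), night_hours(day_shift))      # 180 work hours, 0 at night
print(len(night_shift), night_hours(night_shift))  # 180 work hours, 108 at night
```

Both schedules contain the same 180 work hours over 18 work days, so any difference in modeled fatigue risk between them reflects when the work occurs, not how much of it there is.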
According to our analysis, the percentage of time at high risk of fatigue was greater for the hypothetical night work schedule than for the hypothetical day work schedule. The day work schedule had no time spent at high risk of fatigue, while the night work schedule had a total of 67 hours, or 37 percent of total work time, spent at high risk of fatigue. Furthermore, the risk of fatigue was high for all but one of the work days in the all-night-work schedule, while no work day in the all-day-work schedule fell into the high risk category (see fig. 5). The peak fatigue score shown in the figure is the highest fatigue score reached on a work schedule day analyzed by the fatigue model; it does not mean the whole scheduled work time was spent at the peak fatigue level. For example, on day 2 of the nighttime schedule, the model indicates that example employee 2 would have spent 32 minutes working at high risk of fatigue, with a peak fatigue score of 72. On day 6, however, the model output indicates that example employee 2 spent 8 hours and 6 minutes of a 10-hour shift working at high risk of fatigue, with a peak fatigue score of 119. As our analysis of these hypothetical schedules indicates, consecutive daytime shifts may present a lower risk of fatigue than consecutive nighttime shifts. According to our survey results, RSIA's hours of service requirements led to a number of effects on railroads' operations, as would be expected with any significant change in statutory or regulatory requirements aimed at improving safety by reducing covered employee fatigue. These effects included changes in how crews and trains are scheduled, increases in staffing levels to maintain operations, and reductions in some railroads' ability to meet customer needs. In general, according to our survey results, smaller railroads found some of the changes more burdensome than did larger railroads.
In addition, some railroads incurred one-time or ongoing financial costs, or both, to implement the changes. According to our survey results, RSIA’s hours of service requirements— especially its consecutive work day limits and increased rest requirements—substantially changed the way railroads schedule crews. For example, all 7 class I, 8 of 14 class II (about 57 percent), and 98 of 152 class III (about 64 percent) railroads reported changing crew schedules as a direct result of RSIA’s hours of service requirements. Such changes would be expected, given the new requirements. Prior to RSIA, covered T&E employees on some railroads often worked well beyond 6 or 7 consecutive days. Officials we spoke with at 1 class I railroad said its train crews often worked 8 consecutive days followed by 3 days off, and officials at another class I railroad said most of its employees worked 6 consecutive days with 1 day off, although covered employees often worked 7 days followed by 3 days off or 11 days followed by 4 days off. According to officials we interviewed at 1 class II railroad, a small portion of its covered train employees (about 15 percent) worked up to 22 consecutive days followed by 8 days off. After RSIA, covered employees could no longer work for more than 6 or 7 consecutive days without taking required rest. According to railroad officials, the requirement for 48 hours’ rest following 6 consecutive work days has been particularly challenging, and some officials told us they try to avoid working employees 6 consecutive days. Our survey results indicate that the changes in crew schedules led to changes in train schedules. Specifically, in responding to our survey, 4 of 7 class I railroads reported changing train schedules as a direct result of RSIA’s hours of service changes, while 5 of 14 (about 36 percent) class II railroads and just under half (70 of 153) of class III railroads reported making this change. 
Changes in train schedules particularly affected smaller railroads. According to officials we interviewed from 1 class II railroad, RSIA's changes, particularly the additional time needed for employees to return to work, made it difficult to maintain train schedules and to respond to changes in train operating plans, which are often caused by factors such as mechanical problems and traffic levels. RSIA's changes reduced their flexibility in such situations. An official from another class II railroad told us that RSIA's hours of service changes meant the railroad had to reduce train service from 7 days a week to 6 days because it did not have enough people available to offer service 7 days a week. While this railroad has since hired people and said it expects to resume 7-day service, it was not able to do so for over a year. In some instances, train connections were also affected. For example, a class II railroad official we interviewed said that delays on some of the company's long-distance trains, which the official attributed to RSIA's changes, led to delays on local trains that connected with the long-distance trains. Officials from another class II railroad said RSIA's changes caused them to hold trains out of their rail yard because, until March 2011, they did not have enough people to handle them. While RSIA's consecutive day limits and increased rest requirements were focused on reducing fatigue and improving safety, a majority of the railroads responding to our survey reported that the resulting changes in crew and train schedules imposed burdens on them, and some of these railroads reported that the changes increased their costs. As shown in figure 6, the burden on railroads from changing train schedules could be very great, especially for smaller railroads.
Three of the 4 class I railroads responding to this survey question reported a moderate to substantial burden, and over 55 percent of the responding class II and III railroads reported a substantial to very great burden. For example, as officials we interviewed from a holding company that owns over 30 smaller railroads said, the burden imposed by changing crew and train schedules was very great for some railroads—such as those that serve grain producers during the harvest season—that need to run trains without interruption at certain seasons to meet demand. According to the officials, RSIA’s consecutive work day limits and requirement for more rest between shifts make uninterrupted service like this very difficult to provide. Finally, according to our survey results, changes to crew and train schedules entailed financial costs, particularly for class I railroads. Of the 7 class I railroads, 5 reported incurring financial costs from changing crew schedules and 4 reported incurring such costs from changing train schedules. Fewer class II and III railroads reported incurring such costs, although half (76 of 152) of the class III railroads that reported changing crew schedules reported incurring financial costs for doing so. In some cases, these may have been one-time costs, such as for upgrading hours of service timekeeping systems to accommodate new crew schedules. In other cases, they may have been recurring costs, such as for hiring new employees or bringing employees back from furlough to address issues related to crew or train schedules (discussed later in this report). Costs for additional staff could also be related to service increases responding, at least in part, to improvements in the economy that followed RSIA’s implementation in July 2009. 
In general, we did not ask the railroads we surveyed to identify specific dollar amounts incurred as a result of RSIA’s hours of service changes or to indicate how those amounts may have affected railroad earnings or profits. We did ask the railroads to identify the average annual wages and benefits for employees hired or brought back from furlough as a result of RSIA’s changes (discussed later in this report). During our interviews, some railroad officials told us it was difficult to separate the financial effects of RSIA’s changes from those of general economic conditions. Nevertheless, even though we did not determine the specific financial effects of RSIA’s changes on railroads, it is likely the changes affected the costs, revenues, and earnings for some railroads, at least temporarily. As discussed earlier, such effects are not unexpected given the magnitude of RSIA’s hours of service changes and the many actions required by railroads to comply with the law. In implementing RSIA’s hours of service changes to improve safety and comply with the law, some railroads reported increasing their staffing levels in response to the changes they made in crew schedules. In general, according to railroad officials we spoke with, staffing levels increased with the changes in crew schedules because, with RSIA’s consecutive work day limits and increased rest requirements, covered T&E employees were less available to work. More specifically, because of RSIA’s requirements for 48 instead of 24 hours’ rest after 6 consecutive days on duty, and for 10 instead of 8 hours’ rest between shifts, employees were generally less available for work and more staff were needed to maintain regular operations. 
For example, officials we spoke with from a class II railroad said RSIA’s requirements for 10 hours’ rest between shifts, and for this rest to be undisturbed, could increase the time that covered employees were unavailable for work by between 2 and 4 hours and meant, for this railroad, that more staff were needed to provide pre-RSIA service levels. Additionally, according to an official we spoke with from a class I railroad, RSIA’s changes meant that this railroad needed about 200 more T&E employees than it previously did to run the same amount of business. Although we tried to isolate RSIA’s effects on railroads’ staffing by asking railroads to identify the extent to which they hired new employees or brought employees back from furlough as a direct result of RSIA, we cannot exclude the possibility that some of the changes they reported were also due to improvements in general economic conditions that took place from 2009 to 2010. To address staffing needs, railroad officials we spoke with told us they called on T&E employees without regular crew assignments, hired new employees, or brought employees back from furlough to help fill the staffing gaps. T&E employees without regular crew assignments are listed on what are called extra boards and are on call to meet crew needs as they arise, giving railroads flexibility to meet staffing needs when regular crews are not available to work. All 7 class I railroads use extra boards, and some smaller railroads may also use them. In addition, railroads reported hiring new people or bringing people back from furlough. Some railroad officials we spoke with said these people were at least initially assigned to extra boards. In responding to our survey, 5 of 7 class I and 7 of 14 class II railroads reported they hired or brought T&E employees back from furlough as a direct result of RSIA’s requirements. 
Proportionally, fewer small railroads reported hiring or bringing employees back from furlough—about 30 percent (46 of 152) of the class III railroads responding to our survey. In some instances, hiring decisions at smaller railroads may have reflected broader economic conditions rather than specific operating needs. For example, an official we spoke with from a class III railroad said the company was apprehensive about long-term hiring because, given the risks of a sudden decline in orders, it might have to lay employees off after investing in their hiring and training. According to our survey results, the number of T&E employees railroads hired or brought back from furlough varied and increased some railroads' costs. Overall, as would be expected, larger railroads reported hiring or bringing back more employees than smaller railroads. For those railroads we surveyed that reported hiring or bringing people back from furlough, the number of people ranged between 120 and 500 each for the 4 class I railroads, 5 and 40 each for the 7 class II railroads, and 1 and 30 each for the 45 class III railroads. In hiring or bringing T&E employees back from furlough, the railroads incurred ongoing financial costs. According to our estimates, based on the average annual wages and benefits of T&E employees reported by the railroads we surveyed, the average annual cost for the 4 class I railroads ranged from about $11 million to $50 million, and for the 7 class II railroads, it ranged from about $350,000 to $3 million. While RSIA's requirements affected some railroads' need for staff, the requirements had other effects on staffing as well, including reduced flexibility in using managers and reduced ability to provide guaranteed and other work hours to covered employees:
• Reduced flexibility to use managers to perform covered and noncovered service.
Our survey results indicated that RSIA's changes may have reduced the ability of some managers to perform covered and noncovered service. Most of the larger railroads we surveyed—5 of 7 class I railroads and 12 of 14 class II railroads—reported no reductions in the ability of managers to perform covered service. In contrast, about 36 percent (54 of 151) of class III railroads reported a reduction in managers' ability to perform covered service, and about 30 percent (46 of 151) reported a reduction in managers' ability to perform noncovered service. In general, this issue is of particular importance for smaller railroads. ASLRRA officials told us that, on small railroads, the same person often performs many different functions and it is not unusual for railroad managers to operate trains in place of employees who are sick or on vacation in addition to performing their managerial responsibilities. However, the officials said, if managers do perform such work, they come under RSIA's hours of service limitations, including the monthly cap on total work hours. All of their work hours, including the time spent performing both covered and noncovered service, then fall under the 276-hour cap. The officials said, in some instances, this restriction could prevent managers from performing their regular managerial work.
• Reduced ability to provide guaranteed work hours to covered employees. Some railroads, generally by collective bargaining agreement, guarantee a minimum number of work hours or days to employees over a certain period (e.g., 2 weeks). In some cases, the railroads do this to retain a certain class of employee, such as T&E employees. In general, employees are paid for the guaranteed hours or days whether they perform the work or not. For a railroad, not providing work during the guaranteed hours may mean having to pay for work not performed.
For an employee, not meeting guaranteed hours or days may mean fewer hours worked, even though the employee may be paid for the time not worked. In responding to our survey, 4 of the 7 class I railroads reported they were not able to meet guaranteed work hours as a direct result of RSIA's requirements, whereas smaller proportions of class II and III railroads reported this issue (see fig. 7).
• Potentially reduced ability to provide work hours to covered employees each month. RSIA's monthly cap on total work hours (276 hours) may have altered how railroads are able to use their workforce and the number of hours employees work. A number of railroad officials we interviewed said this cap did not affect their company. However, the majority of railroads we surveyed reported taking employees temporarily out of service as a direct result of RSIA's hours of service changes. This may have resulted from controls railroads implemented to prevent covered T&E employees from exceeding the total monthly work hour cap. For example, in follow-up work on our survey, we learned that all 7 class I railroads established internal thresholds to monitor employee work hours to ensure that employees did not exceed this cap. These thresholds ranged from 250 to 264 hours. We did not determine through our survey and interviews how many T&E employees may have been taken out of service because of these thresholds. However, the number of employees reported by our survey respondents as reaching or exceeding the total monthly work hour cap in any particular month in 2010 was small and ranged from 0 to 26 for all railroads surveyed. Whether or to what extent the internal thresholds influenced this number, prevented employees from exceeding the monthly cap, or limited work hours is unknown. Some of the labor organizations we spoke with expressed concerns about how railroads responded to RSIA's hours of service requirements.
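The internal thresholds described above amount to classifying each covered employee's month-to-date hours against two numbers: the railroad's own alert level and RSIA's 276-hour cap. A minimal sketch follows; the 260-hour alert level is a hypothetical value chosen from within the 250-to-264-hour range railroads reported, and the status labels are ours:

```python
MONTHLY_CAP_HOURS = 276        # RSIA's cap on total monthly work hours
ALERT_THRESHOLD_HOURS = 260    # hypothetical internal threshold (reported range: 250-264)

def crew_status(hours_this_month: float) -> str:
    """Classify a covered employee's month-to-date hours against the cap."""
    if hours_this_month >= MONTHLY_CAP_HOURS:
        return "out of service"   # may not perform further covered service this month
    if hours_this_month >= ALERT_THRESHOLD_HOURS:
        return "flagged"          # internal threshold reached; held from assignments
    return "available"

print(crew_status(180))  # available
print(crew_status(262))  # flagged
print(crew_status(276))  # out of service
```

A control of this kind keeps employees from reaching 276 hours, but it also illustrates the labor organizations' complaint that an internal threshold can act as an artificial cap on work hours and earnings.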
For example, officials we interviewed from one organization that represents T&E employees said that a significant portion of its membership had suffered some salary loss from the changes in hours of service requirements. They added that covered employees were not being scheduled for more than 252 hours of work in a month, in order to avoid reaching or exceeding RSIA's cap on total monthly work hours. The officials said this practice can cost employees as much as 24 hours' pay in a month. Officials from unions representing conductors, signalmen, and yardmasters expressed similar concerns. They primarily attributed reductions in work hours and lost compensation to RSIA's impact on crew schedules as well as to the requirement for 10 hours' undisturbed rest and the monthly work hour cap. For example, an official with a union representing yardmasters told us the requirement for 10 hours' undisturbed rest precludes employees who work in rail yards from working swing (third) shifts in addition to regular shifts 7 days a week, and this restriction deprives employees of work and reduces earning opportunities. Union officials also told us the internal threshold some railroads use to address the monthly work hour cap serves as an artificial cap and essentially deprives employees of additional work hours and earnings. The changes that some railroads made to implement RSIA's hours of service changes and improve safety may also have limited their ability to provide service and meet customer needs. As figure 8 shows, over half of all railroads (98 of 174 railroads) responding to our survey question reported their ability to meet customer needs was reduced as a direct result of RSIA's hours of service changes. In particular, class I (4 of 7) and class III (87 of 153) railroads reported a reduction. Railroad officials we spoke with largely attributed these effects to RSIA's consecutive work day limits and requirements for increased rest.
By affecting crew and train schedules, the officials noted, the requirements have sometimes limited railroads' flexibility to provide train service when and where needed, especially on weekends. We did not determine the effects of RSIA's requirements on changes in railroad customer service, such as whether railroads lost customers or customers changed modes of transportation following service changes. RSIA's focus was on improving railroad safety, but effects on customer service may have occurred. Officials we spoke with at a class II railroad said that, in some instances, the hours of service requirements have led to about a 50 percent loss in weekend crew starts and negative effects on customer service. On weekends, the officials said, some entire industries do not receive train service because people are not available to operate the trains. Officials we spoke with at several other smaller railroads also told us their weekend service had been affected. For some railroads, reduced customer service may have been temporary. For example, some railroads have petitioned FRA for waivers of compliance from hours of service requirements so they will have the flexibility to provide service for 6 days (e.g., Monday through Saturday), followed by 24 hours' rest rather than the RSIA-mandated 48 hours, if customer needs dictate. In addition, hiring new employees or bringing employees back from furlough may have permitted some railroads to return to service levels that may initially have decreased because the railroads lacked available employees. Officials at one class II railroad told us they believed their ability to keep up with business levels was severely hampered by the lack of covered employees caused by RSIA's hours of service changes. To address the issue, they brought back all their previously furloughed employees and hired even more covered employees.
As a result, by March 2011, the officials believed the railroad was getting back to crew levels that were sufficient to meet business needs. Some shippers and receivers that use rail to meet their transportation needs told us their service had been affected by RSIA. We did not formally survey shippers or receivers that use rail to transport their goods about the possible effects of RSIA’s hours of service changes, but responses to questions sent out on our behalf by a trade association (the National Industrial Transportation League) that represents shippers and receivers of a wide mix of commodities, including steel, paper, and agricultural products, indicated that the changes had affected some of them. Of the 28 shippers and receivers that responded to the questions, 10 said their service had been affected by RSIA’s hours of service changes, and 7 said their weekend service had been affected. Among the problems with service cited by these shippers and receivers were less predictable service, train crew shortages, and switches missed because crews were unavailable or had “timed out on the clock.” The responses to these customer service problems varied but included increasing rail fleets, switching to trucks to compensate for rail inefficiencies, and increasing inventory or adjusting or shutting down production schedules. To implement RSIA’s hours of service changes, railroads also reported making administrative changes. For example, railroads reported modifying or creating new recordkeeping systems to account for time covered by hours of service requirements, spending more time reviewing hours of service records, and handling more claims for lost work opportunities. To accommodate RSIA’s new hours of service requirements and create the records necessary to comply with the law, railroads of all sizes reported modifying their timekeeping systems or, in some cases, creating new systems. 
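As one illustration of what such a timekeeping modification might involve — a hypothetical sketch of ours, not any railroad's actual system — a ledger of duty tours with a monthly roll-up is enough to compare an employee's hours against a monthly limit:

```python
# Hypothetical minimal hours-of-service ledger: one duty-tour entry per
# shift, with a roll-up that sums a covered employee's hours for a month.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DutyTour:
    employee_id: str
    on_duty: datetime
    off_duty: datetime

    @property
    def hours(self) -> float:
        return (self.off_duty - self.on_duty).total_seconds() / 3600

def monthly_hours(tours: list[DutyTour], employee_id: str,
                  year: int, month: int) -> float:
    """Sum duty hours for one employee in one calendar month."""
    return sum(t.hours for t in tours
               if t.employee_id == employee_id
               and t.on_duty.year == year and t.on_duty.month == month)

tours = [
    DutyTour("E100", datetime(2010, 3, 1, 6, 0), datetime(2010, 3, 1, 18, 0)),
    DutyTour("E100", datetime(2010, 3, 2, 6, 0), datetime(2010, 3, 2, 16, 0)),
]
print(monthly_hours(tours, "E100", 2010, 3))  # → 22.0
```

Real systems would also need to track limbo or deadhead time and other categories the law distinguishes; the point of the sketch is only that RSIA's monthly cap made a per-employee monthly total something every timekeeping system had to produce.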
According to our survey results, most large railroads (all 7 class I railroads and 12 of 14 class II railroads) primarily reprogrammed or updated their existing timekeeping systems, which are generally electronic (see fig. 9). Among other things, officials we spoke with at some railroads said they established ways to track employees' total work and limbo or deadhead hours in a month and, in some cases, incorporated alerts to prevent covered employees from being contacted during undisturbed rest periods. Designing a way to prevent contact was sometimes more difficult than expected because, as officials we spoke with at one class I railroad said, people other than crew schedulers, such as company doctors, security personnel, and payroll personnel, may try to contact a covered employee during a day, and the system has to preclude all such contacts during an undisturbed rest period. In contrast, many small railroads we surveyed—over half (94 of 151) of the class III railroads—reported creating new timekeeping systems. Officials we interviewed at some of the class III railroads said their companies have paper-based hours of service timekeeping systems but use electronic spreadsheets to track covered employees' hours of service. In some cases, the electronic spreadsheets were updated to keep track of such things as total monthly work hours. Additionally, according to some railroad officials we spoke with, their changes were sometimes part of a broader effort to better manage both hours of service and other aspects of their business, such as financial management. To demonstrate compliance with RSIA's hours of service changes, some railroads reported spending more time preparing or reviewing hours of service records—work that officials said sometimes limited their ability to perform other tasks, such as operating their business.
Survey respondents who addressed this question, including those from all 7 class I railroads and over 70 percent of class II and III railroads (10 of 14 and 115 of 152, respectively), reported the time required for recordkeeping or recordkeeping review increased as a direct result of RSIA’s hours of service requirements. Not unexpectedly, the increased time to prepare or review hours of service records imposed burdens on railroads. In responding to our survey, 6 of the 7 class I railroads reported the additional time for recordkeeping or recordkeeping reviews presented some to a moderate burden, while half (5 of 10) of the class II railroads and about 40 percent (44 of 111) of the class III railroads responding to this question reported a substantial to a very great burden. Over time, the increased efforts to prepare and review hours of service records will likely become part of the normal routine of a railroad. In addition, creating such records is part of helping ensure compliance with the law and achieving its intended safety benefits. However, at least temporarily, some railroads we spoke with said the increased record preparation and review time affected how their business is operated. For example, officials we spoke with at 3 class III railroads, all with paper-based hours of service records, said the additional information that must be tracked for hours of service records was a burden and the time spent on tracking left less time for other activities, including running the railroad. This was one of these railroads’ main issues with RSIA’s changes. An official we spoke with at a class I railroad also told us RSIA’s changes added an extra layer of reporting to the company’s hours of service process, primarily to accommodate RSIA’s total monthly work hour caps. Finally, in responding to our survey, some railroads reported that the timekeeping changes imposed financial costs. 
In some cases, these may have been one-time costs, and in others, they may have been recurring costs. According to our survey results, all 7 class I railroads, 12 of the 14 class II railroads, and 88 of 150 class III railroads incurred financial costs from introducing or revising hours of service records or recordkeeping systems. We did not collect information on the specific costs incurred. However, some railroad officials we interviewed said the costs ranged into the millions of dollars. According to officials from one class I railroad we spoke with, it spent about $3 million in 2009 for programming changes, including changes to its crew monitoring system. Officials we spoke with at another class I railroad told us it spent about $2 million for programming and upgrades, including converting from paper to electronic records for its signal employees. According to the officials, the cost was primarily for company employees, not a consultant, to do the reprogramming and was a one-time cost. At some other railroads, the costs were for work performed by a mix of in-house staff and outside consultants. Some of the costs were recurring. For example, an official from a class III railroad told us his company spends an extra $500 a month for a manager to review and verify the accuracy of hours of service records. After RSIA took effect, some covered employees filed claims for lost work or compensation—that is, requests for payment for work hours or compensation lost because of RSIA’s consecutive work day limits or other requirements. Such claims might arise when, for example, an employee who formerly worked a 6 by 1 shift could no longer do so because RSIA requires 48 hours’ rest after 6 consecutive days on duty. In responding to our survey, 5 of 7 class I, 6 of 14 class II, and 22 of 152 class III railroads reported that the number of claims for missed work opportunities (hours) or compensation increased as a direct result of RSIA’s hours of service changes. 
The remaining class I and II railroads and 128 of the class III railroads (about 84 percent) said either the number of such claims stayed the same or the issue was not applicable to them. We did not collect data on the number of claims filed. However, some railroad officials we spoke with said the number of claims filed doubled or tripled from the normal level. For example, a class II railroad official we spoke with estimated that the number of claims filed at his company each month for lost work hours increased from about 5 before RSIA took effect to 10 to 15 afterwards. This official also said the number of claims subsequently went back to 2–3 per month. A class I railroad official told us T&E employees had filed over 500 claims at this company between July 2009 and May 2010, most of which the company was holding in abeyance until it had decided how to handle them. We do not know how many claims may have resulted in payments to employees or other forms of relief. As noted, one class I railroad we spoke with had not decided at the time of our review how to resolve the 500 claims filed by its employees, in part because the railroad was still considering the status of collective bargaining agreements in relation to RSIA’s legal requirements. Officials from this railroad estimated each claim filed averaged approximately $200 and the railroad’s potential liability in paying these claims was about $100,000. At other railroads, paying compensation may have been more routine. For example, an official we spoke with at a class II railroad, which was trying to avoid working employees 6 consecutive days, said the railroad had, in virtually every instance, paid claims for compensation filed by T&E employees who had been skipped over for work assignments because they were approaching 6 consecutive days of work. To plan its oversight of railroads’ compliance with hours of service requirements, FRA applies the same risk-based approach that it uses to assess compliance generally. 
This approach relies on a risk-based model that FRA implemented in 2006. The model analyzes FRA's inspection data, together with accident and incident data reported by the railroads through the Accident and Incident Reporting System, and then generates the National Inspection Plan (NIP), which is designed to target FRA's inspections at the greatest safety risks. The NIP allocates inspection resources for each FRA region by inspector discipline (such as operating practices and track), and FRA regions then assign resources to activities (such as hours of service and drug and alcohol control) within each discipline, with input from inspectors familiar with each railroad's operations. In addition, FRA regional officials can modify the NIP's allocation of resources among disciplines based on local input, both initially and after 6 months. According to FRA headquarters and regional officials, decisions about how to allocate resources among inspection disciplines and activities are based on factors such as complaints, an inspector's knowledge of a railroad's operation at a given location, and the time and resources available to conduct inspections. This reliance on local input reflects FRA's views that regional officials and inspectors have detailed knowledge about railroads' operations that may not be captured in the data used to develop the NIP and that their input results in a stronger inspection plan than one based solely on data analysis. FRA incorporates data from its Accident and Incident Reporting System into its risk assessment model to help determine the relationship between noncompliance with safety requirements and risk. Specifically, FRA establishes codes for a wide range of violations or conditions, and when railroads report an accident or incident, they enter two codes into the system—one for the primary cause and the other for a contributing cause of the accident or incident.
Neither hours of service violations nor, more broadly, fatigue is among the coded options that railroads can choose to enter. Instead, the options include a large number of actions or conditions that FRA considers potentially related to fatigue, such as "failure to release hand brake on cars" and "failure to comply with restricted speed." When FRA investigates an accident or incident with these codes entered as causes, it then attempts to determine whether fatigue was a factor. According to FRA, it does not have a code for hours of service violations because, in its experience, there is not necessarily a relationship between hours of service and fatigue—a fatigued individual can be in compliance with hours of service requirements or, conversely, a violation of hours of service requirements can occur without an individual being fatigued. Similarly, according to FRA, it does not have a code for fatigue because it already collects information on fatigue when it investigates an accident or incident. It is too soon since RSIA was implemented to determine whether the priority that FRA assigns to overseeing railroads' compliance with hours of service requirements has changed or should change. The new hours of service requirements did not take effect for freight railroads until July 16, 2009, and we collected the inspection data through September 30, 2010, a span of 14 months. Hence, the period covered by our audit work is too short for us to identify any trends in inspection results or enforcement actions taken since RSIA's changes went into effect. Furthermore, without trend information, there is little basis at the current time to know whether the priority FRA assigns to overseeing hours of service is best aligned with potential safety risks. FRA inspectors conduct hours of service inspections and complaint investigations to determine whether covered employees have worked longer than the law allows.
FRA inspectors also review railroads’ hours of service recordkeeping to assess their compliance with FRA regulations that specify, for example, how and when the hours worked by covered employees are to be recorded. According to our analysis of FRA inspection data, FRA inspectors conducted somewhat fewer hours of service and hours of service recordkeeping inspections of freight railroads in fiscal year 2010—the one complete year for which we have data since RSIA took effect—than they did in fiscal year 2008, the last full year before RSIA was implemented. (See table 2.) However, the data for fiscal year 2010 show increases, especially for hours of service inspections, over the data for fiscal year 2009, the transition year. Furthermore, as the table shows, the annual numbers for both types of inspections have varied over the years, especially for hours of service inspections, and there is no indication thus far of a change in FRA’s emphasis on hours of service. The data for fiscal year 2010 are consistent with the statements of some FRA officials, who told us FRA placed no special emphasis on hours of service issues after RSIA was implemented and has not changed its hours of service inspections since the change in the law. According to the officials, inspections focus on factors that cause accidents, and hours of service issues have caused few, if any, accidents in recent years. Most railroads responding to our survey also reported that they did not see a change in FRA’s handling of hours of service issues. FRA did, however, identify hours of service in the National Safety Program Plan for fiscal year 2010 as a special-emphasis activity for four of FRA’s eight regional offices and for the Office of Railroad Safety at FRA headquarters. Yet in three of these regional offices, the efforts are focused on signal employees rather than T&E employees, the largest group of covered employees subject to hours of service limitations. 
Overall, from fiscal year 2005 through fiscal year 2010, hours of service and hours of service recordkeeping inspections accounted for a very small percentage of FRA inspections of freight railroads—less than 1 percent of all FRA inspections conducted on freight railroads each year during this period, as indicated in table 2. Furthermore, although operating practices inspectors conducted about 83 percent of the hours of service and about 79 percent of the hours of service recordkeeping inspections, these inspections accounted for less than 3 percent of all operating practices inspections conducted at freight railroads during fiscal years 2005 through 2010 (see fig. 10). While our analysis does not indicate any notable change in FRA’s overall emphasis on compliance with hours of service and hours of service recordkeeping requirements, it may show proportionally greater attention to the class I railroads, especially for hours of service (see fig. 11). As previously noted, class I railroads account for over two-thirds of the total rail mileage operated in the United States. For both hours of service and hours of service recordkeeping, the number of inspections increased for the class I railroads and decreased for the class II and III railroads from fiscal year 2009 to fiscal year 2010. Again, however, the data are for a single year, and it is unclear whether any observed change will persist. According to FRA officials, there are no plans to require additional hours of service or hours of service recordkeeping inspections unless there is evidence of an increase in noncompliance on the part of the railroads, or there is an increase in complaints about violations of the hours of service laws. With just one full year’s worth of data since RSIA took effect, we could not discern any changes in FRA’s hours of service enforcement priorities. 
One indicator—the portion of defects identified during inspections that resulted in violations being processed for enforcement—showed no consistent direction, moving up or down in fiscal year 2009 relative to fiscal year 2008 and then reversing direction the following year. Another indicator—enforcement actions taken—also fluctuated, with the number of hours of service enforcement actions going up in fiscal year 2009 from fiscal year 2008, and then dropping again in fiscal year 2010. Meanwhile, hours of service recordkeeping enforcement actions took the opposite path during the same period, first dropping and then rising (see table 3). Besides establishing new hours of service requirements, RSIA provided for pilot projects and waivers of compliance with hours of service requirements, both of which would create opportunities for FRA and railroads to analyze the effects on safety of approved alternatives to the new hours of service requirements. FRA has been unable to implement two pilot projects mandated under RSIA because no railroads have chosen to participate, and has not exercised its pre-RSIA authority to approve voluntary pilot projects designed to examine the fatigue-reduction potential of alternatives to the current hours of service laws because of flaws in the applications it received. FRA also has the authority to approve petitions for waivers of hours of service requirements in certain circumstances and has approved waiver petitions for some railroads. RSIA required FRA to conduct the mandated pilot projects by October 2010, and to report on the voluntary pilot projects no later than December 31, 2012; however, RSIA does not require FRA to analyze or report on the safety effects of approved waiver petitions, and FRA has not taken steps to do so.
RSIA required FRA to conduct at least two pilot projects of sufficient size and scope to analyze specific practices that could be used to reduce fatigue for T&E and other railroad employees covered by hours of service requirements. The first pilot project called for the railroad to give a covered employee at least 10 hours' advance notice of a shift assignment. Advance notice of 2 to 4 hours is typical in the industry today. The second pilot project would have created defined shifts for covered employees who receive unscheduled shift calls, such that those employees would be subject to call every other shift, instead of at any time. FRA has not been able to implement either of these mandated pilot projects because no railroad has expressed interest in participating. According to FRA officials, the agency lacks authority to compel railroads to participate. According to both FRA and railroad officials, railroads have not chosen to participate in the pilot projects mandated in the legislation because doing so could put a participating railroad at a competitive disadvantage. More specifically, both projects would decrease a railroad's flexibility to assign covered train employees to report as circumstances warrant—in the first case by requiring advance notice of at least 10 hours, rather than the typical practice of 2 to 4 hours' notice, and in the second case by reducing the pool of employees on call by half. Because freight railroads work to accommodate their customers, often with last-minute scheduling changes, it is important for them to remain flexible so they can compete with other railroads and other modes of transportation, such as trucks. While FRA was unable to conduct the two pilot projects mandated in RSIA, it still has authority to approve voluntary pilot projects.
This authority, which predates RSIA, allows FRA to approve joint petitions from railroads and nonprofit employee labor organizations representing directly affected covered service employees of the railroads for waivers of compliance with the hours of service law in order to demonstrate the possible benefits of implementing alternatives to strict adherence to the law, including requirements for maximum on-duty and minimum off-duty periods. According to FRA officials, there was little interest in obtaining waivers for pilot projects prior to the passage of RSIA. Since May 2009, however, FRA has received five petitions for waivers of compliance with hours of service requirements in order to implement voluntary pilot projects. FRA dismissed two of these petitions, because they were not filed jointly, as required, by a railroad and the employee labor organizations representing the affected employees. FRA approved two other petitions for pilot projects requesting waivers, but both were designed to provide administrative alternatives rather than alternatives to the requirements concerning maximum on-duty and minimum off-duty periods. In approving these petitions, FRA noted that because the proposed pilot projects were administrative in nature, they would not impinge on the likely performance or safety of the railroads. Finally, FRA rejected one petition that was designed to identify alternatives to the hours of service laws for addressing fatigue. This petition, filed by the ASLRRA on behalf of its members, sought approval for a pilot project that would, among other things, develop and identify alternative methods to mitigate the risk of fatigue without strict adherence to the new hours of service requirements. 
While acknowledging that ASLRRA raised salient issues for short-line and small railroads, FRA rejected the petition, noting that it lacked a thorough explanation of the conditions and controls under which the pilot project would be operated to ensure the safety of railroad operations and participating employees. Moreover, according to FRA, the petition failed to identify what additional relief from the hours of service laws was necessary to implement the pilot project. Other than the petition filed by ASLRRA, FRA has received no petitions for waivers of compliance with hours of service requirements in order to implement voluntary pilot projects that could demonstrate the fatigue-reduction potential of alternatives to RSIA provisions. Information gathered from railroads operating monitored pilot projects could be analyzed to assess the effectiveness of specific practices being used to reduce fatigue. Such information could also be used to examine the effects on safety of, for example, increasing rest requirements for some shifts that extend into night hours, providing fewer hours of rest for employees resting away from their home terminal, or decreasing rest requirements for those covered employees working only regular daytime shifts. The results of such analysis could be used to inform the RSIA-required report to Congress by December 2012 on the effectiveness of the voluntary pilot projects. Even though no pilot projects currently afford opportunities for gathering and analyzing data on the safety effects of alternatives to the new hours of service requirements, FRA could obtain such data from railroads operating with approved waivers of compliance with the new hours of service requirements. 
FRA has the authority to approve petitions for waivers of the statutory requirements related to the consecutive work day limits if a collective bargaining agreement provides for a different arrangement, and such an arrangement is in the public interest and consistent with railroad safety. As of June 30, 2011, FRA had received 17 petitions for waivers of compliance with hours of service requirements and had fully approved 8 of them, including 1 filed by ASLRRA that covers 142 of its member railroads. In total, 157 railroads have approved waivers of compliance with hours of service requirements, 2 of which are class I railroads. The remainder are class II or III railroads. The approved waivers recognize that the risk of fatigue is greater for night shifts than for day shifts, as discussed earlier in this report. Specifically, all of the approved waivers allow scheduled shifts of 6 consecutive work days followed by 24 hours’ rest (6 by 1 schedules), rather than the 48 hours’ rest (6 by 2 schedules) required by law, provided that the shifts during those 6 consecutive days do not extend into the hours between midnight and 6 a.m. Table 4 provides information on the disposition of the waiver petitions submitted to FRA from May 2009 through June 2011. RSIA did not require FRA to collect data or report on the safety effects of approved waiver petitions, as it did for the voluntary pilot projects, and FRA has not taken steps to do so. According to an FRA official, establishing a level of fatigue among employees working under the conditions of one of the approved voluntary waivers would require an evaluation of the employees’ work and rest schedules using a fatigue model such as FAST. The easiest way to collect such data, the official said, would be to have inspectors evaluate these employee schedules at randomly selected railroads. 
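The schedule condition attached to these waivers can be expressed as a simple check. The sketch below is illustrative only; the shift representation (24-hour-clock start and end pairs) is our own, not an FRA or railroad data format. It encodes the 6 by 1 condition described above: six consecutive work days followed by 24 hours' rest are permissible only if no shift during those days extends into the hours between midnight and 6 a.m.

```python
# Illustrative sketch of the 6 by 1 waiver condition described above.
# The (start_hour, end_hour) shift representation is our own invention.

def shift_touches_night(start_hour, end_hour):
    """True if any part of the shift falls between midnight and 6 a.m.
    Handles shifts that cross midnight, e.g. (22, 6)."""
    h = start_hour % 24
    while h != end_hour % 24:
        if 0 <= h < 6:
            return True
        h = (h + 1) % 24
    return False

def qualifies_for_6_by_1(week_of_shifts):
    """week_of_shifts: six (start_hour, end_hour) tuples, one per
    consecutive work day. A 6 by 1 schedule (24 hours' rest after six
    consecutive days) is allowed only if no shift touches 00:00-06:00."""
    return (len(week_of_shifts) == 6 and
            not any(shift_touches_night(s, e) for s, e in week_of_shifts))
```

Under this reading, six daytime shifts of 08:00 to 18:00 would qualify for the 6 by 1 arrangement, while substituting a single 22:00 to 06:00 shift would not.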
The official acknowledged that having data about railroads operating under waivers could help determine the feasibility of alternatives to RSIA’s current requirements, such as a modification of the requirement for 48 hours’ rest after 6 consecutive work days for certain scheduled shifts. Since RSIA was only recently implemented, it is still too early to determine whether its changes to hours of service requirements will materially affect freight railroad safety. Initial indications are positive, as our analysis of selected covered T&E employee work schedules shows. Rest time for these employees has increased, and the amount of time they work at a high risk of fatigue has decreased—up to about 36 percent for some railroads. However, as can be expected of changes in laws to improve safety, these benefits have also resulted in some costs to both railroad employees and the industry. Our work shows that some covered employees saw reductions in their work hours and, according to information from our survey of the railroad industry and related interviews, many railroads made changes in crew and train schedules, incurred additional costs to hire new employees or bring employees back from furlough to maintain operations and comply with the law, and saw reductions in their ability to provide service to customers when and where needed. More important, although the time spent working at a high risk of fatigue decreased for some T&E employees, RSIA did not address work performed during night hours, which, according to both scientific literature and our analysis of covered T&E employee work schedules, represents a major factor in fatigue risk. Therefore, opportunities for reducing the risk of fatigue remain, especially since night work is integral to freight rail operations. 
Moreover, we believe further analysis of the safety implications of both day and night work, and of actions that could be taken to mitigate the associated fatigue risks, could point to opportunities for trade-offs that would reduce the overall risk of fatigue yet potentially allow for a relaxation of RSIA provisions that railroads and employees said were particularly burdensome to them—such as the consecutive work day limits before mandatory rest. The federal government also plays an important role in helping promote safe railroad operations through its inspection and enforcement actions. FRA’s risk-based approach to oversight is intended to align the agency’s inspection and enforcement resources with risks. The NIP provides a good foundation for doing this, including the use of local input to ensure resources are focused on the specific risks that may lead to accidents. As we saw from the data, it is too soon to determine if the emphasis FRA has so far given to hours of service requirements best aligns with the risks associated with the RSIA changes, and this will bear watching going forward. Additionally, in our view, FRA is missing opportunities to better identify the potential costs, benefits, and safety implications of alternatives to the current hours of service requirements. While voluntary pilot projects were envisioned in RSIA and offer the opportunity for FRA and railroads to try alternative approaches and learn from them, interest from the industry has, to date, been low. Realizing the full benefits from pilot projects will require additional outreach to the rail industry and other stakeholders to generate ideas on how pilot projects could be structured so they generate interest and participation, including ways to minimize potential competitive disadvantages to participants. 
Both pilot projects and waivers could generate information that would be of use in aligning oversight resources with risks, analyzing fatigue issues, and deciding how to reduce fatigue risks in the railroad industry, as well as informing FRA’s December 2012 report to Congress on voluntary pilot projects. To ensure that FRA’s implementation of hours of service requirements in the freight railroad industry maximizes opportunities to reduce the risks of accidents and incidents related to fatigue, we recommend that the Secretary of Transportation direct the Administrator of FRA to take the following action:

• Evaluate and develop recommendations about the relative impact of consecutive days worked and work performed during night hours on the potential for fatigue and risk of accidents in the freight railroad industry. This evaluation should attempt to determine if taking night work into consideration in the hours of service limitations (such as by requiring more rest after night work) would enable some relaxation of the current limits on consecutive days worked before rest is required in such a way that the same or better overall reduction in fatigue risk occurs while mitigating negative effects on employees and railroad operations. In performing this evaluation, FRA should consider scientific and medical research related to fatigue and fatigue abatement and data from pilot projects and waivers of compliance with hours of service requirements that relate to fatigue levels and consecutive days worked and work performed at night. FRA should also communicate the results of the evaluation to appropriate congressional committees for their consideration. 
To improve FRA’s targeting of its inspection resources and understanding of the effect of work hours on fatigue, we recommend that the Secretary of Transportation direct the Administrator of FRA to take the following action:

• Work with the railroad industry to identify pilot projects that could be implemented to test the fatigue reduction potential of alternatives to the current hours of service laws. Also, collect safety indicator and accident and incident data from participants in pilot projects and railroads with waivers of compliance with hours of service requirements to determine the effects of such pilot projects and waivers on covered employee fatigue and participant safety performance. FRA should then incorporate the results of both efforts into the risk assessment process used to determine the allocation of inspection resources and report the results to appropriate committees of Congress.

We provided a draft of this report and the e-supplement to DOT for review and comment. We met with FRA officials, including the Deputy Chief Counsel, on September 19, 2011. DOT expressed concerns about a portion of our second recommendation that it incorporate activity-level data into the NIP’s development of inspection priorities and that it add a code for hours of service issues to the Accident and Incident Reporting System. According to DOT, the NIP provides a comprehensive framework to manage hundreds of competing inspection activities—including hours of service inspections—and incorporating activity-level data as we suggested would imply a level of precision that does not exist. DOT also emphasized the value of FRA inspectors’ input into the priority-setting process and suggested that an increased reliance on data could reduce FRA’s flexibility and efficiency in responding to and managing local issues. 
In addition, DOT considered adding an hours of service code to the Accident and Incident Reporting System redundant, since railroads are already required to report excess service hours to FRA every month. Furthermore, FRA said that adding such a code would not be helpful, since an hours of service violation may not indicate fatigue. According to FRA officials, a covered employee could be fatigued while complying with hours of service requirements, or a covered employee could be noncompliant with hours of service requirements without being fatigued. FRA officials told us that accidents and incidents generally occur because someone misaligned a switch, failed to observe a signal, or failed to take some other physical action. These may or may not have been caused by fatigue, but adding a code for hours of service would not indicate fatigue levels. FRA officials noted that several cause codes in the Accident and Incident Reporting System can indicate fatigue and that FRA investigators follow up to assess the role of fatigue when railroads identify those codes as causes of accidents. FRA officials also noted that a review of recent reports on rail accidents, including reports from the National Transportation Safety Board, found none that identified hours of service as a cause of an accident. After we met with FRA officials, they provided additional information on how FRA uses activity-based inspection data (including hours of service data) to develop the NIP and furnished us with a list of codes included in the Accident and Incident Reporting System that show some correlation with fatigue. In light of our discussions with FRA and our analysis of the information it subsequently provided, we withdrew the portions of our second recommendation that FRA incorporate activity-level data into the risk assessment process and add one or more codes to the Accident and Incident Reporting System to identify the role of hours of service in railroad accidents. 
The information provided by FRA shows that hours of service activity-based data is being used to develop the NIP and that adding one or more codes to the Accident and Incident Reporting System for hours of service might not be helpful in identifying broader issues of the role fatigue plays in accidents. Rather, such information is more likely to come from FRA’s accident investigations, which can also identify if violations of hours of service requirements play a role in rail accidents. FRA officials also raised concerns about the wording of some definitions used in our survey of the rail industry about hours of service issues. FRA questioned whether the rail industry was familiar enough with the requirements of the law for the definitions we used to elicit accurate responses. We did not change the wording of the definitions contained in the survey presented in our e-supplement to this report as a result of FRA’s comments because the e-supplement is meant to present the survey as it was made available to respondents. We believe the information provided by the survey is accurate and that respondents understood our survey and RSIA requirements sufficiently to provide appropriate responses. To this end, we fully pretested the survey prior to administering it; pretest participants raised no substantive concerns about the terms defined in the survey. Our e-supplement product (GAO-11-894SP) contains additional information about FRA’s comments. DOT also provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Transportation, the Administrator of FRA, and the Director of the Office of Management and Budget. 
The report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To better understand the changes to freight railroad hours of service requirements made by the Rail Safety Improvement Act of 2008 (RSIA), we reviewed the (1) impacts of the hours of service changes on the covered train and engine (T&E) workforce, including potential impacts on fatigue; (2) operational and administrative impacts of the hours of service changes on the railroad industry; and (3) actions taken by the Federal Railroad Administration (FRA) to oversee compliance with hours of service requirements and implement RSIA provisions related to hours of service pilot projects and waivers. The scope of this engagement was limited to the freight railroad industry. The RSIA hours of service requirements became effective for these railroads on July 16, 2009. We did not include commuter and intercity passenger railroads, since at the time of our work FRA was in the process of developing new hours of service requirements for these railroads. Our scope included freight railroads of all sizes. The freight railroad industry is divided into three classes: I, II, and III, based on their operating revenues. In 2009, annual operating revenues were at least $378.8 million for class I railroads, between $30.3 million and $378.8 million for class II railroads, and less than $30.3 million for class III railroads. The class designation differs slightly from another designation that FRA uses for accident and incident reporting, under which railroads are divided into groups. FRA’s group 1 is equivalent to class I. 
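The revenue cutoffs just described can be expressed as a small lookup. The sketch below uses the 2009 thresholds stated above; whether the endpoints are inclusive is our assumption, since the report does not say.

```python
def railroad_class(annual_revenue_millions):
    """Classify a railroad by 2009 annual operating revenue, in
    millions of dollars (thresholds from the report; boundary
    handling at the cutoffs is our assumption)."""
    if annual_revenue_millions >= 378.8:
        return "I"
    if annual_revenue_millions >= 30.3:
        return "II"
    return "III"
```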
The division between groups 2 and 3 is based on the total number of annual work hours reported to FRA. Group 2 railroads report 400,000 or more total annual work hours but are not class I railroads, and group 3 railroads report less than 400,000 total annual work hours. According to FRA officials, groups 2 and 3 are not necessarily the same as classes II and III, but the differences may not be large. For reporting purposes we use the class designation because (1) it is a common means of identifying railroads and (2) the railroads included in class II or III may not be significantly different from those in group 2 or 3, respectively. The following describes some of the key methodologies we used to address our objectives. To address the impacts of the hours of service changes on the covered T&E workforce, including potential impacts on fatigue, we collected and analyzed covered T&E employee work schedule data and used the Fatigue Audit InterDyne™ (FAID) biomathematical fatigue model. We initially requested covered T&E employee work schedules for all 7 class I, 15 class II, and a sample of 86 class III railroads. However, not all the class II and III railroads we contacted had electronic records; instead, most maintained paper-based hours of service records. We determined that the process for collecting and transcribing the paper-based work schedules into electronic format for analysis was not feasible given our time and resource constraints. Accordingly, we focused our data collection on electronic records from class I railroads and those class II railroads that responded to our inquiries and could provide electronic hours of service records. In addition, we conducted focused telephone interviews with 69 randomly sampled class III railroads with 5 or more full-time-equivalent employees covered by hours of service requirements to obtain information about their operations. 
For our analysis of electronic hours of service records, we included all work schedules for all covered T&E employees that had work schedule data for both May 2008 and May 2010 and had at least 7 days of scheduled work in both these months. All 7 class I railroads submitted the requested records, and 6 class II railroads provided electronic records that met our requirements. The final data set covers the May 2008 and May 2010 work records for 52,205 class I covered T&E employees and 963 class II covered T&E employees. We selected May 2008 and May 2010 for our analysis because they represent months before and after RSIA’s implementation. In addition, choosing the same month for both years helps to avoid any seasonal differences in the rail industry. We also discussed the time frames for our analysis with rail industry representatives, and they generally agreed with our selection. To assess the reliability of the data provided, we performed tests to detect and eliminate anomalies such as duplicate records, overlapping shifts, shifts with start or end time errors, and data for employees who did not work in both time periods. Where appropriate, we contacted railroads to correct these anomalies. We also sent a questionnaire to the railroads to obtain information about the quality control procedures for their electronic systems. We determined that the data were sufficiently reliable for the purposes of this report. Our analysis of employee work schedules was focused on answering key questions designed to identify the effects of RSIA’s changes on the covered T&E workforce, such as whether total work time changed. We analyzed class I and class II work schedules separately because this approach allowed us to recognize there may be operational differences between the two classes of railroads. 
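The anomaly checks just described (duplicate records, overlapping shifts, and shifts with start or end time errors) can be sketched as simple filters over shift records. The record layout below is hypothetical; the railroads' actual electronic formats are not described in this report.

```python
# Sketch of the kinds of reliability checks described above, over a
# hypothetical record layout of our own (not a railroad data format).
from datetime import datetime

def find_anomalies(shifts):
    """shifts: list of dicts with 'employee', 'start', and 'end' keys
    (datetimes). Returns duplicate records, records whose end time is
    not after the start time, and overlapping shift pairs per employee."""
    seen, duplicates, bad_times, overlaps = set(), [], [], []
    for s in shifts:
        key = (s["employee"], s["start"], s["end"])
        if key in seen:
            duplicates.append(s)
        seen.add(key)
        if s["end"] <= s["start"]:  # end-before-start error
            bad_times.append(s)
    by_employee = {}
    for s in shifts:
        by_employee.setdefault(s["employee"], []).append(s)
    for emp_shifts in by_employee.values():
        emp_shifts.sort(key=lambda s: s["start"])
        for a, b in zip(emp_shifts, emp_shifts[1:]):
            if b["start"] < a["end"]:  # next shift begins before prior ends
                overlaps.append((a, b))
    return duplicates, bad_times, overlaps
```

In practice, flagged records would be referred back to the railroad for correction rather than silently dropped, consistent with the follow-up process described above.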
Among other things, we examined work schedule data to determine the total hours worked, total shifts worked, total rest time, and total hours worked at night in both time periods for both classes of railroad. Total hours worked and total shifts worked were measures we used to determine if there were impacts on the amount of work performed. We used total hours of work and total hours worked at night along with fatigue model outputs as measures for estimating the impact of night work on fatigue risk levels. We estimated fatigue risk levels for work schedule data using the FAID model, a biomathematical fatigue model that has been used for fatigue analyses of railroad work schedules. FRA has validated FAID, as well as the Fatigue Avoidance Scheduling Tool™ (FAST), for use in analyzing railroad employees’ fatigue risk levels—the only two models that FRA had validated for such use at the time of our work. The FAID model is commonly used in the railroad industry for fatigue analysis, and FRA has used FAST to conduct fatigue analyses for regulatory purposes (such as reviews of petitions for waivers of compliance with hours of service requirements). We performed separate fatigue model analyses for class I and class II railroads and included in our analyses all the work schedules in our final data set for both class I and class II employees. In conducting our fatigue analyses, primarily using the FAID model, we established a tolerance level—that is, a fatigue score that, if breached, indicates a potentially unacceptable level of fatigue risk. We selected a fatigue score of 70 as the threshold for high risk of fatigue, scores of 61 through 69 for elevated risk of fatigue, and scores below 60 for acceptable risk of fatigue. Fatigue experts, the rail industry, and FRA differ on acceptable fatigue risk score thresholds. 
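The thresholds we adopted can be written as a small categorization function. This sketch reflects only our reading of the cutoffs stated above; the treatment of a score of exactly 60 is our assumption, since the text leaves that boundary unstated, and the function is not part of the FAID model itself.

```python
def fatigue_risk_category(faid_score):
    """Categorize a FAID fatigue score using the report's thresholds:
    70 and above = high risk; 61 through 69 = elevated risk; otherwise
    acceptable (the placement of exactly 60 is our assumption)."""
    if faid_score >= 70:
        return "high"
    if faid_score >= 61:
        return "elevated"
    return "acceptable"
```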
We selected 70—a conservative score—as our threshold for high risk of fatigue partly because FRA, in its FAID validation and calibration report, said a FAID score greater than 70 would indicate extreme fatigue and partly because we wanted to err on the side of caution in our use of the model for fatigue analysis. After our analysis was performed, FRA issued its final hours of service rule for passenger rail employees, which set the high fatigue threshold for FAID at 72. Additionally, we used the FAID model as its producer and fatigue experts directed, that is, to analyze aggregate-level data to determine fatigue risk among the covered workforce or to analyze generic examples of work schedules to determine fatigue risk. We did not use the fatigue models to determine fatigue risk for individual covered employees. To better understand the relationship between night work and fatigue, we examined whether the number of hours employees worked at night was correlated with spending time at high risk of fatigue according to the outputs of the FAID fatigue model. In particular, we calculated the correlation between night hours and the incidence of employees spending at least 20 percent of their hours at high risk of fatigue. We chose the 20 percent threshold to be consistent with FRA’s commuter and intercity rail final hours of service rule, under which a fatigue model result indicating that 20 percent or more time was spent at high risk of fatigue would trigger further mitigation of a rail work schedule by railroads and approval of the mitigation by FRA. The correlation coefficient was 0.53. Both the data we collected and the analysis we performed have limitations. As we discussed earlier, an economic recession began in 2008 and it affected the demand for rail services significantly. For the months for which we collected rail workers’ schedules, May 2008 and May 2010, overall rail operations were different, with considerably higher levels of overall rail service in 2008. 
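The correlation described above pairs a continuous measure (hours worked at night) with a binary indicator (whether an employee spent at least 20 percent of work time at high risk of fatigue); this is a point-biserial correlation, computed as an ordinary Pearson correlation. The sketch below uses invented employee-level figures purely to illustrate the calculation; it does not reproduce the report's data or its 0.53 coefficient.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient; with one binary variable this
    is the point-biserial correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Invented figures: hours worked at night per employee, and the share
# of work time a fatigue model scored as high risk.
night_hours = [0, 5, 12, 20, 30, 40, 2, 25]
high_risk_share = [0.00, 0.05, 0.18, 0.22, 0.30, 0.35, 0.02, 0.25]

# Binary indicator: at least 20 percent of time at high risk of fatigue.
at_risk = [1 if share >= 0.20 else 0 for share in high_risk_share]
r = pearson_r(night_hours, at_risk)
```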
Although our findings on the differences in work schedules across these time periods may be, in part, a reflection of the differences in the macroeconomic environment, we attempted to mitigate that factor in two ways. First, we avoided choosing months during which the demand for rail service was rapidly declining: in May 2008, the recession had not yet greatly affected the rail industry, and by May 2010, demand was recovering from its lows of late 2008 and 2009. Second, we used only employees who worked in both months, on the assumption that the same employees would likely be performing similar work tasks in the two periods. Additionally, the fatigue models have limitations. In particular, fatigue models are developed around an average person as the base point. Fatigue models do not consider situations specific to an individual that could influence whether an individual’s fatigue levels and score are the same as or different from those calculated by the model. Finally, the models incorporate assumptions about sleep time and sleep quality, since it is not possible to determine how long or how well a specific individual sleeps during non-work time. Even with these limitations, we determined that the rail data and fatigue model results were sufficiently reliable for the purposes of this report. To identify the operational and administrative impacts of the RSIA hours of service changes on the railroad industry, we conducted a web-based survey of railroads. To identify survey participants, we used FRA’s 2009 Accident and Incident Reporting database. In general, federal regulations require that all U.S. railroads report monthly to FRA on accidents and incidents that occur on their railroads. Exceptions include such railroads as those that operate freight trains only on track inside an installation that is not part of the general railroad system of transportation and rail mass transit operations in urban areas that are not connected to the general railroad system of transportation. 
Reports are to be made about accidents or incidents that occurred during a month or to indicate that no accidents or incidents occurred. As noted, FRA reports accident and incident data for class I (excluding the National Railroad Passenger Corporation, or Amtrak), class II, and class III railroads. To ensure the reliability of FRA’s database for our purposes, we (1) reviewed federal regulations to better understand which railroads are required to report accident and incident data and which railroads might be exempt from reporting, (2) reviewed relevant database documentation, including FRA’s guidelines for reporting accidents and incidents, to understand what data are reported and what controls are used to ensure the reported data are accurate and reliable, and (3) interviewed FRA officials and reviewed FRA written responses to our questions to understand the controls FRA used to ensure the data were accurate and reliable. Based on these steps, we believe the database was sufficiently reliable for our needs. We used FRA’s Accident and Incident Reporting database, since it contained the most recent data available on U.S. railroads at the time we performed our work. We excluded from the database passenger-related railroads, tourist and historic railroads that had limited operations, railroads exempt from reporting, and newly started railroads that had not yet built up a record of accidents and incidents. The universe of freight railroads surveyed included all 7 class I railroads, all 15 class II railroads, and all class III railroads that had five or more full-time-equivalent employees (based on work hours reported to FRA) covered by hours of service requirements in 2009. 
We chose five full-time-equivalent employees as our threshold to, among other things, eliminate railroads that (1) might be too small to have hours of service impacts, (2) operate only part of the year (seasonal or intermittent operators), and (3) have only a few employees who may be used as needed for operations only within a plant or other manufacturing facility. The survey selections were designed to include participants from all three classes of railroads, representing large, medium, and small entities. We calculated the percentage of full-time-equivalent employees covered by hours of service requirements based on discussions with FRA officials and estimates of the percentage of the railroad employee population covered by hours of service requirements used by FRA in previous rulemakings. Out of the 561 class III railroads in the database, we calculated there were 234 with five or more full-time-equivalent employees covered by hours of service requirements. We determined that two of these railroads were not eligible for the survey because one was not a railroad (it was a centralized dispatching center for several railroads) and one had ceased operations in 2009 and, according to an official from this railroad, had no experience with the RSIA hours of service changes. In total, we surveyed 254 railroads—7 class I railroads, 15 class II railroads, and 232 class III railroads. To develop our survey questions, we relied on a comprehensive list of questions that we used to interview railroads about hours of service issues before we conducted the survey. We identified key issues from the railroads’ responses to the questions and used these to develop the survey questionnaire. In addition, we conducted four pretests of the survey: one with a class I railroad, one with a class II railroad, and two with class III railroads. Two pretests were done in person and the other two were done over the telephone. 
The railroads were selected to get a variety of large, medium, and small railroads. During the pretests, we obtained feedback on such things as the type of questions being asked, the clarity of the questions, and whether additional issues should be included. We used this feedback to revise the survey instrument, including adding questions to cover additional issues and clarifying certain survey questions. After completing the survey questions, we sent an e-mail announcement of the survey to the 256 railroads initially included in our survey (including the 2 that we subsequently excluded as ineligible) on January 10, 2011. These railroads were notified that the questionnaire was available online and were given unique passwords and usernames on January 13, 2011. We sent follow-up e-mail messages on February 1, February 16, and March 16, 2011, to those railroads that had not yet responded. We conducted the survey from January 13, 2011, to April 15, 2011. Because we did not survey a sample of railroads, our survey has no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors. For example, difficulties in interpreting a particular question, or the type of information available to some respondents but not others, could introduce unwanted variability into the survey results. We took steps in the data collection and data analysis stages to minimize such nonsampling errors. As we previously indicated, we collaborated with GAO survey specialists to design a draft questionnaire and pretested versions of the questionnaire with four members of the survey population. From these pretests, we made revisions as necessary to reduce the likelihood of nonresponse and reporting errors on our questions. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error and addressed such issues, where possible. 
A second, independent analyst checked the accuracy of all computer analyses to minimize the likelihood of errors in data processing. In addition, GAO analysts answered respondents’ questions and resolved difficulties that respondents had in answering our questions. The overall response rate for this survey was 72 percent, with 7 of 7 class I railroads, 14 of 15 class II railroads, and 163 of 232 class III railroads responding. We compared the distribution of railroad-size variables among respondents with the distribution of those variables in the entire population of railroads and found no important distributional differences. In addition to the data from the survey provided in this report, each survey question, along with responses to it, is presented in GAO-11-894SP, an electronic supplement to this report. To determine the extent to which FRA conducts inspections of railroads’ compliance with hours of service and hours of service recordkeeping requirements, we obtained information from FRA’s Railroad Inspection System for PC. This system allows inspectors to enter inspection data via their personal computers in order to maintain electronic records. FRA provided data for all inspections conducted by FRA inspectors from fiscal year 2005 through fiscal year 2010. We then excluded data for inspections of all entities that were not freight railroads. From the remaining data, we identified the number of hours of service and hours of service recordkeeping inspections that were conducted on freight railroads during this 6-year period. We analyzed inspection results by class of railroad and determined the frequency with which deficiency findings identified during inspections resulted in an enforcement activity. 
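The overall response rate follows directly from the counts reported above.

```python
# Response-rate arithmetic from the survey counts above.
responded = 7 + 14 + 163      # class I, II, and III respondents
surveyed = 7 + 15 + 232       # eligible railroads surveyed
response_rate = responded / surveyed  # 184 / 254, about 72 percent
```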
To identify the enforcement actions FRA has taken in response to noncompliance with hours of service and hours of service recordkeeping requirements, we obtained data from FRA’s Railroad Enforcement System. FRA’s Office of Railroad Safety enters the information related to violations that have been recommended for citation against railroads and others in the Violation Generation Tracking System database, which populates the Railroad Enforcement System, which in turn is used by attorneys and staff to support the enforcement process. The data we obtained included all enforcement actions taken by FRA from the start of fiscal year 2005 through the end of fiscal year 2010. From this information, we identified all hours of service and hours of service recordkeeping violations involving freight railroads. We reviewed the data to identify the extent to which FRA pursues enforcement actions for hours of service violations, as well as the dollar amount it assesses in the form of fines and penalties. To assess the reliability of the inspection and enforcement data provided by FRA, we reviewed previous GAO reports about FRA’s databases and FRA’s efforts to ensure the data’s reliability and conducted electronic testing of required data elements to identify omissions, anomalies, or obvious errors. In addition, we interviewed agency officials knowledgeable about the data quality control procedures and the data produced by the systems. We also determined whether the databases we used had been audited either internally or by external organizations. We determined that the data were sufficiently reliable for the purposes of this report. To further address our objectives, we reviewed laws and regulations related to hours of service issues and reviewed various studies and other documents. 
To address impacts of RSIA’s hours of service changes on the covered T&E workforce, including potential impacts on fatigue, we reviewed literature related to fatigue and work schedules and reviewed two reports prepared by FRA to validate the usability of the fatigue models FAST and FAID to assess the fatigue risk associated with railroad covered employee work schedules. These two reports provided information on such topics as how the models assess fatigue levels, assumptions used in making such assessments, how fatigue scores relate to the probability of accidents, and limitations of the model results. This information guided our use of the models to assess fatigue risk in the work schedules we reviewed. We also reviewed FRA’s March 22, 2011, Notice of Proposed Rulemaking on new hours of service requirements for commuter and intercity passenger railroads, FRA’s Regulatory Impact Analysis associated with this rulemaking, and the final hours of service rules, which were issued in August 2011. In particular, we were interested in FRA’s evaluation of fatigue risk associated with consecutive days worked and work performed during night hours. To address FRA’s actions to ensure compliance with hours of service requirements, we reviewed documentation related to the National Rail Safety Action Plan, National Inspection Plan, and National Safety Program Plan. We also obtained selected FRA regional inspection plans to identify how inspection resources are allocated at the local level. Finally, we obtained data on petitions filed by railroads and others from May 2009 through June 2011 for waivers of compliance with hours of service requirements. These data included information on who filed petitions, when they were filed, and what their status was as of June 2011. We verified these data with FRA and confirmed the status of each petition with FRA officials. 
To address our objectives, we also interviewed relevant individuals and organizations, including the following:

• Federal officials, including those from the National Transportation Safety Board, FRA headquarters, and FRA regions 3, 4, 5, and 6. We selected these regional offices because we were already doing other work in the regions and the offices are geographically dispersed across the country. These four regional offices accounted for 60 percent of the hours of service inspections conducted from fiscal year 2005 through fiscal year 2010, and their territories cover all or parts of 23 states. We discussed with FRA the methods and procedures used to assess the fatigue risk in the railroad industry, the potential operational and administrative impacts of RSIA’s hours of service changes on the railroad industry, and the processes and procedures FRA uses to ensure compliance with hours of service requirements. We also discussed FRA’s actions to implement pilot projects related to hours of service and FRA’s handling of petitions for waivers of compliance with hours of service requirements and the status of these petitions.

• Fatigue and sleep research experts. We interviewed officials from the firms involved in developing the FAST and FAID models, the Institute of Behavioral Research and InterDynamics, Inc., respectively, as well as fatigue and sleep research experts. Our discussions with the model developers focused on how and why the models were developed, what assumptions were used in the modeling process, how we should use the models to assess fatigue risk in the railroad industry, and what limitations might be associated with the model results. After we acquired the models, officials from these companies also trained us in how to use the models and how to interpret their results. We also interviewed four experts in fatigue research. We spoke with these individuals about issues related to work and fatigue and factors relating to the potential for fatigue risk. 
We also solicited their views about fatigue models in general and the two fatigue models we acquired to analyze covered employee work schedules.

• Railroad and railroad trade association officials. We interviewed officials from all 7 class I railroads, 6 class II railroads, and 6 class III railroads as well as officials from a holding company that was the parent company for 39 class III railroads and 1 class II railroad. We discussed such issues as the effects of the hours of service changes on railroads and the covered workforce and the federal role in hours of service. We also discussed hours of service issues with officials from the Association of American Railroads, which represents the interests of class I railroads and the National Railroad Passenger Corporation, and the American Short Line and Regional Railroad Association, which primarily represents the interests of class II and III railroads. We also spoke with officials from the American Public Transportation Association about work they were doing to develop hours of service requirements for commuter and intercity passenger railroads. We were particularly interested in their views on the relationship between railroad work schedules and the potential for fatigue.

• Representatives of labor organizations. We interviewed representatives from the Brotherhood of Locomotive Engineers and Trainmen, American Train Dispatchers Association, United Transportation Union, Transportation Communications Union, Brotherhood of Railroad Signalmen, National Conference of Firemen and Oilers, and the Transportation Trades Department of the AFL-CIO. These organizations represent various employees that would be covered by railroad hours of service requirements, including train and engine employees, signalmen, and dispatchers. According to these organizations, they represent over 100,000 employees covered by hours of service requirements. 
We solicited their views on the effects of RSIA’s hours of service changes on their members, the benefits of these changes, and the federal role in monitoring and enforcing hours of service changes. We also solicited their views on waivers and exemptions to hours of service requirements for which railroads have applied.

We conducted this performance audit from April 2010 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This appendix discusses (1) why concerns about fatigue in the modern workplace have increased, (2) the nature of biomathematical models that have emerged to better understand sleep-work schedules and fatigue, and (3) GAO’s use of biomathematical fatigue models for analyzing the effect on workers of RSIA’s hours of service changes.

Over the past several decades, technology has enabled round-the-clock activities, and a globalizing society has increasingly come to expect them. Society has become “24/7.” Planning for sleep is difficult when work schedules are unpredictable, and work that takes place outside normal business hours often requires people to sleep when humans are normally awake. These characteristics of the modern work world have led to a growing concern about human fatigue and its consequences in the workplace. These issues are particularly important to the rail industry, since rail workers often work on short notice and rail operations often occur at night. When a person does not get enough sleep, certain areas of the brain involved in cognition are affected, engendering fatigue and an associated state of diminished capacity. 
This diminished capacity can have a variety of ramifications that may be of concern. For example, when fatigued, humans have more difficulty maintaining attention, become less communicative, and have reduced situational awareness. They are then at greater risk of committing errors in their work, which can ultimately lead to more accidents. Concern about these effects has led to the development of tools for better understanding worker fatigue, predicting its extent, and mitigating its effects. Over the past several decades, a science has developed that examines the nature of human sleep and the effects of sleep deprivation. More recently—in about the past 20 years—a variety of researchers have developed tools that are designed to use data on individuals’ sleep-wake patterns to estimate a variety of outcomes such as fatigue, cognition, and accident risk. Most of the current models are based on or informed by what is known as the “two-process model” of sleep regulation, developed in the early 1980s. Generally, the two-process model posits that alertness is a function of two primary factors:

• The status of sleep/wake balance. The first factor rises and falls based on time spent sleeping and time spent awake. The model essentially posits that a person’s alertness decays during waking hours and is restored with sleep and that the patterns of decay and restoration are reasonably predictable. The longer a person is awake, the more fatigued that person will become, and the associated reduction in alertness increases the risk of errors and accidents. Alertness can only be restored through sleep, and the model generally assumes that the first few hours of sleep contribute the most to recovery. That is, sleep intensity is greatest when sleep debt is at its greatest, which is during the first few hours of sleep.

• Circadian influence. The second factor is related to circadian rhythm. Essentially, humans are hard-wired to sleep during the night and to be awake during the day. 
When people do sleep during the day, their rest is seldom as restorative as night sleep. First, it is apparently difficult to sleep during the day when core body temperatures are higher. Second, day sleep may be more prone to disruptions that limit its benefit. Third, the circadian pressure to sleep is highest at night, so humans tend to be less alert and more prone to lapses in attention at times when sleep normally occurs. Thus, fatigue—independent of the first factor, which addresses the extent of sleep deficit—tends to accrue more quickly when people work at night.

While most of the current biomathematical models of fatigue incorporate these factors into their analysis, they differ in how the factors are structured. In particular, there is variation in the assumptions about functional form and other mathematical underpinnings of the models. For example, there may be differences in how the rate of decay in cognition with waking hours is formulated, or the manner in which circadian factors are accounted for in the formulas. Moreover, the models vary in the specific outputs they provide. Finally, they vary in the inputs they require. Some require actual sleep histories, while others infer how much sleep a person would be likely to obtain from the person’s work hours.

The development of biomathematical fatigue models is very recent, and the models have critical limitations that are important for interpreting and using their outputs. The models provide a suggestion about the alertness of humans generally, not of individuals. Individuals vary widely in how they fatigue for a variety of reasons, including differences in their personal circadian rhythms, their health, and their social responsibilities.

GAO was asked to examine whether and how RSIA’s changes to the hours of service laws affect rail worker fatigue. 
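The interaction of these two factors can be reduced to a toy numeric sketch. The sketch below is illustrative only: the decay rate, circadian amplitude, and peak hour are invented constants, not parameters drawn from FAST, FAID, or any published model.

```python
import math

def alertness(hours_awake, clock_hour):
    """Toy two-process alertness score (higher = more alert).

    Homeostatic term: alertness decays the longer a person is awake
    and is restored only by sleep (not modeled in this sketch).
    Circadian term: a daily sinusoid that peaks in mid-afternoon and
    bottoms out in the early-morning hours.
    All constants are invented for illustration.
    """
    homeostatic = 100 * math.exp(-0.045 * hours_awake)
    circadian = 10 * math.sin(2 * math.pi * (clock_hour - 9) / 24)
    return homeostatic + circadian

# Same 8 hours awake, but at 3 p.m. versus 3 a.m.: the circadian term
# leaves the night worker with a lower predicted alertness score.
print(alertness(8, 15), alertness(8, 3))
```

Real models add further structure (sleep-debt accumulation across days, sleep inertia on waking, individual variation), but the combination of a homeostatic term and a circadian term is the common core.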
To do this, we determined that, despite the limitations of the current biomathematical fatigue models, there was merit in using them to study predicted fatigue based on railroad workers’ history of work schedules before and after the rules were implemented. In particular, we determined, through discussions with several experts in sleep-fatigue research, that using the models to assess the change in scores for a set of workers after the new law was implemented was a reasonable use of these models because our focus is not on the scores of any particular workers, but rather on the trend in overall scores given changed scheduling patterns. Through discussion with Federal Railroad Administration (FRA) officials and others, we determined that, at the time of our work, there were only two current biomathematical fatigue models that had been validated by FRA for use in assessing fatigue in the rail industry and that were appropriate to acquire for possible use in our analysis. We acquired both models. The first, the Fatigue Avoidance Scheduling Tool™ (FAST), was originally developed for military use by Dr. Steven R. Hursh, et al.; the second, the Fatigue Audit InterDyne™ (FAID) tool, was developed by Gregory D. Roach, Adam Fletcher, and Drew Dawson from the Centre for Sleep Research, University of South Australia. In particular, the FRA validation examined whether the models’ predicted level of fatigue correlated with rail accidents deemed to have a “human” causal component, but not with rail accidents that had no identified human cause. FRA found this relationship for both models. Both of these models require similar data on railroad workers’ work schedules and provide generally similar outputs. In particular, both models require data, by employee, on work shift start and end times for the period of time to be evaluated. The FAST model also requires information on the locations of each shift’s start and end, and average commute times for those locations. 
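The shift-level input data just described can be illustrated with a minimal sketch. The records below are hypothetical; real inputs cover weeks of schedules per employee (and, for FAST, also shift locations and commute times).

```python
from datetime import datetime

# Hypothetical work history for one employee: (on-duty start, on-duty end).
shifts = [
    (datetime(2011, 3, 1, 6, 0), datetime(2011, 3, 1, 18, 0)),
    (datetime(2011, 3, 2, 6, 0), datetime(2011, 3, 2, 18, 0)),
]

def off_duty_gaps(shifts):
    """Hours between consecutive shifts: the windows within which a
    fatigue model would infer how much sleep was likely obtained."""
    return [(nxt_start - end).total_seconds() / 3600
            for (_, end), (nxt_start, _) in zip(shifts, shifts[1:])]

print(off_duty_gaps(shifts))  # 12 hours off duty between the two shifts
```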
Both models operate by taking this information and inferring the likely sleep employees are obtaining between their work shifts. After estimating how much time and at what time of day employees are awake and asleep, the models estimate—depending on the model—scores for elevated fatigue or reduced effectiveness.

In addition to the individual named above, Sara Vermillion, Assistant Director; Amy Abramowitz; Elizabeth Eisenstadt; Lorraine Ettaro; Patrick Fuchs; Kathleen Gilhooly; Christopher Jones; Richard Jorgenson; Mitchell Karpman; Amanda Miller; Dae Park; and Betsey Ward made key contributions to this report.
The Rail Safety Improvement Act of 2008 (RSIA) overhauled requirements for how much time certain freight railroad workers can spend on the job (called "hours of service"). Changes included limiting the number of consecutive days on duty before rest is required, increasing minimum rest time from 8 to 10 hours, and requiring rest time to be undisturbed. RSIA also provided for pilot projects and waivers. RSIA's changes became effective for freight railroads in July 2009. GAO was asked to assess (1) the impact of these changes on covered train and engine (T&E) employees, including implications for fatigue, (2) the impact of the changes on the rail industry, and (3) actions the Federal Railroad Administration (FRA) has taken to oversee compliance with hours of service requirements and implement RSIA provisions for pilot projects and waivers. To perform this work, GAO analyzed covered employee work schedules and used models to assess fatigue, surveyed the railroad industry, analyzed FRA inspection and enforcement data, and interviewed federal and railroad officials as well as fatigue and sleep experts. According to GAO's analysis of covered employee work schedules, RSIA's requirements led to changed work schedules, increased rest time, and reduced risk of fatigue for covered T&E employees. RSIA's consecutive work day limits and rest requirements contributed to work schedule changes and increases in rest time. Increased rest time also led to equivalent decreases in the hours that covered employees worked. Overall, GAO found, using an FRA-validated fatigue model, that the time covered employees spent working at a high risk of fatigue--a level associated with reduced alertness and an increased risk of errors and accidents--decreased by about 29 percent for employees of class I railroads (those with the largest revenues) and by about 36 percent for employees of selected class II railroads (those with smaller revenues). 
GAO's analysis also shows that there are further opportunities to reduce fatigue risk. Specifically, RSIA's changes did not result in material decreases in night work, yet scientific literature and GAO's analysis show night work represents a major factor in fatigue risk. As might be expected from changes aimed at improving safety by reducing covered employee fatigue, the railroad industry reported that RSIA's hours of service changes had operational and administrative effects on it, some of which increased some railroads' one-time or ongoing costs. GAO did not determine how RSIA's changes affected railroads' earnings, but the act took effect as the economy was starting to recover from the recession that began in late 2008. Through its industry survey and interviews, GAO found that RSIA's changes affected railroad operations, including changes to crew and train schedules and increases in staffing levels. Railroad officials GAO spoke with attributed these changes to RSIA's consecutive work day limits and rest requirements, both of which acted to reduce people's availability to work. To maintain operations while complying with the law, railroad officials told GAO they, among other things, hired new employees or brought employees back from furlough. GAO estimated that adding people--120 to 500 each by some class I railroads--increased these railroads' annual costs by $11 million to $50 million. Administrative effects reported by railroads included a need for railroads to revise their hours of service timekeeping systems. FRA uses a risk-based approach to oversee compliance with hours of service and other safety requirements, analyzing inspection and accident data to help target inspections to activities where noncompliance is associated with a greater risk of accidents. 
GAO's analysis of inspection and enforcement data for the years before RSIA took effect and for the following year shows it is too early to determine if FRA has changed the priority it assigns to overseeing hours of service requirements or if a change in priority is warranted. FRA has not been able to implement RSIA-required pilot projects because no railroads have chosen to participate. Nor has it approved voluntary pilot projects designed to test the fatigue-reduction potential of alternatives to RSIA requirements. FRA has approved petitions for waivers of compliance with hours of service requirements for some railroads, but is not required by RSIA to collect data on the safety effects of the approved alternatives. Data from pilot projects--if implemented--and waivers could be used to improve FRA's assessment of fatigue issues. FRA should, among other things, assess the fatigue risk of work performed during night hours and develop data from pilot projects and waivers to help assess fatigue issues. The Department of Transportation raised concerns about findings related to the oversight process and provided additional clarifying information. Based in part on this additional information, GAO withdrew part of a recommendation. GAO also made other clarifications in the report.
The Trust Fund provides the primary source of funding for FAA and receives revenues principally from a variety of excise taxes paid by users of the national airspace system. The excise taxes are imposed on airline ticket purchases and aviation fuel, as well as the shipment of cargo. Revenues deposited in the Trust Fund are subject to congressional appropriations. In addition to Trust Fund revenues, in most years, General Fund revenues have been used to help fund FAA operations. As figure 1 shows, Trust Fund revenues have fluctuated since fiscal year 2000. A number of factors, such as external events and general economic conditions, contributed to this fluctuation in revenues because they affect the number of tickets purchased, the fares paid by passengers, the amount of fuel purchased, and the value of air cargo shipped. For example, revenues declined early in the decade because of a series of largely unforeseen events, including the September 11, 2001, terrorist attacks, that reduced the demand for air travel, resulting in a steep decline in airline industry revenue. Similarly, during the recent recession, Trust Fund revenues declined from $12.4 billion in fiscal year 2008 to $10.9 billion in fiscal year 2009, in part because of the 7 percent decline in domestic passenger traffic during that period. The Trust Fund is the primary source of funding for FAA’s capital programs and also provides funds for FAA’s Operations account. 
The capital accounts include (1) the Facilities and Equipment (F&E) account, which funds technological improvements to the air traffic control system, including the modernization of the air traffic control system, called the Next Generation Air Transportation System (NextGen); (2) the Research, Engineering, and Development (RE&D) account, which funds research on issues related to aviation safety, mobility, and NextGen technologies; and (3) the Airport Improvement Program (AIP), which provides grants for airport planning and development. In addition, the Trust Fund has provided all or some portion of the funding for FAA’s Operations account, which funds the operation of the air traffic control system and safety inspections, among other activities. Finally, the Trust Fund is used to pay for the Essential Air Service (EAS) program. In fiscal year 2010, FAA’s expenditures totaled about $15.5 billion, with Trust Fund revenues covering about $10.2 billion, or 66 percent, of those expenditures. As figure 2 shows, while total FAA expenditures grew about 60 percent from fiscal year 2000 through fiscal year 2010, the Trust Fund’s revenue contribution only increased 12 percent, while the contribution of general revenues from the U.S. Treasury has increased to cover a larger share of FAA’s operations expenditures. We discuss this change in more detail in the next section of this statement. Since the Trust Fund’s creation in 1970, revenues have in the aggregate generally exceeded spending commitments from FAA’s appropriations, resulting in a surplus. This surplus is referred to as the Trust Fund’s uncommitted balance—the balance in the Trust Fund that remains after funds have been appropriated from the Trust Fund and contract authority has been authorized. As of the end of fiscal year 2010, the Trust Fund’s uncommitted balance was about $770 million (see fig. 3). 
As figure 3 shows, the Trust Fund’s uncommitted balance has declined since reaching $7.35 billion in fiscal year 2001. This decline is largely a result of how Congress determines the amount of appropriations that should be made from the Trust Fund. Starting with the Wendell H. Ford Aviation Investment and Reform Act of the 21st Century (AIR-21) in 2000 and continuing with Vision 100, Congress has based FAA’s fiscal year appropriation from the Trust Fund on the forecasted level of Trust Fund revenues, including interest on Trust Fund balances, as set forth in the President’s baseline budget projection for the coming fiscal year. Each year’s forecast, and accordingly FAA’s appropriation, is based on information available in the first quarter of the preceding fiscal year. For example, the revenue forecast for fiscal year 2011 is prepared in the first quarter of fiscal year 2010. These revenue forecasts can be uncertain because it is difficult to anticipate, a year in advance, events that may significantly affect the demand for air travel or fuel usage, the fares that passengers pay, and other variables that affect Trust Fund revenues. In fact, as figure 4 shows, FAA’s forecasts of Trust Fund revenues (including both tax revenues and interest earned by the Trust Fund’s cash balance) have exceeded actual Trust Fund revenues (including interest) in 9 of 11 years, and in aggregate, these forecasted revenues have exceeded actual tax revenues by over $9 billion over that period. Accordingly, appropriations from the Trust Fund, which are based on these revenue forecasts, have also exceeded actual revenues, thus drawing the uncommitted balance lower over the course of the last decade. 
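The drawdown mechanism reduces to simple arithmetic: when each year's appropriation equals 100 percent of that year's forecast, every dollar by which the forecast overshoots actual revenues comes out of the uncommitted balance. The dollar figures in the sketch below are illustrative, not the Trust Fund's actual series.

```python
def uncommitted_balance(start_balance, forecasts, actuals):
    """Track an uncommitted balance (in billions) when each year's
    appropriation is set to 100 percent of that year's forecast.
    Illustrative only; interest and other flows are ignored."""
    balance = start_balance
    for forecast, actual in zip(forecasts, actuals):
        balance += actual - forecast  # appropriation == forecast
    return balance

# Three straight years of forecasts running $1 billion high erode
# a $7.35 billion balance to $4.35 billion.
print(uncommitted_balance(7.35, [12.0, 12.5, 13.0], [11.0, 11.5, 12.0]))
```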
Until recently, FAA generated a forecast for the President’s budget using models based on historical relationships between key economic variables, such as the growth rate of the economy, and aviation measures, such as passenger traffic levels and passenger fares, that affect Trust Fund revenues. The responsibility for forecasting Trust Fund revenues shifted from FAA to the U.S. Department of the Treasury (Treasury), which already had responsibility for other federal excise tax revenue forecasts, in fiscal year 2010. We have recently been asked by the Senate Commerce, Science, and Transportation Committee to examine the Trust Fund revenue forecasting process and how it might be improved; we expect to begin our review this year. The Trust Fund’s uncommitted balance, which exceeded $7.3 billion at the end of fiscal year 2001, dropped to $299 million at the end of fiscal year 2009—the lowest balance over the past decade. One of the greatest declines in the uncommitted balance occurred in 2002 following the sudden drop-off in aviation activity after the terrorist attacks of September 11. In addition, the declines in passenger traffic and aircraft operations and reduced fuel consumption in 2009 resulted in actual revenues to the Trust Fund that fell significantly below forecasted levels in fiscal year 2009 and an uncommitted Trust Fund balance that approached zero. In response, the fiscal year 2009 omnibus appropriation increased the general revenue contributions to FAA’s operations and decreased FAA’s appropriation from the Trust Fund by approximately $1 billion compared with what was originally outlined in the President’s fiscal year 2009 proposed budget for FAA. These additional general revenues kept the Trust Fund’s uncommitted balance from going negative, thereby avoiding budgetary challenges for FAA. As a result, general revenues accounted for 24 percent of FAA’s expenditures in fiscal year 2009 and reached 34 percent in fiscal year 2010 (see fig. 2). 
If the uncommitted balance is nearly depleted and actual Trust Fund revenues continue to fall below forecasted levels, there is a risk of overcommitting available resources from the Trust Fund—meaning revenues could be insufficient to cover all of the obligations that FAA has the authority to incur. A low uncommitted balance signals to FAA that limited revenues are available to incur new obligations while still covering expenditures on existing obligations and increases FAA’s challenge in moving forward with planned projects and programs. FAA officials have noted that they closely monitor the Trust Fund’s available cash and FAA’s obligations to ensure that enough cash and budget authority are available to cover FAA’s expenditures and obligations. In the short term, if there were a risk of overcommitting Trust Fund resources, FAA officials noted that they might delay obligations for capital programs if the Trust Fund did not have adequate revenues to cover those obligations without additional funding authorized and appropriated from the General Fund. According to FAA officials, they would first defer some capital program obligations so they could continue to fund operations, such as air traffic control and safety inspections. These actions would ensure that the agency did not incur obligations or expenditures in excess of the Trust Fund’s cash balance, which could potentially lead to a violation of the Antideficiency Act. Later this month, in the President’s budget, the administration will release its newest estimate of the Trust Fund’s fiscal year 2011 year-end uncommitted balance. Congress may choose to increase FAA’s authorized funding level in the near term to allow FAA to further develop NextGen, the new satellite-based air traffic management system that is designed to replace the current radar-based system. 
NextGen improvements include new integrated systems, procedures, aircraft performance capabilities, and supporting infrastructure needed for a performance-based air transportation system that uses satellite-based surveillance and navigation and network-centric operations. These improvements are intended to improve the efficiency and capacity of the air transportation system while maintaining its safety so that it can accommodate anticipated future growth. FAA has generally identified the NextGen capabilities that it plans to implement in the near term to midterm, through 2018. FAA’s capital investment is expected to be $11 billion to $12 billion through 2018. This cost does not include research, the airport and associated airfield improvements, or the aircraft equipage that is necessary to realize all benefits. In addition to FAA’s capital investment costs, FAA estimates that the equipage necessary to realize significant capabilities implemented through 2018 will cost in the range of $5 billion to $7 billion. Decisions about the long-term direction for NextGen (beyond 2018) have yet to be made, and two key planning documents—the NextGen Integrated Work Plan and Enterprise Architecture—contain a wide variety of possible ideas and approaches. Therefore, the costs of the system over the long term are uncertain, but have been estimated to be in the $40 billion range (combined public and private investment in ground infrastructure and avionics). FAA’s proposed budget for NextGen activities is $1.14 billion in fiscal year 2011, up from the $700 million spent in fiscal year 2009 and the $868 million spent in fiscal year 2010. In addition, as we have previously reported, NextGen’s ability to enhance capacity will partly depend on how well airports can handle greater capacity. FAA’s plans call for building or expanding runways at the nation’s 35 busiest airports to help meet the expected increases. 
However, even with these planned runway improvements and the additional capacity gained through NextGen technologies and procedures, FAA analyses indicate that 14 more airports will still need additional capacity, which could require additional Trust Fund resources. Additionally, the Future of Aviation Advisory Committee recently proposed to the Secretary of Transportation that the federal government undertake a significant financial investment to accelerate efforts to equip aircraft and train staff to use key NextGen technologies and operational capabilities, including performance-based navigation (PBN), automatic dependent surveillance—broadcast (ADS-B), ground-based augmentation system (GBAS), and data communications. The amount of investment required will depend on how any financial incentives are structured. Financial assistance can come in a variety of forms, including grants, cost-sharing arrangements, loans, loan guarantees, tax incentives, and other innovative financing arrangements. One financing option proposed by the NextGen Midterm Implementation Task Force to encourage the purchase of aircraft equipment is the use of equipage banks, which would provide federal loans to operators to equip their aircraft. Another financing option, proposed in various forms by a variety of stakeholders, would involve setting up an equipage fund using private equity backed by federal loan guarantees. While the details of different proposals vary, they would all allow operators who purchase equipment through the fund to defer payments on the equipment until FAA makes improvements required for the operators to benefit from the equipment. As we have previously reported, prudent use of taxpayer dollars is always important; therefore, any financial incentives should be applied carefully and in accordance with key principles. 
For example, mechanisms for financial assistance should be designed so as to effectively target parts of the fleet and geographical locations where benefits are deemed to be greatest, avoid unnecessarily equipping aircraft (e.g., those that are about to be retired), and not displace private investment that would otherwise occur. Furthermore, it is preferable that the mechanism used for federal financial assistance minimize the use of government resources (e.g., some mechanisms may cost the government more to implement or may place the government at greater risk than others). Given the uncertainty inherent in forecasting revenues and the decline in the uncommitted balance of the Trust Fund, we have suggested that Congress work with FAA to develop alternative ways to reduce the risk of overcommitting budgetary resources from the Trust Fund. Better matching of actual revenues to the appropriation from the Trust Fund would help to ensure that Trust Fund revenues are sufficient to cover all the obligations that FAA has the authority to incur, thereby reducing the risk of disruptions in funding for aviation projects and programs. One approach would be to appropriate less than 100 percent of the forecasted revenues, especially until a sufficient surplus is established to protect against potential disruptions in revenue collection. This change would reduce the likelihood that FAA would incur obligations in excess of the cash needed to liquidate these obligations and thus reduce the risk of delaying or terminating projects. 
The House of Representatives’ FAA reauthorization bill proposed in the 111th Congress includes a provision that would limit the budgetary resources initially made available each fiscal year from the Trust Fund to 90 percent, rather than 100 percent, of forecasted revenues for that year; then 2 fiscal years later, when actual revenues would be known, any amount that exceeded 90 percent of forecasted revenues in the second previous year would be appropriated from the Trust Fund to FAA. Congress would need to provide additional general revenues in the first 2 years to make up the difference. Another approach would be to target a minimum level for the Trust Fund’s uncommitted balance and base appropriations on the goal of maintaining that target level. This change would make it more likely that uncommitted resources would be available to FAA in the event that actual revenues fell short of forecasted revenues in a future year. Either approach would result in fewer Trust Fund resources available for FAA for some period of time, requiring additional general revenues to make up the difference, unless FAA’s overall resources are reduced. In the longer term, future Trust Fund revenues under the current tax structure may be lower than previously anticipated. For example, in January 2011, the Congressional Budget Office forecast about $25 billion less in Trust Fund revenues over the next 6 years (through fiscal year 2017) than it forecast in 2007 for that same time period. Given the decline in expected future revenues, appropriations from the Trust Fund under current law will be lower in future years than previously projected unless new revenue sources are found. To maintain appropriations consistent with what earlier revenue forecasts would have afforded, Congress could take action such as increasing general revenue contributions or increasing Trust Fund revenues. 
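The two-step mechanism in the House bill can be illustrated with a short calculation. This is a minimal sketch, and the dollar figures below are hypothetical, not drawn from the bill text:

```python
def trust_fund_appropriation(forecast, actual=None, holdback=0.90):
    """Sketch of the proposed mechanism: appropriate 90 percent of
    forecasted Trust Fund revenues up front; 2 fiscal years later,
    once actual revenues are known, appropriate any amount by which
    actuals exceeded the 90 percent initially provided."""
    initial = holdback * forecast
    if actual is None:
        return initial, None  # true-up not yet known
    true_up = max(actual - initial, 0.0)
    return initial, true_up

# Hypothetical figures, in billions of dollars:
initial, _ = trust_fund_appropriation(12.0)        # 90 percent of forecast up front
_, true_up = trust_fund_appropriation(12.0, 11.5)  # true-up once actuals are known
```

Under this approach, general revenues would have to cover the gap in the first 2 years, since the true-up amount is not appropriated until actual revenues are known.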
For example, we suggested that if Congress determines that the benefit of added revenue to the Trust Fund warrants taxation of optional airline service fees, such as baggage fees, then it should consider amending the Internal Revenue Code to make mandatory the taxation of certain or all airline-imposed fees and require that the revenue be deposited in the Trust Fund. The Future of Aviation Advisory Committee also recommended that the Secretary of Transportation commission an independent study of the federal aviation tax burden on passengers, airlines, and general aviation to determine if existing levels of taxes and fees sufficiently balance the Department’s statutory mandates to “encourage efficient and well-managed air carriers to earn adequate profits and attract capital...;” “promot[e], encourag[e], and develop civil aeronautics and a viable, privately-owned United States air transport industry;” and “ensur[e] that consumers in all regions of the United States, including those in small communities and rural remote areas, have access to affordable, regularly scheduled air service.” The committee recommended that the study address the following questions: How do the federal taxes imposed on the U.S. aviation industry compare to those imposed on other modes of transportation? Is the existing level of aviation taxes and fees levied efficiently and effectively for the services provided by the federal government? Are there more efficient ways to collect and administer existing aviation taxes and fees that would save taxpayer and aviation industry dollars? Would regular consultation between those departments and agencies that administer aviation taxes and fees prior to implementing any changes to tax rates and policies result in (1) a more efficient and rational aviation tax system and (2) the desired industry and social outcome? 
What is the appropriate balance between General Fund financing and Trust Fund financing of capital and operating costs of the national aviation system, recognizing the significant role commercial and general aviation play in fostering economic growth and development? Based on the results of the study, the committee recommended that the Secretary pursue appropriate legislative and regulatory actions that may be needed to ensure that existing and any new aviation taxes and fees applied to passengers, airlines, and general aviation are effective and collected efficiently, appropriately recognizing the role commercial and general aviation play in fostering economic growth and development. Thank you, Mr. Chairman. This concludes my statement. I will be pleased to answer any questions that you or other Members of the Committee might have. For further information about this statement, please contact me at (202) 512-2834 or [email protected]. Individuals making key contributions to this statement were Paul Aussendorf, Assistant Director; Amy Abramowitz; Jessica Bryant-Bertail; Lauren Calhoun; Carol Henn; Bess Eisenstadt; Heather Krause; Hannah Laufe; Maureen Luna-Long; and Andrew Von Ah. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the status of the Airport and Airway Trust Fund (Trust Fund). Established in 1970, the Trust Fund helps finance the Federal Aviation Administration's (FAA) investments in the airport and airway system, such as construction and safety improvements at airports and technological upgrades to the air traffic control system, as well as FAA operations, such as providing air traffic control and conducting safety inspections. FAA, the Trust Fund, and the excise taxes that support the Trust Fund (which are discussed later in this statement) must all be periodically reauthorized. The most recent reauthorization expired at the end of fiscal year 2007. Proposed reauthorization legislation was considered but not enacted in the 110th and 111th Congresses, although several short-term measures were passed to extend the authorization of aviation programs, funding, and Trust Fund revenue collections. The latest of these extensions--the Airport and Airway Extension Act of 2010, Part IV--was enacted on December 22, 2010, extending FAA programs, expenditure authority, and aviation trust fund revenue collections through March 31, 2011. The financial health of the Trust Fund is important to ensure sustainable funding for a safe and efficient aviation system without increasing demands on general revenues. This testimony provides an update on the status of the Airport and Airway Trust Fund, including the current financial condition of the Trust Fund, anticipated Trust Fund expenditures for planning and implementing improvements in the nation's air traffic management system that are expected to enhance the safety and capacity of the air transport system, and options for ensuring a sustainable Trust Fund. This statement draws on our body of work on these issues, supplemented with updated information on the Trust Fund from FAA and the Congressional Budget Office. All dollars reported in this statement are nominal, unless otherwise noted. 
The Trust Fund is the primary source of funding for FAA's capital programs and also provides funds for FAA's Operations account. The capital accounts include (1) the Facilities and Equipment (F&E) account, which funds technological improvements to the air traffic control system, including its modernization, called the Next Generation Air Transportation System (NextGen); (2) the Research, Engineering, and Development (RE&D) account, which funds research on issues related to aviation safety, mobility, and NextGen technologies; and (3) the Airport Improvement Program (AIP), which provides grants for airport planning and development. In addition, the Trust Fund has provided all or some portion of the funding for FAA's Operations account, which funds the operation of the air traffic control system and safety inspections, among other activities. Finally, the Trust Fund is used to pay for the Essential Air Service (EAS) program. In fiscal year 2010, FAA's expenditures totaled about $15.5 billion, with Trust Fund revenues covering about $10.2 billion, or 66 percent, of those expenditures. While total FAA expenditures grew about 60 percent from fiscal year 2000 through fiscal year 2010, the Trust Fund's revenue contribution increased only 12 percent; as a result, general revenues from the U.S. Treasury have come to cover a larger share of FAA's operations expenditures. Since the Trust Fund's creation in 1970, revenues have in the aggregate generally exceeded spending commitments from FAA's appropriations, resulting in a surplus. This surplus is referred to as the Trust Fund's uncommitted balance--the balance in the Trust Fund that remains after funds have been appropriated from the Trust Fund and contract authority has been authorized. As of the end of fiscal year 2010, the Trust Fund's uncommitted balance was about $770 million. 
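The revenue-share figure above follows directly from the totals cited. A quick check of the arithmetic, using the amounts given in the statement (billions of nominal dollars):

```python
# FY 2010 figures cited in the statement, in billions of nominal dollars
faa_expenditures = 15.5
trust_fund_contribution = 10.2

share = trust_fund_contribution / faa_expenditures
print(f"Trust Fund share of FY 2010 expenditures: {share:.0%}")  # about 66 percent
```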
The Trust Fund's uncommitted balance has declined since reaching $7.35 billion in fiscal year 2001. This decline is largely a result of how Congress determines the amount of appropriations that should be made from the Trust Fund. Starting with the Wendell H. Ford Aviation Investment and Reform Act of the 21st Century (AIR-21) in 2000 and continuing with Vision 100, Congress has based FAA's fiscal year appropriation from the Trust Fund on the forecasted level of Trust Fund revenues, including interest on Trust Fund balances, as set forth in the President's baseline budget projection for the coming fiscal year. Each year's forecast, and accordingly FAA's appropriation, is based on information available in the first quarter of the preceding fiscal year. For example, the revenue forecast for fiscal year 2011 is prepared in the first quarter of fiscal year 2010. These revenue forecasts can be uncertain because it is difficult to anticipate, a year in advance, events that may significantly affect the demand for air travel or fuel usage, the fares that passengers pay, and other variables that affect Trust Fund revenues.
Since 1989, DOE has spent about $23 billion cleaning up the environmental contamination resulting from over 50 years of nuclear weapons production. During this time, the agency has completed the restoration of less than 20 percent of the total number of contaminated sites. One reason DOE cites for the slow progress is that it has an insufficient workforce to manage and oversee what it calls “the largest environmental cleanup program in the world.” In 1993, the Environmental Management Program had a contractor-to-federal-worker ratio of 21 to 1—one of the highest ratios in the federal government—and the highest funding per FTE of any federal agency, $3.3 million per FTE. In September 1993, DOE requested that OMB designate the Environmental Management Program as a pilot project under GPRA and authorize additional employees as part of that project. The agency asserted that it did not have sufficient staff with the skills needed to oversee contractors and review their cost estimates. To support its position, DOE cited our reports and reports by the Congressional Budget Office, which supported the need for additional federal staff to manage the cleanup program. The reports noted the impact of FTE ceilings that restricted the agency from hiring enough federal employees to manage the cleanup program, the use of support service contractors at substantially greater cost, limitations in staff skills for adequate contract management, and the lack of federal expertise. In addition, a study conducted for DOE in 1993 concluded that federal staff provided minimal supervision of agency cleanup projects and that, as a result, the cleanup was costing significantly more than comparable private sector and government projects. DOE said that it wanted to increase its oversight of contractors and involve federal employees more in contract management. 
DOE proposed to hire 1,600 new employees by (1) converting 1,050 support service contractor positions to federal positions and (2) adding 550 federal employees to help manage the environmental program. The agency estimated that the new staff would save $188 million through fiscal year 1996 by better managing contractors’ operations and would produce more tangible environmental results. OMB authorized DOE to hire 1,200 of the 1,600 additional staff requested during fiscal years 1994 and 1995 and the additional 400 in fiscal year 1996. As of May 31, 1995, the agency had hired about 700 new employees. Those hired to date have included project engineers, cost analysts, estimators, and environmental safety and health specialists. However, DOE is considering not hiring all of the approved FTEs because of budget constraints, according to the leader of the Office of Environmental Management’s evaluation team. DOE required field and headquarters offices to include justifications for the initial 1,200 FTEs as part of a competitive bidding process. The offices were required to submit bids containing detailed information on their additional personnel needs and on the savings they anticipate will be achieved from the new staff. DOE evaluated the bids and allocated all 1,200 positions that OMB had approved. The additional 400 FTEs were approved by OMB in May 1995 but had not been allocated at the conclusion of our review. Both field and headquarters offices competed for the 1,200 positions, but DOE used a different process to allocate the new positions to the offices. A team of DOE analysts reviewed the bids submitted by the field offices and then submitted their recommendations to management for review. The team reviewed the bids for compliance with requirements and for the adequacy of the justifications supporting the savings. 
Senior Environmental Management and other headquarters officials reviewed the bids submitted by the headquarters offices and then made the determinations. The DOE senior management officials included the Assistant Secretary for Environmental Management, the Assistant Secretary for Human Resources and Administration, and the Associate Deputy Secretary for Field Management. By early March 1994, 11 field and 15 headquarters offices had submitted bids for new staff. Collectively, the offices requested 1,575 new FTEs and proposed a total of $1.235 billion in savings. In mid-March 1994, the field offices presented their bids orally to DOE management, the review team, and an OMB representative. In their presentations, field office managers explained their bids and responded to management’s questions. Following the presentations, management asked the field offices to revise and resubmit their bids for final consideration. The revised bids were to respond to numerous questions raised during the presentations. In May 1994, DOE informed the field offices of their FTE allocations and the savings targets they were to achieve. The field offices were allocated 831 FTEs, with a total savings target of almost $876 million for fiscal years 1995 and 1996. The headquarters offices were provided with 369 FTEs and a savings target totaling $14.5 million. These savings do not include the FTEs’ salary and benefit costs, about $70,000 per employee—$84 million annually if all 1,200 new employees are hired. Although DOE’s agreement with OMB stipulated that contractor positions would be reduced in conjunction with the new hires, the offices that were allocated new positions have not received the funds that were previously paid to contractors. Instead, DOE required those offices to absorb the additional costs in existing budgets. Appendix I summarizes the initial and revised bids, the allocation of FTEs, and the decisions on the savings targets for the field and headquarters offices. 
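The compensation figure cited above can be checked directly; the numbers below are those given in the text:

```python
new_ftes = 1_200                 # positions OMB approved for FY 1994-1995
cost_per_fte = 70_000            # annual salary and benefits per employee
annual_compensation = new_ftes * cost_per_fte
print(f"${annual_compensation / 1e6:.0f} million annually")  # $84 million

# Savings targets allocated through the bid process, in millions of dollars
field_target = 876.0             # field offices, fiscal years 1995 and 1996
headquarters_target = 14.5       # headquarters offices
```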
Most of the cost savings and productivity improvements proposed in the field offices’ bids were not adequately justified, according to DOE’s evaluation team. The evaluation team concluded that the field offices had not adequately justified 87 percent of the savings that they said could be achieved. Despite finding these weaknesses in the justifications, in May 1994 DOE approved most of the savings proposed in the field offices’ bids. In two separate reviews of the field offices’ bids, the evaluation team expressed concerns about the quality of the supporting justifications and the likelihood of achieving the savings through improved productivity. The team concluded that most of the justifications of the savings were inadequate. As a result of the first review in March 1994, the field offices were required to revise their bids. Consequently, the overall 2-year savings target proposed in the initial bids was reduced from $1.221 billion to $1.035 billion. In its review of the field offices’ revised bids, the evaluation team concluded that the justifications were not adequate for almost $900 million—87 percent—of the $1.035 billion in savings targeted for the 2 years. Despite this finding, most of the savings targets were approved. For example, DOE’s Savannah River Site first proposed that it could save $121 million in fiscal years 1995 and 1996. However, $56 million of that amount—46 percent—was due to a reduction in contractor positions that had occurred in a prior year and was unrelated to the savings that would result from the new positions. DOE questioned the $56 million during its review of Savannah River’s first bid but did not subtract that amount from the site’s expected savings. In a similar example, about 48 percent of the Oak Ridge Site’s overall proposed savings was to come from the elimination of about 500 contractor positions. 
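The percentages the evaluation team reported are consistent with the dollar amounts cited, as a quick check shows:

```python
# Figures cited above, in millions of dollars
revised_bids_total = 1_035       # field offices' revised 2-year savings bids
inadequately_justified = 900     # savings the team found inadequately justified
print(f"{inadequately_justified / revised_bids_total:.0%} not adequately justified")

savannah_bid = 121               # Savannah River's first proposed savings
prior_year_reduction = 56        # prior-year contractor cuts unrelated to new FTEs
print(f"{prior_year_reduction / savannah_bid:.0%} of Savannah River's proposal")
```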
The evaluation team commented that Oak Ridge had not adequately explained the proposed cuts in contractors, and in its second review, the team classified these savings as inadequately justified. However, DOE later approved the productivity savings that were to accrue from the cuts in Oak Ridge’s contractor personnel. In another example, the evaluation team considered almost all of the $549 million in savings contained in the Hanford Site’s first proposal to be unjustified. The team commented that most of Hanford’s proposed savings were unrealistic or apparently based on productivity initiatives unrelated to the new FTEs. Hanford reduced its proposed savings in a revised bid, but the evaluation team’s subsequent review concluded that only 4 percent of Hanford’s revised proposed savings was fully justified. Nonetheless, according to members of the evaluation team, DOE approved almost all of Hanford’s proposed savings because the bids were considered an adequate basis for allocating the FTEs and imposing budget cuts at the field offices. On the basis of the evaluation team’s findings, DOE further reduced the field offices’ total savings targets from $1.035 billion to about $876 million, which still included a substantial amount of savings that was not adequately justified. DOE then set savings targets for both field and headquarters offices of $442 million for fiscal year 1995 and $448 million for fiscal year 1996. DOE believed that the bids were adequate for allocating the FTEs and planned to hold office managers accountable for meeting those goals. Despite the fact that the savings targets were not fully justified, budget reductions are occurring. As shown in figure 1, DOE expects to cut the Office of Environmental Management’s budget by $913 million over fiscal years 1995 and 1996, even though it considered only $136 million of that amount fully justified through the bid process. 
DOE is assured of lower costs because it is making major reductions in its cleanup budget—about $913 million in fiscal years 1995 and 1996. Even though these cost savings will occur, DOE has not developed a reporting system that would track and validate whether productivity improvements resulted from the new employees. DOE has developed some of the monitoring and evaluating tools required by GPRA, such as annual plans and reports that will yield broad information about the entire pilot project. By the fourth quarter of fiscal year 1995, procedures to collect, report, and validate the productivity improvements and resulting dollar savings related to the new staff are expected to be in place. DOE then plans to include these productivity improvements in its overall GPRA Environmental Management pilot project reports. GPRA requires agencies with pilot projects to prepare a strategic plan for the program, annual plans for each year of a pilot project, and an annual report that assesses the project’s performance. As of March 1995, DOE had completed the strategic plan and performance plans for fiscal years 1994 and 1995 and was preparing a performance plan for fiscal year 1996. Additionally, the agency was preparing its first performance report, which will cover fiscal year 1994. The agency is reviewing the performance plan for fiscal year 1995 through a series of quarterly management reviews and is tracking field offices’ savings against their savings goals. While the GPRA reports will provide an overall picture of the Environmental Management Program’s performance, additional information is required to track the cost savings and productivity improvements that have resulted from the new staff. Therefore, offices are developing monitoring and evaluation systems intended to determine the success of projects that use the new staff. Some projects are easily tracked, while others are more difficult. 
For example, some of Oak Ridge’s 77 new employees will manage three specific projects—the removal of cooling towers on the site, demolition of a power house, and cleanup of selected burial grounds. According to DOE, it will save about $16 million from these three projects during fiscal years 1995 and 1996. Since these three projects are specifically identified, measuring the savings will be straightforward. Oak Ridge is also developing baseline cost data for other environmental restoration projects and waste management activities that will use new hires—a more difficult task, according to Oak Ridge staff. The Savannah River Site is putting systems into place to track the progress of the productivity improvements and savings realized by its 128 new staff in the high-level waste program, environmental restoration program, and waste minimization program, among others. These systems were not in place at the conclusion of our review. Other sites are also developing program performance baselines to measure performance against goals. DOE provided written comments on a draft of this report. (App. III contains the full text of DOE’s comments.) The agency said that our draft report fairly represented the process the Office of Environmental Management used in allocating the new positions for the Environmental Management Program. However, the agency pointed out that we emphasized the inadequacy of the justifications supporting the savings projections but did not give credit to the process that made field office managers accountable for achieving the projected savings. We believe that our report adequately addresses managers’ accountability for the projected savings. Specifically, we note in our report that DOE plans to hold office managers accountable for meeting the productivity achievements tied to these savings. 
DOE also said that tracking the results from the additional positions will be especially difficult because the agency is now streamlining its organization and will be unable to fill all 1,600 positions. Additionally, the agency said that further budget reductions are expected to cause delays in accomplishing needed work and may result in increased life-cycle costs. To perform our work, we met with and obtained data from Environmental Management officials at DOE headquarters and at four of its field offices—the Savannah River Operations Office, Oak Ridge Operations Office, Albuquerque Operations Office, and the Ohio Field Office. We performed our work between July 1994 and June 1995 in accordance with generally accepted government auditing standards. (App. II discusses our objectives, scope, and methodology in more detail.) As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Energy and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-3841 if you have any questions. Major contributors to this report are listed in appendix IV. In June 1994, the then Chairman, Senate Committee on Governmental Affairs, asked us to evaluate the portion of a pilot project of the Department of Energy’s (DOE) Office of Environmental Management that involves the hiring of additional federal employees (full-time equivalents, or FTEs). Our review focused on the following three major questions: What process did DOE use to justify the new hires? Did DOE’s justifications support the claimed cost savings and productivity improvements? How is DOE assuring itself that the established cost savings and productivity improvements will be achieved? 
We selected four of the largest DOE facilities with major environmental cleanup under way: the Savannah River Site, South Carolina; Oak Ridge Operations Office, Tennessee; Ohio Field Office, Ohio; and Albuquerque Operations Office, New Mexico. At each facility, we reviewed the competitive bid proposals and discussed the proposed savings with program officials. Additionally, we reviewed the four facilities’ implementation plans and performance reports that were submitted to DOE. For DOE’s other seven offices, we reviewed their bid proposals, implementation plans, and performance reports. We interviewed key officials at DOE headquarters who were responsible for developing, managing, and evaluating the pilot project, including the new FTEs. We obtained evaluations of the facilities’ bids and discussed them with agency officials. We also interviewed the Office of Management and Budget officials responsible for approving and overseeing the agency’s pilot project. Major contributors to this report: John P. Hunt, Jr., Assistant Director; John M. Gates, Evaluator-in-Charge; Marion S. Chastain, Site Senior; and Sara Bingham, Communications Analyst.
Pursuant to a congressional request, GAO reviewed the Department of Energy's (DOE) process for hiring new employees and improving the productivity of environmental cleanup, focusing on: (1) the process DOE used to justify the new hires; (2) whether DOE justifications support the claimed cost savings and productivity improvements; and (3) how the cost savings and productivity improvements will be achieved. GAO found that: (1) DOE used a competitive bidding process to justify the allocation of 1,200 new positions in its field and headquarters offices; (2) the offices requested 1,575 new staff and estimated that the new staff could save over $1.2 billion dollars in fiscal years (FY) 1995 and 1996, resulting from increased federal oversight of contractors and greater federal involvement in contract management; (3) DOE lowered the 2-year savings estimate to about $890 million, not including the $84 million annually in compensation for the 1,200 new staff; (4) DOE did not adequately justify about $900 million in savings from productivity improvements; (5) although DOE is unsure of the justifications, it is reducing its Environmental Management Office's budget by about $300 million in FY 1995, before seeing if productivity improvements occur; and (6) DOE is developing procedures to measure productivity improvements and resulting cost savings the new staff are expected to achieve.
The electricity industry has been predominantly monopolistic and noncompetitive. Utilities (primarily investor-owned utilities—IOU) build power plants and power lines to provide all of the electricity needed by all existing and future customers in their exclusive service areas. Regulators in the states allow utilities to charge electricity rates that give them a regulated, specified level of return on these investments. IOUs were initially reluctant to provide electricity to rural areas, mostly because the sparse population made it difficult for them to recover their costs and to earn a profit. The federal government has played an important role in the traditional market by selling power to rural America. The Department of the Interior’s Bureau of Reclamation (the Bureau) and the Department of the Army’s Corps of Engineers (the Corps) generate electricity at hydropower plants located at major federal water projects. The Department of Energy’s (DOE) power marketing administrations (PMA) generally sell this power in wholesale markets, mostly to publicly and cooperatively owned utilities that, in turn, sell power to end-use (retail) consumers. The PMAs repay the federal investment in the government’s power plants, power lines, and related assets through the revenues they earn by selling power. The Tennessee Valley Authority (TVA), a federal corporation, generates and markets power throughout Tennessee and parts of six other southeastern states. Moreover, the Department of Agriculture’s Rural Utilities Service (RUS) makes and guarantees loans to rural utilities to finance the construction and development of electric power systems. Although critics question the federal government’s role in providing power or in financing improvements to rural utility systems as markets restructure, the activities continue. However, the traditional structure of the electricity industry has begun to change. 
Legislation and new generating technologies have introduced increased competition into the market, changing the environment in which the PMAs must operate successfully if they are to repay the federal investment in the power program. Federal and state agencies regulate the activities of electric utilities. Traditionally, electricity service was viewed as a “natural monopoly”: A central source of power was seen as the most efficient way of generating, transmitting, and distributing electricity at a reasonable cost. Under the traditional regulatory compact between electric utilities and their state regulators, electric utilities were guaranteed monopolies within their exclusive service areas and regulated rates of return on their capital investments. In return, these utilities built generating and other facilities to provide all of the electricity needed by all current and future customers in their service areas. Under traditional “cost-of-service” regulation, electricity rates approved by state regulators reflected the utilities’ costs of building new generating plants and operating the power system. As shown in table 1.1, IOUs dominate the electricity markets: Although they account for only about 8 percent of the nation’s almost 3,200 electric utilities, they have over 75 percent of utility sales to ultimate customers and over 77 percent of total utility power generation. Most IOUs sell power at retail rates to several different classes of consumers and at wholesale rates to other utilities, including other IOUs; federal, state, and local government utilities; public utility districts; and rural electric cooperatives. The traditional regulatory role of the federal and state governments was established under the Constitution and developed by federal law. 
Specifically, the Federal Power Act (formerly the Federal Water Power Act), which was enacted in 1920, and the Public Utility Holding Company Act established a regime of regulating electric utilities that gave specific and separate powers to the states and the federal government. State regulatory commissions (generally called “public utility” or “public service commissions”) regulate utilities’ activities within state boundaries, including the setting of wholesale and retail electric rates. At the federal level, the Securities and Exchange Commission regulates interstate electric utility holding companies by requiring them to register and divest holdings so that each company becomes a single consolidated system serving a specific geographic area. In addition, the Commission regulates how the holding companies issue and acquire securities. Under the Federal Power Act, the Federal Energy Regulatory Commission (FERC), formerly the Federal Power Commission, regulates interstate aspects of the electric utility industry, including financial transactions, wholesale rates, and interconnection and transmission arrangements. In addition to IOUs, 932 customer-owned rural electric cooperatives and 2,014 publicly owned utilities provided power in 1996. Most rural electric cooperatives, usually formed and owned by residents of rural areas, distribute electricity only to their members. Operating throughout the nation except for Connecticut, Hawaii, and Rhode Island, cooperatives constituted 29 percent of all the nation’s electric utilities in 1996. Publicly owned electric utilities are nonprofit state and local government agencies, such as municipal utilities, state authorities, public power districts, and irrigation districts. DOE views publicly owned power as providing competition for IOUs and as charging power rates against which the power rates of IOUs can be compared. In 1996, almost 63 percent of all electric utilities in the nation were publicly owned utilities. 
Cooperatives and publicly owned utilities buy power from wholesale providers for sale to retail customers. However, some cooperatives and publicly owned utilities also generate their own power and transmit it to other utilities or distribute it to their own retail customers. The generation and share of the national energy supply for these types of utilities are provided in table 1.1. The federal government has played a significant role in the development of electricity markets. Because it was too expensive for IOUs to serve rural areas, federal power agencies provided power to those areas. In addition, the government provided financing to rural utilities to assist them in building and maintaining electricity distribution systems that provide electricity to rural users. In 1996, federal utilities provided almost one-tenth of the nation’s power. As a result of these activities, the federal agencies that generate and/or market electricity and that make or guarantee loans to finance improvements to rural electric systems had incurred a debt of over $84 billion as of September 30, 1996. This debt can be classified as direct or indirect. The direct debt, totaling over $53 billion, is owed directly to the federal government—for example, RUS’ borrowers owe about $32 billion. The indirect debt, over $31 billion, is owed by the federal agencies to nonfederal parties—for example, TVA owed about $24 billion to nonfederal bondholders. Federal entities that generate and/or market electricity—primarily the Bureau, the Corps, the PMAs, and TVA—provided about 10 percent of the nation’s electricity supply in 1996. The Bureau and the Corps generate hydropower at about 130 federally owned power plants located at federal water projects. 
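The debt figures above can be cross-checked with a short tally. This is only a sketch: the amounts are the rounded "over $X billion" figures cited in the text, and the "other" entries are simply remainders, not categories itemized in the report.

```python
# Federal electricity-related debt as of September 30, 1996, in billions of
# dollars (rounded figures from the text; "other" entries are remainders).
direct_debt = {
    "RUS borrowers": 32,          # owed directly to the federal government
    "other direct": 53 - 32,      # rest of the $53 billion direct total
}
indirect_debt = {
    "TVA nonfederal bondholders": 24,
    "other indirect": 31 - 24,    # rest of the $31 billion indirect total
}

total = sum(direct_debt.values()) + sum(indirect_debt.values())
print(f"Total federal electricity-related debt: over ${total} billion")
```

The direct ($53 billion) and indirect ($31 billion) subtotals sum to the $84 billion total cited in the text.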
Because these projects are managed for multiple purposes (for example, providing water for irrigation, water supplies, navigation, flood control, and recreation), the amount of power generated and marketed is affected by the availability and use of water for these other purposes. Power generated by the Bureau and the Corps is marketed by four of DOE’s five PMAs: the Bonneville Power Administration (Bonneville), plus the three that are the focus of this report: the Southeastern Power Administration (Southeastern), the Southwestern Power Administration (Southwestern), and the Western Area Power Administration (Western). The fifth PMA, the Alaska Power Administration, differs from the others in that it operates its own power plants and distributes power directly to end-use (retail) customers. The PMAs in 1996 provided about 5 percent of the nation’s power. The PMAs’ mission is to market federal hydropower at the lowest possible rates that are consistent with sound business practices. The power the PMAs market is the power that remains after project needs have been met—for example, after water has been pumped to fields that are being irrigated. By law, the PMAs are to give priority in the sale of power to “preference customers”—public bodies (such as municipal utilities, irrigation districts, military installations, and other federal agencies) and cooperatives. Each PMA has its own specific geographic boundaries, federal water projects from which it markets power, statutory responsibilities, and operation and maintenance responsibilities. Except for the Alaska Power Administration, the PMAs generally do not own, operate, or control the facilities that generate electric power; the generating facilities are controlled by the operating agencies—most often the Bureau and the Corps. The PMAs, except for Southeastern, do own and operate transmission facilities. 
Southeastern relies on the transmission services of other utilities to transmit the power it sells to its customers. The PMAs are generally required to recover all costs incurred as a result of producing, transmitting, and marketing power, including repayment of the federal investment in the power generating facilities and other debt, with interest. Certain nonpower costs are also allocated to power revenues for repayment. For example, under the concept of aid-to-irrigation, revenues earned from the sale of power repay the federal investment in irrigation facilities that the Secretary of the Interior deems beyond the ability of irrigators to repay. According to Bureau officials, power revenues are ultimately expected to cover about 70 percent of the federal investment in completed irrigation facilities. As of September 30, 1996, the PMAs and TVA had an outstanding debt of about $52 billion related to financing the construction and operation of power plants, transmission lines, and related electricity assets, as well as other costs that are allocated to be repaid through revenues earned from the sale of electricity. TVA owed about $28 billion; Bonneville owed about $17 billion; and Southeastern, Southwestern, and Western owed the balance—about $7 billion. Together, DOE’s five PMAs and TVA market power within 34 states. They do not serve Hawaii and states in the Northeast and upper Midwest. Figure 1.1 shows the service areas of the PMAs. The Congress established the first PMA, Bonneville, by passing the Bonneville Project Act of 1937 to market federal power in the Pacific Northwest. (See app. III for a more detailed discussion of Bonneville.) In 1943, the Secretary of the Interior established Southwestern under the President’s war powers. The Flood Control Act of 1944 provided the authority to create PMAs and also gave the Secretary of the Interior jurisdiction over the Corps’ electric power sales. 
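The aid-to-irrigation concept described above can be illustrated with a minimal sketch. The project amount and the irrigators' 30-percent share are hypothetical; only the roughly 70-percent power-revenue share that Bureau officials cite comes from the text.

```python
def aid_to_irrigation(irrigation_investment, irrigator_share):
    """Split a federal irrigation investment between irrigators and power
    revenues: power revenues repay whatever portion the Secretary of the
    Interior deems beyond the irrigators' ability to repay."""
    irrigators_pay = irrigation_investment * irrigator_share
    power_revenues_pay = irrigation_investment - irrigators_pay
    return irrigators_pay, power_revenues_pay

# Hypothetical $100 million project in which irrigators can repay 30 percent,
# leaving about 70 percent to power revenues (the share Bureau officials cite).
irr, power = aid_to_irrigation(100.0, 0.30)
print(f"Irrigators repay ${irr:.0f}M; power revenues repay ${power:.0f}M")
```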
The Secretary of the Interior established Southeastern in 1950 and Alaska in 1967. The last PMA, Western, was authorized by the Department of Energy Organization Act of 1977, when the four existing PMAs were transferred from the Department of the Interior to DOE. The largest individual federal power producer, however, is TVA, which by some measures is the largest utility in the nation. Providing about 5 percent of the nation’s power, TVA generates its own power and markets it in wholesale markets, as well as directly to large industrial customers. TVA also approves the retail rates charged by the 159 municipal and cooperative utilities that are its primary customers. In 1933, the Congress created TVA as a multipurpose, independent federal corporation to develop the resources of the economically depressed Tennessee River Valley: TVA was to improve navigation, promote regional agricultural and economic development, and control the flood waters of the Tennessee River. To those ends, TVA erected dams and hydroelectric power facilities on the Tennessee River and its tributaries. Today, the power program is by far TVA’s largest activity, with about $5.7 billion in annual operating revenues in fiscal year 1996. TVA’s hydroelectric facilities, coal-fired power plants, nuclear generating plants, and other power facilities—with a total generating capacity of over 28,000 megawatts (MW)—provide electricity to nearly 8 million people in Tennessee and parts of Alabama, Georgia, Kentucky, Mississippi, North Carolina, and Virginia. (See app. I for a more detailed discussion of TVA.) In addition to authorizing the sale of federal power in rural areas, the Congress passed laws to encourage the development of nonfederal power systems. IOUs were historically reluctant to serve sparsely populated areas because of the heavy capital costs involved in installing power systems and serving relatively few customers. 
As a result, in 1935, scarcely 1 in 10 farm households in the United States had electricity. The Rural Electrification Act of 1936 authorized the Rural Electrification Administration (now RUS) to provide loans and credit assistance to organizations that generate, transmit, and/or distribute electricity to small rural communities and farms. From fiscal years 1992 through 1996, RUS made or guaranteed 880 loans to rural utilities, some of which buy power from the PMAs. The outstanding balance on RUS’ loans and loan guarantees was about $32 billion as of September 30, 1996. (See app. II for a more detailed discussion of RUS.) From 1935 through the mid-1960s, little change occurred in the way utilities satisfied demand for electricity and were regulated. For decades, they were able to meet increasing demand at decreasing prices because they achieved economies of scale through capacity additions and technological advances. During much of this period, demand for electricity grew at a faster rate than the gross national product. However, in 1976, electricity growth did not exceed overall economic growth, and in 1982 electricity consumption declined. These adverse trends for the electric utility industry were caused by such events as (1) the Northeast power blackout of 1965, which raised concerns about reliability; (2) the Arab oil embargoes of the 1970s, which resulted in increases in fossil fuel prices; and (3) the passage of the Clean Air Act of 1970 and its 1977 amendments, which required utilities to reduce pollutant emissions. Because of the decline in the rate of growth in demand for electricity, utilities could no longer assume that prior patterns in demand-growth would continue into the future. How to satisfy the future demand for power became an increasingly uncertain issue. In addition, since the late 1970s, statutory and technological changes have created a climate for change in traditional electricity markets. 
In general, electricity markets are starting to evolve from domination by large, monopolistic IOUs to competition among IOUs, nonutility generators, power marketers, and others. In the future, electricity markets may evolve into ones in which electricity is a commodity. In addition, states are taking action to ensure that retail consumers will be able to buy power from a variety of competing sources. In 1978, the Public Utility Regulatory Policies Act and the Fuel Use Act encouraged the growth of a nonutility sector of the electricity business. These laws were passed to lessen the nation’s dependence on foreign oil and encourage alternative sources of power. The Public Utility Regulatory Policies Act required commercial utilities to buy power from nonutility generators, called “qualifying facilities.” These entities had to meet certain criteria specified by FERC for such matters as their ownership and operating efficiency. In addition, the act introduced the pricing of electricity on a competitive basis: As more nonutility generators entered the market, FERC began approving certain wholesale transactions that had rates that resulted from a competitive bidding process. Many of the qualifying facilities generated power in nontraditional ways—for instance, by using small hydropower plants, cogeneration, or renewable sources. Under the Fuel Use Act, electric utilities could not use natural gas to fuel new generating technology; however, these “qualifying facilities” could. They were able to take advantage of new generating technologies, such as combined-cycle gas turbine generation that can be built with less capital than larger power plants. Although the Fuel Use Act was repealed in 1987, qualifying facilities and small power producers had already gained a portion of the total electricity supply. For instance, according to the association of IOUs, in 1995 nonutility generators built about 60 percent of the nation’s new electric generating capacity. 
The Energy Policy Act of 1992 was perhaps the most significant legislative catalyst for increased competition. It expanded nonutility markets by creating a new category of power producers—“exempt wholesale generators.” Like qualifying facilities, exempt wholesale generators do not sell their power in retail markets and own only very limited transmission facilities. Although FERC does not regulate exempt wholesale generators under the Public Utility Regulatory Policies Act, it regulates most of them as public utilities under the Federal Power Act. Under FERC’s regulations, exempt wholesale generators may charge market-based rates if they and their affiliates lack market power. Unlike the requirement under the Public Utility Regulatory Policies Act that utilities purchase power sold by qualifying facilities, there is no federal mandate that utilities buy exempt wholesale generators’ power. The Energy Policy Act also allows FERC, upon application, to order wholesale wheeling of electricity if such an order does not, among other things, unreasonably impair reliability. It is now possible for a municipal utility that is served by an IOU to seek cheaper power from a neighboring utility. The Energy Policy Act also authorized FERC to set transmission rates at levels that permit the utilities to recover all of the costs incurred in providing transmission services, including legitimate, verifiable, and economic costs. In April 1996, pursuant to its authorities under the Federal Power Act, FERC issued a ruling on transmission access. Order 888 requires public utilities that own, control, or operate facilities that transmit electricity in interstate commerce to offer both point-to-point and network transmission services under terms and conditions that are comparable to those that they provide for themselves. Public utilities must offer those services through open-access, nondiscriminatory transmission tariffs containing minimum terms and conditions. 
In addition, Order 888 allows utilities the opportunity to seek recovery of certain stranded costs from those customers wishing to leave their current supply arrangements. However, according to the Deputy Director, FERC’s Office of Electric Power Regulation, the open-access provisions of Order 888 do not apply to the PMAs, among other entities. Therefore, FERC cannot order the PMAs to provide open transmission services on a general basis. Operating under its authority under the Federal Power Act, FERC can order the PMAs to provide transmission only on a case-by-case basis. However, to facilitate a unified national approach to open-access transmission, DOE directed its PMAs that have transmission facilities to publish generally applicable open-access transmission tariffs, including ancillary services, in a manner comparable to the service tariffs and other measures required of transmission owners and operators that are regulated under FERC’s final rule. In December 1997, Southwestern and Western filed open-access transmission service tariffs with FERC, pursuant to Order 888. The tariffs are to govern future access to available electric transmission and, according to DOE, are consistent with the tariffs of other wholesale transmission providers. Bonneville had filed its tariffs earlier. In response to the uncertainties about how the electricity market will change and how fast, utilities have begun to implement new strategies to compete. Some are acquiring other utilities or merging with them. After years of virtually no mergers, many mergers have been completed or proposed since the Energy Policy Act was enacted in October 1992. For example, for IOUs alone, from October 1992 to January 1998, over 40 mergers had been proposed and 17 had been completed, according to the Edison Electric Institute—the national trade association for IOUs. Utilities are also restructuring themselves and decreasing their operating costs through reorganizations and layoffs. 
Some utilities are changing how they plan to satisfy future demand for electricity and changing the types of resources they acquire. Because of uncertainty about market conditions, instead of continuing to plan to meet long-term load forecasts, utilities are focusing more on meeting more immediate demand for power. Thus, utilities are now tending to buy resources that are flexible and allow them to adapt quickly to changing market conditions, such as smaller natural gas-fired power plants and purchased power. Utilities are also retiring power plants if they believe those plants may become uneconomic after the industry is restructured. In responding to competitive challenges, utilities are trying to compete for the business of other utilities’ wholesale customers and defending their business with existing customers. For example, as cited in our 1995 TVA report, Virginia Power cut one wholesale customer’s rates by 5 percent to fend off the marketing efforts of a neighboring utility. Federal power suppliers have also taken actions to become more competitive. For example, after the departure of half of its industrial load, TVA froze its rates from 1986 through 1997, although a rate increase was approved for 1998. Moreover, Western recently announced a decrease of over 20 percent, effective October 1, 1997, in the composite rates of power it markets from hydropower plants in the Central Valley Project in California. In addition, according to DOE’s Power Marketing Liaison Office, Western began a process in fiscal year 1995 to restructure itself. The goals of this program included reducing federal and contractor staff from fiscal year 1994 levels by 24 percent, saving $25 million in costs annually, and reducing Western’s organizational units. For its part, Southwestern has adopted a program to reduce overhead costs by reducing targeted administrative positions, reducing the number of managers and supervisors, and eliminating one field office. 
Electricity markets are not yet fully competitive but are moving in that direction. Although markets for wholesale transactions are becoming competitive, retail markets are still uncompetitive. Supporters of restructuring argue that markets will not be truly competitive until both wholesale and retail markets are transformed. In addition, other issues that need to be resolved include deciding (1) how stranded costs are to be recovered, (2) how electricity is to be transmitted in competitive markets, (3) how electricity is priced in these markets, and (4) how consumers at the retail level are to be offered a choice of power suppliers. Once restructuring is complete, retail electricity rates may fall by 6 to 19 percent by the year 2015, depending on the intensity of competition, among other factors, according to DOE’s Energy Information Administration (EIA). Arguably the most significant issue that policymakers will face is how to recover the stranded costs associated mainly with building large baseload power plants and other assets under the old regulatory regime. IOUs erected large amounts of nuclear generating capacity and entered into long-term purchased power contracts to serve existing and future loads. Under the traditional covenants between IOUs and their regulators, the capital and operating costs associated with those assets were recovered through rates. Now, with power generation costs dropping and prospects that competition will affect market prices, these high-cost plants are becoming uneconomical and the costs associated with them may be “stranded.” Estimates of the investment in such assets nationwide range from $10 billion to $500 billion. The issue of how to recover stranded costs—that is, who should pay—is being debated. 
In addressing the recovery of stranded costs in the context of retail competition, some states have proposed “sharing the pain”: Utilities could recover or offset the stranded costs by taking mitigating actions (for example, by implementing accelerated depreciation of generating assets, writing off the book value of stranded assets, adjusting dividends to investors, or decreasing operating expenses); ratepayers could pay through rate increases that regulators hope will be temporary; or bonds could be sold to the public to pay off the stranded costs and to avoid rate increases. However, some consumer groups believe that since utilities incurred the costs, they should bear the burden of repayment. For example, an attempt to securitize the costs of a nuclear power plant failed in Connecticut’s legislature because opponents, including consumer groups, believed the issuance of bonds amounted to a “bailout” of the utility. Staffs of state public utility commissions have argued that because IOUs incurred stranded costs under the old regulatory compact, IOUs should be allowed to recover at least some of these costs before they must charge market prices for power. How stranded costs are divided between utilities and their ratepayers, the period of time allowed for their recovery, and how much the recovery of stranded costs affects rates will determine when retail markets become competitive and to what degree. To promote competition, new methods must be found to transmit power. Under current transmission arrangements, wholesale customers frequently do not find it economical to buy power from a distant utility because it must be transmitted over the power lines of intervening utilities, each of which adds a transmission or wheeling tariff to the price of the power. 
For example, in 1995 during our review of the financial viability of TVA, we found that although an IOU in the Southeast offered power that was competitively priced, transmitting it to TVA’s customers through one intervening utility might increase the price by about 10 percent, rendering its delivered price uncompetitive. In addition, according to DOE officials, some of the power transmitted is lost over distances. To facilitate competitive transmission of power, many state regulators and FERC are advocating the establishment of “independent system operators” (ISO). Utilities in a given geographic area would transfer the operation of their transmission assets to an independent party that would transmit electricity reliably, safely, and efficiently in a nondiscriminatory fashion. For example, California has established an Independent System Operation Restructuring Trust to award funding to parties that will assist in establishing an ISO to begin providing service in 1998. The PMAs are also participating in the formation of ISOs. For example, Western is negotiating with other utilities in the Southwest to establish the Desert Southwest Transmission and Reliability Operator (an ISO) as well as to participate in the California ISO. Concerns exist that such arrangements may be problematic from legal and constitutional viewpoints. According to Western officials, however, in Western’s agreements with other utilities pertaining to the ISO, Western is taking care to ensure that its obligations under federal law and its contractual agreements with preference customers are protected. 
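The "pancaking" effect described above—each intervening utility stacking its own wheeling tariff onto the price of the power—can be sketched as follows. The generation price and tariff amounts are hypothetical, chosen only to reproduce the roughly 10-percent markup from one intervening utility cited in the TVA example.

```python
def delivered_price(generation_price_cents, wheeling_tariffs_cents):
    """Delivered price per kWh after each intervening utility adds its
    wheeling tariff ('pancaking') to the generation price."""
    return generation_price_cents + sum(wheeling_tariffs_cents)

# Hypothetical: competitively priced power at 4.0 cents/kWh crossing one
# intervening utility that charges 0.4 cents/kWh to wheel it.
base = 4.0
one_hop = delivered_price(base, [0.4])
print(f"Delivered price: {one_hop:.1f} cents/kWh "
      f"({(one_hop - base) / base:.0%} above generation cost)")
```

Each additional intervening utility adds another entry to the tariff list, which is why distant purchases quickly become uneconomical under current transmission arrangements.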
For example, Western officials believe that, under language provided by the PMA and accepted by FERC on Western’s participation in the California ISO, nothing in the ISO’s tariff shall compel any person or federal entity to violate federal statutes or regulations or compel any federal agency to exceed its statutory authority as defined in applicable federal statutes, regulations, or orders lawfully promulgated thereunder. These provisions also state that if any provision of the tariff requires any person or federal entity to give an indemnity or impose a sanction that is unenforceable against a federal entity, the ISO shall submit to the Secretary of Energy or another DOE official a report of the situation. The Secretary or other official will take the steps necessary to remedy the situation to the extent possible. State public utility commissions are also taking steps to facilitate competitive pricing of power. They have supported establishing power pools or exchanges. Under these arrangements, members buy and sell power through the pool or exchange at a price that reflects market demand and that promotes competition between utilities and other suppliers. For example, under one method, generating companies could bid to sell their power to the pool. The pool would then establish hourly or spot prices based on these bids. In California, the power pool will publish prices every hour or half hour, to be viewed by electric customers, investors, and power marketers. With these visible price signals, wholesale and retail buyers will be able to make efficient purchasing decisions and adjust their consumption of power from peak to off-peak periods when prices drop. As of February 1998, all 50 states and the District of Columbia had considered reforming their respective retail markets, according to the National Regulatory Research Institute and records obtained from state regulatory agencies. 
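One common way a pool can turn generators' bids into an hourly price is a uniform-price ("merit order") auction: bids are stacked from cheapest to most expensive, and the bid needed to cover the last unit of demand sets the price paid to all accepted sellers. The report does not specify the mechanism any pool actually uses, so this is only a sketch with hypothetical bids and demand.

```python
def clearing_price(bids, demand_mw):
    """Uniform-price auction: sort bids by price (merit order) and return
    the price of the marginal bid needed to cover demand. All accepted
    sellers are paid this single clearing price."""
    supplied = 0
    for price, quantity_mw in sorted(bids):
        supplied += quantity_mw
        if supplied >= demand_mw:
            return price  # the marginal bid sets the price for everyone
    raise ValueError("insufficient supply bid into the pool")

# Hypothetical bids: (price in $/MWh, quantity in MW) for one hour.
bids = [(25, 300), (18, 500), (40, 200), (30, 400)]
print(clearing_price(bids, demand_mw=1000))  # the $30 bid clears demand
```

Published hourly, such clearing prices provide the visible price signals the text describes, letting buyers shift consumption toward off-peak hours when the marginal bid is cheaper.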
At that time, at least 17 states had actually implemented plans to restructure the industry by enacting restructuring legislation or by adopting final orders. Regulators in these states hope that industrial, commercial, and ultimately residential consumers will be able to choose their power supplier, rather than being tied to one utility. These states hope to establish retail choice at all levels by 1998 at the earliest and 2005 at the latest. Supporters of retail competition hope that it will nearly complete the restructuring process for electricity markets and foster competitive pricing throughout the nation. At the time we completed our review, states such as Montana, New Hampshire, and New York had asked utilities to implement pilot retail choice programs so that broad issues that could affect widespread competition later could be identified. Several states, such as Michigan, Pennsylvania, and Rhode Island, were implementing retail competition in phases—for instance, extending it first to industrial and commercial customers and then to residential customers. As mentioned previously, some states were addressing the issue of stranded cost recovery. In addition, at least 8 of these 17 states were also encouraging utilities to continue their “social” programs—such as energy efficiency and conservation programs, use of renewable sources of power, and low-income energy assistance programs. These programs can be funded by charging consumers a nonbypassable fee or by instituting a tax or surcharge on all energy services. Also, to foster competition and decrease utilities’ market power, public utility commissions were requiring utilities to “unbundle” their services—that is, to divest themselves of, or otherwise transfer, the generation, transmission, and distribution of power. When restructuring is completed, states expect that retail customers will enjoy a variety of options for taking advantage of retail competition. 
For instance, the California Public Utility Commission expects that customers will use metered information about how much power they are using at specific times of day and how much that power costs. They could then decide which supplier to buy from during specific times to minimize costs. They may be able to negotiate directly with a supplier or use the services of an energy marketer or broker. In Maine, it is envisioned that consumers that are unwilling to shop for alternative suppliers will be able to adopt the “standard service option” from their existing utility. The existing utility will use a competitive bidding process in order to buy power for its ratepayers at prices that are comparable to today’s prices. Other options envisioned for Maine’s ratepayers include signing contracts with power marketers or aggregators that are short term, thus enabling them to buy power at a low price but with a risk of rate hikes or rate instability. They will also be able to buy power under longer term contracts at more expensive but more stable rates. Ratepayers will also be able to purchase “green power” (i.e., power from nonpolluting sources such as renewable sources). “Those states that are aggressively pursuing competitive restructuring are invariably high-cost states with little to lose. On the other hand, as a lower-cost state, Virginia may have little to gain and much to lose by being on the leading. . .edge of this restructuring movement. We should also take note of the slow pace of those mostly low-cost states surrounding Virginia—North Carolina, Tennessee, Kentucky, West Virginia, and Maryland. 
Consequently, Virginia should pursue a cautious and measured approach to adopting competitive initiatives, fully exploiting non-painful learning opportunities through observing the successes and failures of retail experiments and restructuring efforts in the more aggressive states.” Furthermore, in Nebraska, a state where all electric power is provided by public entities and where power rates are among the nation’s lowest, the state’s largest electric utility has asked a federal appeals court to overturn FERC Orders 888 and 889. The utility challenged the orders on the grounds that FERC does not have the legal authority to impose on the utility the same regulatory regime that it imposes on private investor-owned electric utilities because the utility is a political subdivision of the state of Nebraska. Federal agencies that generate or market electricity and that make or guarantee loans to finance improvements to rural power systems incurred a debt of about $84 billion as of September 30, 1996. Three agencies that market federal electricity—the Southeastern, Southwestern, and Western—are responsible for $7 billion of this debt. They face an uncertain future as electricity markets become increasingly competitive. In response, the Chairmen of the House Committee on Resources and the Subcommittee on Water and Power asked GAO to focus on these three PMAs and to (1) examine whether the government operates them and the related electric power assets in a businesslike manner that recovers the federal government’s capital investment in those assets and the costs of operating and maintaining them and (2) identify options that the Congress and other policymakers can pursue to address concerns about the role of these three PMAs in restructuring markets or to manage them in a more businesslike fashion. GAO’s options also apply to the Corps and the Bureau, which generate most of the power these PMAs market. 
Although GAO’s options apply only to these agencies, the report also provides information about TVA, RUS, and Bonneville in appendixes I, II, and III, respectively. We also included in this report information from our broader reports on how federal agencies can be operated in a more businesslike fashion. See Related GAO Products at the end of this report for a list of the products used to prepare this report. We conducted our review from April 1997 through February 1998 in accordance with generally accepted government auditing standards. Appendix IV contains a detailed description of our objectives, scope, and methodology. We provided a draft of this report to DOE’s Power Marketing Liaison Office, which represented the views of Southeastern, Southwestern, and Western; to the Department of the Interior (including the Bureau); to the Department of Defense (including the Corps); to Bonneville; and to FERC. Their comments and our responses are included in appendixes VI, VII, VIII, IX, and X, respectively. Federal laws and regulations generally require that the PMAs recover the full costs of producing and marketing federal hydropower. The PMAs generally follow these laws and regulations; however, in some cases federal statutes and DOE’s rules also prohibit or are ambiguous about the recovery of certain costs. As we reported in September 1997, for fiscal years 1992 through 1996, as a result of its involvement in the electricity-related activities of Southeastern, Southwestern, and Western (the three PMAs), the federal government incurred “net costs” of $1.5 billion—the amount by which the full costs of providing electric power exceeded the revenues from the sale of power. 
In addition, the availability of many federal power plants to generate electricity is below that of nonfederal plants because, among other factors, the federal plants are aging and because the federal planning and budgeting practices, including those used by the Bureau and the Corps, do not always ensure that funds are available so that repairs can be made when they are needed. The resulting declines in performance decrease the marketability of federal power. The net cost to the Treasury and the performance problems of the federal power plants—when combined with competitive pressures on electricity suppliers to decrease their rates at a time when some federal hydropower projects’ environmental costs need to be recouped by the PMAs—create varying degrees of risk that some of the federal investment at certain federal generation and transmission projects and rate-setting systems will not be repaid. For example, although the recovery of most of the federal investment in the three PMAs’ hydropower-related facilities is relatively secure, up to $1.4 billion of the federal investment for projects or rate-setting systems pertaining to these PMAs, out of a total federal investment of about $7.2 billion, is at some risk of nonrecovery. As noted in two of our recent products, the revenues of the government’s power generating and marketing activities are not recovering all of the costs associated with the program. These activities operate at a net cost (loss) to the U.S. Treasury. For the three PMAs that are the focus of this report, net costs of $1.5 billion were incurred for fiscal years 1992 through 1996. These net costs fall into several categories: (1) net financing costs, (2) unrecovered employee benefits, (3) unrecovered construction costs, and (4) other costs. We estimate that the net financing costs for the three PMAs’ appropriated debt in fiscal years 1992 through 1996 were about $1.2 billion, including $208 million in fiscal year 1996. 
These costs stem primarily from appropriated debt provided by the federal government at low interest rates with favorable repayment terms. Appropriated debt carries a fixed interest rate and cannot currently be refinanced. Also, the Treasury cannot require the PMAs to repay the debt before it matures. The interest the PMAs pay on their outstanding appropriated debt is often substantially below the rate the Treasury incurred to provide funding to the PMAs. The PMAs’ average interest rate on outstanding debt was 3.5 percent, whereas the Treasury’s weighted average interest rate on the bonds it issued to provide funding to the PMAs was 9 percent. The PMAs have incurred substantial amounts of appropriated debt at low interest rates primarily because, in accordance with the appropriate DOE order, they repay high-interest debt first, and because the appropriated debt they incurred before 1983 was generally at the below-market interest rates in effect at the time. For current PMA and operating agency employees, the federal government incurs a portion of the cost for Civil Service Retirement System pensions and almost all of the cost for postretirement health benefits. For fiscal years 1992 through 1996, we estimate that the net cost to the federal government of providing these benefits was about $82 million for the three PMAs, including $16 million in fiscal year 1996. The PMAs plan to begin recovering the full annual cost of pension and postretirement health benefits in fiscal year 1998. We found that the three PMAs had incurred costs or had costs allocated to them by the operating agencies for which full costs were not being recovered through the PMAs’ rates. These costs were for the few projects that were not yet completed, were under construction, or were canceled. In some cases, this situation occurred because the power generating projects had never operated as designed. 
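The interest-rate spread behind the net financing costs described above can be expressed as simple arithmetic. In the sketch below, the two interest rates come from the discussion above, but the outstanding principal is a hypothetical figure for illustration only, not the PMAs’ actual appropriated-debt balance.

```python
# Illustrative only: the rates come from the report's discussion, but
# the principal is hypothetical, not an actual PMA debt balance.

def annual_net_financing_cost(principal, treasury_rate, pma_rate):
    """One year's interest subsidy the Treasury absorbs when it lends
    at pma_rate funds it borrowed at treasury_rate."""
    return principal * (treasury_rate - pma_rate)

cost = annual_net_financing_cost(
    principal=4.0e9,     # hypothetical outstanding appropriated debt
    treasury_rate=0.09,  # Treasury's weighted average rate on its bonds
    pma_rate=0.035,      # PMAs' average rate on outstanding debt
)
print(f"${cost / 1e6:.0f} million per year")  # → $220 million per year
```

At a 5.5-percentage-point spread, each billion dollars of appropriated debt costs the Treasury roughly $55 million per year in forgone interest.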
In accordance with DOE’s guidance, the PMAs set rates that exclude the costs of nonoperational parts of the power projects, including capitalized interest. For example, at the Corps’ Russell Project (located on the Savannah River, which serves as the border between Georgia and South Carolina), partially on line since 1985, litigation over large fish kills has kept four of the eight turbines from becoming operational. As a result, over half of the project’s construction costs—about $500 million—have been excluded from Southeastern’s rates. The net costs of these construction projects for fiscal year 1996 represent capitalized or unpaid interest incurred in that year. For construction projects designed to generate power marketed by the three PMAs, we estimate that for fiscal years 1992 through 1996, the cumulative net costs are $138 million, including $30 million in 1996. The PMAs believe that in most instances, including the Russell project, these net costs will be recovered in future years. The three PMAs also incurred other net costs that totaled $157 million for fiscal years 1992 through 1996, for such purposes as environmental mitigation and irrigation. In an example involving environmental mitigation, at the Central Valley Project’s Shasta Dam in California, the 1991 Energy and Water Development Appropriations Act specified that any increases in Western’s costs to purchase power because of bypass releases to preserve fisheries downstream should not be allocated to power; instead, they were paid for by appropriated funding. These costs totaled about $15.3 million in fiscal year 1996 and about $53.8 million for fiscal years 1992 through 1996. 
In another example of net costs related to irrigation, in May 1996 we estimated that about $454 million in (1) the federal investment in hydropower facilities allocated to irrigation at the Bureau’s Pick-Sloan Missouri Basin Program and (2) a portion of the costs associated with storing water for these projects were not likely to be recovered without congressional action. The principal of $454 million had grown to $464 million as of September 30, 1996. Because, by law, interest on this amount is not paid, we estimated that about $70.6 million in interest went unpaid for fiscal years 1992 through 1996. The availability of federal power plants to generate power is below that of other power plants. Many federal plants are aging (the Bureau’s plants average about 50 years in service and the Corps’ about 30 years), which increases the need for repairs. At the same time, the Bureau’s and the Corps’ planning and budgeting processes do not always provide funding to repair the federal power assets when the funding is needed, causing some repairs to be delayed and the power plants to become less available to provide power. According to the representatives of the PMAs’ power customers and our previous work, the maintenance needs of the Bureau’s and the Corps’ hydropower plants are often underfunded or maintenance is delayed. Furthermore, data from both operating agencies show that their power plants are generally less available to generate power than power plants operated by other generators of electricity. For example, according to the Bureau’s 1996 benchmarking study, while the agency’s power plants exceeded the performance of the industry in terms of wholesale firm rate, production costs/kWh, and the number of full-time operation and maintenance employees per generating unit, they lagged behind other nonfederal and federal hydropower producers in availability, forced outage, and scheduled outage factors. 
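The Pick-Sloan unpaid-interest figure cited earlier in this discussion can be cross-checked with a back-of-the-envelope calculation. This is our own rough consistency check using simple interest on the average of the 1992 and 1996 principal balances, not the statutory accrual method itself.

```python
# Rough consistency check on the Pick-Sloan figures: $70.6 million in
# unpaid interest over fiscal years 1992-1996 on a principal that grew
# from $454 million to $464 million. Simple interest on the average
# balance is our assumption; the actual accrual method may differ.

unpaid_interest = 70.6e6
avg_principal = (454e6 + 464e6) / 2   # $459 million
years = 5

implied_rate = unpaid_interest / (avg_principal * years)
print(f"implied average annual rate: {implied_rate:.1%}")  # → 3.1%
```

An implied rate of about 3 percent is plausible for interest that, by law, accrues but goes unpaid on older federal investment.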
However, the availability of the Bureau’s hydropower plants over the last 3 years has been above the average availability of the last 15 years. In our 1996 testimony, we reported that in the Corps’ South Atlantic Division, the availability of hydropower plants declined from about 95 percent in 1987 to 87 percent in 1995. In addition, the 1995 availability of the Corps’ units is below the industry average (89 percent availability) in the Bureau’s benchmarking study. Several hydropower plants have been off line for several years because of forced outages. However, DOE’s Power Marketing Liaison Office notes that maintenance problems differ by region, district, or division within the operating agencies and that problems in one area should not be extrapolated to all areas. The planning and budgeting processes that federal agencies—including the Bureau and the Corps—use are not conducive to predictable planning and funding of needed repairs. Pursuant to key laws, including the Antideficiency Act, the Adequacy of Appropriations Act, and the Budget Enforcement Act, federal agencies cannot enter into obligations prior to an appropriation and cannot exceed appropriations unless they have specific statutory authority to do so. Thus, an agency cannot enter into a contract unless the contract is authorized by law and sufficient funds are available to cover its costs in full, which means agencies must budget for the full costs of contracts up front. Moreover, fixed spending limits, or caps, apply to all discretionary spending through 1998, including spending for capital items. As we reported in 1996, agency officials often pointed to the poor condition of federal power plants as evidence of a need for more capital spending and reformed budgeting. Some observers add that increased capital spending is needed to generate operational savings in the future. 
They believe that in an era of constrained federal budgets, spending on capital projects is limited because it entails heavy initial costs and the budget “scoring” for such projects occurs in a single year, while its benefits extend for many years. PMAs and their customers stated that they view the federal planning and budgeting processes as not being well adapted to a commercial activity, such as operating a power system. Under current planning and budgeting systems, the project and field locations of the Bureau and the Corps identify, estimate the costs of, and develop their budget proposals, not only for hydropower but also for such facilities as dams, irrigation systems, and recreational facilities. Hydropower repairs may be assigned lower priorities than other items. Budget requests also have been subject to 10-percent to 15-percent reduction targets at the operating agencies. Under these conditions, the operating agencies, the PMAs, and the PMAs’ hydropower customers believe that funding for needed repairs is at best uncertain and at times is unavailable when needed. To ensure that hydropower maintenance and repair activity receives the funding priority they believe it deserves, customer groups are encouraging the operating agencies to consult them about budgeting and planning for operation and maintenance. Customer groups are also encouraging the federal agencies to seek alternative funding. In most cases, the customers are willing to provide up-front financing for repairs if they are granted more input into planning and budgeting decisions, according to DOE’s Power Marketing Liaison Office. In our September 1997 report, we found that the risk exists that some portion of the government’s investment in its power generation and sales program may not be recovered. The total amount of investment in the assets of the power generating and marketing programs of the operating agencies, the three PMAs, Bonneville, and TVA was about $52 billion. 
This risk stems from several factors, two of which have been addressed already in this report. First, the large net costs of the federal hydropower program will continue if action is not taken to recover all of the costs of operating the program. Second, the degraded availability of the generating assets contributes to this risk of nonrecovery because it decreases the marketability of federal power. Other factors also add to the risk of nonrecovery. One factor is that the onset of market competition puts pressure on suppliers to keep their electric rates low or to decrease them. At the same time, the PMAs are being pressured to raise some rates because of the costs at certain projects for mitigating the damage to fish and wildlife habitat from hydropower generation. Moreover, when the operating agencies have had to curtail power generation at particular projects to protect the environment, the PMAs have had to purchase power to fulfill their contracts—another factor that puts upward pressure on the PMAs’ rates. Nationwide electricity rates have dropped over 25 percent after inflation since 1982. According to DOE’s Energy Information Administration, retail rates fell from a nationwide average of 8.7 cents per kWh in 1982 to 6.3 cents in 1996 (constant 1992 dollars). This decrease has been caused by factors that include declining fuel prices, an increasing number of fully depreciated power plants, more efficient power generation, and competition from nonutility generators. According to various industry analysts, the restructuring of electricity markets will cause market rates to continue to decline. In addition, according to the Energy Information Administration, retail rates nationwide in 2014 may be about 6 percent to 19 percent below the levels they would have been if competition had not begun. In some cases, wholesale power is available today at about 2 cents per kWh. 
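The size of the real-dollar rate decline cited above follows directly from the Energy Information Administration figures:

```python
# Retail rates in constant 1992 dollars, per the Energy Information
# Administration figures cited above.
rate_1982 = 8.7   # cents per kWh, 1982
rate_1996 = 6.3   # cents per kWh, 1996

decline = (rate_1982 - rate_1996) / rate_1982
print(f"real decline since 1982: {decline:.1%}")  # → 27.6%
```

The computed 27.6-percent drop is consistent with the “over 25 percent” figure in the text.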
For example, according to the customer group of the Colorado River Storage Project, in May 1997 one Western customer signed a 20-year contract with an IOU to purchase firm power at a rate not to exceed 1.8 cents per kWh. In contrast, Western’s composite rate for power from the project was about 2 cents per kWh. If the PMAs’ customers can buy less expensive power from sources other than the PMA, the fixed costs associated with the federal government’s power assets will need to be recovered from a decreasing number of customers, placing increased pressure on the PMA to increase its rates. This pressure, in turn, will encourage additional customers to seek power from other sources. At the same time that wholesale and retail rates are declining, the PMAs are being pressed to raise rates at some projects, primarily because of the need to address concerns about damages to the environment and endangered species. As a result, the three PMAs’ hydropower programs have lost revenues, have had to buy more costly replacement power to fulfill their contracts with their power customers, and in some cases have had to spend millions of dollars to mitigate environmental effects. For example, according to DOE’s Power Marketing Liaison Office, about one-third of the 1,356 MW capacity at the Bureau’s Glen Canyon Dam in Arizona, whose power is marketed by Western, could be lost because power generation has been restricted to protect recreational resources and endangered fish species. The Bureau estimates that Western has lost more than $100 million in revenues. At the same time, Western’s costs to buy power to replace the lost generating capacity have averaged about $44 million per year. Furthermore, at the Bureau’s Shasta power plant, in California, whose power Western also markets, restrictions on the turbine operations and cold water bypasses to protect the winter run of the chinook salmon resulted in about $50 million in additional costs to purchase power for Western since 1987. 
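The cost-recovery spiral described above, in which fixed costs spread over fewer customers push rates up and higher rates drive more customers away, can be illustrated with stylized numbers. Every figure in this sketch is hypothetical, not actual PMA data.

```python
# Stylized illustration of fixed-cost recovery pressure: as customers
# leave for cheaper suppliers, the same fixed costs must be recovered
# over fewer kWh, pushing the per-kWh rate up. All figures are
# hypothetical.

fixed_costs = 100e6         # annual fixed costs to recover, dollars
sales_per_customer = 50e6   # kWh sold per remaining customer per year

for customers in (100, 80, 60):
    rate = fixed_costs / (customers * sales_per_customer)  # $/kWh
    print(f"{customers} customers -> {rate * 100:.2f} cents/kWh")
```

Under these assumptions, the rate needed to recover the same fixed costs rises from 2.00 to 3.33 cents per kWh as the customer base shrinks from 100 to 60, illustrating why each departure makes the remaining power less competitive.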
Moreover, the shutdown of some units at the Corps’ Russell project because of litigation over fish kills resulted in Southeastern’s losing $36.1 million in revenues per year since fiscal year 1994. As we recently reported, some portion of up to about $1.4 billion in federal investment is at varying degrees of risk of not being recovered through power revenues at three generation projects, one transmission project, and two rate-setting systems pertaining to the three PMAs. As of September 30, 1996, the three PMAs had accumulated over $7.2 billion in debt for constructing and upgrading the Bureau’s and Corps’ generating facilities whose power the three PMAs market, the PMAs’ transmission facilities, and the Bureau’s irrigation facilities, which are largely repaid with power revenues. In general, the recovery of most of this investment is seen as relatively secure because the three PMAs are generally competitively sound: Their cost to generate power, measured in terms of average revenue per kWh, was 40 percent or more below nonfederal utilities for 1995. However, at some projects, congressional action will be needed to ensure that large amounts of federal investment are recovered. For example, at the Pick-Sloan Program, $464 million in federal investment in power facilities and reservoir storage cannot be recovered until the associated irrigation projects come into commercial service. Because most of these irrigation projects are infeasible, the $464 million cannot be repaid. Without congressional action to force a reallocation of these costs from irrigation to power, or a related solution, recovery cannot take place. Recovery of these costs would place upward pressure on Western’s electricity rates—potentially entailing a one-time increase of up to 14.6 percent. At a time that wholesale electric rates are decreasing, such increases in the PMAs’ rates are uncompetitive and could erode the marketability of the federal power if they are numerous and continuous. 
Table 2.1 contains information about the circumstances surrounding the $1.4 billion at risk. Additional details on the situations at these six projects or systems are presented in appendix V. More competitive electricity markets will offer new benefits to consumers while posing a special challenge to the federal government’s program to generate and market power. With competition at the wholesale and retail levels, ratepayers are likely to enjoy unprecedented opportunities to choose from among several competing suppliers offering a variety of prices and services. However, the problems we have reported in recent years, combined with these market changes, should alert policymakers to take steps to protect the investment in the federal power assets. Even in the absence of market changes, the agencies that provide power are over $50 billion in debt, including about $7 billion for the three PMAs. At the same time, the hydropower assets are degrading in terms of their availability to generate power, thereby making the power they generate less marketable. As competitive markets develop, some PMA customers may opt to buy from other suppliers if the PMAs’ power is perceived as being increasingly unreliable. In addition, although the PMAs’ power is very competitively priced, this advantage may not last. Specifically, competition is expected to cause market rates to fall. At the same time, the PMAs’ rates need to cover the costs of environmental impacts downstream. If the PMAs’ rates increase and the wholesale rates for power fall to the point where the two rates converge, the PMAs may lose customers to other suppliers. At the Central Valley Project and the Colorado River Storage Project, Western’s wholesale power is already priced at levels competitors can challenge. If the PMAs lose customers to other suppliers, then the risk increases that the federal investment in the power program will not be recovered. 
As documented in this chapter, for the three PMAs’ projects and rate-setting systems, some portion of $1.4 billion is already at risk for nonrecovery. Although most of the risk to the $1.4 billion does not stem from increasing competition, the advent of competition does heighten the risk of nonrecovery. As discussed in the next chapter, options are available to the Congress and the agencies themselves to better recover costs and protect the federal investment, among other benefits. The nation’s electricity markets are undergoing significant changes, as the previous chapters have shown. The speed with which this widespread restructuring may be completed is uncertain; however, it is ongoing and will continue, perhaps at an accelerating pace, as proposals to expand competition to the retail electricity market continue to be made by national and state policymakers, electric utility interest groups, and the Congress. As the industry becomes less regulated and more competitive at both the wholesale and retail levels, nonfederal utilities and power suppliers have taken important steps to become competitive to survive. Federal power agencies also face the challenge of moving to a more competitive environment. The entities to whom the PMAs sell power, aware that they need to supply the cheapest available power to their own retail customers, have begun to pressure the PMAs, the Bureau, and the Corps to adopt business practices that are better suited to the new era. Furthermore, and perhaps most important, these agencies are under pressure to adapt to the new markets to reduce the risk that the multibillion-dollar federal investment in hydropower and other associated programs will not be repaid if federal power ultimately proves to be too unreliable and overpriced to be competitive. In this connection, a widening recognition exists today that options for operating federal hydropower assets need to be considered and ultimately implemented. 
Three broad options exist for addressing the federal hydropower program’s operations: (1) preserve the status quo of federal ownership, (2) maintain federal control of the hydropower assets but manage them in a more businesslike manner, or (3) divest the federal hydropower assets. The federal power program uses low-cost hydropower generated at major federal water projects to help meet the needs of the preference customers, many of which are located in rural areas. The power plants at these water projects are generally operated by the Bureau and the Corps—the operating agencies—and the power that exceeds the projects’ operational requirements is marketed by the PMAs, as described in chapter 1. Power is generated and marketed in a way that balances how the water is being used for the other purposes of the projects. Funding for the activities of the operating agencies and the three PMAs is subject to the annual congressional appropriation process under which the agencies obtain their funding for capital investments as well as for operations and maintenance expenses. PMA and operating agency officials and representatives of the PMAs’ customer associations have indicated a need to change how the federal hydropower program is being operated. They stated that the agencies’ planning and budgeting processes do not provide sufficient, predictable, and timely funding to facilitate the repair of the federal power plants. In addition, they pointed to various administrative and legal requirements that they believe cause the PMAs and operating agencies to generate and market power in an unbusinesslike manner. In this connection, they have advocated ways to manage the federal hydropower assets, discussed in the next section, that will address these concerns. Some representatives of the PMAs’ preference customers have advocated defederalizing the PMAs and the federal generating assets as a way of improving their operating efficiency and availability. 
For example, according to an official of an association of Western’s municipal power customers, the preference customers should purchase the federal generating and transmission assets of the Colorado River Storage Project in order to avoid the sharp rate increases that have characterized Western’s rates for the project since the late 1980s. It is important to note, however, that other preference customers continue to support federal ownership of the dams, reservoirs, and hydropower assets. These customers believe that, although some changes in the PMAs’ current practices could lower operating costs and improve efficiencies, as a whole the PMAs have offered high-quality, low-cost services while balancing the diverse needs of the beneficiaries of the federal multi-use projects. Moreover, representatives of investor-owned utilities or proponents of divestiture have questioned why the federal government continues to provide power in restructuring markets. First, electrifying rural areas was an important goal of the federal power program; however, this goal has been largely satisfied. Therefore, the need for the federal government’s involvement is questionable. Second, competition likely would enable wholesale and retail customers to choose from among competing power suppliers. This possibility again questions the need for the federal government to sell power. Third, the issue of providing low-cost PMA power to portions of 34 states in the South and West where the preference customers of the PMAs are located, but not to other areas, is debatable. And fourth, IOUs and other critics of PMA power state that, as federal agencies, the PMAs have advantages that IOUs do not have and therefore would compete with nonfederal parties on an uneven basis. For example, our work has shown that the PMAs have rates that do not recover all of the costs of generating, transmitting, and marketing power. 
Also, as federal agencies, the PMAs are not subject to income taxes or state regulatory oversight and have more flexible repayment and rate-setting methodologies. Fifth, the status quo continues the existing risk of nonrepayment of the federal investment. Because of the stakes involved in changing the management and ownership of federal water projects and hydropower plants, maintaining the status quo affords policymakers the opportunity to make careful decisions about how to proceed. The federal government’s role in balancing the multiple uses of water is important. It affects such things as how much water will be available to accommodate the expansion of metropolitan areas, how much water will be used to protect endangered species, and how much water will be needed to protect the harvesting of shellfish in the Apalachicola Bay, Florida. The Bureau and the Corps generate power while balancing these impacts. Any decisions that federal policymakers reach about changing how power is generated or how the water projects will be managed or owned will need to consider the impacts of the decisions on the uses of the water and the beneficiaries of the water projects. An advantage of the status quo is that it continues the federal role in balancing the multiple uses of the water and allows policymakers time to study these issues before they change the operations and/or ownership of the water and the power assets. Also, by preserving the existing multiple uses of the water projects and the projects’ beneficiaries, the status quo avoids the debate that is likely to occur if the Congress reexamines the agreements reached decades ago on federal involvement in power. For example, the status quo continues federal power’s role in helping promote the economies of rural areas, especially by providing inexpensive power to these areas for homes, businesses, municipalities, and irrigation. 
Many of the cooperatives that currently receive PMA power also have received direct loans or guarantees from RUS. According to Western officials, these cooperatives’ financial health depends in part on the availability of low-cost PMA power. This is of significant interest to the Treasury because of its need to recoup the balance these PMA customers owe on RUS loans or loan guarantees. Under the status quo, the PMAs’ revenues are to repay billions of dollars of the costs associated with joint and nonpower benefits for purposes such as irrigation and fish and wildlife protection. Because such benefits likely would not cease to exist if power revenues stopped paying for them, other sources of revenues would have to be located to fund them. In order to avoid increasing the federal deficit, one possible means of paying for these benefits would be for the Congress to fund them from increased tax receipts. However, if federal taxes and revenues could not be increased, then the Congress would need to offset the spending increase for the benefits by decreasing federal spending for other purposes. Alternatively, some costs could be allocated to categories that are not reimbursable through power rates or user fees—to flood control at the Pick-Sloan Program, for example. However, in such a case, additional revenues (such as new taxes or new user fees) or offsetting budget cuts would be needed to avoid increasing the budget deficit. In these cases, because of the need to find new revenues, uncertainty about repayment of the full Treasury investment would increase. Many options exist for improving the operations of the hydropower program while continuing federal ownership. 
These options can be grouped in several different ways, including (1) improving the planning, budgeting, and funding for capital repairs of the federal hydropower assets; (2) changing the PMAs’ power rates and repayment methodologies; (3) organizationally restructuring the federal hydropower program to improve its operating efficiency; and (4) eliminating the application of selected legal and administrative requirements to the federal program. In addition, the government could dispose of its high-cost hydropower projects. Some changes can be made by the PMAs and the operating agencies themselves, while others would require congressional action. Improving the operating efficiency of the federal hydropower program would not fully respond to the concerns of the advocates of complete divestiture or privatization, who believe that the government should not participate in a commercial activity. Those concerns could be satisfied only if the hydropower assets were fully divested; however, improving their operations under federal ownership would better safeguard the federal investment while continuing to balance the existing multiple purposes of the projects. Adoption of these improvements may have immediate benefits or may be considered an interim step toward full divestiture, if the Congress proceeds with that option. Federal agencies are traditionally funded through annual appropriations from the Congress. However, as stated in chapter 2, the federal budget process does not lend itself effectively to commercial activities. Under the current planning and budgeting process, the Bureau’s and the Corps’ project and field locations estimate the costs of and develop the budget proposals for capital repairs of not only hydropower facilities, but also dams, irrigation systems, navigation systems, and recreational facilities. 
Hydropower repairs may be assigned lower priorities than other items, and budget requests are also subjected to 10-percent to 15-percent reduction targets to reduce the federal deficit. Under these conditions, the PMAs’ power customers believe, and our previous work showed, that funding for needed repairs is at best uncertain and at times is not available when it is needed. Several alternatives present themselves for better ensuring that the federal hydropower resources are repaired in a timely fashion. Capital planning and budgeting could be instituted for the federal hydropower program. If the PMAs and the operating agencies were to adopt more businesslike capital planning and budgeting practices, they would be better able to systematically identify and fund improvements and repairs to their power systems. In addition to capital planning and budgeting, other approaches have been adopted. For instance, PMAs, operating agencies, and preference customers have reached agreements allowing customers to finance some capital repairs. The Bureau and the Corps need to improve their planning and budgeting process to facilitate timely repairs of their hydropower facilities. The Corps’ need was illustrated in our 1996 testimony on reliability issues at the Corps’ hydropower plants in the Southeast. The Corps recognized that long-term, comprehensive planning and budgeting systems are needed to identify and fund key repairs and rehabilitations at its hydroelectric power plants, especially in the current environment of static or declining budgets; however, under its current planning and budgeting system, its funding decisions cannot be based on such processes. Operating under the federal budgeting process, the Corps finds itself unable to ensure a predictable source of funding for capital projects at a time when its budget has been decreasing. 
Therefore, it gives priority to routine, ongoing maintenance and performs reactive, short-term repairs when its power plants experience unplanned outages. The federal budgeting process does not lend itself to funding extensive repairs and rehabilitations; when these actions eventually become essential, the Corps’ budgeting process requires extensive justifications that can take a year or longer to complete. During the early 1990s, the Corps was beginning to address its planning and budgeting needs, for instance, by beginning to rank proposed repair and rehabilitation projects. This effort was suspended in fiscal year 1995, but the Corps’ responsible headquarters official planned to direct the field locations to undertake the effort in time to be considered for the fiscal year 1998 budget. Moreover, in recognition of the need to spend more to repair and rehabilitate its hydropower plants, the Corps in fiscal years 1993 through 1997 requested appropriations for major rehabilitations of some of its hydropower plants. Ten major rehabilitation projects have been approved for funding during fiscal years 1993 to 2007, with a total cost of about $450 million. These projects are being funded from the Corps’ Construction-General account generally over a multiyear period and do not need to be re-budgeted annually. As described by Bureau officials, the Bureau’s planning and budgeting process, like the Corps’, is lengthy and complex, taking over 2 years to produce a known budget level. Because 10-percent to 15-percent budget cuts are applied to the initial budget and subsequent proposals made by the regions and their area offices, future funding levels are uncertain. For example, Bureau officials in the agency’s Billings, Montana, regional office, described the lengthy budget process they expected to undergo to achieve a budget for fiscal year 2000. 
From the regional perspective, the process began in August 1997 when the regional office received the initial budget proposals from its area offices. During the ensuing 16 months, scheduled to end in December 1998, the area offices, the region, the Bureau’s Denver Office, the Bureau’s Washington Office, the Office of the Secretary of the Interior, and the Office of Management and Budget will review, discuss, and repeatedly revise the proposed area office and regional office budgets, resulting in a consolidated budget for the Bureau and the Department of the Interior. Although by December 1998 the Department will have informed the regional office of expected funding levels for fiscal year 2000, certainty about expected funding levels will not be attained until some time between February 1999, when the Office of Management and Budget will assemble and convey the President’s budget to the Congress, and October 1, 1999, the start of fiscal year 2000. Funding from sources other than federal appropriations has been suggested as one option to improve how the PMAs and the operating agencies pay for repairs of the federal hydropower assets. Although use of nonfederal funds to finance federal agencies’ operations is generally prohibited unless specifically authorized by the Congress, several forms of alternative financing have been authorized by the Congress, according to agency officials. Through one type of authorized arrangement, referred to, among other names, as “advance of funds,” nonfederal entities, such as preference customers, pay for repairs and upgrades of the federal hydropower facilities. Under federal budget statutes, such funding must be ensured before work on a project can be started. For example, Western’s customers are providing advance funding to renovate the generating units at the Bureau’s Shasta power plant in the Central Valley Project. 
Under an agreement between the Bureau, Western, and the preference customers, the customers may finance up to $21 million and deposit the funds in an escrow account to pay for the work. The Bureau accepts the customers' funds under the Contributed Funds Act. Customers may be repaid in various ways, including offsets to power rates under which (1) expenses funded from advances from customers are excluded from the revenue requirement for repayment purposes or (2) customers' monthly power bills are credited for the amount each customer paid to the escrow account. In the case of the Shasta power plant, the customers who contributed funds will be issued credits on their monthly power bills from Western; those that did not contribute funds will not be issued credits. According to the Bureau, this arrangement ensures that all customers contribute. When the work is completed, the entire repair cost will have been expensed throughout the construction period with advance funding from PMA customers. Under another form of alternative financing, referred to as "net billing," invoice amounts are netted out among parties who perform work or provide services for each other, resulting in the issuance of one check instead of multiple checks. Net billing has been used for purchased power and wheeling for several projects—Central Valley, Loveland Area, and Pick-Sloan, according to Western officials. Western estimates that the use of net billing has reduced appropriation requirements by between $40 million and $50 million annually. Under a variation of net billing, referred to as "bill crediting," a customer agrees to pay one or more of the PMA's bills in exchange for an equivalent credit on the customer's power bill. Bill crediting has the same uses as net billing. 
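The netting arithmetic behind net billing and bill crediting can be sketched in a few lines. This is an illustrative example with hypothetical dollar amounts, not the PMAs' actual settlement procedure:

```python
# Illustrative netting of two invoices between a PMA and a customer:
# the customer owes the PMA for federal power, and the PMA owes the
# customer (here, hypothetically, for wheeling services). Netting the
# invoices results in a single payment instead of two.

def net_bill(pma_owes_customer: float, customer_owes_pma: float) -> tuple[str, float]:
    """Return who pays and the single netted amount."""
    net = customer_owes_pma - pma_owes_customer
    if net >= 0:
        return ("customer pays PMA", net)
    return ("PMA pays customer", -net)

# Hypothetical monthly figures: the customer's power bill is $120,000;
# the PMA owes the customer $45,000.
payer, amount = net_bill(pma_owes_customer=45_000, customer_owes_pma=120_000)
print(payer, amount)  # customer pays PMA 75000
```

Because the PMA's own bill is settled by offset rather than paid from appropriated funds, the netted arrangement reduces the appropriations the PMA would otherwise need.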
Western estimates that bill crediting has reduced appropriations requirements by between $45 million and $60 million annually, mostly in the Central Valley Project, and that increased use for the Loveland and Pick-Sloan projects could reduce the appropriations requirements by an additional $2 million to $7 million annually. Supporters of alternative financing, among them officials from the Bureau, the Corps, the PMAs, and the PMAs' customers, note that its use allows repairs and improvements to be made more expeditiously and predictably than through the federal appropriations process. They believe that alternative financing could provide more certainty in funding repairs and help address problems such as deferred maintenance at Corps-operated plants that provide power marketed by Southeastern. Alternative financing would also move certain costs out of the budget cycle, decreasing the need for appropriations that must be repaid through the PMAs' power revenues. For example, as of January 1998, Bonneville had entered into long-term agreements with the Bureau and the Corps that will allow Bonneville to directly fund about $150 million in capital improvements and operations and maintenance of the federal hydropower assets in the Pacific Northwest. According to Bonneville, these arrangements will shorten the time needed to secure funding for repairs and maintenance and will remove maintenance as a funding item that must compete with other federal budget priorities. The agreements also promote coordination between Bonneville, the Bureau, and the Corps in budgeting for future maintenance and repairs. Bonneville estimates that this closer coordination will produce operating efficiencies that can reduce costs by up to about $48 million per year. 
However, Corps and DOE officials cautioned that expanded use of alternative financing may not be prudent because, depending on how it is implemented, oversight by the Congress and the Office of Management and Budget may decrease. According to Bureau and DOE officials, however, the Congress could act to preserve such oversight. For example, Bureau officials believe that to provide for oversight, the agencies could be required to submit data on expenditures to the Office of Management and Budget and to the Congress. Expanded use of alternative financing may require legislative action, especially for the projects operated by the Army's Corps of Engineers. In a July 1996 memorandum, the Army's Office of the General Counsel concluded that although the Army has some existing authority to accept funds from outside parties to finance replacements, improvements, and other work at the Corps' hydropower facilities, the use of these funds must be reviewed case by case and is limited to funds from states and their subdivisions. According to the memorandum, the Congress may have to enact more specific legislation to (1) clarify the terms under which such funds may be accepted, including the kind of work that they could pay for, and (2) establish the framework under which the Army, the PMAs, and the customers should proceed with such alternative financing. The Congress could expand the use of revolving funds. Under one revolving fund arrangement, a fund established by a one-time permanent appropriation is replenished through revenues, which, in the case of the PMAs, are generated by the sale of power or other services and credited directly to the fund, instead of being replenished through annual appropriations. The Congress has authorized the use of these funds at such projects as the Colorado River Storage and Fort Peck projects to fund operation, maintenance, and replacement costs. 
Proponents of revolving funds, including some officials of Western, the Bureau, and a PMA customer group, note that the funds allow repairs and improvements to be financed more expeditiously and predictably than the federal appropriations process does. Like alternative financing, revolving funds remove some costs from the budget cycle, thereby decreasing the need for reimbursable appropriations. Thus, revolving funds enable the federal power-related operations to be self-financing and also offer customers more opportunities to consult with the agencies on how to spend funds to repair and maintain the hydropower assets. However, officials of PMA customer groups and the Office of Management and Budget also stated that the use of revolving funds could reduce oversight by external parties such as the Congress and the Office of Management and Budget and/or may allow repayment obligations to be incurred that are not routinely approved by these entities. However, the Congress could be kept informed of the operating agencies’ and the PMAs’ spending plans through the annual appropriations process. For example, the PMAs could be required to submit their annual operations and maintenance budgets to the congressional oversight committees. A 1993 DOE legislative proposal, which was not enacted, would have provided for separate accounts established in the U.S. Treasury to be funded from all sources, including sales of power and other services as well as other collections by, contributions to, and appropriations for Southeastern, Southwestern, and Western. These PMAs, the Bureau, and the Corps would use these accounts to pay for the operations, maintenance, and rehabilitation of their power assets. The PMAs would have submitted their annual operations and maintenance (O&M) budgets to their budget committees, including estimates of the PMAs’ and the operating agencies’ O&M spending, project by project. 
Officials of the Bureau, Western, and a PMA customer group voiced concerns that revolving funds increase the likelihood that nonpower costs, such as environmental initiatives and repayment of obligations to Native Americans, will be added to the revenue requirements base, with rate impacts that are not fully apparent until later. For example, under bills proposed in both the House and the Senate, a potential future cost of up to about $4.5 million would be financed with payments from the Upper Colorado River Basin Fund to divest the lands, structures (including homes), and community infrastructure of the Bureau’s Dutch John, Utah, community that the Secretaries of Agriculture and of the Interior identify as unnecessary. A Bureau official estimated that the agency may incur an additional $300,000 over a 2-year period to administer the transfer of assets. In a related option, the Congress could authorize the three PMAs to use a portion of their revenues from power sales to directly fund statutorily defined hydropower-related activities of the operating agencies instead of turning the revenues over to the Treasury. The Energy Policy Act of 1992, for example, authorizes Bonneville to directly fund such activities at Bureau and Corps’ hydropower projects in the Pacific Northwest. If the Congress authorizes other PMAs to directly fund hydropower assets of their operating agencies, the PMAs’ access to nonappropriated funds, such as those provided to Bonneville, would be one way to pay for the projects. The Congress, however, may wish to consider limiting the types of projects that may be so funded, as it did for Bonneville. Arguments can be made that the way the PMAs establish their revenue requirements and the way they set their rates need to be changed. 
As noted in our recent products, for example, although the PMAs generally follow applicable laws and regulations, their power rates are not recovering all of the costs associated with generating, transmitting, and marketing federal power. Such cost recovery is generally required by the Reclamation Project Act of 1939 and the Flood Control Act of 1944. DOE's cost recovery order (Order RA 6120.2), however, excludes certain costs associated with facilities that are not operational and is not specific about the recovery of other costs. The PMAs have consequently interpreted the order to exclude certain costs from their rates. In addition, the nonrepayment of some federal investments in hydropower capacity and other assets (most importantly, irrigation facilities) assigned to power for repayment raises the issue of whether these investments will be recovered under the current repayment methods. A question also arises about whether the PMAs should be required to continue to market their power on the basis of cost-of-service pricing when other parts of the industry are being encouraged to market their wholesale power on a competitive basis. This section discusses various ways that the PMAs could better recover the costs associated with the federal power program: (1) increasing the PMAs' power rates, (2) charging rates based on competition, (3) changing the repayment methodology to recover the federal investment faster and decrease the risk of nonrepayment, (4) reallocating costs among the water projects' multiple purposes, and (5) merging rate-setting systems to promote the repayment of costs at certain facilities. Although these changes would address some unrecovered costs that we identified, they would not address all such costs. 
For example, such unrecovered costs as those associated with the incomplete irrigation facilities at the Pick-Sloan Program, facilities that are not operating because of a lawsuit at the Russell project, or environmental mitigation costs legally exempted from Western's rates at the Glen Canyon and Shasta dams would not be addressed. Several of the methods listed could result in rate increases, but decisionmakers should consider that increasing the PMAs' rates is in the government's interest only as long as the rates do not rise to the point of being noncompetitive. Because the PMAs already sell power from a few of the more than 100 federal water projects whose power they market at prices at or near the prevailing market price, a rate increase at those projects could be counterproductive and could not be sustained in a competitive marketplace. In addition, some are concerned that rate increases would harm rural communities and customers. Relying on Office of Management and Budget Circular A-25 on user fees as well as industry practices and federal accounting standards, our past reports identified a number of power-related costs that had not yet been fully recovered through the PMAs' electricity rates. Such costs include those for postretirement health benefits and a portion of Civil Service Retirement System benefits for current employees of the PMAs and the operating agencies, construction costs for some projects that were completed or under construction, and construction and O&M costs for hydropower facilities and water storage reservoirs that are infeasible and therefore not expected to be completed. Rates could be increased to fully recover some of these costs. For instance, the full costs associated with the postretirement health benefits and the Civil Service Retirement System benefits could be recovered through power rates. 
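As a rough illustration of how adding previously unrecovered costs would raise rates, consider this minimal cost-of-service sketch. The simplified revenue-requirement formula and all figures are hypothetical; the PMAs' actual power repayment studies are far more detailed:

```python
# Minimal, hypothetical cost-of-service rate calculation. It only
# illustrates how adding a previously excluded cost (e.g., postretirement
# benefits) raises the rate needed to recover the revenue requirement.

def rate_cents_per_kwh(om: float, interest: float, amortization: float,
                       energy_sales_kwh: float, excluded_costs: float = 0.0) -> float:
    """Revenue requirement spread over expected energy sales, in cents/kWh."""
    revenue_requirement = om + interest + amortization + excluded_costs
    return revenue_requirement / energy_sales_kwh * 100

# Hypothetical system: $40M O&M, $15M interest, $10M amortization,
# 3 billion kWh sold; then add $6M of previously excluded benefit costs.
base = rate_cents_per_kwh(40e6, 15e6, 10e6, 3e9)
full = rate_cents_per_kwh(40e6, 15e6, 10e6, 3e9, excluded_costs=6e6)
print(f"{base:.2f} -> {full:.2f} cents/kWh")  # 2.17 -> 2.37 cents/kWh
```

Whether such an increase is sustainable depends, as discussed above, on how the resulting rate compares with prevailing market prices.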
The three PMAs will begin the process of recovering pension and postretirement health benefit costs by including the unfunded liability of the Civil Service Retirement System and postretirement health and life insurance costs of power-related employees in their power repayment studies, beginning in fiscal year 1998. Revenues from rate increases could also pay for unrecovered capital costs for projects that are under construction or not yet in commercial operation when those projects are brought on line. Under DOE's repayment guidance, the recovery of some federal investments in hydropower has been deferred until projects are completed and placed into commercial operation. These costs are to be repaid when these projects come on line, although the resulting rate increases may be substantial. For example, a Southeastern official stated that the costs for the nonoperational pumping units at the Corps' Russell project, which he estimated at about $528 million as of August 1997, are not yet subject to repayment. Because of litigation over large fish kills, these units have not been allowed to operate commercially and these costs have not been included in Southeastern's rates. However, if the nonoperational units come on line, these costs would be recovered through rates. The resulting rate increase for customers of that particular rate-setting system may be as high as 25 percent, but in this instance the power would still be competitively priced, according to this official. The industry is being encouraged to set its power rates competitively rather than on a cost-of-service basis. Therefore, the Congress could enact legislation authorizing or directing the PMAs to change from cost-of-service rates to rates based on competition. In accordance with legislation, the PMAs are to set their rates at the lowest possible level consistent with sound business principles and market their power primarily to preference customers. 
Because the three PMAs’ overall average revenue per kWh is at least 40 percent below existing market rates, charging market rates for PMA power would most likely cause the PMAs’ rates to rise. With higher rates, the PMAs’ revenue would be likely to increase and, consequently, the risk of nonrepayment of the federal investment would be likely to decrease as long as the rates remain competitive relative to prevailing market rates. The Congress or the Secretary of Energy could require the methodology for repaying PMA debt to be changed in order to recover the federal investment more quickly. Such a change could increase the PMAs’ rates and revenues as well as the rate of repayment to the Treasury. Under DOE’s current policy and consistent with applicable laws, the PMAs may defer repayment of annual expenses when power revenues do not meet repayment needs during low water years. Deferred annual expenses accrue interest at a current interest rate until they are repaid and generally must be repaid prior to the PMAs’ repaying the principal investment. When repaying principal investment, the PMAs generally must repay their highest interest-bearing debt first rather than the oldest debt. These provisions establish some of the financing flexibility the PMAs need because their revenue reflects the year-to-year variability of water flows and hydropower generation; however, they also result in rates that are lower than they otherwise would be, slower repayment of the federal investment, and a net cost to the Treasury because interest rates on the outstanding federal investment are substantially below the rates Treasury incurs to provide funding to the PMAs and other federal programs. Repaying the federal investment faster would decrease the Treasury’s interest costs and the amount at risk for nonrepayment. However, as for any alternative that increases rates, policymakers would need to consider the impact on the PMAs’ customers and their region. 
The Congress, or in some cases the operating agencies, could revise the formulas used to allocate costs currently assigned to the multiple purposes of the federal water projects or the “joint costs” (those shared among more than one of the purposes—for example, the capital costs associated with the dam). In some cases, this action would reduce the capital investment that would have to be repaid through the rates the PMAs charge for electricity. For example, officials of the Corps and Western’s preference customers noted that some projects currently allocate little or no costs to recreation or water quality, even though these categories have become increasingly important purposes since the operating agencies prepared the project cost allocations. Through reallocation, a portion of the costs assigned to power would be reassigned to recreation and the electric rates could be lowered accordingly. However, reallocations could result in some costs that are currently being repaid through power revenues—for example, most irrigation-related costs—needing to be repaid through other means. Absent action by the Congress or the operating agencies to institute or increase existing user fees for the activities currently repaid through power revenues, these costs could end up not being repaid. Thus, while the PMAs’ ratepayers could be relieved of the repayment burden of costs no longer assigned to power, the federal taxpayer may end up bearing the burden instead. Also, in commenting on our draft report, DOE’s Power Marketing Liaison Office noted that the equity of certain project beneficiaries (for example, power customers) having to repay more than their fair share of multipurpose costs also needs to be addressed. In some cases, congressional action would be required to authorize a reallocation of costs. 
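A stylized example of how reallocating joint costs changes power's repayment burden (the joint cost and percentage shares are hypothetical):

```python
# Stylized joint-cost reallocation among a water project's purposes.
# Shifting part of the joint cost from power to recreation lowers the
# investment that power rates must repay, but the shifted share must then
# be recovered some other way (e.g., user fees) or it falls to taxpayers.

def allocate(joint_cost: float, shares: dict[str, float]) -> dict[str, float]:
    """Split a project's joint cost among purposes by percentage share."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {purpose: joint_cost * share for purpose, share in shares.items()}

original = allocate(300e6, {"power": 0.60, "irrigation": 0.30, "recreation": 0.10})
revised = allocate(300e6, {"power": 0.45, "irrigation": 0.30, "recreation": 0.25})
savings_to_power = original["power"] - revised["power"]
print(f"power's repayment burden falls by ${savings_to_power/1e6:.0f}M")
```

The shifted share does not disappear: unless recreation user fees or other revenues pick it up, it falls to the federal taxpayer, which is the tradeoff discussed above.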
For example, as of September 30, 1994, the federal government had about $454 million in federal investment (1) in the Pick-Sloan Program’s hydropower capacity that was initially designed to be used by future irrigation projects and (2) in the costs associated with storing water for these projects. Although these costs are scheduled to be repaid through Western’s power revenues, under Western’s statutory repayment principles, these costs, which we estimated at $464 million as of September 30, 1996, cannot be recovered unless the associated irrigation projects come into service. According to the Bureau, however, almost all of these planned irrigation projects are infeasible and are unlikely to be completed. Reallocating the $464 million from irrigation to hydropower would help ensure full recovery, but without legislative action to do so, it is probable that Western’s power rates will not recover the principal or any interest on it. For some facilities, rate-setting systems could be merged to expedite repayment. For example, at two facilities—the Stampede Powerplant at the Bureau’s Washoe Project and the Mead-Phoenix Transmission Line, which is partially owned by Western, with a combined federal investment of at least $108 million, as of September 30, 1996—Western generated insufficient income to recover capital and operating costs. Western officials are considering a merger of the Washoe and Mead-Phoenix rates with others, resulting in blended rates and increasing somewhat the likelihood of full repayment of the federal investment. In recognition of the changing power markets, the Congress could restructure the PMAs organizationally to better enable them to compete. It can be argued that such changes could provide the PMAs with the flexibility to respond better to market changes and to the needs of their customers, thereby helping to ensure the PMAs’ survival and the repayment of the federal investment. 
It can also be argued that the PMAs’ federal responsibilities should be continued because of the need to balance the multiple purposes of the water projects. Also, restructuring the PMAs may be seen as an interim step to privatizing them and the operating agencies’ hydropower-related assets. However, absent congressional action and depending on how the program might be reorganized, any restructuring of the PMAs that increases their operational independence may decrease congressional and other oversight. TVA, a wholly owned federal utility with little external oversight, used its financial ties to the federal government and its operational independence to embark on an ambitious nuclear power building program that resulted in nearly $28 billion in debt, as of September 30, 1996. This debt puts TVA at a competitive disadvantage, especially if the Congress were to revise legislation and require TVA to compete with other power suppliers. TVA’s experience highlights the need for the Congress to carefully consider what oversight would be needed before allowing the PMAs to restructure to be more competitive. The Congress could enact laws to authorize the PMAs to operate as federally owned corporations. This type of restructuring, “corporatization,” would allow a government entity that serves a public function of a predominantly business nature to operate in a more efficient, businesslike fashion, while preserving the public service goals that are unique to federal agencies (for example, revenues from Western’s sale of power are scheduled to pay for most of the federal investment in irrigation facilities). Establishing a PMA as a government corporation has been formally proposed in recent years. In 1994, a proposal was drafted to corporatize Bonneville as a way to help maintain its competitiveness. 
Bonneville has been faced with competition from alternative power sources with lower costs, debt that exceeded $17 billion as of September 30, 1996, and upward pressure on its costs, caused in part by expanded, more costly efforts to protect salmon. The proposal was based on a recommendation in a National Academy of Public Administration report that examined alternative structures to achieve the maximum efficiency and effectiveness at Bonneville. The administration considered legislation to make Bonneville a wholly owned government corporation under the Government Corporation Control Act. This action was intended to increase Bonneville's flexibility over personnel; procurement; property management; and budgetary, litigation, and claims settlement functions and to enable Bonneville to compete more effectively in electric power markets. Bonneville estimated that the savings from corporatization would have been as much as $30 million annually. Because the other three PMAs' operations are much smaller than Bonneville's, the estimated savings from their corporatization would likely be smaller. Corporatization may also permit repairs and improvements to be financed more expeditiously and predictably than under the federal appropriations process. Presuming that a revolving fund would be established as part of the corporatization, the corporation could operate in a businesslike fashion, without having to submit a budget request for annual appropriations to finance operations. Although the electric utility industry is now unbundling its services, the generation, transmission, and marketing aspects could, depending on how the government corporation was structured, be put under one agency, possibly reducing overhead. Each PMA could be established as a separate corporation, or two or more of the PMAs (Southeastern and Southwestern, for instance) could be merged. 
The latter option may afford the economies of scale necessary to make the new corporation or corporations viable, according to a Corps headquarters official. Alternatively, distinct federal rate-setting systems could be corporatized as separate entities from the rest of the PMA. Western officials responsible for marketing power from the Bureau’s power plants within the Salt Lake City Integrated Projects—the Colorado River Storage Project plus the Provo River, Falcon-Amistad, and other projects that are aggregated for rate-setting purposes—suggested that their marketing program could be corporatized. They said that it already benefits from substantial operating and budgeting independence because its operations are financed from a revolving fund. However, in its response to our draft report, DOE’s Power Marketing Liaison Office stated that it is not Western’s policy to support the corporatization of this marketing program at this time. If the government’s objective is to eventually end its participation in a “commercial” activity, corporatization could be an interim step toward divestiture of its hydropower-related assets. In a 1995 report on the privatization or divestiture practices of other nations, we noted that the five nations we reviewed generally (1) converted government agencies or functions into a corporate form before privatizing them or (2) primarily privatized entities already in a corporate form. Converting a government department into a corporate entity, followed in many cases by a privatization, has been common worldwide during the past decade. In New Zealand, for example, the government included a set of reform principles designed to improve performance in the delivery of public sector goods and services in the State-Owned Enterprises Act of 1986. The government anticipated that entities corporatized under this act would be subject to the same regulation, antitrust, tax, and company law as private enterprise. 
The restructuring of the electricity industry commenced with the corporatization of the government's generation and transmission capacity in 1987, corporatization of the retail power companies in 1993, full deregulation of the retail sector in 1993 and 1994, and establishment of a competitive wholesale electricity market in 1996. According to a former New Zealand government official, the government privatized seven small government-owned generating projects in 1995. Additional privatizations of generation facilities, while possible, are not anticipated, according to New Zealand's Energy and Finance Ministers. The changes in electricity rates since New Zealand's restructuring of the electricity sector are noteworthy, according to a former New Zealand government official we interviewed. Although very large rate increases had been feared for farmers, for example, rural rates declined by about 40 percent in real terms from 1987, when the reform process started, to 1994, according to one study. Cross subsidies between customer classes are reported to be greatly reduced. Over the longer term, inflation-adjusted retail domestic (residential) rates increased by about 5 percent to 15 percent from 1985 through 1997 and by about 16 percent to 20 percent from 1990 through 1997, according to the New Zealand Ministry of Commerce. Commercial rates, on the other hand, decreased by about 20 percent to 28 percent from 1985 through 1997 and by about 1 percent to 9 percent from 1990 through 1997. In the United States, experience with such conversions after interim corporatization of government activities has been limited. For example, the Congress enacted legislation in 1992 to corporatize DOE's uranium enrichment operations as the U.S. Enrichment Corporation in a transitional step toward eventual privatization. Similarly, a bill now in House committees would convert the three PMAs into corporations as an interim step toward their privatization. 
Despite the advantages, creation of a government corporation could significantly reduce the amount of oversight the entity receives. In the past, we have suggested that the Congress strengthen the oversight and accountability of government corporations. For example, over the years, we and others have characterized TVA, an existing wholly owned federal corporation, as having insufficient independent oversight. Some have noted, moreover, that an entity that resulted from a merger of, for instance, the Bureau’s water management and power generating responsibilities with Western’s power marketing responsibility could experience conflicts among these three different roles. The Congress could consolidate the power-related operations of the operating agencies and the PMAs. Some operational improvements and cost savings could result. Officials at the Bureau’s Denver office recommended that Western’s assets be returned to the Bureau so that the Bureau could better coordinate the multiple purposes of the water projects, while reducing overhead. They estimate that overhead costs could be reduced by up to 30 percent if Western’s power marketing activities were consolidated within the Bureau. Although the Bureau and the Corps previously marketed the power they generate, concerns exist about reconsolidating the power marketing function in these agencies because of the need to balance the needs of hydropower with the needs of the other activities the agencies pursue. Each agency has its own priorities, which do not always favor maximizing power revenues. For example, the Congress may provide funds to the Corps to upgrade a failing generator, but if a key lock in the Corps’ navigation system were disabled, the Corps might divert the funds intended for the generator to the lock. This could prolong an outage at the power plant and cause the government to lose revenue. 
Although a Corps headquarters official stated that this scenario occurred infrequently, he said that a repair project may be deferred because of conflicting priorities. At the same time, if the power generating activities of the Corps and the Bureau were consolidated within the PMAs, the PMAs, which have a primary mission of marketing power, may inadequately consider the other purposes of the water projects when operating the power plants. In addition, consolidations clash with the developing trend among vertically integrated power utilities to segregate generation, transmission, distribution, and ancillary services. Bureau, Corps, and PMA officials believe that some of the legal and administrative requirements that their agencies must follow cause them to operate in an unbusinesslike fashion and may cause the PMAs’ power rates to increase. For example, aware of the need to operate more efficiently, in February 1996 Western chartered an internal study designed to identify and address laws, regulations, and rules that it determined to be counterproductive to its functioning in a businesslike manner. Although many of the study’s recommendations are administrative in nature, Western identified opportunities to improve its performance that ranged from a few thousand dollars to millions of dollars. For example, the report on the study recommends that Western request an exemption from DOE’s requirement to report quarterly on safety. Western contends the report is of no value, but exempting it from this requirement could save Western $6,630 annually. In another example, Western estimated that if it used a credit card to purchase supplies and services instead of purchase orders, it could save over $500,000 annually. In an example that would require legislative action, exempting Western from the statutory requirements in the Federal Acquisition Regulations about taking sealed bids for procurements could save the agency $115,600 annually. 
Of more consequence, the Congress could allow Western to pay prevailing local area wages instead of those required by the Service Contract Act of 1965. The report states that such an amendment could save Western about $6.2 million annually. The scope of Western’s study included the Code of Federal Regulations, the Federal Acquisition Regulations, executive orders, DOE’s orders and guidelines, and other directives. The Congress could pass legislation that would allow the Bureau and the Corps to divest themselves of projects that have power generating costs that exceed the costs and rates of their rate-setting system. Officials from the Bureau, officials from two of Western’s customer groups, and representatives of some of Southwestern’s customers suggested that the PMAs could operate more efficiently and reduce pressure to raise power rates if the operating agencies were allowed to dispose of several plants that produce higher-cost power. Collectively, they suggested some of the hydropower plants at the Bureau’s Collbran, Dolores, Loveland Area, and Rio Grande projects as candidates for disposal. According to Bureau officials, some of these projects associated with the Colorado River Storage Project produce power at costs ranging from about 3.5 to 6 cents per kWh, whereas Western sells power at a composite firm rate of about 2 cents per kWh for the Colorado River Storage Project. According to a Corps official, one obvious problem with this option is finding a willing buyer for these inefficient units. Also, to the extent that power revenues cease to pay for some of the federal investment in constructing these units, the taxpayers would assume a larger burden. Whether the government’s investment in these projects is fully recovered depends on the terms and conditions of the sale and the resulting price received for the assets. 
Consistent with the philosophy that the government should not be involved in commercial activities that are best left to the nonfederal or private sector, the Congress could enact legislation to divest the PMAs and the government’s hydropower assets. As we concluded in our March 1997 report, divesting the federal hydropower assets, while possible, would be complicated for several reasons. Any divestiture of hydropower-related assets would need to balance the multiple purposes of the water projects that limit and define how water is released through the turbines, how and when electricity can be generated, and in what quantities. These federal responsibilities would not necessarily terminate after a divestiture. Other factors would also have to be accommodated. These factors include the types of assets being divested, the conditions attached to the sale and the use of the assets after the divestiture, the operating conditions of the assets, the sales mechanism used, and the impact of the divestiture on regional economies, including the impact on regional electricity prices. Of particular note, the impact of a divestiture on the future rates of the preference customers would have to be considered. If the PMAs were privatized, rates would likely increase to varying degrees for most of the current preference customers. Together, these factors complicate the sale of federal hydropower assets and at the same time could affect the willingness of potential buyers to bid on the federal hydropower assets and the price the government could obtain for them. It should be noted that customers themselves have proposed defederalization of the federal hydropower assets. For example, in 1995, 37 of Western’s preference customers advocated an arrangement whereby they would purchase, lease, or obtain other rights to the federal hydropower generating assets within the Boulder Canyon and Parker-Davis projects, as well as certain transmission projects. 
According to a representative of these customers, this proposal was made to prevent an investor-owned utility from acquiring the federal power resources and was also a reaction against other privatization proposals that were being presented at that time. With very few exceptions, federal hydropower projects have multiple purposes specified in their authorizing legislation. For example, the Corps’ Fort Peck project on the Missouri River in Montana has hydropower as a purpose as well as providing for fish and wildlife habitat, flood control, irrigation, navigation, recreation, water quality, and water supply. Multiple purposes are often complementary but are sometimes at odds. For example, water is stored in and released from a reservoir to provide for recreation, but releasing it through the turbines on a schedule intended to maximize power revenue can conflict with that use. Similarly, Western’s Billings, Montana, office forecasts decreases in power revenues in the long term because water, which would otherwise be used to generate electricity, will be increasingly used for irrigation and for other purposes. In its fiscal year 1995 repayment study, Western predicted that revenues from the sale of hydropower would decrease from about $253 million in 2001 to about $213 million (in constant 1995 dollars) in fiscal year 2080 for the Pick-Sloan Program. At the Bureau’s and the Corps’ water projects, power generation is defined and constrained by the requirement to manage the water for other purposes. The Bureau, for instance, at some projects has restricted releases through the turbines to mitigate environmental impacts downstream. The need to manage water for multiple purposes and to generate hydropower in a way that balances other purposes would have to be accommodated even after a divestiture occurs, absent congressional action. In addition, the water rights of Native Americans and of states would need to be accommodated in the event of a divestiture. 
According to Bureau officials, Native Americans’ rights to water at some federal water projects are the earliest and thus supersede the use of water for other purposes, including hydropower generation. As an example, Bureau officials cited a legal settlement with tribal entities of the Fort Peck Reservation in Montana that includes the right to about 1 million acre-feet of water from the Missouri River. In addition, according to DOE’s Power Marketing Liaison Office, a divestiture may have to address how to transfer out of federal ownership the transmission lines and rights-of-way that traverse tribal lands. The tribes may be concerned about the transfer or sale of such lines to private parties. States also have water rights, and the Bureau and the Corps are increasingly arbitrating between the claims of various states. For example, for several years, Alabama, Florida, and Georgia have been contesting the uses of water in two river basins in the Southeast that the Corps manages. As stated in our March 1997 report, the three general ways the government could divest itself of its hydropower assets are divesting (1) only the PMAs (including the right to market power and any associated federally owned transmission assets); (2) the PMAs and the generating assets of the Bureau or the Corps or both; and (3) the PMAs, the generating assets, and the balance of the projects (for example, the dams and the reservoirs). Divesting combinations of these assets is also possible. In general, divesting only the PMAs and the hydropower generating assets would be less complicated than divesting the balance of the projects because the first two alternatives retain the Bureau and the Corps in their role of managing how water is used and in balancing the projects’ multiple purposes. The kinds of assets divested will influence the regulatory issues accompanying a divestiture. 
Many options for regulating the operations of divested hydropower assets exist, including regulatory regimes that could be established by federal, state, or regional authorities. FERC, which currently licenses the operation of nonfederal hydropower assets, primarily regulates the reasonableness of wholesale rates charged by the PMAs but does not provide more detailed oversight. According to FERC officials, FERC has experience regulating the multipurpose aspects of water development at over 1,600 projects nationwide pursuant to much the same multiple-use standards as apply to federal projects. FERC, however, does not have complete authority to set regulatory requirements. Other federal and state agencies, through FERC’s regulatory process, may impose mandatory conditions on FERC’s licenses, which complicate FERC’s licensing process. If only the PMAs (including their rights to sell power and any transmission lines) were divested, then the Bureau and the Corps would continue to operate the hydropower plants, dams, and reservoirs in accordance with existing plans, guidelines, and regulations. In such a case, the buyer would not need a FERC-issued license; the Bureau and the Corps would continue to manage the water as in the past, the existing restrictions would be likely to remain in effect, and the buyer would market the power subject to the same conditions as the former PMA. According to FERC officials, they prefer to license all of a project’s features that have a role in power production. However, if the power plants were divested as well, the new owner would be required to obtain an operating license from FERC, unless this requirement was specifically exempted by law. Licensing a divested plant could take a long time. We reported, for example, that the median processing time for 111 projects applying for relicensing between January 1982 and May 1992 was 2.5 years. Some had taken as long as 10 to 15 years. 
In January 1998, a FERC official told us that the median time to relicense over 150 projects whose licenses expired in 1993—the most recent data FERC had analyzed—was about 30 months. If a divestiture involves a PMA, the power plants, and the balance of the water projects (most importantly, the dams and reservoirs), the Bureau and the Corps would no longer fill the role of specifying the operating conditions of the project. Instead, safeguards for the multiple uses of the water would primarily be contained in the conditions FERC would attach to the operating license pursuant to the Federal Power Act. In such an event, in licensing the hydropower plant, FERC would be required to weigh the plant’s impact on such aspects as the environment and recreation. Licensing would therefore be complicated by the need to complete a number of studies on the power plant’s impact on fish, plant, and wildlife species; water use and quality; and any nearby cultural and archeological resources. Moreover, the government of each affected state would have the opportunity to issue a water quality certification. FERC officials also cautioned that if power plants, dams, and reservoirs were sold, then FERC’s licensing process could revisit the management and uses of the water pursuant to the Federal Power Act and possibly change the operation of the project, potentially affecting power generation. In connection with this issue, the executive director of the National Hydropower Association stated that nonfederal hydropower plants are losing generating capacity because of environmental restrictions or mitigations that are attached as conditions to their operating licenses as FERC relicenses those plants. Moreover, according to a September 1997 report by DOE’s Idaho National Engineering Laboratory, at the time of relicensing, 96 percent of the peaking projects relicensed since 1987 have had their ability to meet peak demand reduced. 
Of the 52 projects that were relicensed from 1987 to 1996, FERC added capacity to only 4 projects, but the remaining 48 projects had their ability to meet peak demand reduced by 0.4 to 54.3 percent of their previous capacity; the average reduction was 6 percent. Also, FERC’s review of over 130 projects licensed from the 157 applications filed in 1991 shows that while generating capacity had a very small increase, actual electricity generation had a very small decrease—less than 1 percent. The explicit and implicit liabilities borne by the government and which of those liabilities would transfer to a buyer would also affect the price obtained for the federal power assets. Sales of some or all of the hydropower assets—at prices that exceed the value to the government—would produce budgetary savings in the long run, according to a November 1997 report by the Congressional Budget Office. The report estimates that the combined assets of the three PMAs may be worth between about $8 billion and $11 billion. A sale could also result in a future stream of tax payments to the Treasury, also depending on the divestiture’s terms and conditions. However, the report states that losses are possible, depending on the terms and conditions of the sale. In addition, as a matter of general principle, policymakers would need to take into consideration the fact that assets that are sold with many or relatively onerous restrictions (from the viewpoint of a prospective purchaser) or uncertainties about future operations are correspondingly less attractive and are likely to sell for less. While the government may still choose to place restrictions or to assign or retain certain liabilities, the financial consequences in terms of the sale price should be assessed. If the government’s objective is to obtain the maximum possible price for its assets, the government could retain certain liabilities that could reduce risks to potential buyers. 
In some cases, the federal government could be in a better position than the buyer to bear certain risks. For instance, in the proposed divestiture of the U.S. Enrichment Corporation, the government would retain liability for the environmental cleanup associated with the prior production of enriched uranium. According to a contractor’s report, decontamination and decommissioning activities at uranium enrichment plants could cost as much as $17.4 billion in 1994 constant dollars. At some hydropower projects, available generating capacity has been diminished by up to one-third because of the need to mitigate environmental impacts downstream. Buyers may discount any prices they offer because of the loss of available generating capacity unless the government assumes the liability for mitigating environmental impacts. In addition, in the case of the federal hydropower assets, uncertainty about future operating conditions because of potential environmental liabilities may discourage bidding or result in lower prices than if the federal government assumes some of the liabilities. For instance, one provision of the Central Valley Project Improvement Act directs the Secretary of the Interior to manage annually 800,000 acre-feet of water for environmental purposes authorized by the act. According to the Bureau, an analysis of the environmental impacts indicates that hydropower generation may be reduced by about 5 percent. Were the government to divest the project’s assets, it might agree to limit the effect of water use restrictions on potential buyers for a specific period and to specify changes in water use restrictions over time to reduce the uncertainty the buyer would face. If the government’s objective is to expedite the divestiture on terms that would less adversely affect the projects’ beneficiaries, getting the highest possible price for the assets might be a secondary consideration. 
For example, although a decision to limit bidders on particular assets to certain geographic areas would foster a goal of local or regional control of those assets and expedite a transfer, it could reduce the proceeds from the sale if other potentially interested buyers were precluded from making offers. In the ongoing divestiture of the Alaska Power Administration, an overriding concern is to protect that PMA’s ratepayers from increases in electricity rates. Decisionmakers therefore restricted the eligibility of bidders to only nonfederal entities from within the state of Alaska. The government also accepted a sale price approximating the present value of future principal and interest payments that the Treasury would have received instead of establishing the price by selling the assets in an open, more competitive fashion to the highest bidder. Assets that are in better operating condition are more likely to attract higher bids than assets in poor condition. We testified in July 1996 that federal hydropower plants in the Southeast have experienced significant outages and that these outages occur because of the age of the plants—an average of about 30 years—and the way they have been operated. If these hydropower assets were to be sold without reducing the current backlog of necessary maintenance, bids would be lower. However, a 1995 World Bank review of international experience with divestitures found that in preparing a government enterprise for divestiture, a government should generally refrain from making new investments to expand or improve that enterprise because any increase in sales proceeds is not likely to exceed the value of those investments. DOE’s Power Marketing Liaison Office noted that the statement of the World Bank should not be interpreted to imply that federal facilities should be allowed to decay without proper maintenance. The objectives underlying a divestiture help determine the most appropriate sales method. 
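The Alaska sale price described above approximated the present value of the principal and interest payments the Treasury would otherwise have received. A minimal sketch of that kind of valuation follows; the payment amount, term, and discount rate are hypothetical illustrations chosen for the example, not the actual Alaska Power Administration figures:

```python
def present_value(payments, discount_rate):
    """Discount a stream of future payments back to today.

    payments: annual payment amounts, with payments[0] due one year out.
    discount_rate: annual rate, e.g. 0.06 for 6 percent.
    """
    return sum(p / (1 + discount_rate) ** (t + 1)
               for t, p in enumerate(payments))

# Hypothetical illustration: $10 million per year of principal and
# interest for 15 years, discounted at 6 percent.
annual_payment = 10_000_000
pv = present_value([annual_payment] * 15, 0.06)
# pv is about $97.1 million -- well below the $150 million of
# undiscounted payments, because later payments are worth less today.
```

Because a dollar received years from now is worth less than a dollar today, a price set this way falls below the simple sum of the forgone payments, and the choice of discount rate largely drives the result.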
For example, if a divestiture is largely motivated by fiscal considerations, an appropriate sales mechanism would involve some form of competitive bidding and tend to place few restrictions on the number or identity of bidders. For example, the Congress, in the 1996 National Defense Authorization Act, directed DOE to sell its Naval Petroleum Reserve No. 1 (Elk Hills) by February 1998 and to do so in a manner that would obtain the maximum proceeds to the government. The government has been producing and selling oil and gas from the field for the past 20 years. According to DOE, the reserve’s sale is part of an effort to remove the federal government from nonfederal functions. In October 1997, DOE announced that it had executed agreements preparing for the reserve’s sale for $3.65 billion in cash as a result of a competition designed to allow all qualified bidders to compete. Before the final selection, DOE had contacted more than 200 companies and received 22 bona fide offers, according to DOE. This sale, which was finalized on February 5, 1998, is the largest divestiture in U.S. government history, according to DOE. In general, we have supported the principle that the federal government should receive full market value in selling its assets. Alternatively, if the major motivation of a divestiture is to transfer operations to the private sector, the government could choose to negotiate a sales price with a selected buyer. In practice, the size of the assets to be sold, in terms of value and scale of enterprise, has influenced the type of sales process used. Trade sales and public stock offerings are general processes; trade sales are used more often to sell smaller enterprises or assets and public offerings to sell larger ones. Sales can be organized using competitive bidding methods or negotiations with either type of sale. 
A brief description of these processes follows: “Trade sales” draw on the idea that an existing set of businesses competing in the relevant line of business (or trade) are likely to offer more and higher bids for the assets. Three key attributes of the PMAs and the electricity industry may lend themselves to a trade sale: (1) the PMAs and related hydropower assets are part of an established industry with capital market connections experienced in the valuation, grouping, and sale of electricity-generating assets; (2) sales of significant electricity-generating assets are not unusual; and (3) several bidders are likely for at least large portions of the PMAs and their related assets, depending on how those assets are grouped for sale. A trade sale can be a negotiated sales process between the government and a buyer or can be accomplished using an auction to determine both the sales price of the assets and the buyers. Stock offerings have been used domestically, most recently in the sale of Conrail in 1987, as well as internationally to divest large public enterprises. This method of sale would most likely require creating a government corporation or corporations out of the PMAs and their associated assets. Some of these assets could be grouped for sale, and some could be excluded from the sale, depending on the policy trade-offs discussed. In the case of some federal water projects, for example, the government could decide to retain control of the dam and reservoir to satisfy increasingly significant restrictions on the use of water because of concerns about the environment or endangered species. The stock of the government corporation would be subsequently sold through standard financial market methods, such as a private placement through negotiations between particular investors and the government or through a sale to the general public by using competitive bidding. 
In cases where auction methods may be used to sell government assets, recent government experience indicates the importance of carefully choosing the specific format for an auction. That is, a policy decision to choose a competitive auction format requires making many subsequent decisions to define the specific rules leading to an appropriate operational auction. For example, the Federal Communications Commission chose to auction the leases of electromagnetic spectrum licenses for use in mobile communications. While generating a large amount of revenue was a less important objective than achieving an efficient geographic allocation of spectrum licenses to communications firms, the auctions generated more revenue than some potential bidders had predicted, according to auction analysts. In structuring these auctions, the government carefully considered the auction format and identified problematic features of auctions of similar assets in other nations. Most domestic and international divestitures have relied on private capital market firms as consultants and managers because of their frequent experience with complicated and high-valued transactions governing the transfer of assets in the private sector. Particularly in the case of public offerings but also for trade sales, the government would be likely to incur substantial costs to prepare its assets for sale or to pay for services performed by its financial advisers. For example, in the sale of Conrail, the government employed a variety of financial advisers and a prominent law firm with expertise in a variety of fields, including tax and employment law. Also, legislation authorizing the sale of DOE’s Elk Hills Naval Petroleum Reserve required DOE to use an investment adviser to administer the sale. 
If the government’s objective is to perpetuate the social and public policy compacts concerning public power, it could transfer or sell its hydropower assets to the preference customers. The assets could be sold free of the debt associated with them. Although such a transaction would provide some revenue to the Treasury, it would probably provide less of a return to the Treasury than a sale to parties that would be willing to pay the highest bid possible for the assets. A debt-free transfer is also harmful to the Treasury because the Treasury would absorb the debt associated with the hydropower assets, including perhaps any associated debt previously repaid by power revenues—for example, the federal investment in irrigation projects beyond the ability of irrigators to repay. A variation of this suboption is contained in a bill now before House committees. According to the bill’s sponsor, this proposal is designed to avoid the fight over elimination of preference by issuing warrants entitling the existing preference customers to purchase, by a pre-set date and at a stipulated price, a fixed number of shares (based on recent electricity purchases) in the PMA from which they purchase power. The stipulated price would be set somewhat below the expected market price value of the shares. The warrants would be fully negotiable so that the preference customers could sell them if they so chose. The actual sale of the shares would be made to whoever held the warrants on the specified day of sale, which could include IOUs or investment bankers. How a divestiture could affect preference customers’ rates needs to be considered. Some of Southeastern’s, Southwestern’s, and Western’s customers are concerned that a sale would significantly raise their rates. From 1990 through 1995, the three PMAs received less than 2 cents per kWh for their power—at least 40 percent less than what the nonfederal utilities received per kWh during the same period. 
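The warrant mechanism in the bill described above can be illustrated numerically. The share count and prices below are hypothetical, chosen only to show why setting the stipulated price below the expected market value of the shares gives the warrants themselves a resale value, which is what makes them negotiable:

```python
def warrant_value(shares, market_price, stipulated_price):
    """Value of exercising a warrant: buying at the stipulated (strike)
    price makes the warrant worth the discount to market, or nothing
    if the market price falls below the stipulated price."""
    return max(0.0, market_price - stipulated_price) * shares

# Hypothetical: a preference customer's recent electricity purchases
# entitle it to 10,000 shares at a stipulated price of $18 when the
# shares are expected to trade at $20.
value = warrant_value(10_000, 20.0, 18.0)  # $20,000 discount to market
# A customer that does not want the shares could instead sell its
# fully negotiable warrants for roughly this amount.
```

The same arithmetic shows the trade-off the bill's sponsor accepts: the aggregate discount captured by warrant holders is revenue the Treasury forgoes relative to selling the shares at the expected market price.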
However, proponents of divestiture contend that competition in the wholesale market would be likely to moderate rate increases. For example, representatives of the Edison Electric Institute (the trade association for IOUs) maintain that because the wholesale market is competitive, very few preference customers will lack access to alternate power suppliers following a divestiture. They believe that, after a PMA is divested, some preference customers who relied heavily on that PMA will be able to purchase power from independent power producers, energy brokers, or energy marketers at competitive rates. In addition, as we noted earlier in this report, many states are moving toward deregulating both wholesale and retail markets. Representatives of PMAs and their customers believe that having access to alternate supplies of electricity is not enough. They note that even in cases in which preference customers may buy most of their electricity from alternate sources, these customers often rely on the PMA for power during hours of peak demand, particularly in areas where Southeastern and Southwestern sell power. Having access to inexpensive power during times of peak demand is important to these customers because, typically, power sold to meet this demand is more expensive than power sold at other times. In response, Edison Electric Institute officials maintain that preference customers will be able to purchase power even during peak periods at competitive prices. To address these concerns, we estimated how much preference customers’ rates might increase if the PMAs were divested. We examined only the potential rate impacts of divesting the PMAs and excluded other factors that are currently volatile and difficult to project. 
In our analysis, we assumed, among other things, that (1) immediately after a divestiture, the buyer of the PMA would raise each preference customer’s rates to the level the customer paid for non-PMA power in 1995 and (2) the preference customers would not change the quantity of electricity they purchased in 1995. Because of a lack of data, we did not assess how increasing competition in the wholesale market may affect the rate changes from divestiture. Also, we did not project whether the emergence of competition in retail markets would affect rates in the wholesale market. It is important to note that our methodology yields conservative results. If prices for wholesale power decline in the future, as many industry analysts believe they will, preference customers’ actual rate changes from divestiture will be smaller than our estimates. Our analysis shows that most preference customers would experience relatively small rate increases after a divestiture of the PMAs. As shown in figure 3.1, we estimate that more than two-thirds of preference customers may see rate increases of 25 percent or less, or up to 0.5 cents per kWh. If the preference customers passed these costs directly on to their end-users, the average residential end-users’ electricity bills would increase by no more than $4.17 per month. However, we also estimate that some preference customers, mainly those that purchase a large portion of their power from the PMA, may see their rates increase more. About 13 percent of preference customers may see rate increases that exceed 75 percent. Expressed in kWh, about 16 percent of preference customers may see their rates increase by more than 1.5 cents per kWh. If costs are passed directly, the average residential end-users served by about 25 percent of preference customers would see their electricity bills increase by more than $8.33 per month. 
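The assumptions above reduce to a simple calculation: each customer's PMA purchases are repriced at the rate it paid for non-PMA power in 1995, quantities are held fixed, and the resulting increase is expressed in cents per kWh and as a monthly bill impact. The sketch below uses hypothetical purchase figures; the monthly usage of roughly 833 kWh is an inference from the report's dollar figures (0.5 cents per kWh times 833 kWh is about $4.17), not a stated parameter:

```python
def rate_impact(pma_kwh, pma_rate, other_kwh, other_rate):
    """Estimate a preference customer's rate change if its PMA power
    were repriced at the rate it paid for non-PMA power in 1995,
    holding 1995 purchase quantities fixed (the report's assumptions).
    Rates are in cents per kWh."""
    old_blended = (pma_kwh * pma_rate + other_kwh * other_rate) / (pma_kwh + other_kwh)
    new_blended = other_rate  # all power repriced at the non-PMA rate
    return new_blended - old_blended  # increase in cents per kWh

# Hypothetical customer: half its power from the PMA at 2.0 cents/kWh,
# half from other suppliers at 4.0 cents/kWh.
increase = rate_impact(50_000_000, 2.0, 50_000_000, 4.0)  # 1.0 cent/kWh

# Monthly bill impact for a residential end-user, assuming the full
# increase is passed through and usage of about 833 kWh per month
# (an assumed figure, inferred from the report's dollar amounts).
monthly_usage_kwh = 833
bill_increase = increase / 100 * monthly_usage_kwh  # about $8.33
```

The calculation also makes clear why customers that buy most of their power from a PMA see the largest increases: the more PMA power in the blend, the further the old blended rate sits below the repriced rate.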
Preference customers who currently purchase a small portion of their total power from Southeastern, Southwestern, or Western generally may experience smaller rate increases after a divestiture. For example, in fiscal year 1995, 99 percent of Southeastern’s preference customers received less than a quarter of their power from the PMA. Correspondingly, as illustrated in figure 3.2, we calculated that almost all (98 percent) of Southeastern’s preference customers may experience rate increases of 0.5 cents per kWh or less, and 99 percent would see their rates increase by one-quarter or less. Moreover, we estimated that about 27 percent (or 72) of these customers may see their rates decline if they purchased all of their power at 1995 wholesale market rates. Some of these customers currently may have access to less expensive power; however, for various reasons, these customers have opted not to buy from these sources. In contrast, preference customers who currently purchase most or all of their power from the PMA may experience much greater rate increases. For example, in 1995, about 38 percent of Western’s preference customers purchased more than half of their electricity from the PMA. As shown in figure 3.3, we estimated that about one-fifth of Western’s customers may see their rates increase by more than 75 percent. About 27 percent of preference customers may see rate increases greater than 1.5 cents per kWh. If preference customers pass the higher rates on to those they serve, the average residential end-users served by about 16 percent of Western’s preference customers may see their electricity bills increase by at least $16.67 per month. Similarly, almost one-third of Southwestern’s preference customers purchase more than 75 percent of their electricity from the PMA. As shown in figure 3.4, although most of Southwestern’s preference customers will experience relatively small rate changes, about 25 percent may see their rates more than double. 
If these preference customers pass these increases on to those they serve, the average residential end-users may see their electricity bills increase by at least $16.67 per month. It is important to remember that, although some preference customers may initially experience significant rate increases, the government may mitigate these rate increases through various mechanisms, such as rate caps. In addition, these customers currently pay rates that, on average, are 40 to 50 percent below those paid by neighboring utilities that do not have access to PMA power. After the divestiture, these preference customers will be paying the same market rates as those utilities. Finally, smaller-sized preference customers may experience larger rate increases after divestiture. As illustrated in figure 3.5, we estimated that about one-fifth of Southeastern’s, Southwestern’s, and Western’s small preference customers will experience rate increases exceeding 75 percent. About 30 percent of small customers will see their rates rise by more than 1.5 cents per kWh. In contrast, 2 percent of medium-sized preference customers and 3 percent of large preference customers may see rate increases exceeding 75 percent. However, in all three size categories, a majority of preference customers may experience rate increases of 25 percent or less, or 0.5 cents per kWh or less. We believe smaller customers may experience larger rate increases after divestiture because they generally purchase a larger portion of their power from the PMAs than medium-sized and large preference customers.
Pursuant to a congressional request, GAO reviewed various issues concerning the role of certain power marketing administrations (PMA) and other federal agencies in restructuring electricity markets, focusing on: (1) whether the federal government operates the PMAs and the related electric power assets in a businesslike manner; and (2) options that Congress and other policymakers can pursue to address concerns about the PMAs' role in restructuring markets and about their management. GAO noted that: (1) although federal laws and regulations generally require that the PMAs recover the full costs of building, operating, and maintaining the federal power plants and transmission assets, in some cases federal statutes and the Department of Energy's rules are ambiguous about or prohibit the recovery of certain costs; (2) as GAO reported in September 1997, for fiscal years 1992 through 1996, the federal government incurred a net cost of $1.5 billion from its involvement in the electricity-related activities of the Southeastern, Southwestern, and Western Area Power Administrations; (3) the $1.5 billion was the amount by which the full costs of providing electric power exceeded the revenues from the sale of power; (4) the availability of federal power plants to generate electricity is below that of nonfederal plants because the federal plants are aging and because the federal planning and budgeting processes, as implemented, do not always ensure that funds are available to make repairs when needed; (5) the resulting declines in performance decrease the marketability of federal power; (6) to mitigate these funding delays, the Bureau of Reclamation, the Army Corps of Engineers, the PMAs, and their preference customers have negotiated or are negotiating agreements whereby customers pay for needed repairs in advance; (7) the net cost to the Treasury and the decreased generating availability of the federal power plants--when combined with the competitive pressures on all electricity suppliers to 
decrease their rates and the need to recoup some federal hydropower projects' environmental costs--create varying degrees of risk that some of the federal investment in certain hydropower plants and facilities will not be repaid; (8) although the recovery of most of the federal investment in Southeastern's, Southwestern's, and Western's hydropower-related facilities is relatively secure, up to $1.4 billion out of about $7.2 billion of the federal investment in the electricity-related assets of these PMAs is at some risk of nonrecovery; and (9) three general options are available for the Bureau, the Corps, Southeastern, Southwestern, and Western to address their roles in emerging restructured electricity markets: (a) the Bureau and the Corps could continue generating and the PMAs could continue marketing power as in the past; (b) the current ownership structure could be maintained while improving how the federal assets are managed and operated; and (c) the federal government could divest the PMAs; the PMAs and the generating assets; or the PMAs, the generating assets, and the dam reservoirs.
Estimates of the numbers of North Koreans outside of North Korea have varied widely over the past decade, according to NGOs and scholars. Estimates have ranged from 6,000 to over 300,000, depending on the source of the data and the time period in which the data were collected. Scholars and NGOs said that a number of factors can contribute to the variance in the estimates, including the following:

Some countries in the region limit access to the North Korean population, thereby preventing NGOs, international organizations, and scholars from collecting comprehensive data. The UNHCR, the UN agency dedicated to the protection of refugees, has little access to North Korean refugees, according to UNHCR officials.

Methodological variances among NGOs and scholars in accounting for the numbers of North Korean refugees. In part because methodologies are not readily shared among those who collect data, estimates on the higher side could include North Koreans on the border who have been double-counted because of the two-way flow of migration across the border for economic purposes, such as finding work, according to scholars. Conversely, scholars also said that some estimates might not include those North Koreans who transit through the region fairly quickly on their way to third countries for resettlement (see fig. 1).

Difficulty obtaining comprehensive information on the estimates. Scholars told us that because of the political sensitivities surrounding the North Korean population outside of North Korea, those collecting data on the population are hesitant to share their data for fear they will reveal their sources or compromise their operations in country. One scholar stated that it would be helpful if information on estimates of North Koreans in the region could be shared among those collecting the data so that methodological approaches could be critiqued. 
NGOs and scholars told us that, within the last couple of years, the flow of North Koreans out of North Korea has slowed due to the tightening of the border and stricter scrutiny of North Korean migrants. One recent estimate noted that the current number of North Koreans in the region might be between 6,000 and 16,000 at any given time. Despite the tightening of the borders, NGOs and scholars pointed out that North Koreans are still transiting through Asia to seek resettlement in other countries. According to NGOs and South Korean officials, the composition of North Koreans who have resettled in South Korea has changed over the past decade. While roughly equal numbers of male and female North Koreans arrived in South Korea in the early 2000s, South Korean data show that the proportion of female refugees has steadily increased; in 2009, 77 percent of refugees entering South Korea for resettlement were female. In addition, some of the more recent North Korean arrivals tend to be family members of North Koreans who are already in South Korea. South Korean data indicate that 75 percent of North Koreans who recently arrived in South Korea are between 20 and 40 years of age.

The USRAP processes refugee applications for resettlement in the United States through an interagency effort involving a number of governmental and nongovernmental entities overseas and in the United States. Table 1 provides a description of these entities and their role in the USRAP process. USRAP’s steps for processing North Korean refugees for U.S. resettlement, from case creation through arrival in the United States, are shown in figure 2. We provide further explanation of certain steps in the USRAP process below.

Case creation. North Koreans generally request consideration as refugees from a U.S. embassy or from UNHCR. Under the USRAP, a U.S. 
embassy submits a P-1 referral to State/PRM in Washington, D.C., and the DHS/USCIS Refugee Affairs Division in Washington, D.C., and they must concur in the granting of the USRAP access to the refugee applicant. After this concurrence, State requests or authorizes the OPE to create the case in the State’s WRAPS database. At this point, the refugee applicant begins the process for consideration for admission to the USRAP.

Security checks. Once the OPE prescreening interview is completed, the OPE requests that U.S. government agencies conduct the required security checks.

DHS approval. After the prescreening interview and usually after the security checks have been cleared, DHS/USCIS officers interview the refugee applicants and make a recommendation of either “approve,” “hold,” or “deny” for the case. The USCIS District Director or Deputy District Director must then review and agree with all recommendations before the case decision can be finalized. DHS/USCIS will not finalize approval for a case until all security checks have cleared. Once required security checks and medical screening are complete, and DHS/USCIS has determined that the refugee applicant is eligible for resettlement to the United States, the OPE confirms the resettlement location.

Exit permission. After the United States has approved the refugee for resettlement and after the individual is ready to travel, the U.S. embassy or UNHCR requests exit permission from the government of the country where the refugee is being processed. According to U.S. officials involved in the processing of these cases, the exit permission process often entails ongoing communications with that government and can take several months.

Arrival in the United States. Upon arrival in the United States, U.S. resettlement agencies and HHS provide eligible refugees with services and assistance, as noted in table 1.

The USRAP opened cases for 238 North Korean refugee applicants from fiscal years 2005 to 2010, as of March 29, 2010. 
During this time period, the U.S. government, particularly State, took actions to facilitate the processing of North Korean refugees. However, the policies of some host countries affect U.S. processing times for North Korean refugees. From fiscal years 2006 to 2008, the most current year for which complete data were available, U.S. processing times did not improve. In addition, while processing times for North Koreans were lower in fiscal year 2006 than those of some other refugee populations, processing times were generally comparable in fiscal year 2008. The USRAP processed a total of 238 North Korean refugee applicant cases from fiscal years 2005 to 2010, as of March 29, 2010. During this time period, 94 of these individuals arrived in the United States, 107 withdrew their applications, 18 were rejected or denied, and 5 individual cases were closed. In addition, 14 individuals were pending, including 9 that were on hold awaiting medical or other clearances. Figure 3 shows the status of North Korean refugee cases. According to State, many North Koreans withdrew their applications when they realized that (1) they would be found ineligible for consideration because they had already been firmly resettled in South Korea or (2) resettlement in South Korea was faster and entailed fewer requirements than U.S. resettlement. All of the pending cases were created in fiscal years 2009 or 2010. These include active cases, some of which are awaiting exit permission, as well as cases that were put on hold. Cases may be on hold because they are awaiting medical clearance or clearance for other required security checks. According to U.S. and IOM officials, some North Korean applicants were required to complete medical treatment for tuberculosis, for which the U.S. Centers for Disease Control and Prevention requires 6 to 9 months of treatment before the U.S. government grants medical clearance and permission to travel to the United States. The U.S. 
government, particularly State, has taken actions to facilitate the U.S. resettlement process for North Korean refugees by placing a high priority on North Korean cases and providing additional resources to process these cases. In response to the NKHRA, the U.S. government places a high priority on the overseas processing of North Korean refugee cases, specifically the processing stages involving OPE prescreening, security checks, DHS/USCIS interviews, and U.S. resettlement agencies.

OPE prescreening. State has directed the OPE to prioritize the prescreening interviews of all North Korean cases at the start of the process. To do so, the OPE schedules and conducts the prescreening interview for North Korean cases before all other refugee cases on its list, according to OPE officials.

Security checks. State has prioritized the security checks for some North Korean cases. State/PRM told us they usually expedite the security checks for some North Korean cases that have been pending for more than 2 or 3 months. Conducting and clearing security checks involve a number of federal agencies, and this stage in USRAP can create delays in U.S. processing times, according to U.S. officials.

DHS/USCIS interviews. State and DHS/USCIS prioritize North Korean cases by scheduling USCIS adjudication interviews with them before other refugee cases. According to DHS/USCIS officials, doing so can save a month or more in overall processing time. In addition, to expedite some North Korean cases that are urgent or of humanitarian concern, DHS/USCIS may conduct interviews before the security checks are cleared, whereas the standard protocol for most refugee populations is to conduct the USCIS interview afterwards.

U.S. resettlement agencies’ assurance. A final part of the process that State has prioritized is finding and confirming the resettlement location and resettlement agency support for North Korean refugees who will soon arrive in the United States. 
For North Korean cases, State/PRM requests that U.S. resettlement agencies confirm the location and support upon arrival within 1 week, whereas this confirmation can take up to 4 weeks for other refugee populations, according to State/PRM. The U.S. government provides additional resources to process North Korean refugee applicant cases. A State/PRM official told us that, overall, each North Korean case usually requires more of State’s time and resources than a comparable refugee case from another country. According to another State/PRM official, North Korean cases at one point comprised 1 percent of State/PRM’s caseload in the region but 20 percent of PRM’s time. Since UNHCR is not permitted to access North Koreans in some of the countries in the region, State and DHS take over the refugee referral role. U.S. officials are also more involved in obtaining exit permission for North Koreans in some countries than they are for other refugee populations for which UNHCR or IOM typically assists with this step of the process. Because North Korean refugee cases are more labor intensive than other refugee cases, State/PRM and OPE officials said that in 2006, State created and funded a Korean-speaking special caseworker at the regional OPE to focus specifically on North Korean cases. According to officials at the regional OPE, North Koreans are the only population to which this OPE dedicates a special caseworker. The special caseworker facilitates processing by often serving as an interpreter during the prescreening and DHS/USCIS interviews or interactions with U.S. staff. In addition, the special caseworker sends a biweekly status report to State/PRM and DHS/USCIS and, as of early 2010, was in the process of establishing monthly meetings with the North Koreans to update them on the status of their case. As a result, processing of these cases is now more efficient, according to U.S. and OPE officials. Additionally, officials from U.S. 
and international organizations must spend time traveling to different locations in the region to process some North Korean refugee applicants, since North Koreans are often housed in shelters or immigration detention centers located throughout the region rather than a few refugee camps concentrated in one country. According to U.S. officials and international organizations, these trips can also lengthen the processing times for these cases. Some host countries’ policies affect the processing of North Korean refugee cases, according to officials from the U.S. government and international organizations. For example: Some host countries delay granting exit permission for North Korean refugees, which can add months to overall processing times, according to U.S. officials. These officials also told us that some host countries often delay granting exit permission because they do not want to become “magnets” for more North Koreans and do not want to facilitate the movement of North Korean refugees from the host country to the United States. In addition, State officials told us that some host countries grant exit permission to North Koreans seeking resettlement in South Korea faster than those seeking U.S. resettlement, thus leading to faster processing times for the South Korean-bound North Koreans. Some host countries consider North Korean refugee issues to be sensitive and prefer that these issues be handled discreetly, according to U.S. officials. Due to these sensitivities, U.S. officials have told us that processing North Korean cases often requires a high level of U.S. involvement. For example, State officials described North Korean refugee cases in which the U.S. government and host countries communicated intensively over extended periods of time before the U.S. government received permission to process these cases or received exit permission in these countries. According to U.S. officials, one reason for these countries’ reluctance to assist in the U.S. 
processing of North Korean cases is concern about creating tensions with the North Korean government. Some host countries do not recognize any North Koreans as “refugees” and limit access to them. Some host governments consider North Koreans to be illegal or economic migrants, and thus do not offer them protection as refugees. Some of these countries are also not parties to the UN Protocol and Convention on Refugees and do not allow UNHCR access to North Korean refugees, according to UNHCR officials. In some of these countries, UNHCR does not assist the United States during the processing of North Korean refugee applicant cases. U.S. and OPE officials told us that some countries can also limit U.S. government and international organizations’ access to North Korean refugee applicants by requiring them to request permission for meetings with these refugees during processing, which can cause delays. Overall, average processing times did not improve for the 85 cases created in fiscal years 2006 to 2008 that arrived in the United States. Average processing times increased from 133 days in fiscal year 2006 to 314 days in fiscal year 2008 (see fig. 4). State officials said that one host country limited U.S. government access to North Koreans in fiscal years 2007 and 2008, which resulted in longer average processing times for cases created in those years. Furthermore, for those stages in the overseas process that are primarily handled by the U.S. government—the stages between case creation and DHS approval—we found that average processing times did not improve from fiscal year 2006 to fiscal year 2009. Average processing times in fiscal year 2009 (147 days) were faster than in fiscal years 2007 (284 days) and 2008 (192 days), but slower than in fiscal year 2006 (65 days) (see fig. 5). The time period from case creation to DHS approval includes all of the pending cases that were created in fiscal year 2009 that were excluded from figure 4. 
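The fiscal-year averages cited here (for example, 133 days in fiscal year 2006 and 314 days in fiscal year 2008) are simple per-case averages of elapsed days from case creation to U.S. arrival. A minimal sketch of that calculation, using hypothetical case records rather than actual WRAPS data:

```python
from datetime import date

# Hypothetical case records: (fiscal year of case creation,
# case-creation date, date of arrival in the United States).
# These dates are illustrative only, not actual WRAPS records.
cases = [
    (2006, date(2005, 11, 1), date(2006, 3, 14)),
    (2006, date(2006, 2, 10), date(2006, 6, 22)),
    (2008, date(2007, 12, 5), date(2008, 10, 14)),
]

def avg_processing_days(records, fiscal_year):
    """Average elapsed days from case creation to U.S. arrival for
    cases created in the given fiscal year (None if no such cases)."""
    days = [(arrived - created).days
            for fy, created, arrived in records if fy == fiscal_year]
    return sum(days) / len(days) if days else None

print(avg_processing_days(cases, 2006))
print(avg_processing_days(cases, 2008))
```

The same elapsed-day calculation applies to the case-creation-to-DHS-approval interval discussed below; only the end date of each interval changes.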
We have complete data through fiscal year 2009 for the processing times of North Korean cases that pertain to the stages between case creation and DHS approval. According to State data from fiscal years 2006 to 2008, processing times for North Koreans in fiscal year 2008 were generally comparable to those of some other refugee populations that require similar security checks, as well as to those of some other refugee populations in the region. (See figs. 6 and 7.) When compared to two other refugee populations that require similar security checks—Iraqi and Sudanese refugees—disparities in average processing times were greater in fiscal year 2006 than in fiscal years 2007 and 2008; that is, the disparities lessened over time. For example, in fiscal year 2006, average processing times for North Koreans were 287 days faster than average processing times for Iraqis, but were 74 days slower than these times for Iraqis in fiscal year 2008. Processing times for North Koreans in fiscal year 2008 were also generally comparable to those of three other refugee populations who are processed in the region, namely Burmese, Chinese, and Vietnamese refugees. While there were disparities in average processing times for North Korean refugees compared to these populations in fiscal year 2006, these disparities lessened over time. For example, in fiscal year 2006, average processing times for North Koreans were 424 days faster than average processing times for Burmese, but only 23 days faster in fiscal year 2008. There are substantially fewer North Korean refugee cases than Iraqi, Sudanese, Burmese, or Vietnamese refugee cases, as described in the notes to figures 6 and 7. A number of factors can affect processing times for all refugee populations, including North Koreans, as described earlier. While we discussed factors that affect processing times of North Korean cases, the factors affecting other populations were outside the scope of our review. 
In the United States, North Koreans have applied for asylum protection through two processes—the affirmative and the defensive. DHS/USCIS identified 33 North Koreans who have applied for asylum in the United States from October 2004 through March 2010, but the actual number is likely higher. North Koreans have sought asylum status in the United States through either the affirmative or defensive process. The asylum process for North Koreans is generally similar to that of individuals of other nationalities, except that DHS/USCIS conducts an additional review of North Korean cases. In the affirmative asylum process, individuals, including North Koreans, who are physically in the United States—regardless of how they arrived or their current immigration status—may present an asylum application to DHS/USCIS. Following the initiation of background checks, a DHS/USCIS asylum officer conducts a non-adversarial interview with the applicant to verify the applicant’s identity, determine whether the applicant is eligible for asylum, and evaluate the credibility of the applicant’s asylum claim. If the asylum officer finds the applicant is eligible for asylum, the officer issues an approval and the applicant can remain in the United States. If the asylum officer finds the applicant is ineligible for asylum but the applicant is otherwise in lawful immigration status, the asylum officer denies the claim and the applicant can remain in the United States under the terms of his or her lawful status. However, if the applicant is determined to be ineligible for asylum and does not otherwise have a lawful immigration status, then the applicant is placed in removal proceedings and the case is referred to an EOIR Immigration Judge for a hearing. Through the defensive asylum process, applicants, including North Koreans, request asylum as a defense against removal from the United States and can be held in detention while their case is processed. 
According to DHS/USCIS officials, individuals generally enter the defensive asylum process in one of three ways: (1) as a referral to an EOIR Immigration Judge following a finding of ineligibility in an affirmative asylum application; (2) by asserting a claim of asylum after they are apprehended in the United States and placed into removal proceedings because they are in violation of their immigration status or do not have proper documentation; or (3) after being detained at a port of entry without proper documentation or apprehended near a port of entry within 14 days of their illegal entry, being placed in expedited removal proceedings, and asserting a fear of return or intention to apply for asylum and after a DHS/USCIS asylum officer finds that they have a credible fear of persecution or torture. Adjudication of asylum claims in immigration court is “adversarial” in that the EOIR Immigration Judge receives the applicant’s claim and then hears arguments about the applicant’s eligibility for asylum from the applicant and the U.S. government, which is represented by a DHS/Immigration and Customs Enforcement (ICE) attorney. The EOIR Immigration Judge then makes an eligibility determination, which can be appealed by either the applicant or the U.S. government. Immigration Judges can grant asylum to applicants, allowing them to stay in the United States, or deny asylum and order them to be removed from the United States unless they qualify for another form of relief. DHS/ICE enforces alien detention and removal. Appendix VI provides more information about the affirmative and defensive asylum processes. North Koreans and North Koreans with South Korean citizenship claiming asylum through the affirmative process or who claim a credible fear of persecution or torture during expedited removal proceedings receive an additional level of review at USCIS Asylum Division headquarters. 
DHS/USCIS officials stated that, because North Korean asylum seekers may attract national media attention or high-level U.S. government interest, USCIS headquarters must review North Korean asylum cases before rendering a final decision. DHS/USCIS asylum field offices send the USCIS Asylum Division headquarters a packet containing the asylum application or credible fear worksheet, a draft assessment of the case, and other supporting documents. Asylum officers at USCIS headquarters review the case to ensure that proper procedures were followed and the decision on the case is legally sufficient. According to DHS/USCIS officials, North Korean asylum cases follow the same process as affirmative asylum cases of other nationalities apart from this case review. Since some North Koreans seeking asylum have citizenship in both North Korea and South Korea, according to USCIS and EOIR officials, U.S. asylum decisions for North Koreans can be affected by the issues of dual citizenship. Under U.S. law, North Koreans holding South Korean citizenship must establish a fear of persecution or torture in South Korea, as well as North Korea, to obtain asylum in the United States. Historically, UNHCR and the international community, including the United States, viewed South Korea as the third-country resettlement destination of choice. The constitution of the Republic of Korea, known as South Korea, states that the territory of South Korea shall consist of the Korean Peninsula and its adjacent islands. According to South Korean officials, since the South Korean constitution considers all Koreans on the Korean Peninsula, including North Koreans, to be citizens of South Korea, North Koreans generally are entitled to South Korean citizenship, with some exceptions. 
However, the 2004 NKHRA sought to clarify that North Koreans are not barred from eligibility for refugee or asylum consideration in the United States on account of any legal right to citizenship that they may enjoy under the South Korean constitution. Accordingly, USCIS and EOIR officials told us that North Korean citizens applying for asylum who have not availed themselves of South Korean citizenship only need to establish a fear of persecution or torture in North Korea, while those North Koreans who have availed themselves of South Korean citizenship must establish a fear of persecution or torture in both countries. USCIS and EOIR make a determination regarding the North Korean applicant’s citizenship before determining eligibility for U.S. asylum. DHS/USCIS officials stated that at least one North Korean with South Korean citizenship was placed in the defensive asylum process after the individual was found to have a credible fear of persecution or torture in both North and South Korea. According to EOIR data, the North Korean was granted asylum. DHS/USCIS identified 33 North Koreans—including their dependents (some of whom may not be North Korean)—who have sought asylum in the United States since fiscal year 2005 through either the affirmative asylum or credible fear process. Of the 33 North Korean asylum seekers, 9 have been granted asylum, 15 are pending, as of March 2, 2010, and 9 individual cases were closed or resolved for other reasons (“other decisions”). A North Korean applicant receives a grant of asylum if USCIS or the EOIR Immigration Judge determines that the applicant is eligible for asylum. The asylee can remain in the United States indefinitely unless asylum is terminated. The 33 North Korean asylum seekers comprise a total of 25 asylum cases, of which 7 cases were granted asylum. Furthermore, 24 North Koreans originally sought asylum through the affirmative asylum process. 
Nine North Koreans were placed in the credible fear process, and when found eligible, applied for asylum with EOIR in the defensive asylum process. Table 2 provides a summary of the outcomes for North Korean asylum seekers since fiscal year 2005.

We provided a draft of this report to the Departments of State, Homeland Security, Justice, and Health and Human Services. State, DHS, and DOJ provided technical comments, which have been incorporated throughout the report, as appropriate. We are sending copies of this report to interested congressional committees and to the Secretaries of State, Homeland Security, Justice, and Health and Human Services. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII.

In this report, we (1) assess the U.S. government’s efforts to facilitate the processing of North Korean refugees overseas for resettlement in the United States, and (2) determine the number of North Koreans who have sought asylum to remain in the United States and the process by which they may do so. Because Congress passed the North Korean Human Rights Act (NKHRA) in 2004, we focused our data collection efforts on fiscal years 2005 through 2010. To assess how the U.S. government processes North Korean refugees, we reviewed documents from the Departments of State (State) and Homeland Security (DHS) related to the U.S. Refugee Admissions Program (USRAP). 
We also interviewed officials from State’s Bureau of Population, Refugees, and Migration (PRM); State’s Bureau of East Asian and Pacific Affairs; State’s Bureau of Democracy, Human Rights, and Labor; State’s Office to Monitor and Combat Trafficking in Persons; State’s Office of the Special Representative for North Korea Policy; DHS Security Advisory Opinion Review Board; and DHS/U.S. Citizenship and Immigration Services (USCIS). Additionally, in Asia we interviewed representatives from the International Organization for Migration (IOM), the United Nations High Commissioner for Refugees (UNHCR), and the regional Overseas Processing Entity (OPE) to understand the various steps in overseas and domestic refugee processing. We spoke with both U.S.- and South Korean-based nongovernmental organizations (NGO), academics, think tanks, and resettled North Koreans living in the United States to learn about the characteristics of the North Korean refugee population; the challenges that North Korean refugees face in their journeys from North Korea to gain access to refugee admissions processing in Asia; and factors that North Koreans consider when deciding in which country to resettle. We also analyzed aggregate and country-specific data from State’s Worldwide Refugee Admissions Processing System (WRAPS) to determine the processing times for North Korean refugees from fiscal years 2005 to 2010. We asked State to provide us WRAPS data on an individual level because some cases in WRAPS include more than one individual. To determine the reliability of WRAPS data on North Korean refugees, we interviewed State/PRM and OPE officials who input, monitor, and use these data about procedures for collecting data and ensuring their accuracy. We also reviewed the data at various stages of the refugee resettlement process and analyzed the WRAPS Privacy Impact Assessment.
We determined that these data were sufficiently reliable to calculate the processing times for North Korean cases created (1) between fiscal years 2006 and 2008 for the total number of days between case creation and arrival in the United States, and (2) between fiscal years 2006 and 2009 for the total number of days between case creation and DHS approval of the case. We made this determination because decisions on DHS approval had been made for all pending cases created in fiscal year 2009 and prior years, but not all of the pending cases that were created in fiscal year 2009 had arrived in the United States, and therefore we did not have complete data on arrivals for that year. However, although we worked to identify delays in U.S. processing through interviews and an examination of the aggregate data, we did not have access to individual-level data on the factors that might influence processing times. Consequently, we were not able to probe these factors using statistical modeling techniques, and cannot comment on the extent to which changes in processing times might be attributable to factors such as State Department or Asian countries’ actions. To determine the number of North Koreans who have sought asylum to remain in the United States and the process by which they may do so, we reviewed relevant statutes as well as documentation and data from USCIS and U.S. Immigration and Customs Enforcement (ICE) within DHS and from the Executive Office for Immigration Review (EOIR) within the Department of Justice (DOJ). We also interviewed officials from these agencies. We analyzed data from USCIS’s Refugees, Asylum, and Parole System (RAPS) to determine the number of North Korean affirmative asylum applicants processed from fiscal years 2005 to 2010. To determine the reliability of RAPS data on North Korean asylum applicants, we interviewed DHS/USCIS officials about their procedures for collecting affirmative asylum application data and ensuring their accuracy. 
In addition, we asked about data limitations and analyzed the RAPS Privacy Impact Assessment and System of Records. We also obtained data from USCIS’s Asylum Prescreening System (APSS) to determine the number of North Korean credible fear applicants processed between fiscal years 2005 and 2010. To determine the reliability of APSS data on North Korean asylum applicants, we interviewed DHS/USCIS officials about their procedures for collecting the credible fear data and ensuring their accuracy. USCIS databases contain data on all citizens of North Korea in both affirmative and credible fear cases, according to USCIS. We requested data on North Korean asylum cases in the defensive process from EOIR. However, EOIR was not able to provide comprehensive data on its North Korean asylum cases without a manual review of individual cases. According to EOIR officials, asylum applicants with North Korea as their country of birth may have been categorized as South Koreans in EOIR databases because, in accordance with eligibility requirements in the INA, the database only tracks country of nationality and not birth, making a manual review necessary to provide accurate data. EOIR officials also stated that they did not have resources necessary to perform a manual review. We determined that the RAPS and APSS data were sufficiently reliable to report on the North Korean affirmative asylum cases and credible fear cases from fiscal years 2005 to 2010. However, these data may not represent all North Korean asylum cases filed during this time period because EOIR was not able to provide data on North Koreans who first claimed asylum defensively in front of an EOIR Immigration Judge. 
To describe the resettlement benefits offered to North Korean refugees both in the United States and in South Korea and to describe the resettlement experiences of North Koreans, which are discussed in appendixes II and III of this report, we reviewed relevant laws, regulations, and policies regarding government-funded refugee resettlement programs in both countries. We obtained documentation and spoke with officials at State/PRM and the Department of Health and Human Services’ (HHS) Office of Refugee Resettlement (ORR). Through field work in South Korea, we met with South Korean officials from the Ministry of Unification to discuss the assistance and social services that the South Korean government provides to North Koreans. Information on South Korea’s resettlement program for North Koreans was provided by the South Korean Ministry of National Unification and was not independently verified. We also visited the Hanawon Social Adaptation Education facility, a South Korean government-run facility, to observe the social adaptation training provided to North Koreans and the Hangyeore Middle and High School to see how North Korean students receive special educational courses to prepare them for entrance into the South Korean school system. In addition, we spoke with six resettlement agencies that have resettled North Koreans in the United States: Episcopal Migration Ministries, International Rescue Committee, Church World Service, Lutheran Immigration and Refugee Service, U.S. Conference of Catholic Bishops, and World Relief. We met with NGOs in both the United States and South Korea that provide services to resettled North Koreans to discuss the challenges that they have faced in serving this population. We also interviewed several North Korean refugees who have resettled in the United States about their experiences. Their views or experiences may not be representative of all North Korean refugees.
German, British, Japanese, and Canadian foreign officials and embassies provided us with data on North Koreans who have sought humanitarian protection in countries other than the United States. We did not conduct a data reliability assessment on this information because we are providing these data for background purposes only. We conducted this performance audit from August 2009 to June 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Both the United States and South Korea offer resettlement benefits to North Koreans that cover immediate needs assistance, cash and medical assistance, social and employment services, and other needs. By law, U.S. resettlement assistance and services are provided to refugees without regard to race, religion, nationality, sex, or political opinion. North Koreans receive the same benefits and access to federal programs as all other refugee populations. The United States, which resettles more refugees than any other country, resettled refugees from 69 countries in fiscal year 2008, according to State. The 99 North Koreans who have resettled in the United States represent a very small portion of the approximately 260,000 refugees resettled in the United States since fiscal year 2006. Almost all of the refugees in South Korea are North Korean, and South Korea’s resettlement program was specifically created to provide benefits and services for North Koreans.
Approximately 18,000 North Koreans have resettled in South Korea since the 1950s, while only 268 individuals who were not from North Korea have been granted refugee or humanitarian status in South Korea since 1994, according to South Korean government officials. The South Korean Ministry of Unification manages the resettlement program, and almost 50 percent of the Ministry’s budget is allocated for the resettlement of North Koreans, according to Ministry reports. According to South Korean officials, the government has tailored its resettlement program to specifically reflect the changing needs of resettled North Koreans over time. Both governments’ resettlement benefits cover immediate needs (fig. 8), cash and medical assistance (fig. 9), social and employment services (fig. 10), and other benefits (fig. 11).

[Figure 8 summary. United States: reception at airport; transportation to resettlement destination; $1,100 to provide for initial food, housing, and necessary clothing; referral to medical, training, and other social service programs. South Korea (benefits measured from date of Hanawon graduation): 12-week government-run training course covering initial settlement support, mental stability and health education, career guidance and basic vocational training, and introduction to South Korean society and overcoming socio-cultural differences; 3-week course focusing on local services and assistance and adapting to society; issuance of permanent residence and resident registration card; leased apartment and $11,456 for security deposit; two assistants offering intensive resettlement support after a family moves into the community.]

Both the United States and South Korea provide benefits to address the immediate needs of resettled North Koreans (see fig. 8). Refugees are eligible for State/PRM-funded basic needs support and services when they arrive in the United States.
As of January 1, 2010, State/PRM provided the resettlement agencies with $1,800 per refugee to cover the direct and administrative costs of these services, of which at least $1,100 must be designated for the direct support of the refugee’s short-term living expenses. To facilitate North Koreans’ resettlement, the South Korean government established the Hanawon Social Adaptation Education facility in 1999 to provide North Koreans with a 12-week training course before they resettle in local communities. The Hanawon curriculum is divided into four categories: mental stability and health education, career guidance and basic vocational training, introduction to South Korean society and overcoming socio-cultural differences, and initial settlement support. Following graduation from Hanawon and resettlement in a South Korean community, North Koreans may continue their education through local Hana Centers that provide an additional 3 weeks of job and employment training. South Korea helps North Koreans find rental units and provides the security deposit as well as the resettlement assistants who offer additional support after a family moves into a South Korean community.

[Figure 9 summary. United States: time-limited cash assistance (TANF) and other support services for qualifying low-income individuals with dependent children, with benefit levels varying by state; cash assistance (SSI) for low-income individuals who are aged, blind, or disabled; a program similar to TANF for refugees who do not qualify for TANF; health care coverage for qualifying low-income individuals; a program similar to Medicaid for refugees who do not qualify for Medicaid. South Korea (from date of Hanawon graduation): one-time payment of $2,644 at graduation from Hanawon, with an additional $2,644 in quarterly installments; payment of $1,571 over a 5-year period for eligible North Koreans with disabilities; a payment over a 5-year period for individuals over 60 years of age at graduation from Hanawon; cash assistance of $353 per month for qualifying low-income families; free medical services at South Korean hospitals for qualifying low-income families.]

Both the United States and South Korea provide resettled North Koreans with cash and medical assistance (see fig. 9). North Korean refugees may be eligible for a number of longer-term U.S. federal public benefit programs—including TANF, Medicaid and CHIP, and SSI—for up to 7 years generally, depending on the program and the state. Refugees who are not eligible for TANF, SSI, Medicaid, or CHIP may be eligible for ORR-funded Refugee Cash Assistance and Refugee Medical Assistance for up to 8 months, according to HHS. In South Korea, resettled North Koreans receive a one-time financial benefit, or endowment, valued at approximately $5,000 upon their entry into South Korean society. Qualified low-income families also receive free medical care at South Korean hospitals as well as minimum living support of $353 per month, both indefinitely.

[Figure 10 summary. United States: emphasis on securing early employment for refugees, including employment preparation and job placement and retention services; assistance with securing government-issued documents and accessing financial and medical assistance; up to 8 months of assistance with accessing vocational training and finding jobs. South Korea (from date of Hanawon graduation): financial incentive for maintaining stable employment for at least 6 months, up to a maximum of $19,740 in combination with the job training incentive; financial incentive for undergoing at least 500 hours of job training, including acquiring technical licenses, up to $19,740 in combination with the employment incentive; services including psychological counseling, job training, and employment assistance; companies hiring North Korean employees receive a subsidy equivalent to half of the employee’s wage.]

ORR social service benefits are not subject to financial eligibility criteria. ORR social services include citizenship and naturalization preparation services as well as referral and interpretation services, which may be offered beyond 5 years. The employment and job training incentives are provided per individual.
Both the United States and South Korea provide social and employment services for resettled North Koreans (see fig. 10). In the United States, ORR social services emphasize the preparation of refugees for job placement and retention. In addition, ORR’s program offers a wide range of services that include employability services, such as English language instruction, vocational training, and on-the-job training. North Koreans in South Korea may also access social and employment services. For example, North Koreans may receive financial payments as incentives for maintaining stable employment or undergoing job training, including acquiring technical licenses. The combined maximum value of these incentives is about $20,000. The Hana Centers also provide psychological counseling to North Koreans for up to 1 year following their resettlement. The South Korean government also pays companies half of the wages of their North Korean employees for up to 3 years.

[Figure 11 summary. United States: food assistance for qualifying low-income individuals. South Korea (from date of Hanawon graduation): subsidy of up to $2,291 for resettling outside of the capital city, Seoul; protection provided by local police officers; full tuition at public middle school, high school, and university, and half tuition at private university; boarding school for North Korean students to prepare them for the transition to the South Korean school system.]

North Korean refugees in the United States and South Korea may receive additional benefits (see fig. 11). For example, eligible refugees in the United States can qualify for food assistance under SNAP. Aside from the major federal benefit programs available to eligible refugees, there are additional programs such as school lunch programs for children of eligible refugees. The South Korean government provides North Koreans with a personal safety counselor for protection indefinitely as well as a subsidy to resettle outside of the capital city, Seoul.
In addition, the South Korean government may provide long-term tuition assistance for those North Koreans attending higher education institutions. Notably, the South Korean government has established a system to support the education and integration of North Korean children. For example, the government- funded Hangyeore Middle and High School, established in 2006, was specifically designed to address the needs of North Korean students and serve as a transition school until the students are ready to enter regular South Korean schools. Children at Hangyeore stay at the boarding school for 6 months to 2 years depending on their individual performance. According to the school’s principal, the school’s mission is to (1) match the age group of the students with their abilities in school, (2) enhance students’ learning capabilities, (3) help students to overcome cultural differences, and (4) help students to heal psychologically. North Koreans who have resettled in the United States and in South Korea come from an isolated society with limited or no exposure to capitalism and can therefore face economic difficulties. According to some of the North Korean refugees we spoke with in the United States, they have encountered difficulties finding jobs and affording basic living expenses. South Korean and U.S. officials also noted that North Koreans in South Korea have high rates of unemployment. In addition, North Koreans also have difficulties learning basic day-to-day skills, such as using supermarkets, credit cards, and public transportation. According to NGOs and government officials, North Koreans face linguistic, cultural, and social challenges in both the United States and South Korea. For example, North Koreans who are now resettled in the United States told us about their struggles adjusting to a new language and culture. Even in South Korea, North Koreans face difficulties with the Korean language due to the differences in dialects. 
In addition, North Koreans in South Korea face social problems and have higher school drop-out, crime, and alcoholism rates, according to South Korean and U.S. government officials. Obtaining medical care has also been a challenge for some North Koreans. Since some North Koreans arrive in the United States in poor health and have suffered traumatic and stressful experiences, obtaining and affording medical care has been important, according to resettled North Korean refugees and NGOs. State/PRM, OPE, and IOM officials have told us that some North Koreans face mental health issues and psycho-social problems, in addition to physical problems. Some North Koreans who have resettled in the United States and South Korea have overcome some of the challenges of assimilating into a new culture and society and have achieved accomplishments such as obtaining higher educational degrees. According to an NGO, some North Koreans who have resettled in the United States have passed their General Educational Development Test or are attending community college. Moreover, during our fieldwork in South Korea we heard of 10 North Korean students who attended medical school in South Korea after finishing high school. In addition, some resettled North Koreans have become business owners, according to NGO and South Korean government sources. Finally, some resettled North Koreans have established or work for NGOs in South Korea that assist other resettled North Koreans. For example, Dr. Lee Ae-Ran, the first female North Korean defector to receive a doctoral degree in South Korea, has established a number of organizations to assist other North Koreans in South Korea. She was awarded the 2010 U.S. State Department International Women of Courage award for her accomplishments. As illustrated in figure 12, approximately 18,000 North Koreans have resettled in South Korea since the end of the Korean War, according to South Korean government data.
The number of North Koreans resettling in South Korea has increased since 2000 with some years, namely calendar years 2004 and 2006, seeing an annual increase of 46 percent or more. The number of North Korean refugees arriving in South Korea increased 5 percent from calendar years 2008 to 2009. According to data from the South Korean government, 81 percent of the resettled North Koreans in South Korea are from the Hamgyeong Province in the northeast area of North Korea. In recent years, the percentage of North Korean refugees who are women has increased from about 50 percent in 2001 to about 77 percent in 2009, according to South Korean data. This appendix provides data on North Korean applicants for humanitarian protection status in the United Kingdom, Germany, Canada, and Japan. Humanitarian protection status includes refugee status, asylum status, and other immigration statuses governments extend on humanitarian grounds. Each country has different definitions of refugee or asylum seeker; thus, countries’ data are not directly comparable. The United Kingdom, Germany, Canada, and Japan have provided North Koreans with a humanitarian protection status to allow them to remain in their countries. As shown in table 3, from calendar year 2006 to September 30, 2009, the United Kingdom received a total of 665 applications filed by North Koreans for a humanitarian protection status. According to the British Embassy, the United Kingdom granted 350 North Korean cases humanitarian protection status during this time period. From calendar years 2000 through 2009, 329 North Korean individuals applied for a humanitarian protection status in Germany, as shown in table 4. German Embassy officials informed us that Germany granted a humanitarian protection status to 189 North Korean individuals from calendar years 2000 through 2003. Since calendar year 2003, two North Koreans have been granted a humanitarian protection status in Germany. 
As shown in table 5, from calendar years 2000 through 2009, 217 North Korean individuals applied for humanitarian protection status in Canada. According to a Canadian official, the Canadian Immigration and Refugee Board had granted 76 of these North Koreans humanitarian protection status, with 66 granted in calendar year 2009. Japanese officials informed us that over 100 North Koreans have entered Japan on humanitarian grounds. They did not provide us with additional information. Individuals can acquire asylum status in the United States through either the affirmative process or the defensive process. Figure 13 outlines the steps involved in both the affirmative and defensive asylum processes. In the affirmative asylum process, individuals who are physically in the United States, regardless of how they arrived or their current immigration status, may present an asylum application to DHS/USCIS. Following the initiation of background checks, a DHS/USCIS asylum officer conducts a non-adversarial interview with the applicant to verify the applicant’s identity, determine whether the applicant is eligible for asylum, and evaluate the credibility of the applicant’s asylum claim. DHS/USCIS asylum officers can also request a comment letter from State’s Bureau of Democracy, Human Rights, and Labor (DRL) on a particular case. These comment letters help inform the asylum officers’ decisions on cases by providing information on general country conditions and information specific to an individual applicant’s situation that may not be accessible from other sources. In the case of dual citizenship, the DRL comment letter could include information on both countries. If the DHS/USCIS asylum officer finds the applicant is eligible for asylum, the officer issues an approval and the applicant can remain in the United States. 
If the DHS/USCIS asylum officer finds the applicant is ineligible for asylum but the applicant is otherwise in lawful immigration status, the asylum officer issues a Notice of Intent to Deny to the applicant, who then has 16 days to provide a rebuttal. The DHS/USCIS asylum officer considers the rebuttal, if any, prior to issuing a final denial or grant of asylum. If the DHS/USCIS asylum officer issues a final denial of asylum status, the applicant can remain in the United States under the terms of his or her lawful status. However, if the applicant is determined to be ineligible for asylum and does not otherwise have a lawful immigration status, then the applicant is placed in removal proceedings and the case is referred to an Executive Office for Immigration Review (EOIR) Immigration Judge for a hearing. Through the defensive asylum process, applicants request asylum as a defense against removal from the United States and can be held in detention while their case is processed. According to DHS/USCIS officials, individuals are generally placed in the defensive asylum process in one of three ways. DHS/USCIS can refer them to an EOIR Immigration Judge after a finding of ineligibility in an affirmative asylum application. They can assert a claim of asylum after they are apprehended in the United States and placed into removal proceedings because they are in violation of their immigration status or do not have proper documentation. DHS can place them in the defensive asylum process if they are detained at a port of entry without proper documentation or are apprehended near a port of entry within 14 days of their illegal entry, are being placed in expedited removal proceedings, assert a fear of return or an intention to apply for asylum, and a DHS/USCIS asylum officer finds that they have a credible fear of persecution or torture (“the credible fear process”).
Adjudication of asylum claims in immigration court is “adversarial” in that the EOIR Immigration Judge receives the applicant’s claim and then hears arguments about the applicant’s eligibility for asylum from the applicant and the U.S. government, which is represented by a DHS/ICE attorney. The Immigration Judge then makes an eligibility determination, which can be appealed by either the applicant or the U.S. government. Immigration Judges can grant asylum to applicants, allowing them to stay in the United States, or deny asylum and order them to be removed from the United States unless they qualify for another form of relief. DHS/ICE enforces removal orders by coordinating alien deportation and repatriation. In addition to the persons named above, Cheryl Goodman, Assistant Director; Andrea Miller; Teresa Abruzzo; Georgina Scarlata; Debbie Chung; Martin de Alteriis; and Mary Moutsos made key contributions to this report. Technical assistance was provided by Muriel Brown, Etana Finkler, Mike Maslowski, Chhandasi Pandya, and Jena Sinkfield.
Famine killed hundreds of thousands of North Koreans in the 1990s and compelled a large number of others to leave in search of food, economic opportunities, and escape from a repressive regime. This migration continues. Some North Koreans seek resettlement in other countries, such as South Korea and the United States. To promote a more durable humanitarian solution to the plight of North Korean refugees, Congress passed the North Korean Human Rights Act in 2004. In reauthorizing the Act in 2008, Congress found that delays in processing North Korean refugees have led refugees to abandon their quest for U.S. resettlement. GAO was asked to (1) assess the U.S. government's efforts to facilitate the processing of North Korean refugees who request resettlement in the United States from overseas, and (2) determine the number of North Koreans who have sought asylum to remain in the United States and the process by which they may do so. GAO is issuing a separate classified annex to this report. GAO analyzed data on North Korean refugees and asylees, interviewed agency officials, and conducted fieldwork in Asia. This report does not contain recommendations. The Departments of State (State), Homeland Security, and Justice provided technical comments and GAO incorporated these comments, as appropriate. The U.S. government has taken actions to facilitate the U.S. resettlement of North Korean refugees from overseas, but processing times did not improve from fiscal years 2006 to 2008 due in part to some host countries' policies. The United States opened cases for 238 North Korean refugee applicants from October 2004 through March 2010, and 94 of these North Koreans arrived in the United States. As part of its recent actions to facilitate the processing of North Korean refugees, State has placed a high priority on these cases and provided additional staff time and resources to process these cases. However, according to U.S. officials, some U.S. 
requirements, such as conducting and clearing security checks, can delay U.S. processing. According to officials from the U.S. government and international organizations, the policies of some host countries also can affect U.S. processing of North Korean refugees. For example, some host countries delay granting North Korean refugees permission to leave their countries. Average processing times for North Koreans did not improve from fiscal years 2006 to 2008, the most recent year for which complete data were available. State officials said that one host country limited U.S. government access to North Koreans in fiscal years 2007 and 2008, resulting in longer average processing times for cases created in those years. While processing times for North Koreans were lower in fiscal year 2006 than those of some other refugee populations, the processing times were generally comparable in fiscal year 2008. From October 1, 2004, through March 2, 2010, at least 33 North Koreans have sought asylum protection to remain in the United States, but the actual number is likely higher. Of the 33 North Koreans, 9 individuals have been granted asylum, 15 are still pending, and 9 are categorized as "other decisions," meaning their cases have been denied, dismissed, or withdrawn, according to U.S. Citizenship and Immigration Services (USCIS) data. The actual number of individuals is likely higher for several reasons including agencies' difficulties in compiling information. North Koreans can seek asylum protection through two processes--the affirmative or the defensive. In the affirmative process, individuals who are physically in the United States may present an asylum application to USCIS and undergo a non-adversarial interview to determine their eligibility for asylum. In the defensive process, applicants request that the Department of Justice grant them asylum as a defense against removal from the United States. 
USCIS data do not include information on North Koreans who first claimed asylum before an Immigration Judge in the defensive process.
IRS formulates its budget at three levels: appropriation account, budget activity, and program activity, as shown in figure 1. (See appendix II for the non-interactive figure of IRS’s budget formulation structure.) The annual budget request for IRS presents funding and FTE data at the appropriation and budget activity levels. IRS formulates its budget to align with its strategic goals, which is an intended outcome of the Government Performance and Results Act of 1993 (GPRA). IRS executes its budget by allocating funds from its appropriation accounts to fund centers. Fund centers manage and distribute funds and allocate funds to sub-units, known as cost centers, where funds are obligated. IRS’s appropriation accounts align with its organizational structure at the highest level. For example, IRS has an Operations Support appropriation account and an Operations Support commissioner-level organization. In addition, IRS’s organizational structure tracks roughly to its budget execution structure, which is made up of three types of fund centers: (1) support, (2) functional, and (3) operating. (See figure 2.) The lower levels of the budget formulation and budget execution structures include (1) program activities, which break down the budget activities and are listed above; and (2) organizational entities and other efforts of interest, which are not discrete categories and are different perspectives of IRS’s organizational structure. For example, Wage and Investment is one division within IRS and can be referred to as an organizational entity, while identity theft would be considered an area of interest that crosses divisions within IRS, including Wage and Investment. Prior to the preparation of the annual budget request, IRS budget staff ensure that FTE movements between budgetary accounts made during the fiscal year (e.g., shifting staff to work on different issues) match up with base budget resources assigned to each account. 
This review (which is conducted by the IRS Corporate Budget Office in coordination with business units) is necessary because business units may have added or eliminated staff during the course of the fiscal year, which can result in a misalignment of funding and FTEs during budget formulation. Further, the review ensures that FTEs—which represent the majority of IRS’s budgetary resources—are fully funded by appropriation accounts and that salary and benefits are aligned with current FTE levels. In its annual budget request, IRS provides funding information at the appropriation account and budget activity levels, but not at the more detailed program activity level. IRS officials stated that they could provide more information at the program activity level; however, reporting on requested funding amounts for program activities could limit IRS’s flexibility to reprogram funds given statutory restrictions. Still, as we reported in May 2010, IRS could provide additional information that highlights new program activities or those that are proposed for either expansion or reduction. This more detailed information could increase transparency and demonstrate the agency’s priorities to congressional decision makers. As an alternative to more detailed information on requested funds, IRS Corporate Budget officials told us they could provide prior year obligations and FTE data in greater detail than reported in the annual budget request, including information for some program activities, organizational entities, and other efforts of interest. For eight areas we selected as examples, IRS officials confirmed that they could provide fiscal year 2012 obligations data for six and partial information for one. They could not provide information for one, as shown in table 1. 
According to IRS officials, obligations data for identity theft are only partially available because IRS tracks obligations attributed to identity theft from its Wage and Investment fund center, which handles most of its identity theft workload. However, IRS does not track identity theft-related obligations incurred by other IRS business units. According to a senior official, IRS plans to establish a Servicewide internal order code for identity theft at the beginning of fiscal year 2014 because it has become a long-term priority. Officials said they were not able to provide fiscal year 2012 obligations data for the offshore voluntary disclosure program because they had not established a mechanism to track it (in part because the program was operated as a short-term effort in the past and, according to officials, has only recently been made permanent). IRS has not decided if it will track obligations for the offshore voluntary disclosure program in the future. According to IRS officials, IRS obligations data for program activities, organizational entities, and other efforts of interest have some limitations. For example, the data do not include indirect costs, such as costs associated with IRS-wide functions like human capital management and procurement. In addition, obligations associated with IT are tracked separately from non-IT costs in IRS's financial management system. For example, non-IT costs for merchant card and basis matching are tracked separately from the related Information Reporting and Document Matching (IRDM) IT system. Furthermore, organizational entities and other efforts of interest are not discrete. Obligations can be analyzed in different ways, such as from an organizational perspective or a program perspective. Different analyses should not necessarily be summed together because they may use different perspectives and, when considered together, might result in overlaps or gaps. 
For example, organizational entities and other efforts of interest represent different perspectives on the same spending, so obligations may be counted toward either. For instance, obligations for the Wage and Investment fund center (an organizational entity) may overlap with related efforts of interest, such as identity theft. Users of IRS obligations data should be cognizant of these limitations. IRS's obligations data show how the elements of IRS's budget formulation and budget execution structures described in figures 1 and 2 interact, and how those relationships vary in complexity. The data demonstrate the funding streams for IRS's obligations: they show which appropriation accounts, budget activities, and program activities in IRS's budget formulation structure received appropriated funds, as well as which fund centers within IRS's budget execution structure managed and distributed the funds. Some fund centers in IRS's budget execution structure receive appropriations from a relatively small number of appropriation accounts, budget activities, and program activities in IRS's budget formulation structure. For example, as shown in figure 3, IRS obligated about $240 million from the Appeals fund center in fiscal year 2012. While most Appeals fund center resources come from the Appeals program activity, the fund center also receives funding from other parts of IRS's budget formulation structure, including funds from the International exams program activity. Obligations data for organizational entities and other efforts of interest can also have a more complex relationship to IRS's budget formulation structure than the Appeals example shown above. For example, in fiscal year 2012 the Wage and Investment fund center received funding from 3 of IRS's 4 appropriation accounts, 5 of its 9 budget activities, and 23 of its 83 program activities (see figure 4). 
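The relationship described above between IRS's budget formulation structure and its fund centers is effectively a many-to-many mapping. The sketch below is illustrative only: apart from the roughly $240 million Appeals fund center total cited in the report, the account and activity names and the dollar split are assumptions, not actual IRS data.

```python
# Illustrative sketch of summing obligations managed by one fund center
# across multiple funding streams. Only the ~$240M Appeals total comes
# from the report; the appropriation/budget activity names and the split
# between program activities are hypothetical.
obligations = [
    # (appropriation account, budget activity, program activity, fund center, $M)
    ("Enforcement", "Exam and Collections", "Appeals", "Appeals", 230.0),
    ("Enforcement", "Exam and Collections", "International exams", "Appeals", 10.0),
]

def total_by_fund_center(records, fund_center):
    """Sum obligations managed by one fund center, regardless of which
    parts of the budget formulation structure the funds came from."""
    return sum(amt for *_, fc, amt in records if fc == fund_center)

print(total_by_fund_center(obligations, "Appeals"))  # 240.0
```

A fund center like Wage and Investment would simply have many more records spanning multiple appropriation accounts and budget activities, which is why its obligations are harder to compile.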
In addition, the ease with which IRS can compile obligations data varies: it can be more difficult for IRS to compile obligations data for programs with a more complex relationship to its budget formulation and budget execution structures.

For fiscal year 2014, IRS implemented a new, pre-selection budget formulation process to provide senior leadership with the opportunity to screen, prioritize, and select funding proposals before business units prepare detailed business cases. Previously, business unit staff and subject matter specialists developed detailed business cases for each proposed initiative. Business cases include detailed narrative descriptions, multi-year cost estimates, and account breakouts. IRS budget officials told us the new process reduces work substantially for business units, since staff no longer need to develop extensive information for each proposal; instead, they only develop full information for proposals approved by senior leadership. The new pre-selection process requires business unit staff to prepare selective, consistent information using templates. (See appendix III for a sample template.) Templates include high-level summary information on the purpose of the initiative, the amount of additional funds and FTEs requested, and the estimated impact of the initiative. However, we found that in some instances this information was incomplete. Management reviews the templates and identifies a preliminary list of approved initiatives. Next, approved initiatives are communicated to business units via a decision document, along with any suggested changes in scope, such as combining initiatives or changing requested FTE or dollar levels. After preliminary approval, business unit staff prepare detailed business cases for approved initiatives. Business cases may ultimately be included in the budget request. 
The fiscal year 2014 budget request for IRS shows proposed savings and efficiencies in its base budget in several areas totaling over $217 million, as shown in table 2. Some of IRS's expected fiscal year 2014 savings are anticipated to be achieved through operational changes, such as shifting funds to automate processes and streamlining or consolidating operations and functions to make them more cost-effective. IRS's fiscal year 2014 budget includes the following savings and efficiencies:

Realizing Personnel Savings: Personnel savings are mainly the result of staff attrition and hiring restrictions.

Reducing IT Infrastructure: Examples of IT infrastructure changes include providing improved data storage capacity, updating computer servers to be more efficient, adopting common technology platforms and a more efficient telecommunications infrastructure, and changing IT contracting practices. For example, IRS is in the process of implementing virtual computer servers for data storage. IRS is also implementing new technologies for sharing computer services across the agency.

Capturing Savings from Space Optimization: IRS has reduced (or plans to reduce) its office inventory, having selected 123 of its 648 offices to be closed, consolidated, or reduced. These rental savings are the result of attrition, hiring restrictions, and changes in working environments (such as increased telework and development of workspace standards that decrease individual office size). Space optimization savings require a one-time investment of $37.5 million to build out new and consolidated space and relocate employees, resulting in expected future rent savings of $76.7 million annually ($39.2 million net in the first year).

Funding BSM: BSM savings represent differences in projected costs of IT operations from one year to the next, including implementation costs related to the Current Customer Account Data Engine 2; Modernized e-File; Core Infrastructure; and Architecture, Integration, and Management.

Implementing Human Capital Efficiencies: Administrative efficiency efforts include the consolidation of employment activities and modifications to training programs. For example, IRS reduced the number of Employment Hiring Centers from 9 to 3. IRS also streamlined processing of requests for personnel-related changes, such as salary increases, transfers, or leaves of absence. In addition, the Employment Operations Unit automated its hiring process to enable electronic completion of pre-hire forms. Some training programs are being modified from classroom training to webinars, reducing travel costs and allowing more employees to attend.

Achieving e-File Savings: These savings result from labor reductions related to the increased use of electronic filing of tax returns and a corresponding decrease in paper returns.

In its fiscal year 2014 congressional justification, IRS included projected ROI for 10 enforcement initiatives, including two initiatives that used new approaches in estimating potential ROI: revenue protection and revenue enhancement. According to agency budget officials, IRS developed the new ROI methodologies because it did not have enough historical data on these initiatives to use the same methodology used for other initiatives. Since fiscal year 2008, IRS has included ROI projections for new revenue-generating enforcement initiatives in its congressional justification. IRS has generally calculated ROI projections for revenue-generating initiatives based on the amount of additional tax collected. For each initiative, IRS calculated an estimate of the potential revenue collected by requested FTEs using 10 years of historical data. Because 10 years of historical data were not available for the revenue protection and revenue enhancement initiatives, IRS developed new methodologies. For example, the fiscal year 2014 budget for IRS requests $101.1 million for an "Improve the Identification and Prevention of Refund Fraud and Identity Theft" initiative. 
According to IRS, if funded, the initiative would protect revenue because these activities would identify fraudulent returns related to identity theft and resolve cases prior to issuing a refund. To estimate the amount of projected revenue that would be protected, IRS determined the dollar amount of erroneous refunds detected per FTE from the beginning of the 2012 tax filing season through August of that year. As shown in figure 6, IRS estimated the ROI for this initiative to be 6.2 to 1 in fiscal year 2014, rising to 14.4 to 1 in fiscal year 2016. The fiscal year 2014 budget for IRS also requests $51.7 million for a “Leverage Data to Improve Case Selection” initiative. According to IRS, if funded, this initiative would enhance revenue by improving technology that would enable IRS to improve case selection, issue identification, and enforcement case treatment. This technology would also allow IRS to adapt quickly to changing taxpayer behavior and tax code misuse. To estimate the additional amount of projected revenue that would be generated by this initiative, IRS studied the impact of using certain electronically filed data in a new way to examine selected tax returns and found that it could help classifiers detect more noncompliance. Based on the study’s findings, IRS estimated that using certain electronically filed data would increase examination assessments for taxpayers at all income levels, and used that estimate to calculate the amount of additional revenue that could be collected as a result of the “Leverage Data to Improve Case Selection” initiative. As shown in figure 7, IRS estimated an initial cost of $52 million in fiscal year 2014 followed by an estimated ROI of 1.3 to 1 in fiscal year 2015 and 1.5 to 1 in fiscal year 2016. In December 2012, IRS updated the PPACA cost estimate as part of its on-going practice to refine the cost estimate and address our prior recommendation. 
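The ROI projections described above reduce to a ratio of projected revenue to cost. As a rough illustration only (it assumes, as a simplification, that the fiscal year 2014 cost base equals the $101.1 million request, which the report does not state), the 6.2-to-1 figure implies roughly $627 million in protected revenue:

```python
def roi(projected_revenue_m, cost_m):
    """Return on investment expressed as an X-to-1 ratio."""
    return projected_revenue_m / cost_m

# Figures from the report: $101.1M requested, projected ROI of 6.2 to 1
# in FY 2014. Assuming the cost base equals the request (a simplification),
# the implied protected revenue is:
cost = 101.1
implied_revenue = 6.2 * cost
print(round(implied_revenue, 1))          # 626.8 ($M, implied)
print(round(roi(implied_revenue, cost), 1))  # 6.2
```

The same arithmetic applies to the "Leverage Data to Improve Case Selection" figures (1.3 to 1 and 1.5 to 1), with projected enhanced revenue in the numerator.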
The updated cost estimate reflects best practices to a much greater extent, as shown in table 3. Unlike previous versions, the updated cost estimate shows significant progress, as it reflects the full life-cycle cost of the program, which will total $1.89 billion from fiscal year 2010 through 2026. With this information, budget decision makers can see the amount IRS plans to spend on the multi-year PPACA effort in a particular year, as well as an estimate of costs that remain to be funded in future years. According to the GAO Cost Guide, a reliable cost estimate must be comprehensive, well documented, accurate, and credible. (See appendix IV for our full assessment of the updated PPACA cost estimate.) In June 2012, we reported that IRS's cost estimate for PPACA did not fully meet best practices for a reliable cost estimate. When enacted, PPACA contained 47 tax-related provisions that IRS is responsible for implementing, with effective dates through 2018. Meeting these statutory requirements has required a significant IT investment in new data models, databases, and IT systems and operations support. IRS's PPACA cost estimate is important in determining and communicating a realistic view of the program's likely cost and schedule outcomes, which can be used to plan the work necessary to develop, produce, install, and support the program. The cost estimate is also integral to establishing and defending budget requests: it provides context for the $440 million requested for IRS to address PPACA requirements in fiscal year 2014. A few areas remain where IRS could continue to improve the reliability of the cost estimate to better meet best practices outlined in the GAO Cost Guide. We found that the cost estimate only partially met best practices for accuracy and credibility. Specifically, we identified three deficiencies in the accuracy of the cost estimate. First, the cost estimate was not fully adjusted for inflation. 
IRS adjusted future costs for inflation and documented inflation rates, but it reported past costs in current year dollars. Because sunk costs were not adjusted for inflation, their contribution to the cost estimate is higher than it should be. In August 2013, the IRS Chief Financial Officer told us that, as a result of this finding, IRS has already revised the next update to the PPACA cost estimate to adjust sunk costs for inflation. Second, IRS included actual sunk costs in the estimate, but it did not use earned value management, a process for capturing actual costs and comparing them to estimated costs. Earned value management would enable IRS to continuously update the estimate to reflect actual costs. Third, the estimate shows how it varies from a previous estimate, but it does not explain the factors behind the variance. If the reasons for cost variances are not captured, estimators cannot identify lessons learned that would improve future estimates. We also identified three deficiencies in the credibility of the cost estimate. First, IRS conducted a sensitivity analysis on two cost drivers (a sensitivity analysis examines the effects of changing assumptions and estimating procedures to highlight elements that are cost-sensitive). However, it is not clear how the two cost drivers were selected for analysis or whether IRS identified the cost elements that were the most sensitive to change. Second, IRS developed multiple risk management plans and conducted a risk and uncertainty analysis, but it is unclear how the risk management plans were applied to address the risks identified in the cost estimate. In addition, IRS incorrectly treated the total funding risk for the entire program as equal to the sum of the risks of individual elements. A proper risk and uncertainty analysis is necessary to correctly assess the variability in the cost estimate due to program risks. Third, IRS did not obtain a second cost estimate. 
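The risk-aggregation deficiency noted above (treating total program risk as the sum of element-level risks) can be illustrated with a small simulation. The cost elements and uncertainties below are hypothetical, not drawn from the IRS estimate; the point is that high-percentile outcomes for independent elements rarely occur simultaneously, so summing them overstates the portfolio's high-percentile cost:

```python
import random
import statistics

random.seed(1)

# Hypothetical: three independent cost elements, each with an uncertain
# cost modeled as normal (mean $M, std dev $M). Illustrative values only.
elements = [(100, 15), (60, 10), (40, 8)]

def p90(samples):
    """90th percentile of a sample."""
    return statistics.quantiles(samples, n=10)[-1]

# Naive approach (what the report describes): sum each element's
# 90th-percentile cost, as if all extremes occur at once.
per_element_p90 = sum(
    p90([random.gauss(mean, sd) for _ in range(20000)])
    for mean, sd in elements
)

# Proper approach: simulate the portfolio total, then take its percentile.
totals = [sum(random.gauss(mean, sd) for mean, sd in elements)
          for _ in range(20000)]
portfolio_p90 = p90(totals)

# Summing element-level risks overstates the portfolio's 90th percentile.
print(per_element_p90 > portfolio_p90)  # True
```

This is why the GAO Cost Guide calls for a risk and uncertainty analysis over the whole estimate rather than adding up element-level risk figures.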
The GAO Cost Guide specifies that producing two cost estimates that are independent of one another is a best practice because the second cost estimate can validate the first and provide an unbiased test of whether the original cost estimate is reasonable. We discussed each of these deficiencies with IRS officials, who stated they gained a better understanding of the criteria in the GAO Cost Guide based on our meeting. Because it provides a sound basis for informed investment decision making, realistic budget formulation, and accountability for results, developing a reliable cost estimate that meets best practices is critical to PPACA's success. Although the IT systems used to implement PPACA (which IRS refers to as the ACA IT investment) meet the Department of the Treasury's (Treasury) definition of a major investment, IRS reported the investment as a non-major investment in fiscal year 2014. According to agency officials, IRS did not have time to prepare the information that is typically reported for major investments in the President's fiscal year 2014 budget request. Unlike major investments, non-major investments are not included for public reporting on the OMB IT Dashboard and are also not required to be included in the "Capital Asset Summary," referred to as an Exhibit 300. In prior years, IRS's portion of the ACA IT investment was included with related investments that were managed and reported by the Department of Health and Human Services (HHS). HHS reported ACA IT investment information on the Dashboard. In January 2013, at the end of the fiscal year 2014 budget formulation cycle, IRS's portion of the ACA IT investment was split from HHS's portion, and IRS assumed responsibility for reporting on its portion. 
The Clinger-Cohen Act of 1996 requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems and to report to Congress on the net program performance benefits achieved as a result of these investments. In addition, OMB's Capital Programming Guide defines major acquisitions to include capital assets that require special management attention because of high costs, high risk, or a significant role in the administration of agency programs. According to the Treasury budget director and IRS officials, the ACA IT investment is expected to be reported as a major investment in the fiscal year 2015 budget request. IRS and Treasury officials also told us they plan to include an Exhibit 300 and report ACA IT information on the OMB Dashboard. Not reporting information on IRS's ACA IT investment on the Dashboard or in an Exhibit 300 makes the investment more difficult to monitor and limits transparency. As a result, information about the implementation of PPACA (such as the individual projects that comprise the investment, evaluation history, the current Exhibit 300, and cost and schedule variances) is not as readily available to Congress and the general public. In the fiscal year 2014 budget justification, IRS included new, useful information on its major IT investments that was not included in prior budget justifications, such as life-cycle costs, projected useful life of the current asset, anticipated benefits, and how performance will be measured. This additional information is a significant enhancement to information provided in prior budget justifications and provides decision makers with more context on IRS's portfolio of IT investments. However, IRS does not provide information on IT investments in a consolidated format. IT is a significant portion (about 20 percent) of the total IRS budget request for fiscal year 2014. 
It includes approximately $2.6 billion to fund 18 major IT investments and 131 non-major investments. (See appendix V for a summary of the major IT investments.) Redacted Exhibit 300s do not include sensitive data, such as estimated obligations for future budget years, names and contact information, and operational risks. Some information in the budget justification also appears in the Exhibit 300, such as the projected useful life of the current asset. However, certain other information (such as the start date, funds obligated through the prior fiscal year, and the number of FTEs represented by current costs) is only available in the Exhibit 300. Additional information (such as the percent of life-cycle costs obligated through the prior fiscal year) is not available in either report, but can be calculated with information contained in the Exhibit 300 and the budget justification. See table 4 for a general description of the type of information included in the budget justification and the Exhibit 300. Because different information exists in different documents, decision makers must access multiple public documents to gain a full understanding of IRS's IT priorities. For example, if a decision maker were interested in the Modernized e-File (MeF) IT investment, the fiscal year 2014 budget justification would show that the projected useful life of the asset extends to 2019; however, to know the start date of the MeF investment (2001), the decision maker would also need to access the Exhibit 300. Likewise, the decision maker would see that the fiscal year 2014 budget justification shows the estimated life-cycle costs for MeF to be $575 million; however, to know how much has been obligated for the investment from the start date through fiscal year 2012 ($304 million), the decision maker would have to access the Exhibit 300 and calculate the amount of funding obligated across prior fiscal years. 
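The derived figure described above (the percent of life-cycle costs obligated, which appears in neither document directly) is simple arithmetic once both documents are in hand. A minimal sketch using the MeF figures cited in the report:

```python
# MeF figures cited in the report: $575M estimated life-cycle cost
# (budget justification) and $304M obligated through FY 2012
# (calculated from the Exhibit 300). The percentage itself is a derived
# value, not reported in either document.
life_cycle_cost_m = 575
obligated_through_fy2012_m = 304

pct_obligated = 100 * obligated_through_fy2012_m / life_cycle_cost_m
print(round(pct_obligated, 1))  # 52.9 (percent of life-cycle costs obligated)
```

Consolidating the underlying data elements in one document, as discussed below in the report's recommendations, would let decision makers compute such figures without cross-referencing.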
Consolidating key budget and performance information on major IT investments in one budget document (such as the congressional justification) would ensure decision makers have comprehensive, easy-to-understand information on major IT investments. In particular, consolidating information for complementary data elements (such as start date and projected useful life of the current asset, and amount of funding obligated through prior fiscal years and life-cycle costs) would provide useful context for understanding IRS's progress in implementing long-term IT investments. IRS officials said they can consolidate this information for ease of review. GAO's Standards for Internal Control in the Federal Government states that information should be communicated in a format that enables management and others to carry out their oversight responsibilities. Further, the GPRA Modernization Act of 2010 (which can serve as a guide to leading practices for planning at lower levels within federal agencies) requires that agencies consult with relevant committees when developing or making adjustments to their strategic plans and agency priority goals. Consulting with relevant committees regarding IT investments would provide IRS with an opportunity to share performance information and confirm that the various committees are getting the types of performance information they need. Congressional staffs have requested comprehensive, easy-to-understand information on IRS's IT investments: supplying it in a consolidated format could give them better information to guide decisions. 
Congress directed IRS to include a summary of cost and schedule performance information for its major IT investments in its budget justification for fiscal year 2013; in addition, starting in March 2012, IRS was required to submit a quarterly report with specific information on selected major IT investments, including a "plain English" explanation of cost and schedule performance for the previous three months and expected cost and schedule performance for the upcoming three months. IRS has been submitting these reports and briefing the Appropriations Committees and other congressional stakeholders. Despite the increased information in the budget justification and the new quarterly report, key budget and performance information on all of IRS's major IT investments remains spread across various budget documents. Until such data are consolidated, they are less accessible to congressional stakeholders. Since June 2012, IRS has implemented six recommendations made in our prior reviews of its budget justification documents. Three of the implemented recommendations resulted in additional information in the budget request, such as actual ROI and linking funding requests for new initiatives to strategic goals and objectives. Enhanced information in the budget request can aid decision making and provide context to Congress regarding how current operations and requests for new funding support IRS priorities. Table 5 summarizes the recommendations implemented by IRS. As shown in table 6, IRS officials agreed with two of the three budget-related recommendations that remain open. IRS initially disagreed with our recommendation to modify its funding request for new hires. However, in August 2013, IRS Corporate Budget officials told us that IRS is considering options to implement the recommendation, including possibly describing in the operating plan how it plans to use available funds not needed for new hires in the first year. 
IRS has taken important steps to include new and useful information in its budget justification to aid Congress and other stakeholders, including new information on actual ROI and enhanced information on major IT investments. In addition, IRS made significant improvements to the PPACA cost estimate, which is integral to accurately informing budget decision makers of the anticipated long-term costs for both implementation and administration of the new health care program. Furthermore, IRS's new pre-selection process for determining which initiatives to request funding for was reported by officials to be a more efficient method for commencing budget formulation, although enhanced guidance could make it more effective. We also identified some other actions that could improve the budget request and aid Congress in decision making, including (1) improving the accuracy and credibility of the PPACA cost estimate in future updates, because it provides the basis for informed investment decision making, realistic budget formulation, and accountability for results; (2) ensuring the ACA IT investment is publicly reported, to increase transparency; and (3) reporting consolidated information on major IT investments, to make the information more accessible.

To enhance the budget process and to improve transparency, we recommend the Acting Commissioner of Internal Revenue take the following four actions:

Improve guidance given to business units for the pre-selection budget formulation process, emphasizing the importance of information on the estimated impact—qualitative or quantitative—of proposed budget initiatives.

Improve the accuracy and credibility of future updates to the PPACA cost estimate by taking the following actions to more closely follow best practices outlined in the GAO Cost Guide:

Use earned value management to capture actual costs and use them as a basis for future updates.

Explain why variances occurred between the current estimate and previous estimates.

Document how cost drivers are selected for future sensitivity analyses.

Conduct future risk and uncertainty analyses consistent with best practices, and develop and document plans to address risks.

Validate the original cost estimate by preparing a second, independent cost estimate.

Publicly report the ACA IT investment as a major investment on the OMB IT Dashboard and in the fiscal year 2015 budget request, including standard cost, schedule, and performance information.

Report key data on major IT investments in one consolidated document, such as the congressional justification, in consultation with congressional stakeholders.

We provided a draft of this report to the Acting Commissioner of the IRS for his review and comment. We received an email on September 18, 2013, from the Chief Financial Officer, which included an attachment providing the agency's comments on our recommendations. This attachment is reprinted in appendix VI. In response to our draft report, the Chief Financial Officer stated that IRS agreed with three of our four recommendations and identified actions planned and taken to address them. For example, IRS agreed to include additional data on its major IT investments in the congressional justification, such as actual obligations for the investment to date. IRS agreed with the majority of the actions associated with our fourth recommendation on improving the accuracy and credibility of future updates of the PPACA cost estimate. IRS disagreed with two actions contained in our fourth recommendation. Specifically, IRS disagreed with our recommended action to use earned value management, a process that would enable IRS to continuously update the cost estimate to reflect actual costs. IRS stated that earned value management is not part of its current program management processes because the cost and burden of using earned value management outweigh the value added. We disagree with IRS's view as to the benefits of earned value management. 
GAO has found that programs that establish good earned value management systems realize better project management decision making and fewer cost and schedule overruns. While there is an upfront investment to establish the earned value management system, there are long-term benefits that go beyond the individual project, such as fostering accountability, improving insight into program performance, and providing objective information for managing the program. Further, the OMB Capital Programming Guide states that earned value management is a critical component of risk management for major investments. IRS also disagreed with our recommended action to validate the PPACA cost estimate by preparing a second, independent cost estimate. IRS stated that the cost and burden of having an external organization produce a second, independent cost estimate of the same scope would outweigh the value added. In addition, IRS stated that an external organization would lack the knowledge necessary to produce a reasonable estimate without relying heavily on the IRS group that produced the first estimate. We disagree with IRS’s view. As reflected in the GAO Cost Guide, GAO has found that producing a second, independent cost estimate is considered one of the best and most reliable methods for validating a cost estimate. Without a second, independent cost estimate, decision makers lack insight into a program’s potential costs and may lack confidence that the estimate is reasonable and costs described in the first estimate can be achieved. While preparing a second cost estimate that is independent of the first cost estimate requires additional resources, it is generally based on the same detailed technical and procurement information. Furthermore, an independent cost estimate does not need to be conducted by an external organization, but the estimation team should be outside the acquisition chain and have nothing at stake with regard to program outcome or funding decisions. 
The major benefit of a second, independent cost estimate is that it provides an objective and unbiased assessment of whether the original estimate can be achieved, reducing the risk that the program will proceed underfunded or that costs will exceed its value. We plan to send copies of this report to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We are also sending copies to the Acting Commissioner of Internal Revenue, the Secretary of the Treasury, and the Chairman of the IRS Oversight Board. Copies are also available at no charge on the GAO web site at http://www.gao.gov. If you or your staffs have further questions or wish to discuss the material in this report further, please contact me at (202) 512-9110 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. We were asked to review the President's fiscal year 2014 budget request for the Internal Revenue Service (IRS). 
The objectives of this report were to (1) describe IRS’s capacity to report fiscal year 2012 obligations and full-time equivalent (FTE) data for program activities, organizational entities, and other efforts of interest; (2) assess IRS’s process and the type of information used to prioritize and select new program initiatives; (3) describe proposed base budget savings; (4) describe IRS’s new projected return on investment (ROI) methodologies; (5) evaluate steps IRS took to improve the cost estimate for the Patient Protection and Affordable Care Act (PPACA) in accordance with GAO’s Cost Estimating and Assessment Guide and determine the extent to which IRS is transparently reporting on the Affordable Care Act (ACA) information technology (IT) investment; (6) summarize IRS’s major IT investments and assess the type of information available in the congressional justification; and (7) describe IRS’s progress in implementing our prior budget-related recommendations. To describe IRS’s capacity to report fiscal year 2012 obligations and FTE data, we reviewed documentation related to IRS budget formulation and budget execution, including IRS’s Financial Management Codes Handbook, IRS’s organizational chart, and its fiscal year 2014 congressional budget justification. We also interviewed knowledgeable officials in IRS’s Corporate Budget Office. We selected eight program activities, organizational entities, and efforts of interest for analysis: (1) Appeals, (2) Identity Theft, (3) International Exam and Collections, (4) Merchant Card and Basis Matching and the related Information Reporting and Document Matching (IRDM) IT system, (5) Offshore Voluntary Disclosure Program, (6) Online Services, (7) Telephone and Correspondence Services, and (8) Wage and Investment. 
We selected this list based on the following criteria: whether it was included in the proposed fiscal year 2014 budget initiatives, whether it was the topic of prior GAO work, and whether the selected list (as a whole) spans most of IRS’s appropriation accounts and encompasses a range of methods by which the data can be obtained from IRS’s financial management system. To determine IRS’s capacity to compile fiscal year 2012 obligations data for these eight items, we interviewed knowledgeable officials in IRS’s Corporate Budget Office. From the eight selected program activities, organizational entities, and other efforts of interest, we selected two illustrative examples to report fiscal year 2012 obligations data and to show how IRS’s budget formulation and budget execution structures interact: Appeals, and Wage and Investment. We selected these two examples largely to demonstrate the differences in structures. While they are both fund centers in IRS’s budget execution structure that manage and distribute funds, Appeals is also a program activity in IRS’s budget formulation structure and shows a relatively simple relationship between these structures. In contrast, Wage and Investment is a complex fund center that receives funding from many segments of IRS’s budget formulation structure. We assessed the reliability of IRS’s fiscal year 2012 obligations data by reviewing relevant IRS documents, including the Financial Management Codes Handbook, as well as prior work we conducted that assesses IRS’s financial statements. We believe that the data are sufficiently reliable for our purposes. We identified some limitations, but they do not affect our illustrative use of the data and are discussed in our report. To assess IRS’s process and the type of information used to prioritize and select new program initiatives, we reviewed IRS documents related to its pre-selection budget formulation process for fiscal year 2014. 
Documents included guidance from the IRS Corporate Budget Office and pre-selection templates submitted for review by the taxpayer services and enforcement business units to senior leadership between November 2011 and May 2012. We analyzed these submissions for completeness against criteria for budget formulation outlined in OMB Circular A-94 (Office of Management and Budget, Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs, OMB Circular No. A-94 (Revised October 1992)). We also interviewed IRS officials familiar with the pre-selection process. To describe proposed base budget savings, we reviewed documentation of planned efficiencies, such as human capital, space optimization, and IT infrastructure summaries. To describe IRS's new projected ROI methodologies (of revenue protection and revenue enhancement) included in the fiscal year 2014 budget request, we discussed the difference between the new methodologies and IRS's other projected ROI calculations with officials in IRS's Office of Compliance Analytics and Corporate Budget Office. We also reviewed a related briefing developed by the Office of Compliance Analytics that summarizes the extent to which the use of non-transcribed electronic data impacts the classification of cases. To evaluate steps IRS took to improve the PPACA cost estimate, we compared IRS's updated PPACA cost estimate, completed in December 2012, with the characteristics of a high-quality cost estimate identified in the GAO Cost Estimating and Assessment Guide (Cost Guide). The Cost Guide identifies four characteristics of a high-quality cost estimate: the estimate should be (1) comprehensive, (2) well documented, (3) accurate, and (4) credible. We calculated the assessment rating of each criterion within the four characteristics by assigning each an individual assessment rating as follows: does not meet = 1, minimally meets = 2, partially meets = 3, substantially meets = 4, and meets = 5. We then averaged the individual practice scores to determine the overall rating. 
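The averaging described above is simple arithmetic; the following is a minimal Python sketch of the methodology (the example ratings at the end are hypothetical, not actual IRS assessment data):

```python
# Map each qualitative assessment rating to its numeric score, as described
# in the methodology: does not meet = 1 ... meets = 5.
RATING_SCORES = {
    "does not meet": 1,
    "minimally meets": 2,
    "partially meets": 3,
    "substantially meets": 4,
    "meets": 5,
}

def overall_rating(criterion_ratings):
    """Average the individual criterion scores to determine the overall
    rating for one characteristic (comprehensive, well documented,
    accurate, or credible)."""
    scores = [RATING_SCORES[r] for r in criterion_ratings]
    return sum(scores) / len(scores)

# Hypothetical example: one characteristic assessed against four criteria.
print(overall_rating(["meets", "substantially meets",
                      "partially meets", "meets"]))  # 4.25
```

The overall rating is thus a plain mean of the individual scores; a characteristic rated "meets" on every criterion would score 5.0.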
We shared our preliminary analysis of the updated PPACA cost estimate with program officials. When warranted, we updated our analyses based on the agency's response and additional documentation provided to us. See GAO, Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs (Supersedes GAO-07-1134SP), GAO-09-3SP (Washington, D.C.: Mar. 2, 2009). To perform our assessment, we used relevant criteria on reporting capital investment information, including the Clinger-Cohen Act of 1996 and the OMB Capital Programming Guide. To summarize IRS's major IT investments (defined by Treasury as investments costing $10 million in either the current year or budget year, or $50 million over the 5-year period extending from the prior year through the budget year +2) and to assess the type of information available in the congressional justification, we reviewed fiscal year 2013 and 2014 congressional budget justifications, Exhibit 300s—capital asset summaries—prepared by IRS for major IT investments, as well as IRS and OMB guidance on preparing those documents, such as the Office of Management and Budget Guidance on Exhibits 53 and 300—Information Technology and E-Government. We identified the type of information that is available across budget documents, in particular key information such as cost to date, full-time equivalents, life-cycle costs, and start and end dates. To describe IRS's progress in implementing our prior budget-related recommendations, we obtained information from various IRS officials and reviewed relevant documentation, including the fiscal year 2014 congressional budget justification and IRS's Joint Audit Management Enterprise System (JAMES) reports, which track IRS actions taken to implement GAO recommendations. We then determined which recommendations were implemented. We conducted our work in Washington, D.C., where key IRS officials involved with the budget and IT systems are located. 
We conducted this performance audit from October 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Figures 9 through 12 outline our assessment of the extent to which IRS's updated, December 2012 Patient Protection and Affordable Care Act (PPACA) cost estimate meets best practices. This information is repeated in table 7, following the graphics. [Table columns: total obligations through fiscal year 2012 (in millions); life-cycle costs (in millions)] In addition to the contact named above, Libby Mixon, Assistant Director; Remmie Arnold, Amy Bowser, Jennifer Echard, Emile Ettedgui, Mary Evans, Chuck Fox, Paul Middleton, Donna L. Miller, Edward Nannenhorn, Karen O'Conor, Sabine Paul, Laurel Plume, Karen Richey, Erinn L. Sauer, Cynthia Saunders, and Robert Yetvin made major contributions to this report.
The financing of the federal government depends largely on IRS's ability to administer the tax laws, which includes providing service to taxpayers and enforcing the law to ensure everyone pays the taxes they owe. For fiscal year 2014, the President requested $12.9 billion for IRS, an increase of 9 percent over fiscal year 2012 actual levels. Because of the size of IRS's budget and the importance of its programs, GAO was asked to review the fiscal year 2014 budget request. In April and May 2013, GAO reported preliminary observations on IRS's budget. Among other things, this report assesses how IRS prioritizes new initiatives; steps IRS has taken to improve the PPACA cost estimate and the reporting transparency of the related IT investment; and the type of information available in the budget justification about major IT systems. To address these objectives, GAO reviewed the fiscal year 2014 budget justification, compared the updated PPACA cost estimate to GAO's Cost Estimating and Assessment Guide, and interviewed IRS Corporate Budget officials. For the fiscal year 2014 budget formulation process, the Internal Revenue Service (IRS) implemented a new process that uses templates to help screen, prioritize, and select new initiatives before detailed business cases are developed to support funding requests. The template information that GAO reviewed varied in detail and scope; for some, IRS guidance may have contributed to incomplete submissions to senior leadership. According to Office of Management and Budget Circular A-94, in order to evaluate and compare funding initiatives, decision-makers need to be aware of benefits, costs, and strategies related to achieving program goals. By improving guidance on the type of data to include, IRS could help ensure the templates are fully completed. IRS significantly improved its Patient Protection and Affordable Care Act (PPACA) cost estimate. 
In particular, the December 2012 estimate is more comprehensive, reflecting the full life-cycle cost of the program--estimated at $1.89 billion for fiscal years 2010 through 2026. A few areas of improvement remain, primarily regarding the accuracy and credibility of the cost estimate. For example, IRS showed how the December 2012 estimate differed from the previous estimate, but did not explain the factors that resulted in the variances. In addition, IRS did not obtain a second cost estimate that could be used to assess the reasonableness of the $1.89 billion estimated program costs. Although the information technology (IT) systems for PPACA met dollar thresholds (as outlined in the Department of the Treasury's guidance) as a major investment for public reporting, IRS did not report them as such. Officials told GAO they did not have time to prepare the information for the fiscal year 2014 budget justification, but plan to do so for fiscal year 2015. Until IRS publicly reports the IT systems for PPACA as a major investment, transparency about these systems' implementation and administration is limited. Although IRS included new and useful information on its major IT investments in the budget justification (such as life-cycle costs), other important information (such as the start date and percent of life-cycle costs obligated) is reported elsewhere or must be calculated. IRS officials said they could consolidate this information for ease of review. Consolidating key budget and performance data would ensure Congress has comprehensive, easily accessible information on major IT investments to guide decisions. GAO recommends that IRS improve budget formulation guidance for new initiatives; improve the accuracy and credibility of future updates to the PPACA cost estimate as well as report the related IT investment publicly; and consolidate major IT investment reporting. 
IRS agreed with three of GAO's four recommendations and agreed with the majority of the actions associated with improving the accuracy and credibility of the PPACA cost estimate.
The Coast Guard began a recapitalization effort in the late 1990s to modernize a significant portion of its entire surface and aviation fleets by rebuilding or replacing assets. This effort was formerly known as Deepwater, and included the FRC and NSC programs, among others. In 2006, the Coast Guard acknowledged that it had relied too heavily on contractors and, citing cost increases, took over the role of lead systems integrator. The Coast Guard reorganized the programs that comprised Deepwater and since 2012 has referred to them broadly as the Coast Guard's recapitalization. The FRC and the NSC are two of the newest assets in the Coast Guard's fleet. First delivered in 2012 and 2008, respectively, the cutters were designed to provide additional capability beyond that possessed by their predecessors. The FRC is intended to replace the current fleet of 49 110-foot Island Class Patrol Boats—which were first built in the 1980s—with 58 FRCs that provide the Coast Guard with additional capabilities, such as advanced intelligence, surveillance, and reconnaissance technology, and cutter boat deployment. The NSC is intended to replace the 12 High Endurance Cutters—which were first built in the 1960s—with a smaller fleet of NSCs that provides the Coast Guard with additional capabilities, such as the ability to collect, analyze, and transmit classified information as well as to carry, launch, and recover unmanned aerial vehicles, among others. The Coast Guard originally planned eight NSCs to fill the capability gap left by retiring the High Endurance Cutter fleet, but Congress directed, in December 2015, that of the funds provided by the Consolidated Appropriations Act, 2016, not less than $640 million be immediately available and allotted to contract for the production of the ninth NSC. Appendix II provides the current delivery schedule of FRCs and NSCs. 
The FRCs conduct operations in coastal and high seas conditions up to 200 nautical miles from the coast, enabling them to respond quickly to emerging situations. These cutters are expected to spend no more than 185 days away from their homeport and conduct 2,500 operational hours each year, with each patrol lasting roughly 5 to 7 days. Due to its larger crew—126 for the NSC compared to 24 for the FRC—and size—418 feet for the NSC compared to 154 feet for the FRC—the NSC is able to patrol worldwide and conduct extended operations beyond the capabilities of the FRC. The NSCs are expected to conduct operations at least 50 nautical miles from shore, including in extreme climates, such as the Arctic. Coast Guard standards dictate that the NSCs will spend no more than 210 days away from their homeport and conduct 3,780 operational hours each year, with each patrol lasting longer than 60 days. Appendix III provides more detail on the FRC's and NSC's operational capabilities. The Coast Guard defines the goal of cutter maintenance as ensuring optimal readiness to perform missions at the lowest cost over the asset's service life. As with any ship, maintenance is a major portion of the total ownership costs for Coast Guard cutters. Unnecessary maintenance increases ownership costs and limits a cutter's availability to conduct missions, decreasing its readiness. In order to optimize the cutter fleet's mission availability and decrease ownership costs, the Coast Guard employs Reliability Centered Maintenance (RCM), a process used to determine maintenance needs and ensure that maintenance is applicable and effective. RCM is at the center of the Coast Guard's maintenance philosophy and guides maintenance decisions, determining scheduling and resource requirements. 
The RCM analysis, which is initiated during the program’s acquisition phase, is unique to each cutter class and is used to determine preventative maintenance requirements by identifying the likely functional failures of hardware and the failures’ impacts. Maintenance procedure cards, which provide detailed instructions on how to complete each maintenance task, including the expected amount of time the task will take to complete as well as the needed tools and parts, are one outcome of the RCM analysis. The Coast Guard employs a bi-level maintenance strategy to meet the preventative maintenance requirements derived from the RCM analysis. Tasks are separated into either organizational-level maintenance or depot-level maintenance. Organizational-level maintenance: Maintenance that is performed by the operating units—i.e., the cutter crews. The Coast Guard assigns maintenance requirements at the operating unit level only if it has been determined that the task is within the ability of the crew to complete, taking into account additional demands such as training, and the availability of tools onboard to complete the assigned task. Depot-level maintenance: Maintenance that is beyond the capability of the crew, including changes and modifications to the cutters deemed too extensive to be performed by the crew. The Surface Forces Logistics Center (SFLC), in Baltimore, MD, is responsible for completing the RCM analysis and determining the responsibility of maintenance tasks for the boats and cutters, including the FRC and NSC, in sustainment. In addition, SFLC is responsible for managing the supply of spare parts and developing maintenance schedules for each fleet. SFLC is divided into five distinct product lines, each of which acts as the point of contact for various maintenance needs of the Coast Guard fleets. Figure 1 shows the organizational structure of SFLC. 
The product lines were created after SFLC was established in 2009 to optimize the technical, logistical, and depot-level maintenance support for surface assets. Each product line is intended to provide complete naval engineering and logistics support for all assigned surface assets. In order to plan for the individual maintenance needs of each cutter, SFLC generates and maintains a 5-year maintenance plan for each asset class depicting the major depot-level maintenance tasks. Each year, SFLC updates each cutter's 5-year maintenance schedule to formulate short- and long-term budgets, project shortfalls, and interface with operational commanders for scheduling purposes. Appendix IV shows the timeline of scheduled major maintenance events for the NSC and FRC. The Coast Guard also performs unplanned maintenance, which occurs largely as a result of equipment failures and is corrective in nature. Unplanned maintenance is performed as necessary by the crew, if possible, or dockside by shore technicians; if it is beyond the capabilities of the crew and dockside technicians or requires the cutter to be taken out of the water for repair, the cutter undergoes an emergency dockside or drydock event. In addition to established maintenance procedures, the Coast Guard has several processes by which it can address problematic equipment systems. One such process is the Engineering Change Process, or design change. This process is governed by a process guide that provides detailed instructions on how the Coast Guard should initiate and implement an engineering design change. The Engineering Change Process is the vehicle for implementing changes to assets across the fleet to improve operational capabilities and increase supportability. This process is intended to facilitate configuration control of systems and equipment on all surface assets in an effort to reduce total operating cost over the life of the asset class. 
The Engineering Change Process includes numerous reviews of relevant product designs, budgets, and procedures and facilitates the delivery of engineering and/or logistics actions to the field. The Coast Guard uses the Electronic Asset Logbook (EAL) program to record all maintenance and operational activities for surface assets, such as the FRC and NSC, among others. EAL records and tracks equipment failures and mission capable statuses, and is updated in real time by cutter crews to allow for total asset visibility across a cutter class. The EAL process is governed by a process guide that provides detailed instructions to product lines and cutter crews to ensure the EAL system is used uniformly across the fleets. The EAL program provides the product lines insight into the percentage of time each cutter is capable of conducting missions by using five ratings: fully mission capable, partially mission capable, not mission capable due to maintenance, not mission capable due to supply, and not mission capable due to depot maintenance. Each cutter class has an acceptable target range for the percentage of time that the Coast Guard expects the asset to be mission capable. The target ranges are determined during the acquisition phase for each fleet using the Coast Guard's employment standards, which dictate the limits for Days Away From Home Port and operational hours as well as the minimum amount of depot-level maintenance time needed per year. Figure 2 depicts a notional visualization of how asset status data are used to assess the health of the fleet from an engineering perspective. The FRCs are expected to be mission capable 48 to 60 percent of the time, while the NSCs are expected to be mission capable 49 to 61 percent of the time. In order to be considered fully mission capable, the cutter must be able to support all of its assigned missions. 
In order to be considered partially mission capable, the cutter must be able to effectively execute some of its assigned missions, but be unable to fully respond to at least one assigned mission due to an equipment system failure. Taken together, the fully mission capable status and the partially mission capable statuses comprise the cutter’s mission capable rate. Influencing the mission capable rates are the three “not mission capable” statuses. A cutter is considered not mission capable due to maintenance if it has an equipment failure that requires the cutter to return to port for maintenance during the period that the cutter was originally scheduled to be conducting operations. Once the source of the equipment failure has been determined and if the Coast Guard has to wait for a spare part, the cutter will be placed into the not mission capable due to supply status until the spare part is received and the correction can be implemented. The cutter will be placed in not mission capable due to depot-level maintenance status if the cutter is unavailable to conduct operations due to planned depot maintenance (i.e., non crew conducted maintenance). This can include conducting both anticipated and unanticipated maintenance. Warranties and guarantees are contract mechanisms to address the correction of shipbuilder-responsible defects, but they differ in key ways. Warranty provisions are outlined in the Federal Acquisition Regulation and were used for the FRC, whereas the Navy typically uses guaranty provisions, as did the Coast Guard for the NSC contract. We reported on the differences of the FRC’s warranty and the NSC’s guaranty in March 2016 and found that the FRC’s warranty resulted in improved cost and quality by requiring the shipbuilder to pay to repair defects. In contrast, guarantees—such as that for the NSC—did not help improve the cost or quality outcomes of shipbuilding and the government generally paid the shipbuilder to correct problems. 
Table 1 outlines the differences between a warranty and a guaranty. Since the FRC and NSC began using their current mission capable metrics a few years ago, both have met their minimum targets on average. However, from October 2015 to September 2016, both cutters fell below their minimum targets due to depot-level maintenance. During this time frame, the FRC program began a phased drydock maintenance period for the first 13 cutters, which is primarily intended to address problems with equipment systems still covered by its 12-month warranty. From January 2016 to November 2019, at least one FRC will be completely unavailable to conduct missions at any given time. For the NSCs, an approximate 2-year post-delivery maintenance period is affecting mission capable rates. During this period, the NSCs undergo a series of depot-level maintenance events and system upgrades to bring each cutter to full operational capability, but these events limit the cutters' ability to conduct missions. Since the Coast Guard will be receiving NSCs until at least the end of 2020, these post-delivery maintenance periods are expected to affect the NSC's mission capable rates until at least the end of 2022. In addition, both the FRC and NSC have experienced numerous equipment problems that have hindered operations, but these have not substantially lowered the average fleet mission capable rates. Since March 2012, when the Coast Guard began tracking this metric for the FRC, the cutters have met their minimum mission capable target rate (48 percent), on average. However, from October 2015 to September 2016, the cutters demonstrated an average mission capable rate below the minimum target. According to Coast Guard officials, this is primarily because of an increase in the amount of time the first 13 cutters are spending in depot maintenance for warranty drydock work, which has reduced the FRC's ability to conduct operations. 
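As described earlier, the fully and partially mission capable statuses together make up the mission capable rate, while the three "not mission capable" statuses account for the rest of a cutter's time. A minimal sketch of that roll-up (the status labels follow the report; the hours are invented for illustration, not actual FRC data):

```python
def mission_capable_rate(status_hours):
    """Fully and partially mission capable time together make up the
    mission capable rate; the three 'not mission capable' statuses
    (maintenance, supply, and depot) account for the remaining time."""
    total = sum(status_hours.values())
    capable = (status_hours["fully mission capable"]
               + status_hours["partially mission capable"])
    return 100.0 * capable / total

# Hypothetical month of status data for one cutter, in hours.
hours = {
    "fully mission capable": 450,
    "partially mission capable": 30,
    "not mission capable - maintenance": 70,
    "not mission capable - supply": 30,
    "not mission capable - depot": 420,
}
print(round(mission_capable_rate(hours), 1))  # 48.0 -- at the FRC minimum target
```

In this invented example, heavy depot time pulls the cutter down to exactly the FRC's 48 percent floor; any additional drydock hours would push it below the target range.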
Table 2 shows the asset status for the FRCs from March 2012 to September 2016. According to Coast Guard data, from March 2012 to September 2016, the FRC fleet had a cumulative average mission capable rating of 49.3 percent, just above the minimum goal of 48 percent. See figure 3. A cutter is deemed mission capable if it operates in either a fully mission capable or partially mission capable status. The FRCs have operated in a partially mission capable status 2.3 percent of the time since March 2012. This is at least partially due to the short duration of the FRC’s patrol schedule of roughly 5 to 7 days at sea and the capabilities of the cutter. Additionally, the smaller size of the FRCs as compared to the NSCs limits both the crew size and capabilities aboard the cutter making it less likely that the FRC will be able to meet the criteria for partially mission capable, which is the ability to fulfill at least one of its designated missions. For example, the FRC holds one cutter boat, and Coast Guard safe-to-sail equipment requirements dictate that this cutter boat must be operational or the FRC is deemed not mission capable. While the cumulative average monthly mission capable rate for the FRC is above the target since March 2012, when we eliminated prior years and analyzed the FRC’s cumulative average monthly mission capable rate over a more recent time period—October 2015 to September 2016—we found that the average mission capable rate was 42.8 percent, which is below its minimum target (48 percent). This lower rate can be attributed to increased time spent in depot-level maintenance. Figure 4 shows the monthly mission capable rates for the FRC from October 2015 to September 2016 as well as the Coast Guard’s target range. According to Coast Guard officials, the decrease in monthly mission capable rates below the minimum target is primarily because of a phased warranty repair drydock period that was not initially anticipated. 
These warranty repair drydocks, affecting cutters 1 through 13, began in January 2016 and are scheduled to conclude in November 2019. The average drydock period will last approximately 15 weeks, with at least one FRC not mission capable due to depot-level maintenance at all times from January 2016 to November 2019. Coast Guard officials stated that while these warranty repair drydock periods are scheduled in advance, the repairs were not anticipated when the planned major maintenance schedule was first established for the fleet during the acquisition process. Instead, these drydocks were triggered by continuing problems with structural components and equipment installed during production, including unreliable connectors that provide the structural integrity of the cutter and continued failures with the main diesel engine. The FRCs will undergo repairs on systems that are still covered by the FRC's warranty and remain the financial responsibility of the shipbuilder, Bollinger Shipyards. According to the FRC's contracting officer, Bollinger Shipyards and the FRC program decided to schedule these drydock periods in order to complete several warranty items at one time for each of the 13 cutters. Given that only a few FRCs have completed the warranty drydock to date, it is difficult to determine whether the overall fleet's mission capable rate will meet its target range once the drydocks are completed. Additionally, Coast Guard officials said that they negotiated an agreement with Bollinger Shipyards to allow the Coast Guard to conduct routine maintenance during these warranty repair drydocks at the Coast Guard's expense. This routine maintenance includes, for example, a main diesel engine overhaul that is scheduled to occur roughly every 6,000 operational hours. According to Coast Guard officials, the Coast Guard plans to complete this overhaul even though, as of July 2016, the engines have yet to be accepted as contractually compliant. 
Coast Guard officials explained that routine preventative maintenance on warranty covered systems, such as the main diesel engine overhaul, is the responsibility of the Coast Guard so as to not void the FRC’s warranty. The time spent correcting equipment failures and awaiting spare parts has been minimal and, unlike depot maintenance, has not significantly affected the FRC’s mission capable rate. The FRC’s not mission capable rates due to maintenance (equipment failures) and supply have been below the Coast Guard’s target of no more than 12 percent on average since March 2012. Figure 5 shows the average rates for these not mission capable rates from March 2012 to September 2016 as well as a breakout of the last year of this time frame. The Coast Guard has managed its not mission capable rates due to maintenance (equipment failures) by utilizing the RCM approach, which includes a failure analysis used to develop the required maintenance list. According to Coast Guard officials, the required maintenance list is updated if directed by the results of a maintenance effectiveness review, which is completed on a rolling basis for each equipment system aboard the cutter. From the required maintenance list, maintenance procedure cards are developed that ensure maintenance is conducted uniformly across the fleet. Regarding not mission capable for supply, Coast Guard officials stated that the industry standard is 5 percent or less. In order to meet the industry standards, Coast Guard officials report using a complex algorithm to determine the appropriate level of inventory for each spare part, which takes into account failure rates, time required to obtain the part, and Navy historical data for similar fleets. 
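The report names the inputs to the Coast Guard's sparing algorithm (failure rates, time required to obtain the part, and historical data) but not its form. A generic reorder-point sketch built from those inputs might look like the following; this is a textbook inventory formula offered purely for illustration, not the Coast Guard's actual algorithm, and the part data are invented:

```python
import math

def stock_level(annual_failures, lead_time_days, safety_factor=1.0):
    """Generic reorder-point sketch: expected demand during the procurement
    lead time, plus a safety margin proportional to the standard deviation
    of demand (assuming Poisson-distributed failures). Illustrative only."""
    demand_during_lead = annual_failures * lead_time_days / 365.0
    safety_stock = safety_factor * math.sqrt(demand_during_lead)
    return math.ceil(demand_during_lead + safety_stock)

# Hypothetical part: 12 failures per year fleet-wide, 90-day lead time.
print(stock_level(12, 90))  # expected demand ~3.0 plus safety stock -> 5
```

The key behavior any such calculation captures is that longer procurement lead times and higher failure rates both drive the required inventory level up, which is consistent with the inputs the report describes.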
In addition to this algorithm, the Coast Guard ensured that the parts needed to complete scheduled maintenance were available to the maintainers by packaging all necessary tools and parts for an upcoming scheduled maintenance event on a particular FRC from the central inventory warehouse at SFLC. These packages, according to Coast Guard officials, were then shipped to the home port of the cutter in advance of the maintenance. Coast Guard officials expect the percentage of time the FRC fleet spends not mission capable due to supply to increase slightly once the warranty expires, as the Coast Guard will have to rely on the commercial market to obtain parts rather than having them provided by Bollinger Shipyards. Coast Guard officials noted, however, that they do not expect this increase to exceed the industry standard of 5 percent. The three equipment systems with the most problems from 2014 to 2016 resulted in about 827 combined lost operational days and partially mission capable days for the FRC. These three systems are the main diesel engine, the C4ISR system, and the ventilation system. While the not mission capable rates due to maintenance (from equipment failures) and supply are important metrics in understanding the effectiveness of the Coast Guard’s maintenance planning, they alone do not convey the complete health of the FRC fleet. As such, the Coast Guard tracks the equipment systems that result in lost operational days for the cutters. Failures associated with the main diesel engine have been particularly problematic. The engine is still covered by the warranty clause for each FRC, but its problems have resulted in roughly 355 days spent not mission capable due to maintenance. The FRC’s contracting officer stated that as of October 2016, all 18 operational FRCs have undergone various corrective repairs on their main diesel engines, including replacing the engines on 6 of the cutters. 
While Coast Guard officials report that Bollinger Shipyards has resolved many of the concerns surrounding the main diesel engines, design changes to address unresolved problems are ongoing. One such problem is the harmful buildup of soot in the exhaust while traveling at low speed. Once an acceptable solution has been determined, the new equipment will be retrofitted onto the other FRCs at the shipbuilder’s expense. Coast Guard officials have not identified an anticipated time frame for a solution. In addition to tracking the systems that resulted in lost operational days, the Coast Guard also solicits feedback annually from each cutter’s crew in an engineering report to identify trends in equipment systems that are hindering the cutters while underway. The engineering reports provide a forum for the cutter’s commanding officer to provide his or her opinion of the cutter’s top equipment issues, an overall summary of the cutter’s structural condition, and the top human performance problems experienced by the cutter over the preceding 12 months. Officials at the Patrol Boat Product Line review all of the FRC engineering reports for the fleet to identify trends and to take corrective action where necessary. The Patrol Boat Product Line then consolidates the top five equipment concerns noted by the FRC commanding officers and provides a response explaining the corrective actions to be taken or the rationale for inaction. Our analysis of FRC engineering reports from 2012 through 2015 found that the three equipment concerns that occurred most frequently were a lack of maintenance procedure cards, issues with the Machinery Control and Monitoring System, and paint and corrosion on board the cutters. 
For example, the engineering reports mentioned that inaccurate or incomplete maintenance procedure cards interfered with the crew’s ability to complete maintenance, as these cards provide detailed instructions on how to complete maintenance activities for each of the equipment systems. Coast Guard officials noted that inaccuracies found in maintenance procedure cards are largely due to incorrect part numbers for pieces of equipment, as these numbers can change, for example, due to obsolescence. According to Coast Guard officials, as of July 2016, 89 percent of the maintenance procedure cards for the FRC had been published, and the majority of those unpublished are conditional cards that cannot be published because the triggering condition, most likely a system failure, has not yet occurred to allow the cards to be validated. Similar to the FRCs, the NSCs on average met the minimum of their mission capable target range (49 percent) from November 2013, when the Coast Guard began tracking this metric, through September 2016. However, over the last 12 months of this time frame, from October 2015 to September 2016, the cutters demonstrated an average mission capable rate below the minimum target. This is primarily due to the increase in depot maintenance associated with post shakedown availabilities on the newly delivered NSCs (Hamilton and James). See table 3. According to Coast Guard data, from November 2013 until September 2016, the NSC fleet had a cumulative average mission capable rating of 54.2 percent, above the minimum goal of 49 percent. See figure 6. A cutter is deemed mission capable if it operates in either a fully mission capable or partially mission capable status. The NSCs have operated in a partially mission capable status 22.4 percent of the time since November 2013, a higher share than the FRC’s. This is at least partially due to the complexity of the NSC’s mission set and operational schedule. 
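The rate computation implied by the definition above (a cutter counts as mission capable whenever it is fully or partially mission capable) can be sketched as a simple aggregation over daily status records. The status labels used here ('FMC', 'PMC', and the 'NMC-' prefixes) are hypothetical shorthand for illustration, not the Coast Guard's actual data format.

```python
def mission_capable_rate(daily_statuses):
    """Fraction of days a cutter was fully or partially mission capable.

    daily_statuses: list of strings such as 'FMC' (fully mission capable),
    'PMC' (partially mission capable), 'NMC-depot', 'NMC-maintenance',
    or 'NMC-supply' (the various not mission capable categories).
    """
    # Both fully and partially capable days count toward the rate
    capable_days = sum(1 for s in daily_statuses if s in ('FMC', 'PMC'))
    return capable_days / len(daily_statuses)
```

Under this counting rule, a cutter that spent long stretches partially mission capable, as the NSCs have, still contributes those days to the fleet's mission capable rate.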
For example, the NSC was designed to support 8 of the 11 Coast Guard statutory missions and, if even one of the eight missions cannot be performed, the NSC operates in a partially mission capable status. Additionally, the NSCs are scheduled for patrols lasting roughly 2 to 3 months, as opposed to the FRCs, which are scheduled for patrols lasting about 5 to 7 days, making it much more likely that the NSCs will be conducting missions in a partially mission capable status at some point during their lengthy patrols. The size of the NSC also allows additional capabilities to be carried during patrols. For example, the NSC is equipped with three cutter boats, and known failures of the dual-point davit crane launch system frequently render at least one of the three cutter boats inoperable, causing the NSC to become not fully mission capable. Rather than going not mission capable, however, the NSC becomes partially mission capable, as it can conduct operations with the remaining two cutter boats. While the cumulative average monthly mission capable rate for the NSC has been above the target since November 2013, when we eliminated prior years and analyzed the NSC’s cumulative average monthly mission capable rate over a more recent time period—October 2015 to September 2016—we found that the NSC’s average mission capable rate was 37.2 percent, below the 49 percent minimum of its target range. Figure 7 shows the monthly mission capable rates for the NSC from October 2015 to September 2016. From October 2015 to September 2016, the not mission capable rate due to depot maintenance was 60 percent. While both the FRC’s and the NSC’s inability to meet their mission capable targets is attributable to increased depot-level maintenance, the underlying causes differ. 
Unlike the FRC’s mission capable rates, which are influenced by the warranty repair drydock periods, the NSCs’ mission capable rates are influenced by the roughly 2-year post-delivery period, called the post shakedown availability, that is scheduled for each newly delivered NSC. During this time, the cutters undergo depot-level maintenance and other activities to bring them to full operational capability. Further, whereas the FRC’s warranty repair drydock periods were unanticipated, the NSC’s shakedown periods were planned during the acquisition phase. During this shakedown period, an NSC is rendered not mission capable due to depot-level maintenance for the majority of its time. For example, from January 2015 until September 2016, the NSC Hamilton spent 70.9 percent of its time in depot-level maintenance, and the NSC James spent 82.6 percent of its time in depot-level maintenance from September 2015 to September 2016. With only five NSCs in operation as of September 2016, having two cutters spend the majority of their time not mission capable due to depot-level maintenance is having a negative effect on the overall fleet’s mission capable rates. This effect will continue as the Coast Guard introduces new NSCs into the fleet, until the last cutter completes its 2-year post shakedown period—scheduled for 2022, as the ninth cutter is scheduled for delivery in 2020. While the first three NSCs achieved their mission capable rate targets on average from January 2014 to September 2016, it is uncertain whether the overall fleet mission capable rate will increase once all NSCs complete their post shakedown availabilities. The average time spent correcting equipment failures or waiting for supplies has been below the Coast Guard’s target of no more than 12 percent since November 2013. Figure 8 shows the average not mission capable rates due to maintenance (from equipment failures) and supply from November 2013 to September 2016, as well as the last 12 months of this time frame. 
From November 2013 to September 2016, the NSC fleet achieved an average not mission capable rate due to maintenance (equipment failures) of 2.1 percent. In the last year of this time frame, the average not mission capable rate due to maintenance (equipment failures) was 2.8 percent. The Coast Guard keeps this metric low by using its RCM analysis and by arranging for the NSC’s drydock periods to include preventative maintenance based on equipment failure risk. This enables the cutter to receive maintenance that potentially avoids equipment failures while conducting missions. According to the Coast Guard, this is accomplished by structuring the drydock so that equipment systems not believed to be in need of repair are included as optional items in the depot maintenance contract. These optional items can be exercised as needed at a previously negotiated fixed price. The NSC fleet has met the industry standard of less than 5 percent for not mission capable rates due to supply. The Coast Guard employs the same algorithm and pre-positioning of parts discussed above with regard to the FRC to meet both its internal and industry standards for not mission capable due to supply rates. The three equipment systems with the most problems from 2014 to 2016 resulted in 993 combined lost operational days and partially mission capable days for the NSC over this period. These systems are the cutter boat launch and recovery system, the ship service diesel generator, and the main diesel engine. Unlike under the FRC’s warranty, the Coast Guard is required to pay a portion of the cost for equipment problems corrected under the NSC’s guaranty. Nearly all of these lost operational days resulted from the NSCs operating in a partially mission capable status due to the equipment problems. 
The cutter boat launch and recovery system, which includes the gantry crane and dual-point davit, has rendered the fleet partially mission capable for 278 days from 2014 to 2016. Across the fleet, overheating bearings in the ship service diesel generator have resulted in the crew’s inability to use one or more of the generators. According to Coast Guard policy, the NSC requires at least two (one specific generator and either of the remaining two) of its three generators to be operational in order to conduct missions. The cost to repair this issue is substantial, at roughly $100,000 per bearing. In addition, the NSCs continue to experience failures associated with the main diesel engines. The main diesel engines used by the NSCs are manufactured by MTU, the same manufacturer responsible for the main diesel engines employed on the FRCs, and have been problematic since the NSC fleet became operational. As we found in January 2016, the engines overheat in waters above 74 degrees Fahrenheit, conditions found in a portion of the NSC’s operating area given that the cutters are intended to be deployed worldwide. This can cause the cutters to operate 2 to 4 knots below their top speed of 28 knots, which could hinder them in successfully conducting operations. The NSC’s inability to achieve top speed in warm waters has inhibited the cutters’ ability to complete their regularly scheduled full power trials, which are periodic tests of the propulsion plant operated at maximum rated power. Figure 9 depicts the number of attempted and successful full power trials conducted by the NSCs from 2012 to 2015. The full power trial results advise operating and maintenance personnel of the cutter’s full power performance characteristics, and the results can provide the basis for maintenance activity. From 2012 to 2015, the operational NSCs conducted 7 full power trials out of 14 total possible tests. Of those 7 tests, 4 were considered successful. 
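The generator redundancy rule cited from Coast Guard policy above (one specific generator must be running, plus either of the remaining two) can be expressed as a simple availability check. The generator labels used here are hypothetical placeholders; the report does not identify which generator is the required one.

```python
def generators_sufficient(operational):
    """Return True if the NSC can conduct missions under the policy:
    one specific generator must be operational ('SSDG1' is a
    hypothetical label for it), plus at least one of the other two."""
    required = 'SSDG1'
    others = {'SSDG2', 'SSDG3'}
    return required in operational and len(others & set(operational)) >= 1
```

Under this rule, losing either of the two non-required generators still leaves the cutter mission capable, but losing the required generator, or both of the others, does not.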
In order for a full power trial to be considered successful, the cutter must complete the trial requirements, which include testing at a specific engine speed and minimum water depth, while not exceeding design pressures, temperatures, and other operating parameters. Performance issues or equipment failures with the propulsion system were listed as the most frequent cause for not conducting the full power trial or for unsuccessful tests. We previously recommended in January 2016 that the Department of Homeland Security (DHS) conduct an acquisition review board once the Coast Guard concludes a root cause analysis on both the main diesel engines and the generators. DHS concurred with this recommendation and plans to hold an acquisition review board no later than December 2017. In an attempt to resolve the continued propulsion plant problems, DHS issued an Acquisition Decision Memorandum to the Coast Guard in April 2016 directing, among other actions, that the Coast Guard conduct a propulsion study to develop a permanent solution to the main diesel engine failures by December 2017. The Coast Guard has a propulsion study underway with MTU America, Inc. that is on track to meet the Acquisition Decision Memorandum’s deadline, according to program officials. Pending the completion of the propulsion study and identification of corrective actions, the Coast Guard issued an engineering advisory to the NSCs in January 2016 in an effort to ensure the main diesel engine service life expectations are met, improve the engine’s operational reliability, maximize the engine’s performance, and minimize the engine’s maintenance costs. This advisory provides actionable steps the NSC crews can take while underway to achieve these goals, such as ensuring that the quality of the lubricating oil used is monitored regularly, using harbor mode—which engages only one of the two diesel engines when operating below 10 knots—and minimizing engine idle time. 
The Coast Guard is also in the process of developing prototype components to address issues with the main diesel engine in advance of the completion of the propulsion study. Similar to the FRC, the Coast Guard also solicits feedback annually from each NSC’s crew in an engineering report to identify trends in equipment systems that are hindering the cutters while underway. Officials at the Long Range Enforcer Product Line review all of the engineering reports for the NSC fleet to identify any trends and to take corrective action where necessary. This product line then provides a response to each of the commanding officers’ equipment concerns explaining the corrective actions to be taken or the rationale for inaction. The three equipment problems that occurred most frequently in the NSC’s engineering reports from 2012 to 2015 were the Auxiliary Seawater System (ASW), propulsion plant reliability, and the stern doors/gantry crane. For example, issues with the ASW included piping failures, ill-fitting valves, and corrosion. To address these issues, the Coast Guard has begun an engineering design change to reduce the flow rate throughout the ASW system, which it plans to implement in three phases due to the dispersed nature of the system throughout the cutter. During operations and testing, the FRC and NSC have experienced problems that require engineering design changes or repairs. Some of these design changes are being implemented to correct issues discovered during testing, while others are being conducted to make systems less maintenance-intensive or to increase the reliability of the systems. The FRC program has identified several design changes that it is installing on the cutters at the expense of the Coast Guard. The FRC’s warranty is also covering several repairs, which the FRC’s contracting officer stated avoided at least $77 million in potential maintenance costs. 
Replacing the FRC’s engines, which contributed to the cutter’s lost operational days, represented about $52 million of the costs avoided. The NSC program is also implementing several design changes, the cost of which is the Coast Guard’s responsibility. The estimated cost for the NSC design changes has increased $57.6 million since January 2016. In addition, at least three design changes on the NSC are being conducted post-delivery for all nine NSCs, meaning that the Coast Guard will have to spend time and money conducting maintenance on systems with known defects until the cutters are retrofitted. Further, the cost analysis supporting the decision to install these three design changes post-delivery was not documented, which creates a risk that the Coast Guard may not be choosing the most cost-effective path forward. The Coast Guard has encountered several issues on systems aboard the FRC that were discovered during operations and testing and require design changes and retrofits to correct. Some of these design changes are being conducted during the FRC’s ongoing warranty repair drydocks. According to Coast Guard documentation, the Coast Guard is responsible for paying for these design changes, as they are outside the scope of the program’s warranty. Table 4 shows the list of design changes for the FRC valued at $1 million or greater. The cost for some of these design changes, such as the structural enhancements, has already been incurred, while the costs of other design changes have only recently begun to be incurred. According to program officials, the structural enhancements were identified early in the production of the FRCs during a review of the standards that were used to build the cutter, which resulted in the Coast Guard increasing the strength of the hull by installing extra supports to ensure its safety. These enhancements were retrofitted on the first seven FRCs and were then included in production beginning with the eighth FRC. 
The rudder replacement is intended to reduce fuel consumption, reduce paint failures, and lengthen the part’s lifespan, which will reduce sustainment costs. This design change is planned to be retrofitted on the first 20 FRCs and then incorporated into production with subsequent cutters. In addition to the design changes listed above, repairs are being conducted on the FRC that are covered by the program’s warranty and are being performed at no additional cost to the Coast Guard. According to the FRC’s contracting officer, as of August 2016 the FRC’s warranty has avoided about $77 million in potential maintenance costs for the Coast Guard. Table 5 shows the systems on the FRC that have been repaired or replaced under the warranty at no additional cost to the Coast Guard. As mentioned previously, the main diesel engine was one of the systems that led to the most lost operational days for the FRC. This problem was first reported in the cutter’s Initial Operational Test and Evaluation report in July 2013. The Navy’s Commander of Operational Test and Evaluation Force, which serves as the Coast Guard’s independent test agent, found multiple problems with the main diesel engines that resulted in 275 lost operational hours during the test event. These problems have continued, with a total of 20 diesel engines being replaced as of August 2016. Most recently, the diesel engines were replaced on the Joseph Tezanos (the 18th FRC) and the Benjamin Dailey (the 23rd FRC) in May 2016 during production, indicating that the problems with the diesel engines are ongoing. Additionally, the problems with the diesel engines have varied widely, making it difficult and time-consuming for MTU and the Coast Guard to identify a definitive root cause that could solve the issues fleet-wide. Most issues have required fleet-wide retrofits, which can reduce the cutters’ mission capable rates due to the increased depot maintenance work required to install corrections. 
According to Coast Guard officials, the FRC contracting team holds monthly meetings with MTU to review the corrective actions and hold the manufacturer accountable. The program estimates that 60 percent of the current problems have been resolved, with retrofits complete. However, the variation in the issues experienced thus far makes it difficult for the Coast Guard to predict future failures, and these problems may continue to affect the operational availability of the FRC. The NSC is also undergoing several design changes for issues discovered during operations and testing, including those that require additional maintenance above what was expected. The total cost of these changes has increased $57.6 million from the amount we found in January 2016, for a total of almost $260 million. Program officials attributed the increase to the revised cost of structural enhancements on NSCs 1 and 2 based on actual contract values and the addition of the ninth NSC. Table 6 shows the list of design changes for the NSC estimated to cost at least $1 million. The design change with the largest cost increase is the structural enhancements, with a cost estimate of $70.6 million. This involves cutting into large sections of the hull in order to add reinforcing metal so that the first two NSCs are more likely to meet their full 30-year service life. This design change was incorporated into production on the third NSC. The original estimate of $38 million for this work was established in January 2014, and the Coast Guard awarded the contract in February 2016. The current contract value of over $70 million represents a cost increase of about 86 percent from the original estimate. According to NSC program officials, the large cost increase is due to a better understanding of the work required to complete this effort, such as the costs associated with getting the cutter into a drydock, removing sensitive equipment, and the technical complexity of the task. 
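The roughly 86 percent figure follows directly from the two dollar amounts cited, the January 2014 estimate and the February 2016 contract value:

```python
# Figures from the report, in $ millions
original_estimate = 38.0   # January 2014 estimate for structural enhancements
contract_value = 70.6      # February 2016 contract value

# Percentage increase over the original estimate
increase_pct = (contract_value - original_estimate) / original_estimate * 100
# (70.6 - 38.0) / 38.0 * 100 is approximately 85.8, i.e., about 86 percent
```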
They also stated that the contractor factored risk into its bid for completing this technically difficult work. In order to complete this work, the Coast Guard will place the first two NSCs in a not mission capable due to depot maintenance status for at least 11 months each to correct structural deficiencies. The Coast Guard plans to conduct additional design changes, such as the gantry crane and single-point davit replacements, during this period as well to save money. In order to minimize the cost increase for some of these design changes and to adhere to its production schedule, the Coast Guard plans to retain the original equipment during production for all NSCs and then conduct retrofits after accepting delivery of the cutters. This means that systems with known defects or deficiencies will be installed during production only to be replaced later, requiring maintenance on some of these systems until the retrofits are complete. Figure 10 shows selected systems that will require retrofits after all nine cutters are built. The following equipment will be included on the cutters currently being built or under contract and later removed or upgraded: Gantry Crane Replacement: The gantry crane was not designed for a maritime environment and is inadequately sealed to prevent water intrusion, leading to accelerated corrosion and the need for excessive repairs that are not considered sustainable over the NSC’s life cycle. Post-operational reports stated that the gantry crane requires hundreds of man-hours to keep it operational. This design change was initiated in January 2010 and, according to Coast Guard officials, the new crane system has been successfully prototyped on the Stratton and has been approved for fleet-wide replacement. However, all of the remaining NSCs to be produced will be built with the gantry crane installed and will then have it removed during their post-shakedown periods, when the new crane system will be installed. 
Problems with the gantry crane have plagued the NSC since the fleet began operations and are expected to continue until all cutters have their gantry cranes replaced, which is not planned to be completed for several years. The fleet-wide replacement of the gantry crane is anticipated to cost $34.9 million, which represents a cost increase of about 13 percent since January 2016. Single-Point Davit Replacement: The single-point davit, which is used to lift cutter boats for launch and recovery from the starboard side of the cutters, is unable to reliably lift the cutter boats in high seas. This has caused the crews of the NSC to express concern about the safety of the single-point davit system when operating in higher sea state conditions. All of the NSCs have been or will be delivered from the shipbuilder with the single-point davit system installed, despite this design change being initiated in March 2010. The replacement single-point davit will be installed during the remaining cutters’ post-shakedown periods and is expected to cost the Coast Guard $14.0 million, which includes a cost increase of about 12 percent since January 2016. Upgrades to Two Ammunition Hoists: According to Coast Guard officials, the ammunition hoists are difficult to use in their current configuration, and the crew of the NSC prefers to carry ammunition for the Close-in Weapon System by hand rather than use the hoist. As a result, the Coast Guard plans to modify the design of this equipment. Despite the Coast Guard initiating this design change in October 2012, the remaining NSCs, beginning with NSC 4, are being built without ammunition hoists and instead are delivered with a vacant space, which officials stated resulted in savings to the Coast Guard. The Coast Guard is installing the new ammunition hoists post-delivery on all NSCs. These changes are expected to cost the Coast Guard a total of $7.0 million, which represents a cost increase of about 11 percent since January 2016. 
Coast Guard officials stated that no formal analysis was developed or documented to determine whether a design change should be installed during production or post-delivery. Instead, they relied on the professional judgment of Coast Guard and shipyard officials to determine the most cost-efficient timing for installing design changes. Keeping the NSC delivery dates on schedule was one of the primary reasons officials gave for not installing the three design changes noted above during production on the NSCs that have not yet been delivered (NSCs 6-9). Given that the program has been aware of these three design changes for many years, the Coast Guard had an opportunity to install them during production instead of during the post-delivery period. According to Coast Guard officials, one hindrance to installing systems during production is that the shipyard would likely want to revalidate any engineering work on the design change that was conducted by Coast Guard officials, since the shipyard is responsible for delivering a ship that meets the specifications of the contract. This revalidation work could delay the production schedule and lead to cost increases. Officials also stated that it is more cost-effective to install these three design changes post-delivery for all NSCs, but they were unable to produce any documents supporting this claim. For example, officials explained that installing replacement systems for the gantry and single-point davit cranes during production would have cost an additional $7 million to $10 million per cutter, compared to their estimates of $4.5 million to $5 million to conduct these changes post-delivery. However, officials could not provide documentation supporting this analysis. Further, the Coast Guard’s Joint Surface Engineering Change Process Guide, which governs the design change process and provides instructions for how design changes should be planned and installed, does not require such cost analyses to be documented. 
Federal internal control standards state that significant decisions should be documented in a manner that allows the documentation to be available for examination. In addition, GAO best practices state that cost estimates should be documented for management to make an informed decision regarding a program’s affordability. With the Joint Surface Engineering Change Process Guide not requiring that a cost analysis be performed and documented to support decisions on when to install design changes, the Coast Guard cannot be certain that it is making the most cost-effective decision when determining the optimal time to install design changes. The FRC’s and NSC’s annual depot-level expenditures have generally been well below their estimated levels. Combined, these cutters have used $106.6 million less than estimated since 2012 and 2010, respectively. The Coast Guard has used this $106.6 million to pay for maintenance on legacy vessels and other assets. The Coast Guard uses its standard support level to estimate the annual depot-level maintenance needs of each asset. However, officials stated that standard support levels are not updated on a regular basis with information on actual expenditures, which can hinder the Coast Guard’s ability to determine its actual depot budget needs. The Coast Guard’s annual estimates for depot-level maintenance—known as standard support levels—consistently do not reflect actual expenditures for the FRC and NSC. Depot maintenance expenditures from 2012 to 2016 for the FRC and 2010 to 2016 for the NSC were $106.6 million less than estimated. Figure 11 shows the estimated and actual maintenance expenditures for the FRC and NSC since 2012 and 2010, respectively. 
Expenditures for the FRC program from 2012 to 2016 were $66.4 million less than expected for depot maintenance—85 percent under its standard support level—and expenditures for the NSC program from 2010 to 2016 were $40.1 million less—26 percent under its standard support level. Coast Guard officials stated that early in a cutter’s life cycle, depot-level expenditures are expected to be less than what is planned for in the standard support level, since the cutters are not yet conducting all of their regularly scheduled depot-level maintenance. Then, as an asset ages, its expenditures will gradually meet or exceed its standard support level in certain years when a cutter has an increased amount of planned depot-level maintenance. The FRC has yet to meet or exceed its estimates, and officials attributed the large disparity between the FRC’s expenditures and its standard support level to the program’s warranty, saying that the Coast Guard is not yet fully responsible for conducting all maintenance. The Coast Guard does not expect the FRC’s depot-level expenditures to match its estimates until after the 15-week warranty repair drydocks for the first 13 cutters are complete in late 2019. The NSC exceeded its estimates only one time—in 2014—which coincided with the first scheduled drydock event on the Bertholf. NSC officials stated that drydocks are the most expensive depot-level maintenance event. However, the Stratton had its first scheduled drydock in fiscal year 2016 and the NSC expenditures did not exceed the standard support level in that fiscal year, indicating that the NSC’s estimates may include excess funding that is not needed to conduct all depot maintenance. The NSC fleet had its largest difference between its estimates for depot-level maintenance and actual expenditures in fiscal year 2016, which was also the first year that all five operational NSCs were included in the program’s standard support level. 
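The percent-under figures cited above imply the approximate size of each program's total standard support level over its period; a quick back-calculation from the report's numbers (all figures in $ millions):

```python
# Shortfalls and percent-under figures from the report, in $ millions
frc_shortfall, frc_pct_under = 66.4, 0.85   # FRC, 2012-2016
nsc_shortfall, nsc_pct_under = 40.1, 0.26   # NSC, 2010-2016

# Since shortfall = pct_under * standard_support_level:
frc_ssl = frc_shortfall / frc_pct_under     # estimated total, ~ $78 million
nsc_ssl = nsc_shortfall / nsc_pct_under     # estimated total, ~ $154 million

# Actual FRC depot spending over the period is the remainder
frc_actual_spent = frc_ssl - frc_shortfall  # ~ $12 million expended
```

The back-calculation makes the scale of the FRC's under-execution concrete: of roughly $78 million estimated, only about $12 million was actually spent while the warranty covered most repairs.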
The difference between a cutter's standard support level and its actual expenditures is used by the Coast Guard to help cover the depot maintenance costs of legacy assets. According to Coast Guard officials, the combined difference of $106.6 million in depot maintenance funds from the FRC and NSC remained in a centrally managed surface asset depot maintenance account, which is available for use on other Coast Guard surface assets, such as the High Endurance Cutter, which officials explained requires more maintenance funding than was originally planned. The standard support levels used to create an asset's annual estimates for depot-level maintenance are established early in an asset's acquisition life cycle as part of the program's life cycle cost estimate. Once established, the standard support level is used as part of the initial budgeting process for the cutter class. However, Coast Guard officials stated that annual depot-level maintenance budgets are based on previous years' enacted appropriations rather than on actual expenditures or standard support levels. The previous years' enacted budgets are adjusted for the assets that were added to and removed from Coast Guard service, and then the budget is submitted to Congress. This means that the annual Coast Guard budgets do not reflect the actual needs of the assets. Further, the surface asset depot-level maintenance budget line in the annual Coast Guard budget submission does not include details for any individual asset class. Officials explained that attempting to do so would be unnecessarily difficult and would not help the Coast Guard manage its depot maintenance funds. Officials further explained that the Coast Guard manages its surface asset depot maintenance as a portfolio.
The diverse portfolio of surface assets includes brand-new vessels under warranty as well as 50-year-old vessels that often demand significant unplanned maintenance to keep them operational. While standard support levels are created early in an asset's acquisition life cycle, Coast Guard officials stated that they are not normally adjusted or updated over the lifespan of an asset class except for major program events, such as a service life extension program. In July 2012, we found that the standard support levels for at least two legacy cutter classes had not been updated in more than 20 years, while another cutter's standard support level had not been updated in almost 50 years. According to Coast Guard officials, they plan to update the NSC's standard support level to account for the addition of a ninth NSC, which was not a part of the original program of record. In July 2012 we also found that the Coast Guard's process to create standard support levels did not fully meet best practices. We recommended that this process conform to cost estimating best practices, with which the Coast Guard concurred. DHS's response raised three issues that we found could limit the Coast Guard's implementation of the recommendation. First, DHS stated that cost estimating best practices are most applicable to new acquisitions. We disagreed, stating that our cost estimating guide is intended to be applicable to programs and assets in all stages of their life cycles, including maintenance and support. Updating standard support levels periodically would lower the Coast Guard's budgetary risk by using actual data to better inform future depot maintenance estimates. Second, DHS described how sustainment and maintenance costs can be uncertain and challenging to estimate, which the Coast Guard mitigates through centralized management of its depot-level maintenance funds for all assets.
We again disagreed, stating that best practices can help ensure that cost estimates are comprehensive and accurate, which can help ensure that funds will be available when needed. Third, DHS explained that given the fiscal environment, the Coast Guard would focus on improvements that do not require additional resources. We stated that a well-documented cost estimating process and the use of accurate historical data should enable the Coast Guard to operate more efficiently. By not updating the standard support levels with information on actual expenditures, the Coast Guard does not know what the actual depot-level maintenance needs are of its assets. GAO best practices state that programs should be monitored continuously for their cost effectiveness by comparing planned and actual performance against the approved baseline. Effective program and cost control requires ongoing revisions to the cost estimate, budget, and projected estimates at completion. Further, a competent cost estimate is the key foundation of a sound budget. Not updating the estimated costs with actual expenditures could lead to ineffective planning by those responsible for conducting depot-level maintenance. Coast Guard officials stated that they do not update their depot maintenance estimates with actual expenditures because doing so would cause individual budget line items to constantly change. Nonetheless, by not reviewing and updating the standard support levels for the FRC and NSC, the Coast Guard cannot accurately know what the actual depot maintenance needs are for each asset class. This can hinder decision makers as they seek to wisely spend scarce taxpayer dollars in support of more modern and capable Coast Guard assets.
As the Coast Guard continues to field FRCs and NSCs with improved capabilities over legacy cutters in an effort to modernize its fleet, it is important that these cutters are ready to support the Coast Guard's missions when needed and with the capabilities expected when they were developed. The FRC and NSC have both met their mission capable target rates over the long term, and there are factors that explain the recent declines below their respective target ranges. While Coast Guard officials have stated that these factors are temporary, it is too soon to tell whether the FRC's and NSC's mission capable rates will meet their target ranges once these temporary periods are complete. Further, while maintaining production schedules for the NSC is important, this should not be the overriding factor when considering when to implement design changes. Rather, the Coast Guard should take into account all factors and costs when considering its options. Visibility into decisions on how and when to implement planned design changes on the NSCs—including those not yet constructed—is currently limited because the Coast Guard's guidance does not require programs to perform and document cost analyses that support the cost and timing of when the changes should be incorporated. The Coast Guard's estimates for depot-level maintenance costs are out of step with actual spending. The difference between estimated and actual depot maintenance costs for the FRC and NSC fleets since 2012 and 2010, respectively, indicates that standard support levels should be reviewed and updated to more closely reflect actual expenditures.
Not having an updated assessment of depot maintenance costs for each asset limits the information decision makers have to determine future budget needs, and limits transparency into which of the Coast Guard's many surface assets, such as aging legacy assets that require additional maintenance funding, are benefiting from any differences between depot-level maintenance estimates and actual costs. A more thorough accounting for both the potential costs of design changes and the actual costs of keeping these cutters in service could improve the information used by decision makers on how to spend scarce taxpayer dollars in support of a modern, capable Coast Guard surface fleet. To ensure that the Coast Guard makes effective use of its resources, specifically regarding its budget, we recommend that the Secretary of DHS direct the Commandant of the Coast Guard to take the following two actions: Update the Joint Surface Engineering Change Process Guide to require a documented cost analysis to provide decision makers adequate data to make informed decisions regarding the expected costs and when it is most cost effective to install design changes. Periodically update standard support levels to account for actual expenditures so that the Coast Guard follows best practices and to provide decision makers an understanding of the actual depot-level maintenance funds required for Coast Guard assets. We provided a draft of this report to DHS for review and comment. DHS concurred with both of our recommendations and provided a date by which the actions will be complete. DHS's written comments are reprinted in appendix V. DHS and the Coast Guard also provided technical comments that we incorporated into the report as appropriate. As agreed with your office, unless you publicly announce the contents of the report, we plan no further distribution of it until 30 days from the date of this letter.
We are sending copies of this report to the Secretary of Homeland Security and the Commandant of the Coast Guard. In addition, the report is available on our website at http://gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

To examine the maintenance, equipment failures, and spare parts availability for the Fast Response Cutter (FRC) and National Security Cutter (NSC), we reviewed the mission capability data provided by the Coast Guard from the Electronic Asset Logbook (EAL) database for both cutters and compared their rates to the target ranges the Coast Guard established for each cutter over at least a 12-month time frame, which Coast Guard officials stated was the most meaningful use of the data. We reviewed these data from when each cutter class first began to use the metric (March 2012 for the FRC and November 2013 for the NSC) to September 2016. We also assessed the reliability of the data from the EAL system to determine the extent to which we could use them to support our findings and found that they were sufficiently reliable for our purposes. We gathered data on the Coast Guard's top operational degraders (measured in lost operational days) from 2014 to 2016 and reviewed the FRC's and NSC's engineering reports from 2012 to 2015 to determine the top equipment issues the cutters experienced from the perspective of the cutter captains. We also reviewed the Patrol Boat Product Line's response to the FRC's engineering reports and the Long Range Enforcer Product Line's response to the NSC's engineering reports to see how the Coast Guard planned to remedy the issues the cutters were experiencing.
We interviewed officials with the Coast Guard Office of Naval Engineering; the Surface Forces Logistics Center in Baltimore, MD; the Long Range Enforcer Product Line in Alameda, CA; and the Patrol Boat Product Line in Norfolk, VA. We also toured the NSC Stratton while it was in drydock at Mare Island Drydock in Vallejo, CA; Coast Guard Base Miami Beach to view FRC maintenance; and the Coast Guard Yard in Baltimore, MD to understand the Coast Guard's ability to conduct drydocks and to understand how it plans for, stocks, and ships spare parts to cutters in the deployed locations. We also interviewed officers from the NSC Stratton, FRC Bernard Webber, and FRC Margaret Norvell, and officials at Coast Guard Base Miami Beach who operate and conduct maintenance on the FRCs. To examine how design changes affect the maintenance of the cutters, we reviewed documentation on the repairs and design changes to determine whether the shipbuilder or the Coast Guard was responsible for the costs of the repairs and design changes. For the NSC, we compared the cost of the design changes to the costs we previously found in January 2016 to determine the extent to which costs had changed. In addition, we reviewed the Coast Guard's Joint Surface Engineering Change Process Guide and interviewed officials from the FRC and NSC program offices, the Coast Guard's Office of Naval Engineering, and the Office of Budget and Programs. We compared the Coast Guard's process for designing and implementing engineering changes to GAO's best practices for cost estimating and to the internal control standards for the federal government. To examine the extent to which the Coast Guard's cost estimates for depot maintenance reflect actual expenditures for the FRC and NSC, we reviewed the Coast Guard's standard support levels—the estimated costs for depot-level maintenance each year over the course of an asset's life cycle—and compared them to the depot-level maintenance expenditures for both cutters from fiscal years 2012 to 2016 for the FRC and from 2010 to 2016 for the NSC.
We also reviewed the process whereby the Coast Guard creates standard support levels and interviewed officials from the Coast Guard's Office of Budget and Programs to determine how the standard support levels are used in the annual budget development process. We compared the Coast Guard's process for creating and updating standard support levels to GAO's best practices for cost estimating. We conducted this performance audit from February 2016 to March 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

As of September 2016, the Coast Guard planned to acquire a total of 58 Fast Response Cutters (FRC) and 9 National Security Cutters (NSC) in an effort to modernize its aging fleet. The Coast Guard took delivery of the first NSC in 2008, with delivery of the first FRC occurring in 2012. As of September 2016, the Coast Guard had received 5 NSCs and 19 FRCs. Tables 7 and 8 depict the anticipated delivery dates of the first 34 FRCs and all 9 NSCs. The Fast Response Cutters (FRC) and National Security Cutters (NSC) were designed to provide the Coast Guard with modernized capabilities above those already provided by aging assets. Tables 9 and 10 highlight the capabilities of the FRCs and NSCs in comparison to the legacy vessels the cutters are planned to replace. In order to plan for the individual maintenance needs of each cutter, the Coast Guard's Surface Forces Logistics Center (SFLC) generates and maintains a 5-year maintenance plan for each asset class depicting the major depot-level maintenance tasks.
Each year, SFLC updates and adds to each cutter's 5-year maintenance schedule to formulate short- and long-term budgets, project shortfalls, and interface with operational commanders for scheduling purposes. Figure 12 shows the timeline of scheduled major maintenance events for the Fast Response Cutter (FRC) and National Security Cutter (NSC). Each cutter has a number of anticipated major maintenance events throughout its life cycle. Some of those include:

Drydock: This refers to a period of time, lasting between 2 to 4 months for the FRCs and NSCs, when the cutter is hoisted out of the water to conduct maintenance. Maintenance conducted during this time period can only be done on dry land and includes items such as repainting of the hull and shaft removal and reinstallation, among others. Drydocks for the FRCs occur every 4 years, while drydocks for the NSCs occur every 5 years.

Main Diesel Engine change out: This maintenance event involves the replacement of the main diesel engines, which occurs every 12,000 hours of engine operation for the FRC and every 24,000 hours of engine operation for the NSC. Operating the FRCs at no more than 2,500 hours per year means the cutters should expect a main diesel engine change out no sooner than just under every 5 years, while operating the NSCs at no more than 3,780 hours per year means a change out no sooner than just over every 6 years.

Ship Structure and Machinery Evaluation Board: This review is designed to examine the cutter's material condition and provide information on the remaining service life of the cutter. The first Ship Structure and Machinery Evaluation Board is completed when the lead ship of the class reaches the 10-year mark and at 5-year intervals thereafter.
According to officials, one of the possible outcomes of this review is that a midlife maintenance availability is triggered for the cutter to enable it to reach its expected service life.

Midlife Maintenance Availability: This maintenance event is designed to correct system obsolescence issues and maintain asset reliability and supportability throughout the remainder of the cutter's service life. This is completed near a cutter's midpoint, which would be roughly 10 years into the FRC's planned 20-year operational life and 15 years into the NSC's planned 30-year operational life.

Michele Mackin, (202) 512-4841 or [email protected]. In addition to the contact above, Richard A. Cederholm, Assistant Director; Katherine Trimble, Assistant Director; Peter W. Anderson; Charles W. Bausell Jr.; Erin Butkowski; John Crawford; Kristine Hassinger; Jenna Tischler; and Roxanna T. Sun made key contributions to this report.
The Coast Guard is procuring the FRC and NSC to replace its aging cutters. Both cutters have had operational problems—such as propulsion system issues—that are being addressed through maintenance. Prior GAO work identified issues related to performance and maintenance of these vessels, particularly related to the main diesel engines on both cutters. The House Subcommittee on Coast Guard and Maritime Transportation, Committee on Transportation and Infrastructure asked that GAO examine maintenance of the FRC and NSC. This report addresses the extent to which (1) maintenance issues are affecting FRC's and NSC's operational status, (2) design changes affect the maintenance of the cutters, and (3) the Coast Guard's cost estimates reflect actual expenditures for maintenance for the FRC and NSC. To conduct this work, GAO analyzed data on cutter maintenance and operations; analyzed the costs and timing of design changes; reviewed Coast Guard budgets and compared GAO best practices in cost estimating to the Coast Guard's process for estimating depot maintenance costs; and interviewed Coast Guard officials. Maintenance work for the Fast Response Cutter (FRC) and National Security Cutter (NSC) has lowered the operational availability of each fleet. Although both cutters on average have met their minimum mission capable targets over the long term, increased depot maintenance has more recently reduced each cutter's rates below targets. The FRC's rate is lower, in part, because of a series of unanticipated drydock periods to correct issues covered by its 12-month warranty. The NSC's lower rate is primarily because of anticipated 2-year maintenance and system upgrade periods performed on each newly delivered NSC. Both cutters have experienced problems with the diesel engines, which caused lost operational days and hindered operations while underway. 
The Coast Guard's 154-foot Fast Response Cutter and 418-foot National Security Cutter

The Coast Guard has initiated design changes on the FRC and NSC, but some of the NSC's changes to address maintenance problems will not be installed until after each cutter is delivered. While the Coast Guard plans to spend at least $17 million on FRC design changes, officials estimate the warranty has helped the Coast Guard avoid $77 million in costs for repaired systems. This includes about $52 million to replace 20 diesel engines that have degraded FRC operations since the problem was first discovered in July 2013. Design changes on the NSCs are expected to cost the Coast Guard at least $260 million. In order to maintain production schedules, several changes will be completed after delivery of each NSC, including the ninth NSC, which has not yet begun construction. Thus, systems with known deficiencies are being installed, only to be replaced later. Officials stated this approach is more cost effective; however, the Coast Guard did not document its cost analyses, as called for by GAO cost estimating best practices. Without such documentation, the Coast Guard cannot demonstrate that it is making cost-effective decisions. Since 2010, depot maintenance expenditures for the FRC and NSC have been $106.6 million less than the Coast Guard estimated. This amount remains in a centrally managed account and is made available for other surface assets, such as aging legacy vessels. Coast Guard officials stated that depot maintenance estimates are not adjusted or updated over the service life of an asset class. Periodically updating depot maintenance cost estimates for each asset class—in accordance with GAO cost estimating best practices—could provide decision makers with much needed information with which to determine future budgets.
To ensure that it effectively uses its resources, the Coast Guard should document cost analyses on the cost and timing of engineering design changes and periodically evaluate and update its depot maintenance cost estimates. The Department of Homeland Security agreed with both recommendations and provided timeframes for actions to address them.
Within the Department of Justice, EOIR immigration judges conduct hearings to determine whether an alien is removable from the United States and whether he or she is eligible for a form of relief or protection from removal. If an immigration judge determines that an alien is removable from the United States and not eligible for relief or protection from removal, including voluntary departure, the immigration judge can issue an order of removal. The removal order becomes administratively final when all avenues for appeal with EOIR to remain in the United States have been exhausted or waived by the alien, and the alien is to be removed from the United States. Once an order of removal is final, ICE is responsible for carrying out the removal. In fiscal year 2013, ICE reported removing 368,644 aliens from the United States. While immigration judges have the authority to make custody determinations, ICE also makes the initial decision as to whether to detain aliens in ICE custody or release them to the community pending removal proceedings, subject to certain laws. The Immigration and Nationality Act, as well as other legislation, requires that under specified circumstances ICE detain certain aliens, including those arriving without documentation or with fraudulent documentation, those who are inadmissible or removable on criminal or national security grounds, and those aliens subject to a final order of removal. Even if not required to do so, ICE may detain aliens who it believes pose a threat to public safety or are flight risks, with the option for some aliens to be subsequently released. In fiscal year 2013, ICE booked 440,557 aliens into detention facilities. ICE uses one or more release options when it determines that an alien is not to be detained in ICE's custody—including bond, order of recognizance, order of supervision, parole, and on condition of participation in the ATD program.
If an alien is not a threat to public safety, presents a low risk of flight, and is not required to be detained, ICE may release him or her on (1) a bond of at least $1,500 or (2) an order of recognizance that requires the alien to abide by specified release conditions but does not require the alien to post a bond. DHS may release an alien on an order of supervision, despite such alien being subject to a final order of removal, where there is no significant likelihood of removal in the reasonably foreseeable future, because, for example, travel documents are not forthcoming. An alien subject to a final order of deportation or removal may also request a stay of deportation or removal. ICE may release certain aliens on parole for urgent humanitarian reasons or significant public benefit, or for a medical emergency or legitimate law enforcement objective, on a case-by-case basis. Finally, an alien can also be placed in the ATD program, which requires that, among other things, aliens released into the community agree to appear at all hearings and report to ICE periodically. In fiscal year 2013, ICE released aliens under these various options 113,690 times, as shown in table 1. To assist ICE officers in their decisions whether to detain aliens in ICE custody or release them, ICE developed an analytical tool known as the Risk Classification Assessment (RCA). The RCA, which ICE fully deployed in February 2013, considers several factors related to an alien’s public safety and flight risks—such as criminal history, prior removal data, ties to the local community, and gang affiliation—and recommends each alien for detention or release. An ICE officer reviews the RCA results along with other factors, such as an alien’s final order status, and, after obtaining supervisory approval, makes a custody determination. 
ICE officials stated that they generally do not use the RCA for aliens that ICE must detain by law or that are likely to be removed from the United States within 5 days. ICE created the ATD program in 2004 as another condition of release to help ensure that aliens released into the community appear at their immigration proceedings. The ATD program seeks to provide an enhanced monitoring option for those aliens for whom ICE, or an immigration judge, has determined that detention is neither mandated nor appropriate, yet who may need a higher level of supervision than that provided by the less restrictive release conditions. When reviewing an alien's case for possible placement in ATD, officers are to consider the alien's criminal history, compliance history, community and family ties, and humanitarian concerns. ICE may require participation in the ATD program as a condition of the alien's release during immigration proceedings, or upon receipt of the alien's final order of removal or grant of voluntary departure. For fiscal year 2003, ICE was allocated $3 million for alternatives to detention to promote community-based programs for supervised release from detention. Subsequently, ICE created the first iteration of the ATD program in 2004 across eight cities; this iteration ran until 2009 and consisted of three separate programs operated by ICE and two companies under separate contracts. These programs provided varying levels of alien supervision intended to help improve alien attendance rates at scheduled immigration court proceedings. By the end of 2009, the ATD program had expanded to all of the 24 ICE ERO field offices and 5 of 186 suboffices. ICE initiated the second phase of the ATD program in 2009 with a 5-year contract with a private contractor and consolidated the program into a single contract with two components—Full-service and Technology-only. Behavioral Interventions, Inc. 
(BI), the contractor, operates the Full-service component out of stand-alone sites or out of ICE offices—currently in 45 cities. To be eligible for the Full-service component, aliens must be at least 18 years old and generally must reside within about 75 miles of the contractor's office, depending on the field office. The contractor maintains in-person contact with the alien, which includes requiring periodic office visits and conducting unscheduled home visits, and monitors the alien with either Global Positioning System (GPS) equipment or a telephonic reporting system. The contractor also provides case management services, which may include helping aliens understand the legal process, acquiring travel documents, and developing travel plans; reminds aliens to attend immigration proceedings; and handles initial alerts and violations for aliens. Last, the contractor documents aliens' attendance at court hearings and compliance with electronic monitoring and in-person supervision requirements. ICE officers are ultimately responsible for removing aliens from the United States and responding to program violations. Per a contract modification, ICE officers can change the level of supervision regardless of where an alien is in his or her immigration proceedings; according to contractor officials, this is typically done in special circumstances. The ATD program also includes aliens released into the community after a post-order custody review; the level of supervision determined for these aliens depends on whether their removal from the United States is significantly likely in the reasonably foreseeable future. ICE ERO field office officials manage the Technology-only component of the ATD program, which is available in 96 locations, utilizing the contractor's systems and equipment.
The Technology-only component offers a lower level of supervision at a lower contract cost than the Full-service component and allows ICE to monitor aliens' compliance with the terms of their release using either telephonic reporting or GPS equipment provided by the contractor. ICE officers are responsible for providing case management, in addition to removing aliens from the country and responding to violations. In locations where Full-service and Technology-only are available, ICE officers can de-escalate aliens from the Full-service component to the Technology-only component (or vice versa) at their discretion. For both components, ICE officers determine when an alien's participation in the program should be terminated. ICE terminates aliens from the ATD program who are removed from the United States, depart voluntarily, are arrested by ICE for removal, or receive a benefit or relief from removal. ICE may also terminate an alien from the program when the alien is arrested by another law enforcement entity, absconds, or otherwise violates the conditions of the ATD program. Further, ICE may terminate an alien from the program if ICE officers determine the alien is no longer required to participate. The program requirements for the various levels of supervision for aliens in the Full-service and Technology-only components are shown in figure 1. The number of aliens participating in the ATD program increased from fiscal year 2011 to fiscal year 2013, in part because of increases in either enrollments or the average length of time aliens spent in one of the program's components; and ICE changed the focus of the program to align with changes in agency priorities.
In 2011 guidance, ICE also recommended that ERO field offices transition aliens between the two ATD program components—or levels of supervision—to help facilitate cost-effective use of the ATD program; however, ICE has not monitored the extent to which ERO field offices have consistently implemented the guidance. ICE plans to increase the average daily participation level of both ATD program components with increased funding, but ATD program officials stated that several factors affect their ability to identify future capacity and expand the program. ICE increased the number of aliens participating in the ATD program over the last 3 fiscal years, with some differences between the Full-service and Technology-only components; this increase can be attributed, in part, to increased enrollments and the increased average length of time aliens spent in the Technology-only component of the program. Specifically, the total number of unique aliens who participated in the program increased from 32,065 in fiscal year 2011 to 40,864 in fiscal year 2013, with most aliens participating in the Full-service component, as shown in figure 2. These numbers include all aliens in the ATD program for each of these years—regardless of the year in which they were initially enrolled. The increase in the number of aliens in the program over this time occurred primarily in the Technology-only component. Specifically, the overall number of aliens participating in the ATD program grew by 27 percent; the number of aliens in the Technology-only component increased by 84 percent; and the number of aliens in the Full-service component increased by 23 percent. During this time, the composition of aliens in the ATD program also changed to align with agency priorities.
Specifically, ICE shifted its overall enforcement priorities with a June 2010 policy memorandum that detailed the priorities for alien apprehension, detention, and removal as follows: (1) aliens who pose a danger to national security or a risk to public safety—including aliens convicted of crimes—and (2) recent illegal entrants and aliens who are fugitives or otherwise obstruct immigration controls. ICE established such priorities because, as stated in the memo, ICE has resources to remove only approximately 400,000 aliens per year from the country, less than 4 percent of the estimated illegal alien population in the United States. According to ICE data, about 50 percent of aliens in the ATD program met an ICE enforcement priority in fiscal year 2012, such as aliens convicted of crimes. As of April 2014, ICE reported that about 90 percent of aliens in the ATD program met ICE enforcement priorities and 51 percent were criminal aliens. One factor contributing to the increase in ATD program participation was that ICE generally increased the number of aliens it enrolled in the program each year. Specifically, the total number of unique enrollments in the ATD program increased by 26 percent—from 16,252 in fiscal year 2011 to 20,441 in fiscal year 2013—although there was a slight decline in fiscal year 2012 before enrollments increased in fiscal year 2013. As shown in figure 3, the increase from fiscal years 2011 to 2013 was due to enrollments in the Full-service component, which increased by 60 percent during this time. Information on alien enrollments showed that the extent to which ICE booked aliens into detention facilities or released them into the community varied across fiscal years 2011 through 2013, as shown in table 2, although these enrollments were not unique. 
For example, these enrollments include aliens who may have been enrolled in both detention and one or more release options in the same fiscal year and aliens who may have been booked multiple times into detention facilities or released multiple times under the same option in the same fiscal year. During this time, ICE also expanded use of the ATD program across ERO field office and suboffice locations. Specifically, ICE expanded use of the Full-service component from 38 ERO field office and suboffice locations in fiscal year 2011 to 44 locations in fiscal year 2013. During this time, ICE expanded the use of the Technology-only component from 70 to 76 ERO field office and suboffice locations that were actively using the component. Another factor contributing to the increase in the number of aliens in the Technology-only component of the ATD program was an increase in the length of time these aliens were in the program. While the average length of time aliens spent in the ATD program overall has remained fairly constant, differences existed across the program’s components. The average length of time that aliens spent in the Full-service component decreased by about 20 percent from fiscal year 2011 to fiscal year 2013, while the average length of time increased by nearly 80 percent for aliens in the Technology-only component during this same time, as shown in table 3. Specifically, aliens enrolled in the Full-service component in fiscal year 2013 spent about 10 months in the component, and those enrolled in Technology-only in fiscal year 2013 spent about 18 months in this component. This increase occurred, in part, because of ICE’s issuance of guidance that directs field office officials to move compliant aliens from the more expensive Full-service component to the Technology-only component after 90 days, which is discussed later in this report. 
ICE increased the number of aliens terminated from the ATD program since 2011 after guidance directed ERO field offices to more cost-effectively use the ATD program; however, ICE has not monitored the extent to which field offices have implemented the guidance. Under the original ATD contract, ICE officials stated that aliens enrolled in the ATD program generally stayed in the program from the time of enrollment through completion of the immigration process (i.e., completion of a final court hearing or, if ordered removed at the final hearing, removal from the United States). However, concerned about the time it has taken for aliens to complete immigration proceedings and the subsequent impact on ATD program costs, ICE issued guidance in 2011 recommending that ERO field offices help facilitate cost-effective use of the ATD program. Pursuant to this guidance, ICE recommended that field officials reserve more intense and costly supervision options under the Full-service component for (1) aliens who are newly enrolled in ATD who do not have an order of removal or an immediate immigration court date and (2) aliens who have already received a final order of removal from the country—the latter of which is seen as a best practice, according to ICE. Specifically, the guidance recommended that ERO field office officials assess, at least every 90 days, whether aliens in the Full-service component demonstrated compliance with the conditions of their release and, if so, terminate them from the Full-service component after 90 days and de-escalate them to lower levels of supervision at a lower cost by moving them to the Technology-only component of the ATD program. 
Conversely, ICE recommended that ERO field office officials terminate aliens from the Technology-only component who received their final order of removal or grant of voluntary departure and escalate them to the Full-service component so that ICE, along with the contractor, could more easily monitor and ensure their departure. Subsequently, ICE increased the number of terminations from the two components of the ATD program. Specifically, as shown in figure 4, from fiscal year 2011 to fiscal year 2013, ICE increased the number of terminations from the Full-service component by 82 percent and the number of terminations from the Technology-only component by 299 percent. ICE does not have complete data to identify the specific reasons field officials decided to terminate aliens from the program and therefore cannot determine whether ERO field offices are implementing the guidance for changing an alien’s level of supervision between the ATD program components with the goal of cost-effectively implementing the ATD program. According to ICE officials, because the individual circumstances for each alien’s case can vary, the decision to terminate or change an alien’s level of supervision is made by the field officer, who decides whether to keep aliens in the Full-service component, de-escalate aliens from the Full-service component to the Technology-only component, or terminate aliens from the ATD program entirely by placing them in detention or releasing them under their own recognizance or another release option. While ICE collects some data on the reasons for termination decisions made by field officials, ICE does not collect data on the specific reason why field officials determined an alien is no longer required to participate in the program. 
For example, our analysis of termination data for the Full-service component showed that 13 percent of terminations from the Full-service component were made after confirmation of an alien’s removal and departure from the United States or after the alien had been granted relief and benefits to remain in the country, and another 15 percent of terminations were made for reasons including that aliens had violated the terms of the ATD program, had absconded, had been arrested, or were pending departure, among other reasons. However, it was unclear why ERO field office officials made most terminations—71 percent—before the completion of the aliens’ immigration proceedings or removal from the United States, because the reason provided was that a field official determined that an alien was no longer required to participate in the program. As a result, ICE officials stated that they did not know if field office officials made the majority of these terminations in response to its guidance recommending changing the levels of supervision that could result in more cost-effective operation of the program, or for other reasons. ICE intends to increase the average daily participation level of both ATD program components with increased funding, according to ICE’s fiscal year 2015 budget justification; however, ATD program officials stated that several factors affect their ability to identify future capacity and expand the program. These officials said that one of these factors was limited information for determining how many aliens who were detained or otherwise released could have been considered suitable for the ATD program. 
For example, ATD program officials said that ERO field office officials who manage the ATD program have the ability to see the cases that are referred to the ATD program, but not the cases that resulted in the alien being detained in ICE custody or released under other options after the RCA process is completed. Nationwide, the RCA tool recommended that 91 percent of the 168,087 aliens processed by the RCA in fiscal year 2013 be detained in ICE custody—some of whom were subsequently eligible for bond—and that the remaining 9 percent (15,162 aliens) be released under ATD or other release options. However, ICE field officials managing the ATD program may not have seen the cases that resulted in detention or release and, accordingly, are limited in their ability to estimate to what extent ATD program capacity could be expanded or changed in their location. To help increase the number of cases referred for ATD program consideration, ICE has issued guidance to its ERO field offices emphasizing that all nondetained criminal aliens should be given priority consideration for ATD program enrollment. Accordingly, this guidance directs the Criminal Alien Program and Fugitive Operations teams that generate case referrals in the field to coordinate with their local ATD component for enrollment consideration, including aliens released on a bond. ICE reported that field offices coupling a bond with ATD as a condition of release have shown an increased rate of success in alien removals from the United States. Other factors that ICE officials identified as affecting their ability to identify capacity and expand the ATD program are federal and state statutes and agency guidance. For example, ICE reported that from fiscal year 2011 to fiscal year 2013, 77 percent to 80 percent of aliens in detention facilities were required to be detained under federal law and were not eligible for consideration in the ATD program. In addition, federal law requires that ICE maintain a minimum of 34,000 detention beds each day, and as part of its fiscal year 2015 budget justification, ICE reported that a decrease in the number of detention beds required to be maintained would result in an increase in the number of aliens who could be enrolled in the ATD program. In regard to state statute, one state, for example, passed a law whereby law enforcement officials have the discretion to cooperate with federal immigration officials by detaining an individual on the basis of an immigration hold after that individual becomes eligible for release from custody, only where certain criteria are met. In regard to agency guidance, ICE has instructed ERO field offices generally not to enroll aliens who are not likely removable, as well as aliens who were brought to the United States as children and may be eligible for the Deferred Action for Childhood Arrivals program. ICE officials stated that they did not plan to expand use of the ATD program to additional ERO field office locations until after the new contract was in place; however, officials reported that several factors could affect whether a field office could be or is willing to implement the program. For example, ICE reported in May 2014 that five field offices had requested to implement the Full-service component in their office but ICE did not approve the requests because the field offices did not have the necessary resources to implement the program. Such resources include officers’ time to respond to instances of alien noncompliance with the terms of the program and to review ATD cases and make supervision and termination decisions. 
ICE established two program performance measures to assess the ATD program’s effectiveness in (1) ensuring alien compliance with court appearance requirements and (2) ensuring removals from the United States, as well as performance rates to evaluate the program’s performance, but limitations in data collection hinder ICE’s ability to assess overall program performance. Compliance with court appearances. ICE established a program performance measure in 2004 to monitor alien compliance with requirements to appear at their immigration hearings. Data collected by ICE’s ATD contractor for the Full-service component of the ATD program from fiscal years 2011 through 2013 showed that over 99 percent of aliens with a scheduled court hearing appeared at their scheduled court hearings while participating in this component of the ATD program, with the appearance rate dropping slightly to over 95 percent for aliens with a scheduled final removal hearing, as shown in figure 6. However, ICE does not collect similar performance data or report results on the court appearance rate for aliens enrolled in the Technology-only component of the ATD program—which constituted 39 percent of the overall ATD program in fiscal year 2013. According to ICE officials, the agency did not require the contractor to capture similar data for the Technology-only component because when the ATD program was created, it was envisioned that most aliens would be in the Full-service component for the duration of the immigration process, and data for aliens in the Full-service component are collected by the contractor. ICE officials stated that they did not have sufficient resources to collect such data for the Technology-only component, given other priorities. ICE has taken steps to address the lack of data collection for the Technology-only component. 
Specifically, during the course of our review, ICE initiated a pilot program with its contractor in May 2014 to establish improved data collection efforts, as well as expanded supervision options. The pilot, which is being tested in eight cities, increases the role the ATD contractor has in collecting and tracking data on aliens in the Technology-only component. Specifically, the contractor tracks compliance with release requirements, including court appearance requirements, for aliens enrolled in the Technology-only component, as it already does for aliens enrolled in the Full-service component. Under the new contract, ICE plans to implement key aspects of the pilot across all program locations, including giving ICE officers the ability to require the contractor to track data on aliens in the Technology-only component—including data on court appearances—according to the request for proposal for the new ATD program contract. However, ICE officials will not be required to have the contractor collect these data under the contract. While ICE’s plan to expand data collection for the Technology-only component under the new contract is a positive step that will help provide more information for assessing the performance of that program component, ICE may not have complete data for assessing program performance without requirements that ICE or contractor field staff collect these data. Standards for Internal Control in the Federal Government states that agencies should employ control activities to monitor their performance. More specifically, agencies should develop mechanisms to reliably collect data that can be used to compare and assess program outcomes related to entire program populations. 
Requiring ERO field offices to collect, or have the contractor collect, court appearance data on the Technology-only component of the ATD program would help ensure that ICE has complete data for assessing the performance of that program component as well as the overall ATD program, particularly in light of ICE’s guidance issued in fiscal year 2011 directing field offices to transition more aliens to the Technology-only component. Removals from the United States. ICE established a new program performance measure in fiscal year 2011 to assess the number of aliens removed from the country who had participated in the ATD program. ICE officials said the decision to replace the court appearance goal with the removal goal was based on the fact that the court appearance rate had consistently surpassed 99 percent and the program needed to establish another goal to demonstrate improvement over time. For this program performance measure, a removal attributed to the ATD program counts if the alien (1) was enrolled in ATD for at least 1 day, and (2) was removed or had departed voluntarily from the United States in the same fiscal year, regardless of whether the alien was enrolled in ATD at the time the alien left the country. As shown in table 5, ATD met its goal for removals in fiscal years 2012 and 2013. In fiscal year 2012, the goal was a 3 percent increase over the fiscal year 2011 removal total, and in fiscal year 2013, a 3 percent increase over the fiscal year 2012 removal goal. Performance rates. ICE also uses four performance rates to evaluate how well the ATD program is operating while aliens are participating in the program. These four performance rates (success rate, failure rate, absconder rate, and removal rate)—though not measures of how well aliens in the ATD program comply with court appearances or removal orders—assess the status of an alien’s case at the time the alien is terminated from the ATD program. 
These performance rates are based on outcomes defined as favorable, neutral, and unfavorable. Favorable outcomes reflect cases where the final outcome of an alien’s immigration proceeding resulted in either a verified departure from the United States or a grant of relief and benefits to remain in the country while the alien was an active participant in the ATD program. Neutral outcomes do not reflect final outcomes of immigration proceedings, but rather include aliens who are terminated from the ATD program while awaiting departure, after being arrested, or because ICE determined the alien no longer needed to participate in the program—which could be because the case was administratively closed, the alien moved to a jurisdiction that did not have the ATD program, or ICE determined to lower or raise the alien’s level of supervision by moving him or her to a detention facility or another release option. Unfavorable outcomes include aliens who were terminated from the ATD program after absconding or violating program requirements. Specifically, the success rate reflects the percentage of aliens whose cases resulted in either favorable outcomes or neutral outcomes. This rate essentially measures the ATD program’s effectiveness in being able to track and monitor an alien while in the program, according to ICE officials. The failure rate is the converse of the success rate, measuring the percentage of unfavorable outcomes, including noncompliance with program terms or absconding from the program. The absconder rate measures the percentage of aliens whom ICE terminated from the program as a result of their absconding from the program. The removal rate approximates the percentage of aliens in ATD who will be removed or depart after the completion of their immigration proceedings. ICE calculates these rates for both its Full-service and Technology-only components. 
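The three termination-based rates described above can be sketched as a small classification over termination outcomes. This is an illustrative sketch only: the favorable/neutral/unfavorable groupings follow the report's description, but the outcome labels are hypothetical placeholders rather than ICE's actual termination codes, and the removal rate is omitted because it involves projecting post-proceeding outcomes rather than counting terminations.

```python
# Outcome groupings follow the report's description; the specific labels
# below are hypothetical placeholders, not ICE's actual termination codes.
FAVORABLE = {"verified_departure", "relief_granted"}
NEUTRAL = {"pending_departure", "arrested", "no_longer_required"}
UNFAVORABLE = {"absconded", "violated_program"}

def atd_rates(outcomes):
    """Compute success, failure, and absconder rates from a list of
    termination outcomes, per the definitions in the report."""
    n = len(outcomes)
    unfavorable = sum(o in UNFAVORABLE for o in outcomes)
    return {
        # Success: favorable or neutral outcomes (alien tracked until termination).
        "success": (n - unfavorable) / n,
        # Failure: the converse of the success rate (unfavorable outcomes).
        "failure": unfavorable / n,
        # Absconder: the subset of unfavorable outcomes due to absconding.
        "absconder": outcomes.count("absconded") / n,
    }
```

For example, four terminations of which one is an abscond would yield a 75 percent success rate and a 25 percent failure rate and absconder rate.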
Using these performance rates, ICE reported that over the last 3 years it, along with its contractor, was able to track and monitor 90 percent or more of aliens in the Full-service component until they were terminated from that component of the ATD program, with variance in the rate of aliens who had absconded from the program or who were projected to be removed from the country. During that same time, ICE reported improved ability to track and monitor aliens in the Technology-only component, from nearly 80 percent in fiscal year 2011 to nearly 90 percent in fiscal year 2013. See table 6 for Full-service and Technology-only performance rates over these last 3 years. However, ATD program performance measures and rates provide limited information about the aliens who are terminated from the ATD program before receiving the final disposition of their immigration proceedings or being removed or voluntarily departing from the country. Specifically, with respect to program performance measures, ICE counts an alien who was terminated from the program and was subsequently removed from the United States toward the removal performance measure as long as the alien was in the program during the same fiscal year he or she was removed from the country. However, aliens who were terminated from the program do not count toward court appearance rates if they subsequently do not appear for court. Further, performance rates, for example, did not reflect whether the 87 percent of the aliens whom ICE terminated from the ATD program in fiscal year 2013 were removed, voluntarily departed from the United States, or were granted relief. ICE officials reported that it would be challenging to determine an alien’s compliance with the terms of his or her release after termination from the ATD program given insufficient resources and the size of the nondetained alien population. 
In accordance with ICE guidance, staff resources are instead directed toward apprehending and removing aliens from the United States who are considered enforcement and removal priorities. The ATD program is intended to help ICE cost-effectively manage the aliens for whom ICE, or an immigration judge, has determined that detention is neither mandated nor appropriate, but who may need a higher level of supervision when released into the community until they are removed from the United States or receive approval to remain in the country. ICE has altered its implementation of the ATD program to address the cost associated with keeping aliens in the program in light of lengthy immigration proceedings. However, by analyzing data that ICE plans to collect on supervision levels and specific reasons aliens are terminated from the program, ICE could be better positioned to monitor ERO field offices’ implementation of guidance intended to ensure cost-effective management of the ATD program. Further, collecting reliable data on both components of the program would help ensure that ICE has more complete data for assessing the relative performance of these program components as well as the overall ATD program. To strengthen ICE’s management of the ATD program and ensure that it has complete and reliable data to assess and make necessary resource and management decisions, we recommend that the Secretary of Homeland Security direct the Deputy Assistant Secretary of ICE to take the following two actions: analyze data on changes in supervision levels and program terminations to monitor ERO field offices’ implementation of ICE guidance intended to ensure cost-effective management of the program, and require that field offices ensure that ICE or contractor staff collect and report data on alien compliance with court appearance requirements for all participants in the Technology-only component of the ATD program. 
We provided a draft of this report to the Departments of Homeland Security and Justice for their review and comment—both provided technical comments, which we incorporated as appropriate. We provided selected excerpts of this draft report to the ATD contractor to obtain its views and verify the accuracy of the information it provided, and the contractor had no technical comments. DHS also provided written comments, which are summarized below and reproduced in full in appendix I. DHS concurred with the two recommendations in the report and described actions under way or planned to address them. With regard to the first recommendation, that ICE analyze data on changes in supervision levels and program terminations to monitor field offices’ implementation of guidance intended to ensure cost-effective management of the program, DHS concurred. DHS stated that ICE recognized that, because of contractual limitations, information on termination codes was not amenable to detailed reporting and analysis, which limited the program’s ability to adapt and improve. To address this, ICE established a requirement under its new ATD contract that information on termination codes must be collected and reported and that this would allow for more in-depth analyses that may yield avenues for further program refinements. DHS provided an estimated completion date of December 31, 2014. These planned actions, if fully implemented to include monitoring of field offices’ cost-effective management of the program, should address the intent of the recommendation. With regard to the second recommendation, to require that field offices ensure that ICE or contractor staff collect and report data on alien compliance with court appearance requirements for all participants in the Technology-only component of the ATD program, DHS concurred. 
DHS stated that ICE was aware that this recommended enhancement would greatly improve the program and that ICE began working in early fiscal year 2014 to implement the enhancement while developing the requirements for the new ATD contract. Under the new contract, ICE will have the opportunity to select a variety of case management services, including EOIR case tracking for any participant in the ATD program. DHS provided an estimated completion date of December 31, 2014. These planned actions, if fully implemented to include oversight on the extent that field offices are ensuring that ICE or contractor staff collect and report data on alien compliance with court appearance rates, should address the intent of the recommendation. We are sending copies to the Secretary of Homeland Security, the Attorney General of the United States, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix II. In addition to the contact named above, Lacinda Ayers (Assistant Director), Tracey Cross, Landis Lindsey, David Alexander, Pedro Almoguera, Frances Cook, Jon Najmi, Jessica Orr, and Eric Warren made significant contributions to the work.
Aliens awaiting removal proceedings or found to be removable from the United States are detained in ICE custody or released into the community under one or more options, such as release on bond and under supervision of the ATD program. Within the Department of Homeland Security (DHS), ICE is responsible for overseeing aliens in detention and those released into the community. In 2004 ICE implemented the ATD program to be a cost-effective alternative to detaining aliens. ICE administers the program with contractor assistance using case management and electronic monitoring to ensure aliens comply with release conditions—including appearing at immigration court hearings and leaving the United States if they receive a final order of removal. The Joint Explanatory Statement to the 2014 Consolidated Appropriations Act mandated that GAO evaluate ICE's implementation of the ATD program. This report addresses (1) trends in ATD program participation from fiscal years 2011 through 2013 and the extent to which ICE provides oversight to help ensure cost-effective program implementation, and (2) the extent that ICE measured the performance of the ATD program for fiscal years 2011 through 2013. GAO analyzed ICE and ATD program data, reviewed ICE documentation, and interviewed ICE and ATD contractor officials. From fiscal year 2011 through fiscal year 2013, the number of aliens who participated in the U.S. Immigration and Customs Enforcement's (ICE) Alternatives to Detention (ATD) program increased from 32,065 to 40,864, in part because of increases in either enrollments or the average length of time aliens spent in one of the program's components. For example, during this time period, the number of aliens enrolled in the Full-service component, which is run by a contractor that maintains in-person contact with the alien and monitors the alien with either Global Positioning System (GPS) equipment or a telephonic reporting system, increased by 60 percent. 
In addition, the average length of time aliens spent in the Technology-only program component, which offers a lower level of supervision at a lower contract cost than the Full-service program component and involves ICE monitoring of aliens using either telephonic reporting or GPS equipment provided by a contractor, increased by 80 percent—from about 10 months to about 18 months. In 2011, ICE recommended practices in guidance to its Enforcement and Removal Operations (ERO) field offices to better ensure cost-effective implementation of the program. For example, ICE recommended that field officers move aliens who have demonstrated compliance under the Full-service component to the less costly Technology-only component. GAO's work showed differences in ERO field offices' implementation of the guidance. However, ICE headquarters officials said that because of limitations in how they collect and maintain program data, they do not know the extent to which field officers have consistently implemented this guidance. ICE plans to institute new data collection requirements to address these limitations and use these data for a variety of purposes; however, ICE has not considered how to analyze these data to monitor the extent to which ERO field offices are implementing the guidance. Analyzing these data, once collected, could help ICE better monitor the extent to which ERO field offices are implementing the practices in its guidance intended to ensure more cost-effective program operation. ICE has established ATD program performance measures to, among other things, assess alien compliance with requirements to appear in court and leave the country after receiving a final order of removal, but it has not collected complete data for assessing progress against these measures. 
Specifically, ICE's ATD contractor collected data for the Full-service component, and from fiscal years 2011 through 2013, these data showed that over 99 percent of aliens with a scheduled court hearing appeared in court as required. However, ICE did not collect similar performance data to report results for aliens enrolled in the Technology-only component—which composed 39 percent of the overall ATD program participants in fiscal year 2013—because, ICE officials stated, when the program was first created they envisioned that most aliens would be in the Full-service component, with data tracked by the contractor. ICE plans to expand the contractor's role in data collection but does not plan to require collection of performance data for aliens enrolled in the Technology-only component; rather, ICE plans to leave it to field officials' discretion whether to require the contractor to collect these data. Without requirements to collect these data, ICE may not have complete information to fully assess program performance. GAO recommends that ICE analyze data to monitor ERO field offices' implementation of guidance and require the collection of data on the Technology-only component. DHS concurred with the recommendations.
SSS is an independent agency within the executive branch of the federal government. Its missions are to (1) provide untrained manpower to the Department of Defense (DOD) for military service in the event of a national emergency declared by the Congress or the President, (2) administer a program of alternative service for conscientious objectors in the event of a draft, and (3) maintain the capability to register and forward for induction health care personnel if so directed in a future crisis. SSS’ authorizing legislation, the Military Selective Service Act, requires that all males between the ages of 18 and 26 register with SSS under procedures established by a presidential proclamation and other rules and regulations. Men are required to register within 30 days of reaching age 18. SSS operations have fluctuated since the end of the draft in 1973. In 1975, President Ford terminated registration under the act by revoking several presidential proclamations. In 1976, SSS state and local offices were closed, placing the agency in a deep standby. In 1980, following the Soviet invasion of Afghanistan, President Carter issued a proclamation to establish the current registration procedures. Under these procedures, SSS has been registering young men between the ages of 18 and 26, but not classifying them for a potential draft. According to SSS officials, the September 30, 1996, version of the registration database contained about 13 million names of men between the ages of 18 and 26 and represented about 92 percent of the eligible universe of males subject to registration. Men are most vulnerable to being drafted during the calendar year they reach age 20 and become increasingly less vulnerable each year through age 25. SSS officials estimate that registration compliance for men considered “draft eligible,” those aged 20 through 25, is 95 percent. A detailed description of registration methods appears in appendix I. 
Currently, SSS operates as a backup for the recruiting efforts of the volunteer armed forces in case an emergency compels a reintroduction of the draft. To carry out its operations, SSS is authorized a staff of 197 civilians (166 on board as of June 1, 1997); 15 active military personnel (2 additional positions are funded by the Air Force); 745 part-time authorized reservists (518 are funded); 56 part-time state directors (one in each state, territory, the District of Columbia, and New York City); and 10,635 uncompensated civilian volunteer members of local, review, and various appeal boards. The state directors would manage state headquarters and oversee their states’ Area and Alternative Service Offices and boards for SSS in the event of a mobilization. The local and district appeal boards would review claims that registrants file for draft deferments, postponements, and exemptions in a mobilization. Under the Alternative Service Program, civilian review boards review claims for job reassignment based on conscientious objector beliefs. SSS’ 1997 budget is $22,930,000, which is divided as follows: $7,810,000 for operational readiness (includes all boards activities), $7,360,000 for registration (includes public awareness activities), and $7,760,000 for administration. (All cost figures provided in this report are in 1997 dollars.) Although DOD does not currently foresee a military crisis of a magnitude that would require immediate reinstatement of the draft, it continues to support registration for all men between the ages of 18 and 26. The registration process furnishes a ready pool of individuals that could be drafted when needed to meet DOD’s emergency manpower requirements. Until 1994, DOD required the first inductees to be available 13 days after mobilization notification and 100,000 to be available 30 days after notice. 
That year, DOD modified its requirements, prescribing accession of the first inductees at 6 months plus 13 days (that is, on day 193) and 100,000 inductees at 6 months plus 30 days (that is, on day 210). For a draft of doctors, nurses, and other medical personnel, the first inductees are presently slated to report on day 222. SSS officials stated that they can provide personnel to DOD in the event of an unforeseen emergency assuming adequate funding and staff. DOD based its time line modifications on the expectation that active and reserve forces would be sufficient to respond to perceived threats, thereby mitigating the need for an immediate infusion of inductees. We did not validate the current DOD requirements for inductees. However, according to DOD, the current requirements maintain an adequate margin of safety and provide time for expanding military training capabilities. The portions of the $22.9 million 1997 budget that could be most affected by the alternatives total approximately $15.2 million: $7.4 million for the registration program and $7.8 million for operational readiness. Registration program activities include handling and entering information into the database on new registrants, producing and distributing publicity material about the requirement to register, running subprograms on registration compliance and address updates, deactivating registrants who no longer remain eligible because of age, and verifying the registration of individuals who may be applying for federal or state employment or other benefits. Operational readiness activities include organizational planning; National Guard and reserve training and compensation; tests and exercises; and various boards’ operations, including training, automatic data processing support, and other logistical types of support. Suspending the current registration requirement, with or without maintaining the boards, would generate cost savings primarily through reduced personnel levels. 
However, savings derived from implementing either option would be partially offset by the cost of downsizing the agency to accomplish planning and maintenance missions only and by severance costs associated with reducing personnel levels. SSS officials estimate one-time severance costs (including severance pay, unemployment insurance, lump sum leave, and buyouts) of $1.6 million for the suspended registration alternative and of $2.8 million for the deep standby alternative. Also, under current federal law and a number of state laws, certain benefits may be denied to individuals who fail to register for a draft. SSS officials estimate that the current cost to verify registration to ensure compliance with such provisions totals about $1.6 million annually. Therefore, the amount of savings under either alternative would depend upon whether the agency is required to continue its verification function (for individuals who were subject to registration prior to suspension) or whether the applicability of such provisions is suspended. According to SSS officials, under the suspended registration alternative, 74 civilian, 4 active duty military, and 241 part-time reserve positions may be eliminated. SSS officials estimated first-year cost savings of $4.1 million and subsequent annual cost savings of $5.7 million under this alternative. SSS would maintain the various boards, their training and operating programs, and the ability to update automated data processing capabilities as technology advances. The agency also informed us that it would continue readiness planning and training plus conduct or participate in mobilization field exercises to test and fine-tune its role in national security strategies. SSS officials told us that under the deep standby alternative, 120 civilian, 7 active duty military, and 440 part-time reserve positions may be eliminated. 
Also, 10,395 local and appeal board members and 240 civilian review board members (all unpaid volunteers) would be dismissed. In addition, the 56 state directors would move to an unpaid status. SSS officials estimated first-year cost savings of $8.5 million and subsequent annual cost savings of $11.3 million under this alternative. SSS would be placed at a level at which it could accomplish planning and maintenance missions only, including the ability to update automated data processing capabilities as technology advances. Under either the suspended registration or the deep standby alternative, reactivation of a draft registration process would be initiated upon receipt of authorization. The President can reinstate registration requirements by issuing a proclamation, but the Military Selective Service Act does not currently allow induction into the armed forces. The Congress would have to pass legislation giving the President induction authority. Two major concerns relating to the implementation of either of the alternatives are whether SSS could meet DOD’s requirements, given the time needed to make the agency fully operational, and how much reconstitution would cost. SSS officials estimate that recovering from suspended registration or a deep standby and delivering the first draftees to the induction centers would take more time than DOD’s current 193-day requirement. They estimate it would take about 24 more days to deliver the first draftees after recovering from the suspended registration alternative. The officials expect that the recovery costs would total about $17.2 million. SSS officials also estimate that revitalizing the agency from a deep standby posture and delivering the first draftees would take about 181 more days than DOD’s current requirement and would cost about $22.8 million. 
These costs cover rehiring personnel; obtaining data processing capability; and acquiring equipment, supplies, and other resources needed to conduct a mass registration and return the agency to its present operating capability. These costs also cover acquisition of necessary additional office and data processing space. SSS officials informed us that if the agency reinstated registration after having operated under either the suspended registration or deep standby option, it would need to conduct a time-limited registration of the 19- and 20-year-old groups and then conduct a continuous registration of all males in the remaining age groups (those between the ages of 21 and 26). The agency successfully conducted a 2-week registration of the 19- and 20-year-old age groups during the peacetime reinstatement of registration in 1980. However, the agency could not project with a high degree of confidence that it would similarly succeed when conducting a time-limited registration during wartime or a national crisis. SSS officials stated that unless the mass registration program can achieve high levels of compliance (at least 90 percent of the targeted population), the fairness and equity of the ensuing draft could be called into question. Additionally, officials said the “lottery,” which would be used to determine the order of call in a draft, could be delayed until high compliance is achieved to preclude men with birthdates that draw low numbers from willfully refusing to register. In 1980, SSS demonstrated that it could achieve a high percentage of compliance during a time-limited registration. At that time, SSS conducted two time-limited registrations, after recovering from a deep standby posture. 
During these registrations, 87 percent of the young men born in 1960 and 1961 (19- and 20-year-olds) registered during a 2-week period in July 1980, and 77 percent of the young men born in 1962 (19-year-olds) registered during a 1-week registration period in January 1981. SSS officials indicated that these mass registrations occurred after 6 months of publicity and public debate and with no threat of an impending draft. In the view of SSS officials, a return to registration from either alternative described in this report is likely to be in connection with a war or crisis, and they believe early compliance rates cannot be predicted in a crisis environment. SSS officials stated that the agency’s main problem in gearing up in 1980 was in reinstating and activating the local, district appeal, and national boards in preparation for a possible draft. They said the process would be time-consuming because more than 10,000 volunteers forming 2,000 boards would need to be identified, appointed, and trained. SSS officials also stressed that to help ensure fairness, the composition of the boards should racially and ethnically reflect the demographics of the young men in the communities they would serve. Given the agency’s experience in recovering from a deep standby in 1980, SSS officials added extra time to their current estimates of the time required to make the agency fully operational. SSS officials believed that the variables that could affect the timeliness, fairness, and equity of a future draft made it prudent to build additional time into their estimates to conduct a draft, should registration be suspended or the agency placed in deep standby. SSS reviewed a draft of this report and stated that the report did an excellent job of analyzing the dollar requirements of peacetime registration and estimating the structure and funding changes that may result if national security policy were changed to abandon the current registration requirement. 
SSS also commented that our report did not address some aspects of continuing peacetime registration that it characterized as equally important, but less tangible. Those aspects included viewing peacetime registration as (1) low-cost insurance against unforeseen threats, (2) a sign to potential adversaries of U.S. resolve, and (3) a link between the all volunteer military force and society at large. We did not review these implications of continuing peacetime registration as part of our audit scope and clarified the report to reflect this fact. SSS also provided technical comments, which we incorporated as appropriate. SSS comments are presented in appendix II. In performing our review, we interviewed and obtained documents from SSS officials in Financial Management; Planning, Analysis, and Evaluation; Operations; Public and Congressional Affairs; and Information Management. We identified SSS’ current mission and operating parameters, focusing on the draft registration system. We made preliminary inquiries regarding four alternatives to SSS’ present operations, that is, two passive registration alternatives, a suspended active registration alternative, and a deep standby alternative. Since passive registration alternatives would raise constitutional issues and possibly encourage lawsuits regarding fairness and equity of such systems during mobilization, we did not address these alternatives. For the two remaining alternatives, we obtained from SSS estimates of costs that could be saved upon implementation of either alternative. Since the cost savings would surface through reductions in personnel, we obtained from SSS the effect of implementing either alternative on its staffing levels. In addition, we obtained from SSS cost estimates associated with revitalizing registration or with moving the agency from a deep standby posture to full operational status. 
SSS also gave us time estimates for the revitalization of both the registration process and the board structure and its assessment of the alternatives’ effects on meeting DOD’s manpower and mobilization time frame requirements. We did not validate the cost and time estimates but made judgments on their reasonableness by discussing the methods and assumptions SSS used to develop the estimates and by matching baseline information to agency backup documents. We did not review the policy implications of changing or continuing the peacetime registration program. We conducted our review between December 1996 and July 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairmen and Ranking Minority Members of the House Committee on Government Reform and Oversight, Senate Committee on Governmental Affairs, House and Senate Committees on Appropriations, House Committee on National Security, Senate Committee on Armed Services, and House and Senate Committees on the Budget; and the Director of the Selective Service System; the Secretary of Defense; and the Director, Office of Management and Budget. We will also make copies of the report available to others, on request. Please contact me on (202) 512-5140 if you have any questions concerning this report. Major contributors to this report are listed in appendix III. Men between the ages of 18 and 26 can register with SSS in six ways: (1) fill out an SSS form at U.S. Postal Service facilities throughout the nation and at U.S. embassies or consulates overseas; (2) complete and return a registration reminder mail-back postcard or a compliance postcard required as a result of having been identified by SSS from various databases; (3) join the military or Job Corps; (4) complete a registration form provided by volunteer registrars; (5) register when applying for student financial assistance; and (6) initiate registration by computer using the Internet. 
Men may register at any one of the more than 34,000 post offices in the United States and U.S. territories by completing SSS Form 1. During fiscal year 1996, about 386,000 individuals used this procedure to register. Registrants should receive a registration acknowledgement and a Selective Service number within 90 days. If the registrant does not receive acknowledgement within this time frame, he is required to contact SSS. SSS sends reminder postcards to young men about to turn 18, based on driver’s license lists received from states’ departments of motor vehicles and similar lists from other sources. In fiscal year 1996, over 2 million young men were sent reminder mail-back registration postcards, and 792,435 men returned the registration portion. SSS also does list matching to identify eligible males who have not registered as required, using data from each state’s department of motor vehicles, Department of Defense high school recruiting lists, the U.S. Immigration and Naturalization Service’s files of individuals seeking citizenship or legal residency status, voter registration files, and the Department of Education. Once identified as possible nonregistrants, the individuals are sent a reminder, including a compliance postcard. About 343,300 men registered after receiving at least one communication requiring compliance. The names of those who did not register or respond are referred to the Department of Justice for possible prosecution. The third registration method is the automatic registration of active duty and reserve military personnel as well as males in the Job Corps who have not reached age 26 at the time of their enlistment. Approximately 55,400 military personnel and about 16,700 Job Corps members were automatically registered through this method in fiscal year 1996. Beginning in fiscal year 1998, the U.S. 
Immigration and Naturalization Service plans to include on its forms language for automatic registration of all eligible male aliens applying for citizenship or adjustment of status. SSS also has more than 10,000 volunteer registrars in public and private schools who advise eligible males of their responsibility to register. The volunteers provide registration forms and collect and forward the completed forms to SSS. Additionally, SSS has about 4,300 volunteer registrars in the National Association of Farmworkers program and in various state agencies and state military departments. Approximately 60,500 men were registered by volunteer registrars during fiscal year 1996. The electronic registration procedure can be used by students applying for student financial assistance and by individuals who initiate registration through the Internet. In 1982, the Congress amended the Military Selective Service Act to provide that any student who is required to register with SSS but has failed to do so is ineligible for student assistance under title IV of the Higher Education Act of 1965. Since then, the Department of Education and SSS have implemented a telecommunications datalink that is used for electronic registration and registration verification. A student is automatically registered by marking the box “register me” on the Application for Federal Student Aid. During fiscal year 1996, about 177,600 men registered using this method. Beginning in March 1997, men who have access to the Internet can initiate the registration process by filling in name, date of birth, address, and social security number on an on-line registration form. This information is downloaded to SSS, which sends the registrant a card requesting that the information be verified. When the verification card is returned and SSS sends the registration acknowledgement to the registrant, registration is completed. All new registrants receive an acknowledgement card from SSS. 
The card serves as proof of registration and gives each registrant a unique Selective Service number.
Pursuant to a congressional request, GAO reviewed the organization and costs of the Selective Service System (SSS) draft registration program and estimates of the comparative costs and organizational structure changes of two selected alternatives: (1) a suspended registration alternative, under which most of SSS' infrastructure would remain intact, including a significant portion of its staff and all of its local, district appeal, civilian review, and national boards; and (2) a deep standby alternative, which would suspend registration, reduce a substantial portion of the workforce, and disband the local, district appeal, civilian review, and national boards. GAO noted that: (1) most of SSS' potential cost reductions, under either a suspended registration or a deep standby alternative, would result from reductions in personnel; (2) SSS estimates that the suspended registration alternative would reduce authorized and assigned civilian, active military, and part-time military reserve personnel by about 33 percent; (3) these reductions would produce first-year cost savings of $4.1 million and subsequent annual cost savings of $5.7 million; (4) SSS estimates that the deep standby alternative would reduce authorized civilian, active military, and part-time reserve personnel by about 60 percent; (5) the latter alternative reflects a dismissal of thousands of trained, unpaid local, review, and appeal board volunteers; (6) under the deep standby alternative, the part-time state directors, who according to SSS officials are paid for an average of 14 days of work per year, would not be paid; (7) altogether, these reductions would produce first-year cost savings of $8.5 million and subsequent annual cost savings of $11.3 million; (8) under both alternatives, mass registrations would be needed if a mobilization were authorized; (9) SSS' plans show that the agency could currently meet the Department of Defense's (DOD) requirement to provide the first draftees at 193 days; (10) 
in contrast, SSS officials believe that the agency would be unable to meet DOD's current requirements for unpaid manpower under either alternative; (11) the reason cited is the time needed to reinstate an active registration system (for either alternative), to reconstitute and train the boards, and to rebuild their supporting infrastructure (for the deep standby alternative); (12) SSS officials estimate that in reinstating registration after suspension, they could meet DOD's requirement for the first draftees in about 217 days; (13) they also estimate that in reinstating a registration system, reconstituting and training the boards, and rebuilding the supporting infrastructure after a deep standby posture, they could meet DOD's requirement for the first draftees in about 374 days; (14) officials told GAO that these estimates represent their best assessment of the time required to return to full operations; and (15) SSS officials also estimated that the cost to reinstate a suspended registration could total about $17.2 million and the cost to revitalize the agency from a deep standby posture could total about $22.8 million.
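The time and cost estimates cited in this report are internally consistent, which can be verified with simple arithmetic. The cross-check below uses only figures from the report; the 30-day-month convention used to interpret "6 months plus 13 days" is our inference from the day-193 figure, not something SSS or DOD states.

```python
# Cross-checks of the SSS figures cited in this report. All numbers come from
# the report itself; the 30-day month is our inference (assumption), since
# 6 months plus 13 days is said to fall on day 193.

MONTH = 30  # days; 6 * 30 + 13 = 193 matches DOD's stated requirement
assert 6 * MONTH + 13 == 193   # first inductees under DOD's current requirement
assert 6 * MONTH + 30 == 210   # 100,000 inductees

# Recovery timelines: DOD's 193-day requirement plus the extra days SSS
# estimates it would need under each alternative.
assert 193 + 24 == 217    # suspended registration alternative
assert 193 + 181 == 374   # deep standby alternative

# First-year savings equal subsequent annual savings minus one-time severance
# costs, in millions of 1997 dollars.
for annual, severance, first_year in [(5.7, 1.6, 4.1),    # suspended registration
                                      (11.3, 2.8, 8.5)]:  # deep standby
    assert abs((annual - severance) - first_year) < 0.01

# The 1997 budget components sum to the $22.93 million total.
assert abs(7.81 + 7.36 + 7.76 - 22.93) < 0.001
print("All figures reconcile")
```

The savings check makes explicit a point the report states in prose: first-year savings under either alternative are lower than subsequent annual savings precisely because of the one-time severance costs.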
The nation’s transportation system is vast in scope and increasingly congested. Two key components of the transportation network are the nation’s highways and transit system. There are approximately 4 million miles of highway in the United States, which serve to provide mobility to millions of passengers and millions of tons of freight each day. In addition, over 600 transit agencies provide a range of transit services to the public, including rail and bus service. Each workday, about 14 million Americans use some form of transit. Over the last 20 years, all levels of government, including the federal government, have spent hundreds of billions of dollars on the nation’s highways and transit systems to enhance mobility as well as meet other needs. Despite these expenditures, increasing passenger and freight travel has led to growing congestion. For instance, annual delays per traveler during rush hour have almost tripled, increasing from 16 hours in 1982 to 46 hours in 2002. According to DOT forecasts, passenger and freight travel will continue to increase in the future. There are a number of strategies, such as preventive maintenance, improving operations and system management, and managing system use through pricing or other techniques, that can be employed to help address the nation’s mobility challenges. One of the key strategies is to invest in new physical capacity in the transportation system. While such investment is the subject of this report, as we have noted in the past, a targeted mix of these strategies is needed to help control congestion and improve access. (See app. IV for additional information about the level of usage of and investment in the nation’s highway and transit systems.) The funding for new transit and highway projects comes from a variety of sources, including federal, state, and local governments; special taxing authorities and assessment districts; and user fees and tolls. 
The Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) and TEA-21 continued the use of the federal Highway Trust Fund as the mechanism to account for federal highway user-tax receipts that fund various highway and transit programs. Once Congress authorizes funding, FHWA makes federal-aid highway funds available to the states annually, at the start of each fiscal year, through apportionments based on formulas specified in law for each of the several formula grant programs. Ninety-two percent of the federal-aid highway funds apportioned to the states in fiscal year 2003 were apportioned by formula. According to DOT officials, the majority of federal-aid highway funds are used for maintenance purposes, not new investments. FTA also uses formulas to distribute federal urbanized and nonurbanized funds for capital and operating assistance to transit agencies and/or states. FTA also has discretionary transit programs, including the New Starts program. The New Starts program provides funds to transit providers for constructing or extending certain types of transit systems and is the primary source of funding for new transit capacity. FTA generally funds New Starts projects through full-funding grant agreements, which establish the terms and conditions for federal participation in a project, including the maximum amount of federal funds available for the project. To compete for a full-funding grant agreement, a transit project must emerge from a regional planning process. The first two phases of the New Starts process—systems planning and alternatives analysis—address this requirement. The systems planning phase identifies the transportation needs of a region, while the alternatives analysis phase provides information on the benefits, costs, and impacts of different corridor-level options, such as rail lines or bus routes. 
The alternatives analysis phase results in the selection of a locally preferred alternative—which is intended to be the New Starts project that FTA evaluates for funding. After a locally preferred alternative is selected, the project is eligible for entry into the New Starts process. FTA oversees the management of projects from the preliminary engineering phase through construction and evaluates the projects for advancement into each phase of the process, as well as annually for the New Starts report to Congress. FTA’s New Starts evaluation process assigns ratings on the basis of a variety of statutorily defined criteria, such as mobility improvements, and determines an overall rating. FTA uses the evaluation and ratings process, along with its consideration of the state of development of the New Starts projects, to decide which projects to recommend to Congress for a full-funding grant agreement. ISTEA and TEA-21 also established an overall approach for transportation planning and decision making that state, regional, and local transportation agencies must follow to receive federal funds. This approach includes involving numerous stakeholders, identifying state and regional goals, developing long- and short-range state and regional planning documents, and ensuring that a wide range of factors are considered in the planning and decision-making process. For example, transportation officials must consider safety, environmental impacts, system connectivity, and accessibility, among other things. While the federal requirements specify a wide range of factors that must be considered when selecting a project from alternatives, they generally do not specify what analytical tools, such as benefit-cost analysis, transportation officials should use to evaluate these factors. Instead, local, regional, and state agencies are largely responsible for selecting the methods used to analyze these factors. 
Federal requirements also do not mandate that local, regional, and state agencies choose the most cost-beneficial project. Rather, transportation officials at these agencies have the flexibility to select projects on the basis of their communities’ priorities and needs. Even in the more structured New Starts program, state, regional, and local agencies have discretion in selecting the preferred alternative, although, according to FTA, these agencies are likely to consider New Starts requirements in the decision-making process. Various analytical approaches, including benefit-cost, cost-effectiveness, and economic impact analyses, have been refined over time to better calculate the benefits and costs of transportation investments and provide decision makers with tools to make better-informed decisions. (Table 1 describes the purposes of the different economic analyses.) The Office of Management and Budget (OMB), DOT, and GAO have identified benefit-cost analysis as a useful tool for integrating the social, environmental, economic, and other effects of investment alternatives and for helping decision makers identify projects with the greatest net benefits. In addition, the systematic process of benefit-cost analysis helps decision makers organize and evaluate information about, and determine trade-offs between, alternatives. Because the federal-aid highway program is funded under a formula program and projects are therefore not subject to an evaluation process at the federal level, there are no federal requirements for economic evaluation of highway investment costs and benefits—except that FHWA does ensure that federal highway funding is being spent on an eligible roadway for eligible purposes. 
In contrast, FTA’s New Starts program is discretionary, and FTA is authorized to establish various requirements that sponsors of transit capital investments need to meet in estimating a project’s benefits and costs, including calculating the cost-effectiveness of a proposed project and providing information on expected land-use effects, to obtain federal funding. However, transit agencies are not required to conduct a formal benefit-cost analysis, and FTA is prohibited by TEA-21 from considering the dollar value of mobility improvements in evaluating projects, developing regulations, or carrying out any other duties. FTA officials noted that the New Starts evaluation process results in greater federal oversight and scrutiny for New Starts projects, compared with the level of federal oversight for federally funded highway projects. The types of direct benefits that transit and highway projects may produce include user benefits, such as travel-time savings, and benefits that accrue to users and nonusers alike, such as reductions in the adverse environmental impacts of transportation. These direct benefits can in turn produce indirect benefits, such as economic development and employment that affect the regional or local economy; however, these indirect benefits may constitute transfers of economic activity from one area to another or result from the direct benefits filtering through the economy. Although these indirect benefits represent real benefits for the jurisdiction making the transportation improvement, from a national perspective they are transfers rather than real economic benefits. Transportation investments also produce costs, including the direct costs to construct, operate, and maintain the project as well as other potential social costs resulting from the construction and use of the facility, such as unmitigated environmental effects. 
The potential benefits and costs of any specific highway or transit investment will depend on the specifics of the project being considered and the local economic and transportation conditions. However, measuring all the potential benefits and costs of proposed highway and transit investments can be challenging and subject to several limitations and sources of error. For example, in current practice, benefit-cost analysis and economic impact analysis may not include all potential benefits. In addition, there are many limitations in being able to accurately predict changes in traveler behavior, land use, or the use of nearby roadways or alternative travel options resulting from a new investment. Sources of error can also include double counting of benefits and not comparing a project to a viable alternative or improperly defining the “do-nothing” case for comparison. The key categories of potential direct user benefits from highway investments include travel-time savings, reductions in accidents, and reductions in vehicle operating costs. These user benefits have historically been included in benefit-cost analyses of such investments. The User Benefit Analysis for Highways Manual developed by the American Association of State Highway and Transportation Officials (the AASHTO Manual) provides guidance on how these benefits should be estimated. In addition to benefits that accrue solely to users, social benefits such as reductions in environmental costs—including reduced emissions, noise, or other impacts—are also potential sources of direct benefits of highway projects. However, these benefits are more difficult to quantify and value; and as a result, they are less often included in benefit-cost analyses of transportation investments. 
Guidance from FHWA’s Office of Asset Management, the Economic Analysis Primer, discusses these benefits along with user benefits. Experts we consulted also cited improvements in travel-time reliability as a major source of potential direct user benefits, particularly for freight transportation, although officials at FHWA stated that this benefit is complex and the best means to incorporate it into benefit-cost evaluations has not been resolved. For transit investments, direct benefits include improving travel times for existing transit users, improving travel times for autos and trucks on alternative roadways, lowering user and environmental costs of auto use by attracting riders out of their vehicles, and providing a back-up or future option for nonusers of transit. These types of benefits are described in guidance on conducting benefit-cost analysis for transit projects published by the Transit Cooperative Research Program (TCRP) (this report is known as the Transit Manual). Another TCRP report on transit benefits describes other types of potential benefits, which may result from the project but may be more difficult to include in a benefit-cost analysis, such as improved job accessibility for individuals who are dependent on transit and those who do not or cannot drive a car. See Table 2 for the categories of direct benefits described in the AASHTO Manual, the Economic Analysis Primer, and the TCRP reports. In addition to direct benefits, a number of indirect benefits are also attributed to highway and transit investments. Lowering transportation costs for users and improving access to goods and services enables new and increased economic and social activity. Over time, individuals, households, and firms adjust to take advantage of those benefits, leading to several indirect impacts. 
These indirect impacts include changes in land use and development, changes in decisions to locate homes and businesses in areas where housing and land are less expensive or more desirable, and changes in warehousing and delivery procedures for businesses in order to take advantage of improved speed and reliability in the transportation system. These impacts then lead to increased property values, increased productivity, employment, and economic growth. Economic impact analysis is generally used to estimate the extent to which direct benefits translate into indirect economic impacts. Table 3 shows the types of indirect benefits that are included in economic impact analysis. The extent to which these indirect benefits are relevant depends to some degree on whether the project is viewed from a local or a broader perspective. These economic impacts may represent transfers of economic activity from one area to another; and, while such a transfer may represent real benefits for the jurisdiction making the transportation improvement, it is not a real economic benefit from a national perspective because the economic activity is simply occurring in a different location. For example, a highway improvement in one county may induce businesses to relocate from a neighboring county, bringing increased tax revenue and providing jobs; but the neighboring county then loses that tax revenue and employment. Indirect benefits may also represent capitalization of the direct user and social benefits, and therefore should not be added to the direct benefits. For example, a project’s transportation benefits, in terms of improved travel times, can lead to increased demand for more remote properties, and thus lead to increases in those property values. In this instance, the users are transferring their travel benefits to property owners through a higher purchase price. 
Including the increased property value and the travel-time benefit in an overall project evaluation would constitute counting the same benefit twice. However, some experts we consulted and literature we reviewed indicated that there could be some residual benefit from these indirect effects that is not accounted for in travel-time benefits or other direct impacts and argued that this portion should be incorporated into a comprehensive estimation of project benefits and costs. Transportation investments also produce costs—such as the costs to construct, operate, and maintain the project; traffic delay costs during construction of the project; and other potential social costs resulting from the construction and use of the facility, such as unmitigated environmental effects or community disruption. For example, while a project may have an indirect benefit of increasing some land values, it may also reduce land values elsewhere due to negative impacts from noise and emissions that may result from the improved roadway or transit line. In addition, a transportation improvement can entail costs for some regions if it diverts economic activity away from a particular area. The size and type of benefits and costs that will manifest from highway and transit investments depend critically on local conditions, such as existing travel conditions and the extent of congestion, economic conditions and development patterns, and the extent of the existing road and transit networks. In addition, the type of project, its design, and other specifics will also affect the types of benefits and costs the project may produce. Each particular project must be evaluated on its own merits, in comparison with any other viable alternatives to address the transportation and other goals of the region. 
For example, research indicates that transit projects can result in peak-period travel-time savings for users of alternative roadways when those roadways are heavily congested, the transit project has a separate right-of-way and a fixed schedule, and door-to-door travel times on the transit line are competitive with or lower than door-to-door travel times on the roadway in peak periods for some road users. Building a rail line alongside a road that is not frequently traveled will clearly not result in similar benefits. Similarly, the extent to which a highway investment will result in reductions in travel times and the extent to which new travelers will return the highway to previous levels of congestion and delay depend on the level of congestion on alternative routes, the extent of the local transit system, and local economic conditions. Research further indicates that to realize desired land-use changes and higher density development, transit investments need to be coordinated with supportive local land-use policies and that such impacts occur more readily in rapidly growing regions with demand for high-density development. In a similar fashion, the extent to which highway investments will result in improvements in freight productivity will depend on economic conditions; the amount of freight traffic on the local network; the presence of alternative freight modes, such as rail or waterways; and various other locally specific factors. In addition, specific projects will also affect different areas and groups differently. A transportation project that is projected to produce large benefits may cut through one neighborhood and provide excellent access to another, thereby imposing costs on one area and creating benefits for another or providing service to wealthy areas at the expense of lower income areas. The costs of highway investments and various transit alternatives can vary significantly, based on the location and specifics of the project. 
For example, according to a 2002 report from the Washington State DOT, average construction costs for a lane mile of highway range from $1 million to over $8 million across 25 states the department surveyed, with some projects costing far more than these averages suggest. In a recent study on different transit modes, we found that light rail construction costs vary from $12.4 million per mile to $118 million per mile. As with construction costs, the costs to operate and maintain highway and transit systems also vary significantly, based on the specific project and area. For example, according to the National Transit Database, operating costs per vehicle revenue mile for heavy rail systems range from about $5 to about $15, whereas for light rail, these costs range from a little over $5 to over $20 in some locations. Experts we consulted and literature we reviewed cited several limitations in current practice and some major sources of error in evaluating transportation projects that can lead to over- or underestimation of a project’s benefits and costs. The following sections discuss some of these limitations and sources of error. One of the key challenges in measuring and forecasting benefits and costs is the inability of current travel models to accurately predict changes in traveler behavior, land use, or the usage of nearby roadways or alternative travel options resulting from a highway or transit project. For example, according to FHWA guidance, travel models do not generally anticipate the impact of a transportation improvement on travelers who change their time of travel or make entirely new trips in response to the relatively lower trip cost resulting from the improvements. Current transportation demand models are also unable to predict the effect of a transportation investment on land-use patterns and development, since these models take land-use forecasts as inputs into the model. 
Nonetheless, expected land use and development impacts are often the major drivers of transportation investment choices. In addition, the effect of a highway or transit investment on alternative roadways or on other modes is rarely taken into account and is difficult to forecast. In fact, according to the DOT Inspector General, transit’s effect on alternative roadways is not reliably estimated by local travel models, although this effect can be a major source of benefits in some cases. These same models are also used in making highway investment decisions. Compounding these shortcomings is the considerable variation in models used by local transportation planning agencies. The federal government gives local transportation planning agencies the flexibility to choose their own transportation models without being subject to minimum standards or guidelines. This flexibility reflects varying local conditions and expertise in applying these models. However, one expert pointed out that this strategy has had the unintended consequence of making local planning agencies very dependent on outside expertise because they usually contract with independent consultants who have their own software packages. This strategy has also produced significant variation in forecast quality and limited the ability to assess quality against the general state of practice. Data quality is pivotal to these modeling challenges, as the available data provide critical inputs for travel models. For example, data about traffic flow throughout the day, rather than at a single time, are crucial to produce valid representations of travel needs and problems. However, reliable and complete data are not always available—which can result in forecasting errors. Collecting the data needed for modeling is growing more expensive and difficult. 
For instance, a home survey of travel habits, which identifies the basic transportation needs and travel patterns of a region and is the foundation of transportation modeling, is now beyond most local transportation agencies’ annual budgets, according to experts. Moreover, obtaining data through telephone surveys is difficult, and willingness to participate is declining. Experts we consulted and literature we reviewed also indicated that benefit-cost analysis and economic impact analysis often do not include all potential benefits, some of which are very difficult to quantify. For example, according to one expert we consulted, transit projects are often put at a disadvantage in estimating benefits and costs relative to highway projects because several types of benefits specific to transit are not typically evaluated and are difficult to quantify. A review of economic analyses conducted for over 30 transit projects found that these analyses routinely omitted benefits to noncar owners, often did not include environmental benefits, and often did not evaluate the economic development benefits related to the project. Experts we consulted also highlighted the importance of taking account of which groups benefit from a project and which bear the costs, although these distributional impacts are commonly ignored in evaluations of a project’s benefits and costs. In theory, a benefit-cost analysis could take such considerations into account, but the outcome of a benefit-cost analysis is a net value, which under standard assumptions eliminates any distinction between groups who benefit and groups who do not. Project appraisals often double count benefits and count certain project expenditures as benefits. 
As previously discussed, for the most part, indirect benefits are more correctly considered capitalization of direct user benefits or transfers of economic activity from one area to another. Therefore, estimating and adding such benefits to direct benefits would constitute double counting and lead to an overestimation of a project’s benefits. Some evaluations of particular transportation projects also cite jobs created, or the economic activity resulting from the construction of the project, as benefits of the project. Experts we spoke with indicated that job creation from transportation spending would only be a true benefit if the person getting the job would otherwise be unemployed, and thus the reduction in unemployment benefits could be considered a benefit of the project. Nonetheless, local decision makers generally view such expenditures as producing benefits for their jurisdiction. In some evaluations, decision makers also count the avoided cost of some other alternative project as a benefit of the project under consideration. For example, in some evaluations, decision makers have considered the foregone expense of improving the highway as a benefit of a transit project, or the foregone expense of adding general-purpose lanes as a benefit of adding high-occupancy vehicle lanes. Instead, those costs should be included in the benefit-cost analysis of the alternative and then compared with the benefits and costs of all other alternatives. In some appraisals, such cost savings have been the largest source of project benefits. Another expert we interviewed stated that state departments of transportation often do not discount future benefits and costs into present values. Benefits and costs incurred in the future have lower values than those incurred in the present because, in the case of benefits, the benefits cannot be enjoyed now; and in the case of costs, the resources do not need to be expended now. 
Benefits and costs are worth more if they are experienced sooner because of the time value of money. Failure to discount future benefits or using an inappropriate discount rate can severely affect the results of a benefit-cost analysis. Not discounting at all will greatly overestimate a project’s benefits. An unreasonably high discount rate will underestimate a project’s benefits. OMB provides guidance on choosing appropriate discount rates for different types of investments. Another source of error when calculating transportation projects’ potential benefits and costs occurs because current travel demand models tend to predict unreasonably bad conditions in the absence of a proposed highway or transit investment. Travel forecasting, as previously discussed, does not contend well with land-use changes or effects on nearby roads or other transportation alternatives that result from transportation improvements or growing congestion. Before conditions get as bad as they are forecasted, people make other changes, such as changing residence or employment, to avoid the excessive travel costs. In one area we visited, local officials told us that the “do-nothing” scenario for a particular project evaluation predicted that travel delays would grow to almost 80 minutes for a typical commute after 20 years, and impacts on travel-time reductions were then calculated for the proposed investment. However, officials noted that traffic did not degrade as they had predicted in the years leading up to construction—with delays of 13 minutes by 1999, although they had predicted delays of 40 minutes or more by that time. The officials noted that generally, commuters will tolerate only a certain amount of delay before they shift their behavior to avoid it. In addition, experts indicated that projects are often not compared to viable alternatives, or to projects in other modes, to enable adequate comparisons of investment alternatives. 
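The discounting errors described above can be shown numerically. The sketch below, with an invented benefit stream and illustrative rates (not drawn from OMB guidance or any actual project), compares an undiscounted total against present values at a moderate and an unreasonably high discount rate.

```python
# Present value of a constant annual benefit stream under different
# discount rates. A benefit received in year t is divided by (1 + r)**t.
# The stream ($10M/year for 30 years) and the rates are illustrative only.

def present_value(annual_amount, years, rate):
    """Sum of discounted annual amounts over the analysis horizon."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

annual_benefit = 10.0  # $ millions per year
horizon = 30           # years

undiscounted = annual_benefit * horizon
pv_moderate = present_value(annual_benefit, horizon, 0.07)
pv_high = present_value(annual_benefit, horizon, 0.15)

# Not discounting at all greatly overstates benefits; an unreasonably
# high rate shrinks them.
print(f"no discounting: ${undiscounted:.0f}M")
print(f"7% rate:  ${pv_moderate:.1f}M")
print(f"15% rate: ${pv_high:.1f}M")
```

The gap between the three totals, here several-fold, is why the choice of discount rate can determine whether a project appears worthwhile at all.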
We found in our case studies of five New Starts projects and five highway projects that the transit projects we reviewed were compared with other transit modes, such as increased bus service, but not to new highway investment alternatives; and none of the highway projects we reviewed were compared with a transit alternative. However, in some cases, differently designed alternatives can prove to be a superior option. For example, one study of transportation decision making in Houston found that, if the bus alternatives to the preferred light rail system were designed to cost as much as the light rail option, the resulting bus system would carry more passengers and be more cost-effective than the rail option; however, local planners and decision makers did not consider such an alternative. Another recent evaluation compared a transit and a highway project with common economic yardsticks—such as a benefit-cost ratio and a rate of return—and found that under certain circumstances, transit can perform favorably compared with a highway alternative. According to our survey results and case studies, although the costs and benefits of projects were almost always considered in some way, formal analyses such as benefit-cost analysis were not usually conducted when considering project alternatives, and they were completed less frequently for proposed highway projects than for transit projects. Additionally, officials reported that the results of formal economic analyses were just one factor among many considered in project selection, and not necessarily the most important one. Other important factors included qualitative assessments of the potential land use or economic development benefits of the project, public opinion and political support, and funding availability. 
Most state DOT and transit agency officials that responded to our survey said that when alternatives are considered for a proposed project, they complete some analysis of either costs or benefits of the various alternatives, but they complete a formal benefit-cost analysis, economic impact analysis, or cost-effectiveness analysis less frequently (see fig. 1). These results indicate that many state and local transportation agencies are not consistently using formal economic analysis as part of their investment decision-making process to evaluate project alternatives. In addition, in the locations that we visited, we did not find any examples of completed benefit-cost analysis for the 10 projects that we examined. According to our survey results, when comparing alternatives for proposed projects, economic analyses were more likely to be conducted for transit projects than highway projects (see fig. 2). We saw a similar pattern in our case studies. For instance, a cost-effectiveness analysis was completed for all five transit projects that we examined in our case studies. We also found additional studies for the transit projects that included qualitative examination of such potential project impacts as regional economic development opportunities, distribution across social groups, increased transit reliability, and increased transit ridership. For the highway projects we studied, we found that project documents contained little, if any, economic analyses on the various alternatives. We did find that for some highway projects, safety and environmental impacts were quantified, but not put into dollar terms. Local and state officials noted that these economic analyses are done more often for transit projects because of the New Starts requirements. 
For example, FTA requires project sponsors to calculate a project’s cost-effectiveness in order to be eligible to receive New Starts project funding—and the results of this analysis are used in FTA’s evaluation of the project. In contrast, there are no similar federal requirements for economic analysis of highway projects because highway projects are funded under a formula program, and there is no federal analysis of project economic worthiness. In addition, because New Starts projects may require a higher local funding share compared with federally funded highway projects, officials suggested that more economic analysis is generally completed for transit projects, especially if a special taxing authority is required or the project becomes controversial and subject to public scrutiny. In our past work, we found that numerous factors shape transportation investment choices and that factors other than those considered in analyses of projects’ benefits and costs can play a greater role in shaping investment choices. Some of the factors considered reflect local or regional priorities and needs; others are required to be considered in the decision-making process by federal legislation. For example, as a result of the National Environmental Policy Act (NEPA) of 1969, transportation officials must make project decisions that balance engineering and transportation demands with the consideration of social, economic, and environmental factors, such as air quality and impacts on communities. Some of these factors may not be easily considered in traditional benefit-cost analysis. Similarly, TEA-21 requires local, regional, and state transportation agencies to consider a range of factors in their planning, including environmental compliance, safety, land use, and public input. Our case studies also demonstrated that officials often place value on a variety of indirect impacts that may be difficult to estimate and are often not quantified in project analyses. 
For example, we found that many of the projects we examined were expected to result in desirable changes in land use and economic development in the region, although these types of impacts were not quantified or systematically analyzed in the planning documents we reviewed for both highway and transit investments. For example, one proposal discussed the light rail transit project’s potential for attracting new businesses and developers to the surrounding low-income community, but it did not present projections of the potential impact or estimates of the types of benefits these impacts might produce. Transportation officials indicated that these factors were just as important as, if not more important than, the results of their cost-effectiveness analysis in the decision to pursue the project. Similarly, our survey of transit agencies and state DOTs also showed that the results of economic analysis of a project are not necessarily the most important factor considered in highway and transit investment decision making. For highways, political support and public opinion, the availability of state funds, and the availability of federal matching funds were ranked most often as important factors in highway project decision making within state DOTs (see fig. 3). Thirty-four state DOTs said that political support and public opinion are factors of great or very great importance in the decision to recommend a highway project, whereas only eight said that the ratio of benefits to costs was a factor of great or very great importance. For transit, results from our survey showed that the factors ranked with “great or very great importance” most often included political support/public opinion, the availability of local funds, and the availability of federal matching funds. 
Specifically, of the 19 transit agencies that responded to these survey questions, 17 said that political support/public opinion and the availability of local funds were factors of great or very great importance in project decision making (see fig. 4). Survey respondents also provided a number of examples of other factors that figure into the decision-making process. For example, one state DOT highway survey respondent mentioned that in the respondent’s state, projects are often built as a basic public good, regardless of the relative benefits and costs. Another state DOT highway survey respondent said that the geographic distribution of funds plays a large role in determining the priority of highway projects. One transit agency survey respondent commented that comprehensive, long-range planning is a major component in evaluating and selecting projects, and the criteria are not solely based on economic factors; other typical considerations include population growth, land-use projections, environmental factors, and housing. To further analyze the relationship between the results of economic analyses of transportation projects and decisions made in selecting the project, we conducted a regression analysis of the relationship between the results of benefit-cost analyses completed for state transportation projects in California and the subsequent decisions to program construction funds for projects in the Statewide Transportation Improvement Plan. The benefit-cost analyses used by California considered travel-time savings, vehicle operating cost reductions, and safety benefits. In our analysis, we found that projects with higher benefit-cost ratios had a higher probability of receiving funding for construction. 
However, the analysis explained little of the overall variation—for example, some projects with high benefit-cost ratios received funding while others with relatively lower ratios also received funding, indicating that other factors were likely considered in the decision. Results from our literature review and case studies indicate that both completed highway and transit investments result in higher than expected costs and in usage that is different from what was projected. Transportation officials we interviewed generally contend that completed projects have achieved other outcomes that were projected to flow from the highway and transit investments, such as positive changes in land use and economic development. In most cases, however, these outcomes of highway and transit projects are not regularly quantified or evaluated after the projects are completed. Rather, transportation officials relied on limited and anecdotal evidence to support their statements about the impacts of the projects. Officials we met with cited several reasons that evaluations of completed projects are not regularly conducted, including lack of funding and technical challenges. A number of studies have shown that both completed highway and transit investments often result in outcomes that are different from what was projected. The following examples highlight such problems for both highway and transit projects. A study of over 250 transportation projects in Europe, North America, and elsewhere found that costs for all projects were 28 percent higher than projected costs at the alternatives analysis stage, on average. Rail projects showed the highest cost escalation, averaging at least 44.7 percent, while road projects averaged escalations of 20.4 percent. This study further found that cost underestimation has not improved over time, indicating systematic downward bias on costs. 
Initial results from an ongoing study of New Starts projects by FTA show that nearly half of the 19 projects for which ridership was reviewed will achieve less than two-thirds of forecast ridership by the forecast year. In addition, costs escalated on 16 of the 21 projects reviewed from the alternatives analysis stage, where decisions are made to go forward with a preferred alternative, to the completion of the project—with 4 of those projects experiencing increases of between 10 and 20 percent and 9 projects with increases over 20 percent. In a 1997 report, we collected and analyzed data for 30 highway projects costing $100 million or more. We found that cost growth occurred on 23 of 30 projects when comparing actual costs to costs estimated at the alternatives analysis stage, with about half of the projects experiencing increases of more than 25 percent. A 1996 study that compared actual toll-road revenues to forecasted revenue streams found that 10 out of the 14 projects studied fell short of projections by 20 to 75 percent, while a majority of the projects missed or are likely to miss revenue forecasts in the second year by 40 percent or more. We found similar patterns in our case studies of 10 transit and highway projects in 5 metropolitan areas. Table 4 provides descriptions of the projects we reviewed in each metropolitan area. In summary, we found the following: Comprehensive data on the projected and actual costs and usage of all the highway projects we examined were not readily available. In particular, we were not able to obtain estimates of the projects’ costs at a consistent point in the project development cycle (e.g., alternatives analysis). As a result, it is difficult to draw overall conclusions on how the projected costs compared with the actual costs for the five projects. However, the limited cost data we were able to obtain suggest that at least two of the five highway projects experienced cost escalation. 
In one case, the capital costs were originally budgeted in the state’s capital funding program at approximately $62.7 million (in inflation-adjusted 1999 dollars), but the actual expenditures for the project, in 1999, approached $94.4 million, 50 percent higher than the estimate. In another case, construction costs for the preferred alternative were estimated at the alternatives analysis phase at $16.6 million (in inflation-adjusted 2001 dollars), while actual construction costs in 2001, according to officials, approached $25.4 million, a 53 percent increase. In addition, in at least two locations, traffic after the improvement was greater than expected, resulting in less congestion relief than had been projected. FHWA is working to improve the cost estimates of federal-aid highway projects. For example, in June 2004, FHWA issued guidance for developing cost estimates, including steps for producing more realistic early estimates. FHWA also established help teams that travel to states that ask for assistance in developing better estimates. The five New Starts transit projects we reviewed had more extensive information on projected costs, with estimates from several different points in the project development process. When comparing as-built costs to cost estimates at the alternatives analysis stage—where decisions are made on the preferred alternative but the project is likely not at final design—three of the five New Starts transit projects we reviewed had actual costs that exceeded projected costs by more than 10 percent. When comparing costs from the Full Funding Grant Agreement stage—where the preferred alternative has been selected and the project is at final design—only two projects had costs escalate, one by 6 percent and one by over 40 percent.
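Comparisons like those above depend on first restating costs in constant (inflation-adjusted) dollars, so that real escalation is not confused with general price growth. The following sketch shows a deflation step with a hypothetical price index, followed by the percent-increase calculation using the first case study’s figures.

```python
def to_constant_dollars(nominal_cost, index_in_spend_year, index_in_base_year):
    """Deflate a nominal cost to base-year dollars using a price index."""
    return nominal_cost * index_in_base_year / index_in_spend_year

# Hypothetical: a $70.0M nominal outlay in a year when the price index
# stood at 110, restated in base-year dollars (index = 100).
real_cost = to_constant_dollars(70.0, 110.0, 100.0)

# Escalation is then computed on constant-dollar figures; using the
# case-study numbers above, $62.7M budgeted versus $94.4M actual:
escalation = (94.4 - 62.7) / 62.7 * 100.0
print(round(real_cost, 1), round(escalation, 1))
```

The computed escalation of about 50.6 percent is consistent with the roughly 50 percent increase cited above.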
At the time ridership figures were reviewed, the forecast years—that is, the years for which the ridership projections were made in the projects’ planning documents—for four of the five New Starts projects remained in the future; therefore, final conclusions about whether the projects exceeded or fell short of ridership projections are premature. Currently, only one of the projects has achieved the ridership levels projected; however, four of the five projects have surpassed 50 percent of the projected level of ridership for the forecast year. According to FTA, the agency has introduced a number of measures since these projects were planned and developed to improve ridership and cost estimates. For example, FTA is more rigorously examining ridership forecasts of projects, requiring before and after studies for all new projects, and conducting risk assessments of select projects to identify all significant risks related to a project’s schedule and budget and to ensure that mitigation measures or contingencies are in place, among other things. In addition, FTA is currently examining the projected and actual ridership of New Starts projects that opened in the last 10 years to assess whether these projects achieved their estimated ridership levels and to improve the reliability of forecasting procedures. FTA also instituted a pilot program in 2003 to hold FTA senior executives accountable for project outcomes. Specifically, FTA’s senior executive service team bonuses are tied, in part, to project cost control—that is, New Starts projects with full funding grant agreements must not exceed their current baseline cost estimates by more than 5 percent. Transportation officials offered several reasons that actual costs and levels of usage differ from those projected.
For example, transportation officials from one metropolitan area we visited attributed lower than expected transit ridership to a severe economic downturn and slower than anticipated development around transit stations. The economic downturn also affected the highway project in this area, resulting in less traffic than expected. This had the effect of reducing congestion, although the transit project was credited with contributing to congestion reduction as well. In addition, inflation, changes in a project’s scope, and changes in the costs of building materials can also explain differences between the projected and actual costs of a project. For example, officials commented that the estimated costs of a project always change as the project moves through the planning, design, and construction processes—becoming more accurate as more specifics about the project are known. When the cost of a project is initially estimated, sponsors do not know exactly how the scope or design of the project may change or what environmental problems may arise. However, by the time a New Starts project has reached the Full Funding Grant Agreement stage, or a highway project has had construction funds programmed, much more about these costs is known. Comparing costs from this stage to actual costs will reveal less variance than comparing actual costs with estimates from earlier stages in the process, such as the alternatives analysis stage. However, it is important to note that estimates from these earlier stages are generally used by project sponsors to select the preferred alternative. Outcome evaluations of completed projects are not usually conducted to determine whether proposed outcomes were achieved. For most of the highway and transit projects we reviewed, several of the proposed outcomes were not defined in measurable terms in the project planning documents we reviewed.
Moreover, officials stated that many of the projected outcomes were not usually quantified, tracked, or evaluated after the projects were complete. Of the 10 projects we reviewed, 6 did not have any type of outcome evaluation completed. Before and after studies for four projects had been completed or were being conducted—three for transit projects and one for a highway project. Although these studies provide a description of corridor conditions before and after the project, they do not compare or evaluate actual outcomes against projected goals. Results from our survey also indicate that outcomes are not typically evaluated, although evaluations are conducted more often for transit projects than for highway projects. In particular, 16 of 43 state DOTs reported that they have analyzed completed highway projects to determine whether proposed outcomes were achieved, while 13 of the 20 transit agencies reported that they have conducted such evaluations. Although evaluations were not often conducted, officials we interviewed provided some limited evidence on the outcomes resulting from the projects we reviewed. Table 5 shows the types of outcomes that project officials and planning documents cited for each project and the extent to which these outcomes were measured. As table 5 indicates, the projects were often expected to result in indirect impacts that are difficult to forecast and measure, such as positive changes to land use and economic development, among other things. According to project officials, these outcomes, while not forecast in measurable terms, were important reasons that the projects were pursued. For some outcomes, as table 5 indicates, transportation officials had only anecdotal or qualitative evidence about whether the projects achieved their proposed outcomes. For example, in one area, transportation officials cited personal experiences and public comments about reduced congestion on nearby roadways.
In other areas, officials showed us developments that had been constructed around stations, or areas near the improvements where development was expected to occur, as evidence of the projects’ impacts. Transportation officials we spoke with offered several reasons why they do not typically conduct evaluations of the outcomes of highway and transit projects. In particular, transportation officials and experts agreed that there is little incentive to direct available funding toward outcome evaluations. Because state and local funding is limited and these studies can be costly and difficult, local officials indicated that studies of completed projects were not as high a priority as pursuing and conducting studies on future projects. Several transportation officials stated that once a project is completed, it is considered successful, and planners then turn their attention to other projects. Some officials also noted that these projects inherently improve safety, mobility, and economic development and that evaluation of these outcomes is not needed. Thus, evaluations of completed projects do not fare well in competition for limited planning funds. The Senate-proposed bill (S. 1072) to reauthorize federal surface transportation programs, which was considered by the 108th Congress in 2004, would increase funds available to support local transportation planning. The funds provided under such a provision could potentially be used to fund outcome evaluations. Experts and transportation officials we spoke with also stated that there were many technical challenges to designing and completing outcome evaluations. For example, experts stated that it is very difficult to determine the economic impacts that can be attributed to a transportation project, given the multitude of other factors that can influence development.
According to experts and transportation officials, once transportation investments are completed, they become part of an entire transportation system, and the effects of an individual project therefore become difficult to isolate, evaluate, and attribute to that project. Finally, experts and transportation officials contend that a major disincentive to doing outcome evaluations is that the benefits of doing the analysis may be smaller than the potential risks. Transportation projects are concrete and cannot be easily redesigned or adjusted once completed, so some officials believe there is little incentive to find out that a project is not providing the intended benefits. Therefore, agencies tend to declare success once the project begins operating. There are options for providing state, regional, and local decision makers with more and better analytic information for making investment choices. These options focus on improving the value of this information so that decision makers can make more fully informed choices and on helping ensure that projects can be evaluated on the results they produce. At the federal level, these options could be implemented either through incentives or mandates. However, each of these implementation approaches has its own degree of difficulty in such matters as the time required and the impacts on federal programs and resources. In addition, any attempts to increase the use of such information should be tempered with the knowledge that other factors, such as the structure of federal programs and the requirements of legislative earmarks, will affect the extent to which such information can be used. These other factors often have a strong effect on decisions about which projects are funded. The experts who served on our panel provided a variety of options for improving information available to decision makers and potentially giving such information a greater role in highway and transit investment decisions.
The options are of three main types: (1) improving the quality of data and transportation modeling, (2) improving the quality and utility of benefit-cost analysis methods and tools, and (3) evaluating the results of completed transportation projects. These options focus on making the analytic information more useful and relevant to investment decisions, according to experts. Experts noted two important caveats in considering these options, however. First, no single analytic tool can answer all questions about the impacts of transportation investment choices. Second, even when benefit and cost information is available, it may play a relatively limited role in investment decisions. As a result, the best information and analysis may not result in the most beneficial highway and transit investments. Local and state transportation agencies require valid, reliable data and transportation models in order to conduct analyses, including benefit-cost analysis. Yet, experts have expressed concerns about the quality of local data and transportation models and have proposed improvements in both areas. Several options have been proposed to improve data and modeling quality. For example, TRB, with DOT sponsorship, is undertaking a study to gather information and prepare a synthesis of local planning agencies’ current modeling state of practice so that this baseline can be used to identify data that these models require. In addition, an expert proposed adopting an approach used outside the transportation sector—that is, accept existing data but specify the degree of uncertainty associated with the data. This approach is based on the idea that consistent data and measures are more important than perfect data and measures. To improve the accuracy of local travel models used to support New Starts projects, FTA introduced new reporting and analysis software— “Summit”—in the fiscal year 2004 rating process. 
Summit is intended to produce a computation of user benefits from locally developed forecasts, as well as standardized analytical summaries of both the forecasts and user benefits. According to FTA, these reports and summaries have provided both FTA and transit agencies a means to (1) identify and diagnose travel forecasting problems related to assumptions regarding fare and service policies, regional transportation networks, land use, and economic conditions as well as (2) help ensure that the local forecast is utilizing comprehensive and up-to-date data on travel behavior and local transportation systems. As evidence of the impact of Summit, FTA officials noted that they required 22 of 29 projects rated in the fiscal years 2004 and 2005 rating cycles to correct flaws in their underlying local forecasting models. Despite these improvements, however, forecasting of transit user benefits currently has a critical shortcoming. FTA has discovered that current models used to estimate future travel demand for New Starts are incapable of estimating reliable travel time savings as a result of a New Start project. According to DOT’s Inspector General, this limitation is due to unreliable local data on highway speeds. FTA is studying ways to remedy this problem.

Improving the Quality and Utility of Benefit-Cost Analysis Methods and Tools

Experts said local, regional, and state transportation officials could have more reason to use benefit-cost analysis if it produced information more relevant to the investment choices that they face. In this regard, they cited various steps that could be taken to make benefit-cost analysis more accessible to these officials without making it more complex. Table 6 describes the improvements they identified. A third set of options suggested by the experts dealt with conducting more analyses of completed projects.
Information about the outcomes of completed highway and transit projects can be used not only to better determine what a particular project accomplished, but also to improve decisions on other projects. For example, a study of how federal agencies use outcome information indicates that this information can help decision makers maximize project effectiveness by identifying “best practices” and better allocate limited resources. However, as noted previously, the outcomes of completed projects are not typically evaluated. Experts noted that such studies are more regularly conducted in other sectors, such as health and education programs. Such evaluations provide an opportunity to increase accountability in the planning process by documenting and measuring the results of projects. Outcome evaluations also offer officials the opportunity to learn from the successes as well as the shortcomings of past projects. FTA has recently adopted a requirement for project sponsors to complete before and after studies for New Starts projects. In particular, sponsors seeking federal funding for their New Starts projects must submit to FTA a plan for the collection and analysis of information that addresses how the project’s estimated costs, scope, ridership, and operating plans prepared during planning and project development compared with what actually occurred. According to FTA officials, this requirement is intended to hold transit agencies accountable for results and to identify lessons learned for future projects. The Senate-proposed bill to reauthorize federal surface transportation programs, which was considered by the 108th Congress in 2004, would codify this requirement. Neither the House nor the Senate reauthorization bill considered in 2004, nor FHWA regulations, would require similar studies for most highway projects, although the Senate bill provides for evaluating projects funded by the Congestion Mitigation and Air Quality program.
Incentives, mandates, or a combination of both could be used to increase decision makers’ use of analytic information and improve accountability for investment choices. Each strategy has factors that affect its feasibility—the difficulty of implementation, the time required, and the impacts on federal programs and resources. Each strategy also has its unique advantages and disadvantages, according to experts. Several experts also emphasized that the question of strategy is important because, although many of the ingredients for benefit-cost analysis are already in place as a result of local agencies’ compliance with extensive environmental and clean air analytic requirements, agencies have not taken the extra step of conducting such analysis. Incentives could be used to increase state, regional, and local agencies’ use of analytical information and tools. For example, funding could support additional analysis; training for state, regional, and local agency personnel in using the analytical tools; and performance incentives. Using incentives would also be consistent with what one expert described as the appropriate federal role—supplying funds to improve data and modeling practices, providing guidance regarding best practices, and evaluating completed transportation projects. State, regional, and local transportation agencies also may view the use of incentives—as opposed to a new federal mandate—as giving them more flexibility to respond to their stakeholders’ interest in how modal and distributional trade-offs are made. However, using incentives to increase the use of economic analytical tools, such as benefit-cost analysis, would be relatively labor-intensive for the respective federal agencies and would require strong program management; clear strategies for setting goals and practices; and a workable method to ensure that state, regional, and local transportation agencies have good analytical tools, according to experts.
FTA and FHWA are working to provide incentives that encourage greater use of analytical tools. For example, FTA and FHWA have collaborated to establish the Transportation Planning Capacity Building program, which provides training and technical assistance to state, regional, and local transportation officials on using analytical tools in the decision-making process. Federal mandates could also be used to increase state, regional, and local transportation agencies’ use of analytical tools, such as benefit-cost analysis. However, in some cases, mandates would require legislative change. For example, benefit-cost analysis cannot currently be required as a condition of receiving highway funds because the federal government does not have exclusive approval power over the worthiness of these projects, and states maintain the sovereign right to determine which projects shall be federally funded. In addition, it would be necessary to change TEA-21’s prohibition on placing dollar values on transit mobility improvements in order to require a benefit-cost analysis as part of the New Starts process. As a strategy based on compliance with rules, mandates are comparatively simple to implement. However, detecting mistakes and enforcing mandates, as well as creating mechanisms for sanctioning noncompliance, would require considerable attention for effective oversight. As our survey responses showed, decisions about transportation investments are based on many things besides the results of economic analyses of a project’s benefits and costs, such as the availability of funding or public perception about a project. Improving the quality of information about projects does not make these other matters disappear. Experts, other transportation researchers, and our past work have identified several overarching factors that can affect the extent to which additional analytical information may be used in making decisions about projects.
Four such factors, each discussed below, would likely continue to affect the extent to which analytic information, even significantly improved, would be used as the dominant factor in making investment decisions. Structure and Funding of Federal Programs: According to several experts, the highly compartmentalized structure and funding of federal highway and transit programs work against an advantage of benefit-cost analysis—the ability to evaluate how well alternative investments meet transportation problems. Separations between federal programs and funds give state, regional, and local agencies little incentive to systematically compare the trade-offs between investing in different transportation alternatives to meet passenger and freight travel needs because funding can be tied to certain programs or types of projects, according to several experts. For example, only fixed guideway transit projects, such as rail projects, are currently eligible for New Starts funds. As a result, certain bus rapid transit projects, which have compared favorably with the per-mile costs of light rail projects, are not eligible for New Starts funds. Both the Senate- and House-proposed bills (S. 1072 and H.R. 3550) to reauthorize federal surface transportation programs, which were considered by the 108th Congress in 2004, would allow certain nonfixed guideway transit projects (e.g., bus rapid transit operating in nonexclusive lanes) to be eligible for New Starts funding. The Transportation Research Board reported that most local agency staff continues to be in a single transportation sector “silo.” Federal funding of highway and transit projects is also not linked to performance or the accomplishment of goals or outcomes. As a result, the federal government misses an opportunity to use financial incentives to improve performance and to hold agencies accountable for results.
In a previous report, we identified possible options for how the federal highway program could be restructured to increase flexibility and accountability, including linking funding with performance and outcomes. Legislative earmarks: Legislative earmarks target transportation funds to specific local uses. As a result, these designated projects do not compete for funding against other alternatives, which removes the reason and incentive for transportation agencies to conduct benefit-cost analyses. Multiple federal requirements: Federal legislation and regulations place many demands on state, regional, and local transportation agencies’ analytic resources and—in some cases—give them compelling reasons to dedicate their analytic resources to areas other than benefit-cost analysis or to choose an alternative that is not the most cost beneficial. For example, one expert emphasized that local transportation agencies have especially strong incentives to focus their modeling and analytic resources on achieving air-quality goals, as mandated by federal statute. Demonstrating that these goals are met is a high priority because failing to do so creates the very tangible risk that transportation project funding could be blocked. In addition, TEA-21 requires local, regional, and state transportation agencies to consider a number of factors in their planning that are not easily quantified. As a result, these statutorily defined factors, which are considered in a more qualitative manner, can be more important than the results of a benefit-cost analysis in selecting a transportation project for funding. Expense of analysis: Experts told us that analysis can be quite expensive. For example, a formal benefit-cost analysis can typically cost over $100,000 for a multimodal urban corridor that is several miles long. 
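The core computation behind a formal benefit-cost analysis is straightforward; the expense lies almost entirely in estimating the inputs (travel demand, time savings, safety effects) rather than in the arithmetic. The following sketch shows the basic structure of such a calculation; every cash flow and the discount rate are hypothetical.

```python
def present_value(annual_stream, rate):
    """Discount an annual stream (years 1..n) to present value."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(annual_stream, start=1))

def benefit_cost_ratio(annual_benefits, annual_costs, upfront_cost, rate):
    """Ratio of discounted benefits to discounted costs (capital plus O&M)."""
    pv_benefits = present_value(annual_benefits, rate)
    pv_costs = upfront_cost + present_value(annual_costs, rate)
    return pv_benefits / pv_costs

# Hypothetical corridor project: $50M capital cost, $2M/year operating
# cost, $9M/year in time-savings and safety benefits over 20 years,
# discounted at 7 percent.
years = 20
ratio = benefit_cost_ratio([9.0] * years, [2.0] * years, 50.0, 0.07)
print(round(ratio, 2))
```

A ratio above 1 indicates that discounted benefits exceed discounted costs; as our survey results suggest, however, even a favorable ratio is only one input among many in actual funding decisions.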
The high cost of such analyses puts pressure on local agency budgets that are already stretched to meet other competing demands and poses a significant disincentive to using benefit-cost analysis or conducting outcome evaluations. As noted earlier, the Senate-proposed bill (S. 1072) to reauthorize federal surface transportation programs that was considered by the 108th Congress in 2004 would increase funds to support local transportation planning, and those additional funds could presumably be used to support economic analyses. With federal and state budget deficits growing, and with mandatory commitments to Social Security and Medicare set to consume a greater share of the nation’s resources, future fiscal imbalances are all but certain. Given the current and long-term fiscal challenges, careful decisions need to be made to ensure that transportation investments systematically consider the benefits of each federal dollar invested. Through federal regulations, laws, and guidance, a framework has been established for transportation planning that state, local, and other decision makers must follow to receive federal transportation dollars. Although the framework identifies factors for consideration during transportation investment decision making, it does not specify analytical tools to be applied in evaluating project merits—nor does it require that the most cost-beneficial project be chosen. Furthermore, many of the factors that are required to be considered are not easily incorporated in economic analysis, and methods for estimating dollar values associated with those factors may not be readily accepted. As a result, some factors are considered more qualitatively and thus weighted differently than those factors that can be more easily incorporated in an economic analysis.
Academic institutions, research organizations, and experts in the field continue to seek new methods and tools for estimating transportation project benefits and costs. Such advancements could help federal funding recipients improve their project analyses and thus improve the information available to decision makers, although these methods should be appropriately tested and vetted within the transportation community. Throughout this report, we have acknowledged the very tangible difficulty of comprehensively and accurately estimating the benefits and costs of transportation projects, which, in part, leads to the relatively infrequent use of benefit-cost analysis in determining which projects to pursue. Further, we have recognized that transportation investment decision making does not occur in a vacuum. State, regional, and local officials consider a variety of factors in making transportation investment decisions, including the community’s needs and priorities as well as federal requirements—and these factors can play a greater role in shaping investment choices than the analysis of a project’s benefits and costs. In addition, overarching factors, such as the funding compartmentalization of federal transportation programs and legislative earmarks that target transportation funds to specific uses, inhibit more widespread use of benefit-cost analysis. Nevertheless, the increased use of systematic analytical tools such as benefit-cost analysis, and the continued improvement of such tools through dissemination of new methods and advancement of existing techniques, can provide important additional information that can be used to inform discussions about community needs and values, which could then lead to better-informed transportation investment decision making. We obtained comments from DOT, including FTA and FHWA. 
Overall, DOT said that the report presented a clear and useful assessment of the status of economic analysis in its application to evaluating transportation projects. While recognizing the utility of economic analysis for maximizing the benefits associated with public investment in transportation capacity, DOT agreed with the limitations associated with the use of these techniques that we described in our report. DOT indicated that a combination of factors, including difficulties in measuring and forecasting benefits, along with local political, land use, and public support factors, can limit the practical utility of formal economic analysis in making local transportation decisions. Nonetheless, at the federal level, representatives from FTA said that the agency had made significant strides in incorporating state-of-the-art analytical tools into its New Starts Program. For example, as described in our report, FTA developed software capable of calculating transportation user benefits, based on locally originated data, and grantees are required to use it in making statutorily required New Starts submissions. Representatives from FTA also said that FTA is more rigorously reviewing ridership forecasts, requiring before and after studies for all new projects, and conducting risk assessments to identify significant risks to project budgets and schedules, as described in our report. Finally, both FTA and FHWA offered a number of technical comments, which have been incorporated in this report, as appropriate. We are sending copies of this report to the Secretary of Transportation, the Administrators of the Federal Highway Administration and the Federal Transit Administration, and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at [email protected], or (202) 512-2834.
Key contributors to this report are listed in appendix VI. To identify the categories of benefits and costs that can be attributed to highway and transit investments and the challenges in measuring these benefits and costs as well as options to improve the information available to decision makers, we reviewed the economics literature, academic research, and transportation planning studies containing evaluations of various economic analytical tools, with an emphasis on benefit-cost analysis. A GAO economist reviewed these studies, which were identified by searching economics literature databases and consulting with researchers in the field, and found their methodology and economic reasoning to be sound and sufficiently reliable for our purposes. We also reviewed federal laws, regulations, and guidance on the transportation planning process in order to determine the extent to which considerations of project benefits and costs are required or encouraged. In addition, we interviewed federal transportation officials in the Department of Transportation’s (DOT) Office of the Inspector General, Federal Highway Administration (FHWA), Federal Transit Administration (FTA) and the Volpe Transportation Center, as well as representatives from think tanks, consulting firms, academic institutions, and the Transportation Research Board’s Transit Cooperative Research Program and National Cooperative Highway Research Program. We also contracted with the National Academy of Sciences (NAS) to convene a balanced, diverse panel of experts to discuss the use of benefit- cost analysis in highway and transit project decision making and gather views about options to improve the information available to decision makers. The NAS Transportation Research Board (TRB) identified potential panelists who were knowledgeable about benefit-cost analysis, transportation policy and planning, highway and transit use, and transportation decision making. 
We worked closely with TRB to select panelists who could adequately respond to our general and specific questions about conceptualizing, measuring, improving, and using benefit and cost information in investment decisions (see app. III for more information about the panelists). In keeping with NAS policy, the panelists were invited to provide their individual views, and the panel was not designed to build consensus on any of the issues discussed. After the expert panel was conducted on June 28, 2004, in Washington, D.C., we used a content analysis to systematically analyze a transcript of the panel’s discussion in order to identify each expert’s views on key questions. To determine how state, local, and regional decision makers consider the benefits and costs of new highway and transit investments and the extent to which select capacity-adding highway and transit investments met their projected outcomes, we conducted a survey and a series of case studies. Specifically, we conducted a self-administered e-mail survey of all state DOTs (excluding the District of Columbia and Puerto Rico) and the 30 largest transit agencies in the United States. We sent the survey to state DOT planning officials and transit agency general managers and asked them to coordinate responses with agency officials most knowledgeable about particular issues raised in the survey. Although we did not independently verify the accuracy of the self-reported information provided by these agencies, we took a series of steps, from survey design through data analysis and interpretation, to minimize potential errors and problems. 
To identify potential questions, we spoke with numerous transportation experts, agency officials, and officials at organizations relevant to transportation planning and decision making, including the American Association of State Highway and Transportation Officials (AASHTO), the American Public Transportation Association, and the Association of Metropolitan Planning Organizations (AMPO). To verify the clarity, administration time, and understandability of the questions, we pretested the questionnaire with 12 transit agencies, state DOTs, and metropolitan planning organizations (MPO). We also had the questionnaire reviewed by a survey expert and AMPO staff. In addition, we examined survey responses for missing data and irregularities. We analyzed the survey data by calculating descriptive statistics of state DOT and transit agency responses. A copy of the Survey of State Departments of Transportation and Transit Agencies—The Costs and Benefits of Transportation Projects can be found in appendix II. We used AASHTO’s standing committee on planning to identify state highway officials in each state. We also used the National Transit Database to identify the top 30 transit agencies nationwide as well as obtain contact information for the general managers of the agencies. We also interviewed officials from several MPOs on the types of analysis they used in planning, but we did not include them in the survey population because MPO officials told us that state DOTs and transit agencies are typically project sponsors and are responsible for identifying and evaluating specific project alternatives. While MPOs are involved in the project planning process, we decided to limit our survey to those agencies that most likely had completed project-specific analyses. We conducted the survey from August through October 2004. We initially contacted state DOT and transit agency officials via telephone, and we then sent the survey via e-mail to each official. 
To maximize response rates, we sent periodic e-mail reminders with copies of the survey to nonrespondents in September 2004. Each of these messages contained instructions for completing the survey and contact information to submit questions. We extended the initial deadline from September 15, 2004, to October 8, 2004, to allow additional agencies to submit completed questionnaires. Finally, between September 22 and September 28, 2004, we telephoned officials that had not yet responded to remind them to complete the questionnaire. Overall, 43 of the 50 state DOTs and 20 of the 28 transit agencies responded to our survey. We supplemented our survey data with in-depth information from state and local transportation officials about 10 highway and transit projects in five major metropolitan areas: Baltimore, MD; Dallas, TX; Denver, CO; Miami, FL; and San Jose, CA. We chose these five metropolitan areas because they each had both a New Starts project and a capacity-adding highway project completed within the last 10 years and were identified by the Texas Transportation Institute as among the top 25 most congested areas in the United States. (Table 7 provides a description of each project.) In these locations, we interviewed officials from transit agencies, MPOs, and state DOTs in order to understand the type of analysis that was completed for the highway and transit projects, the factors that drove project decision making, and the types of project outcomes that were achieved and tracked. We also analyzed available planning and project documents, such as Environmental Impact Statements and Project Study Reports. We also collected available cost and usage information from the planning and project documents or from project officials. 
To examine the relationship between benefit-cost ratios computed for state transportation projects in California and the subsequent decisions to program construction funds for those projects in the Statewide Transportation Improvement Plan, we used a logit model. This model is one of the most commonly used statistical techniques for estimating problems involving outcome variables that take discrete values; in this case, the outcome variable is whether or not the projects received funding. The data for this analysis were provided to us by the California DOT. In the statistical analysis, we also included population density and total employment, both to account for plausible effects of these demographic factors and to check the sensitivity of the estimated relationship. These county-level demographic variables, obtained from the Census Bureau’s 2000 census, were matched to the counties in which the projects were to be constructed. Finally, to determine trends in public expenditure, capacity, and usage for highway and transit systems over a 20-year period (1982 to 2002), we analyzed information from FHWA’s Highway Statistics, FTA’s National Transit Database, and DOT’s Conditions and Performance Report. We adjusted expenditures to 2002 dollars using the price index for state and local government gross fixed investment in highways and streets estimated by the Bureau of Economic Analysis (BEA) of the Department of Commerce. The adjusted expenditures using the BEA index will be slightly different from expenditures calculated by FHWA using its bid-price index because BEA adjusts the FHWA bid-price index. We used BEA's index because it uses a 12-quarter phasing pattern that more consistently captures expenditure patterns for capital highway projects. To determine the reliability of the data, we (1) reviewed available documentation about these databases and the systems that produced them and (2) interviewed knowledgeable agency officials. 
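The logit estimation described above can be sketched as follows. This is a minimal illustration, not GAO's actual analysis: the data are synthetic stand-ins for the California DOT project file, and the variable names (`bc_ratio`, `pop_density`, `employment`) are assumed for illustration.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson maximum likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-np.clip(X @ beta, -30, 30)))  # predicted funding probability
        grad = X.T @ (y - p)                               # score vector
        hess = (X * (p * (1 - p))[:, None]).T @ X          # observed information matrix
        beta += np.linalg.solve(hess, grad)
    return beta

# Synthetic projects: a benefit-cost ratio plus county-level controls, with
# funding decisions simulated to respond positively to the benefit-cost ratio.
rng = np.random.default_rng(7)
n = 300
bc_ratio = rng.uniform(0.5, 4.0, n)
pop_density = rng.uniform(0.05, 5.0, n)   # thousands of persons per sq. mile (illustrative)
employment = rng.uniform(0.01, 0.5, n)    # millions of jobs in the county (illustrative)
true_logit = -2.5 + 1.2 * bc_ratio + 0.1 * pop_density
funded = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))).astype(float)

X = np.column_stack([np.ones(n), bc_ratio, pop_density, employment])
beta = fit_logit(X, funded)
# A positive slope on bc_ratio means higher benefit-cost ratios raise the
# estimated odds that a project is programmed for construction funds.
print(beta)
```

The discrete funded/not-funded outcome is why a logit (rather than ordinary least squares) is the natural choice here; the estimated coefficient on the benefit-cost ratio summarizes how strongly funding decisions track the analysis.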
We determined that the data were sufficiently reliable for the purposes of this report. The U.S. Government Accountability Office, an agency of Congress, is conducting a study of the costs and benefits of federal investments in transportation projects. As part of this study, we are surveying officials at State DOTs about analyses of costs and benefits conducted when comparing alternatives in planning and developing highway projects. Please help us to inform Congress about this important issue by responding to this brief questionnaire. Your responses to this survey are critical in helping Congress to understand the costs and benefits of transportation investments. Without your responses, we cannot provide meaningful information to Congress about this important issue. Please complete this questionnaire and return it to GAO before September 15, 2004. This questionnaire should take approximately 20 minutes to complete. If you have any questions regarding this survey, please contact either Andrew Von Ah (by phone at 213-830-1011 or by email at [email protected]) or contact Heather MacLeod (by phone at 206-654-5574 or by email at [email protected]). Thank you for participating in this survey. For transit agency surveys this statement read as follows, “As part of this study, we are surveying officials at the 30 largest transit agencies about analyses of costs and benefits conducted when comparing alternatives in planning and developing transit projects.” For highway: In this section of the survey, please consider only highway projects that used federal funds and that were designed to expand the physical capacity of the highway system, such as adding HOV lanes, reconfiguring interchanges, and constructing new roads. Do not consider operations, maintenance, or rehabilitation projects such as signal timing or road repaving. 
For transit: In this section of the survey, please consider only transit projects that used federal funds and that were designed to expand the physical capacity of public transit systems. Please consider public transit systems to include buses, subways, light rail, commuter rail, monorail, passenger ferryboats, trolleys, and inclined rails. Do not consider operations, maintenance, or rehabilitation projects. 1. When alternatives are considered for a proposed highway/transit capacity-adding project, does your agency complete any analysis of either the benefits or the costs of the various alternatives? Please check your response. (Skip to question 6) (Skip to question 6) 2. How often does your agency complete a Cost-Effectiveness Analysis when evaluating alternatives for proposed highway/transit capacity-adding projects? Please consider a cost-effectiveness analysis to be the determination of the annualized capital and operating costs divided by some unit of output for each project, such as cost per passenger mile of travel or cost per hour of travel time savings. Please check your response. 6. Typically, how much importance would you say that cost-effectiveness has in your decision to recommend a project from among its various alternatives? Please check your response. 7. Typically, how much importance would you say that the ratio of benefits to costs has in your decision to recommend a project from among its various alternatives? Please check your response. 8. Typically, how much importance would you say that economic impacts have in your decision to recommend a project from among its various alternatives? Please check your response. For the transit agency survey this question was worded as follows, “forecasted cost-effectiveness.” For the transit agency survey this question was worded as follows, “the result of a cost-benefit analysis.” For the transit agency survey this question was worded as follows, “projected economic impacts.” 9. 
Typically, how much importance would you say that political support and public opinion have in your decision to recommend a project from among its various alternatives? Please check your response. 10. Typically, how much importance would you say that the distribution of impacts across social groups has in your decision to recommend a project from among its various alternatives? Please check your response. 11. Typically, how much importance would you say that the availability of federal matching funds has in your decision to recommend a project from among its various alternatives? Please check your response. 15. During the past 10 years, did your agency typically analyze individual highway/transit capacity-adding projects to determine in retrospect whether specific proposed outcomes were achieved? Please consider all sources of analyses in your answer, including those completed by other state or local agencies or consultants. Please check your response. Thank you for participating in our survey! The names and backgrounds of the panelists are as follows. Brian Taylor of the University of California, Los Angeles, served as moderator for the sessions. David J. Forkenbrock is Director of the Public Policy Center, Director of the Transportation Research Program, Professor in Urban and Regional Planning, and Professor in Civil and Environmental Engineering at the University of Iowa. His research and teaching interests include analytic methods in planning, and transportation policy and planning. From 1995 through 1998, Dr. Forkenbrock chaired a National Research Council-appointed committee to review the FHWA’s Cost Allocation Study process. He is a member of the College of Fellows, American Institute of Certified Planners and a lifetime National Associate of the National Academies. 
He is chairman of the TRB Committee for Review of Travel Demand Modeling by the Metropolitan Washington Council of Governments and a member of the TRB Committee for the Study of the Long-Term Viability of Fuel Taxes for Transportation Finance. In 2004, he received the first ever TRB William S. Vickrey Award for Best Paper in Transportation Economics and Finance for his work on mileage-based road user charges. He received the Michael J. Brody Award for Excellence in Faculty Service to the University and the State from the University of Iowa in 1996. He earned a Ph.D. from the University of Michigan, a Master of Urban Planning from Wayne State University, and a B.A. from the University of Minnesota. José A. Gómez-Ibáñez is Derek C. Bok Professor of Urban Planning and Public Policy at Harvard University’s John F. Kennedy School of Government and Graduate School of Design. His research interests are primarily in the area of transportation policy and urban development and in privatization and regulation of infrastructure. He has served as a consultant for a variety of public agencies. His recent books include Regulating Infrastructure: Monopoly, Contracts, and Discretion; Regulation for Revenue: The Political Economy of Land Use Exactions (with Alan Altshuler); Going Private: The International Experience with Transport Privatization (with John R. Meyer); and Essays on Transport Policy and Economics (ed.). Ronald F. Kirby is Director of Transportation Planning for the Metropolitan Washington Area Council of Governments. He began his career in the United States as a Senior Research Associate with Planning Research Corporation. He joined the Urban Institute as a Senior Research Associate and became a Principal Research Associate and Director of Transportation Studies. He has served on several TRB committees and is currently a member of the TRB Executive Committee. He has a B.S. and a Ph.D. in applied mathematics from the University of Adelaide, South Australia. 
David L. Lewis is President and CEO of HLB Decision Economics. His credits include a range of widely adopted applications in cost-benefit analysis, productivity measurement, risk analysis, and approaches to establishing public-private investment partnerships. He has authored three books, including Policy and Planning as Public Choice: Mass Transit in the United States (Ashgate Press, 1999). His past positions include Partner-in-Charge, Division of Economics and U.S. Operations, Hickling Corporation; Chief Economist, Office of the Auditor General of Canada; Executive Interchange Program and Principal Analyst, U.S. Congressional Budget Office, Congress of the United States; and Senior Economist and Director of the Office of Domestic Forecasting, Electricity Council. He has a Ph.D. and an M.S. in economics from the London School of Economics and a B.A. in economics from the University of Maryland. Michael D. Meyer is Professor of Civil and Environmental Engineering at the Georgia Institute of Technology. Prior to coming to Georgia Tech in 1988, he was the Director of the Bureau of Transportation Planning and Development at the Massachusetts Department of Public Works for 5 years. Prior to his employment at the Massachusetts Department of Public Works, he was a professor in the civil engineering department of the Massachusetts Institute of Technology. His research interests include transportation planning and policy analysis, environmental impact assessment, analysis of transportation control measures, and intermodal and transit planning. He is a Professional Engineer in the State of Georgia, and a member of the American Society of Civil Engineers and the Institute of Transportation Engineers. He has chaired TRB’s Task Force on Transportation Demand Management, the Public Policy Committee, the Committee on Education and Training, and the Statewide Multimodal Transportation Planning Committee. 
He is a former member of the National Research Council policy study Panel on Statistical Programs and Practices of the Bureau of Transportation Statistics. Currently, he is a member of TRB’s Executive Committee and Standing Committee on Statewide Multimodal Transportation Planning. Donald Pickrell is Chief Economist at DOT’s Volpe Center. Prior to joining DOT, he taught economics, transportation planning, and government regulation at Harvard University. While at the Volpe Center, he was also a lecturer in the Department of Civil Engineering at the Massachusetts Institute of Technology. He has authored over 100 published papers and research reports on various topics in transportation policy and planning, including transportation pricing, transit planning and finance, airline marketing and competition, travel demand forecasting, infrastructure investment and finance, and the relationships of travel behavior to land use, urban air quality, and potential climate change. He received his undergraduate degree in economics and mathematics from the University of California at San Diego, and Master’s and Ph.D. degrees in urban planning from the University of California at Los Angeles. Kenneth A. Small is Professor of Economics at the University of California at Irvine, where he served 3 years as chair of the Department of Economics and 6 years as Associate Dean of Social Sciences. He previously taught at Princeton University and was a Research Associate at The Brookings Institution. He has written numerous books and articles on urban economics, transportation, public finance, and environmental economics. He serves on the editorial boards of several professional journals in the fields of urban and transportation studies and has served as coeditor or guest editor for four of those boards. In 1999, he received the Distinguished Member award of the Transport and Public Utilities Group of the American Economic Association. 
During 1999 to 2000, he held a Gilbert White Fellowship at Resources for the Future. He has served on two TRB policy study committees—the Committee for a Review of the Highway Cost Allocation Study and the Committee for a Study on Urban Transportation Congestion Pricing. Brian D. Taylor (Moderator) is Associate Professor of Urban Planning and Director of the Institute of Transportation Studies at University of California at Los Angeles as well as Vice-Chair of the Urban Planning Department. His research centers on transportation finance and travel demographics. He has examined the politics of transportation finance, including the influence of finance on the development of metropolitan freeway systems and the effect of public transit subsidy programs on system performance and social equity. His research on the demographics of travel behavior has emphasized access-deprived populations including women, racial-ethnic minorities, the disabled, and the poor. He also has explored relationships between transportation and urban form, with a focus on commuting and employment access for low-wage workers. Prior to coming to University of California at Los Angeles in 1994, he was Assistant Professor in the Department of City and Regional Planning at the University of North Carolina at Chapel Hill. Prior to that, he was a Transportation Analyst with the Metropolitan Transportation Commission in Oakland, California. Martin Wachs is Professor of Civil and Environmental Engineering and City and Regional Planning, and Director of the Institute of Transportation Studies at the University of California at Berkeley. He was formerly Professor of Urban Planning and Director of the Institute of Transportation Studies at the University of California at Los Angeles where he served three terms as Head of the Urban Planning Program. 
His research interests include methods for evaluating alternative transportation projects; relationships among land use, transportation, and air quality; and fare and subsidy policies in urban transportation. Most recently, he chaired the Transportation Research Board policy study Committee for the Study on Urban Transportation Congestion Pricing. He is the former Chairman of the TRB Executive Committee. He holds a Ph.D. in transportation planning from Northwestern University. Expenditures by all levels of government for both highways and transit have grown substantially from fiscal year 1982 through 2002, at an average annual rate of about 3.4 percent for both highway and transit spending. Figures 5 and 6 show trends in federal, and state and local spending for highways and transit in inflation-adjusted 2002 dollars. In 2002, total highway expenditures reached almost $136 billion, while over $26 billion was spent on transit, with the bulk of funding coming from state and local governments for both highways and transit systems. For highways, total federal expenditures have risen at a faster rate since the enactment of TEA-21 in 1998 than have state and local expenditures, with federal expenditures rising at about 8.4 percent per year, on average, from 1998 through 2002, and state and local expenditures rising at about 0.5 percent per year, on average, over the same period. For transit, the converse is true, as state and local expenditures have increased at a faster rate than federal spending since 1998, with state and local expenditures rising at an average annual rate of about 7.5 percent per year, as opposed to 5.8 percent per year for federal expenditures. Investment in highway and transit capital, which represents investment in new capacity as well as rehabilitation of existing assets, has also increased. Figure 7 shows trends in federal, and state and local capital spending from 1982 through 2002 in inflation-adjusted 2002 dollars. 
The bulk of federal funding for highways goes toward capital outlays, with about 96 percent of all federal funding going to capital outlays in 2002, as compared with 36 percent of state and local funds. In addition, since the passage of TEA-21, federal capital spending has increased at a faster rate than state and local capital spending for highways. From 1998 through 2002, federal capital spending on highways increased an average of about 8.8 percent per year in inflation-adjusted dollars, while state and local capital spending decreased at about 0.8 percent per year, on average, in inflation-adjusted dollars. Figure 8 shows trends in federal, and state and local capital spending for transit from 1995 through 2002 in inflation-adjusted 2002 dollars. Data prior to 1995 are not reported because data comparable with those in the National Transit Database are not available for earlier years. In contrast to highway capital spending, since the passage of TEA-21, state and local capital spending has increased at a faster rate than federal capital spending for transit. From 1998 through 2002, federal capital spending on transit increased an average of about 4.9 percent per year in inflation-adjusted dollars, while state and local capital spending increased almost 15 percent per year on average in inflation-adjusted dollars. According to DOT’s 2002 Conditions and Performance report, capital investment by all levels of government remains well below DOT’s estimate of the amount needed to maintain the condition of the highway and transit systems. As a result, according to DOT, the overall performance of the system declined, thus increasing the number of highway and transit investments needed to address existing performance problems. Figure 9 shows DOT’s estimates of capital investment needed from all levels of government to maintain and to improve the highway and transit systems, compared with actual capital spending in 2002. 
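The average annual growth rates quoted in this appendix are compound rates computed from inflation-adjusted values at the endpoints of each period. A minimal sketch of the computation, using hypothetical start and end values chosen for illustration:

```python
def avg_annual_growth(start_value, end_value, years):
    """Compound average annual growth rate between two values
    (e.g., inflation-adjusted spending at a period's endpoints)."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Hypothetical: spending rising from $70 billion to $136 billion over 20 years
rate = avg_annual_growth(70.0, 136.0, 20)
print(f"{rate:.1%}")  # about 3.4 percent per year
```

Because the rate compounds, a 3.4 percent annual rate roughly doubles spending over two decades, which is why seemingly small differences between federal and state/local growth rates accumulate into large shifts in funding shares.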
Travel on highways and transit has increased steadily from 1982 through 2002. For highways, the level of usage has increased at an average annual rate of about 3 percent per year. By 2002, Americans traveled on highways more than 2.8 trillion vehicle miles annually. Figure 10 shows trends in usage of public highways from 1982 through 2002. Although most highway lane miles are rural, the majority of highway travel occurs in urban areas. For example, in 2002, 61 percent of highway travel occurred in urban areas. Passenger vehicles account for the bulk of vehicle miles traveled on public highways, although usage by trucks has increased more over the period. Highway usage by trucks increased by 92.5 percent, as opposed to 78.3 percent by passenger vehicles. Conversely, the level of usage of public highways by buses only increased 17.6 percent from 1982 through 2002. The level of usage of public transit, measured in passenger miles traveled, has increased an average of 1.5 percent annually from 1982 through 2002, although usage has increased more rapidly since passage of TEA-21. Figure 11 shows trends in rail and nonrail transit usage over this period. Since 1998, rail transit has seen an 11.2 percent increase in usage, while nonrail forms of transit, including demand response, ferry-boat, jitney, motor bus, monorail, publico, trolley bus, and van pools, experienced a smaller increase, approximately 9.5 percent, over the same time period. In 2002, passenger miles traveled on rail were 24.6 billion and accounted for about 54 percent of total usage; however, according to the 2002 C&P report, rail accounts for only 5 percent of urban transit route miles. Disaggregating rail usage by commuter rail, heavy rail, and light rail shows that usage of heavy rail and commuter rail greatly exceeds that of light rail. Figure 12 shows trends in usage by rail mode from 1984 through 2001, the years for which the data are available. 
In 2001, light rail accounted for only 6 percent of the total passenger miles traveled on rail, whereas commuter rail and heavy rail were 38 percent and 56 percent, respectively, of the total passenger miles traveled on rail transit in 2001. The capacity of the public highway system and the nation’s transit system has increased at a slower rate than usage of these systems. For highways, total estimated lane miles have increased an average of 0.17 percent annually from 1982 through 2002, compared with an annual increase of 3 percent for vehicle miles traveled. In 2002, there were approximately 8.3 million lane miles in the United States, with 76 percent of the total capacity existing in rural areas. From 1993 through 2002, years for which data are available, total transit system capacity increased 24 percent, while usage increased 27 percent over the same period. The capacity of all rail modes increased 26 percent from 1993 through 2002, while nonrail mode capacity increased 22 percent. Light rail capacity experienced the greatest percentage change of the rail modes over the period, increasing 122 percent. Vanpools experienced the largest percentage change in nonrail capacity, 225 percent. Measuring benefits that can potentially result from highway and transit investments can be quite contentious and spur vigorous debates among experts in the field and in the literature, although there tends to be more agreement about the nature of the direct user benefits associated with highway and transit investments, as opposed to the wider social benefits or the indirect benefits. Generally, the largest direct benefit from transportation investments, both highway and transit, is the reduction in travel time that results from the investment. When travel time is reduced, additional time becomes available to spend on some other activity and, therefore, people are willing to pay to reduce their travel time. 
The value of travel-time savings is an estimate of how much people would be willing to pay for reductions in travel time. There is a substantial body of literature consisting of both conceptual analyses of how best to estimate the value of travel-time savings and empirical analyses that estimate values in specific circumstances. Travel-time savings are often divided between work-time savings and nonwork-time savings. Work-time savings—for example, reductions in the time for a repairperson to get from one work site to another during the workday—would allow someone to accomplish more in a day’s work. Accordingly, the work travel time that someone saves is generally valued at that person’s hourly wage rate because the wage rate represents the value to the employer of having an additional hour of that person’s time available for work activities. The values that travelers place on nonwork travel-time savings depend upon both the benefit that they would receive by spending additional time in some other way and the benefit they receive from reductions in individuals’ perceived costs of travel. For example, it is generally accepted that reductions in time spent waiting for a bus to arrive are more highly valued than reductions in riding time because travelers dislike waiting more than riding and, therefore, would receive a greater benefit from waiting time reductions. As a result, the conceptual link between nonwork travel-time savings and the wage rates of the travelers is less direct. Different travelers along the same route with equal wage rates might value a given reduction in travel time differently, and any one traveler might value travel-time savings differently in different circumstances. In addition, a large change in travel time may be valued differently per minute than a relatively small change in travel time. 
Nonetheless, because some empirical studies have identified a relationship between willingness to pay for travel-time reductions and wage rates, DOT guidance for valuing benefits recommends estimating the value of travel-time savings for nonwork travel for both highways and transit as certain fractions of travelers’ wage rates. For transit, the recommended value is different for different types of time savings, such as waiting, transfer, and in-vehicle time. It may be possible to obtain more accurate estimates of travel-time savings for a specific investment. This additional precision could be obtained by considering the degree to which the travelers who are affected by this investment are likely to have different values in this circumstance, as compared with previously estimated average values for all travel-time savings. However, obtaining this additional precision entails a cost, which would have to be considered in deciding whether to seek more precise estimates. In addition to reductions in travel time for people, investment in transportation can reduce the time for freight products to move from one location to another, which is also a benefit from this investment. For highway investment, this effect is more direct; adding a new lane, for example, can increase the speed of highway travel, enabling trucks to reach their destinations more quickly. Although most freight typically does not travel by bus or subway, transit investment can indirectly allow freight to move more quickly to the extent that such investment removes cars from highways and allows trucks to travel at faster speeds. Measurement and forecasting of travel-time impacts can be complicated by changes in demand resulting from shifts in travel behavior brought about by the highway or transit improvement. 
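The valuation approach described above (work travel time at the full wage rate, nonwork travel time at a fraction of it) can be sketched as follows. This is a minimal illustration: the 0.5 nonwork fraction and all dollar figures are assumptions for the example, not values drawn from DOT guidance.

```python
def travel_time_benefit(hours_saved, wage_rate, share_work, nonwork_fraction=0.5):
    """Value annual travel-time savings: on-the-clock (work) travel time at the
    full wage rate, nonwork travel time at a fraction of the wage rate.
    The 0.5 fraction is an illustrative assumption, not DOT's official value."""
    work_value = hours_saved * share_work * wage_rate
    nonwork_value = hours_saved * (1 - share_work) * wage_rate * nonwork_fraction
    return work_value + nonwork_value

# Hypothetical corridor: 1 million vehicle-hours saved per year, $20/hour
# average wage, 25 percent of travel occurring on the clock
print(travel_time_benefit(1_000_000, 20.0, 0.25))  # 12500000.0 dollars per year
```

The example makes the conceptual point in the text concrete: because nonwork time is valued at only a fraction of the wage, two projects saving identical total hours can yield quite different travel-time benefits depending on who is traveling and why.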
Reducing travel times leads to what has been referred to as triple convergence, where traffic on an improved road increases due to (1) travelers switching from less convenient alternative routes to the improved road (although travelers remaining on the alternative routes will benefit from reduced traffic), (2) travelers switching from less convenient times to the peak period, and (3) travelers switching from transit to driving because of the higher speeds and lower travel times. Estimates of this effect vary. One study showed that, over time, a 10 percent increase in road capacity led to a 9 percent increase in travel, while other research finds that these changes in demand may have a smaller effect. This change in demand does not mean travel-time benefits are not realized—only that forecasting future travel-time reductions should take account of increased traffic flows resulting from such shifts in demand, or else travel-time benefits are likely to be overestimated. For transit investments, the impact of the investment on travel times for highway users can be complicated by what is known as travel-time convergence, whereby travel times on a roadway alternative to a transit line tend to converge to the transit travel time. The convergence of travel times occurs because some drivers are drawn off of the alternative roads to the transit line in search of lower door-to-door travel times. As these drivers leave the road, traffic conditions on the roadway improve, leading to some additional demand on the road and resulting in additional traffic. This process continues until door-to-door travel times on the two modes converge. Several studies bear out the existence of this phenomenon in highly congested urban corridors and suggest that improving the transit travel time will lead to improvements in travel times on the alternative roadways. 
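The induced-demand adjustment described above can be sketched as a constant-elasticity calculation. The 0.9 elasticity mirrors the cited study's finding (a 10 percent capacity increase leading to a 9 percent travel increase); the baseline traffic volume is hypothetical:

```python
# Sketch of adjusting a traffic forecast for induced demand, assuming a
# constant elasticity of travel with respect to road capacity. The 0.9
# elasticity reflects the study cited in the text; the baseline volume
# is a hypothetical illustration.

def induced_travel(base_volume, capacity_increase_pct, elasticity=0.9):
    """Traffic volume after a capacity expansion, including induced demand."""
    return base_volume * (1 + elasticity * capacity_increase_pct / 100.0)

base = 100_000                        # hypothetical daily vehicle trips
expanded = induced_travel(base, 10)   # 10 percent capacity increase
print(expanded)                       # about 109,000 trips
```

Because per-trip time savings shrink as volumes rise toward the new capacity, a forecast that ignores this adjustment will overstate travel-time benefits, as the text notes.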
Another user benefit from transportation investment in both highways and transit, related to travel time, concerns reliability, which is generally defined as the variability in travel time. Empirical studies suggest that travelers often place a high value on increased certainty of arrival by a specific time, such that they would be willing to pay to reduce their travel-time variability even if there were no change in mean travel time. Some investments might accomplish both and would be valued accordingly. For example, improving a bottleneck might not only reduce travel time on average, but it also might reduce variability by reducing the likelihood of an exceptionally long delay. One study estimates that the value of increased travel-time reliability may be as large as the value of travel-time savings on a per minute basis. Not all projects that affect travel-time savings will affect reliability and vice versa. In addition to benefits related to making travel times shorter and less variable, transportation investment can provide travelers other benefits, such as lower vehicle operating costs and safer and more comfortable travel. Lower vehicle costs can arise from highway investments that improve road quality, thereby reducing wear and tear on vehicles, and from investments that reduce congestion, which can reduce fuel consumption. Estimates exist in the literature of the extent to which highway investment reduces vehicle operating costs. Transit investment can also reduce vehicle operating costs to the extent that such investment reduces congestion by inducing some drivers to switch to transit. Improved safety has often been found to be a major benefit from transportation investment. Improving roadway designs generally contributes to fewer accidents, which implies fewer deaths and injuries and less property damage. 
As for the value of safety improvements, there is substantial literature—both conceptual and empirical—on how to value lives saved, often referred to as the value of a statistical life. Although different people might be willing to pay different amounts to reduce their likelihood of death, and the same person might be willing to pay different amounts in different circumstances, an average value based on various research studies is generally recommended. Improved comfort is another benefit from some forms of transportation investment. Transit investment that, for example, improves the comfort of a seat or increases the likelihood that a rider will get a seat, creates benefits for which some travelers would be willing to pay. Transportation investment benefits also include benefits that accrue to the general public, not just to the travelers directly taking advantage of the investment. For example, transportation investment can lead to a reduction in environmental damage, which can be a benefit to an entire metropolitan area. Research has indicated that increased roadway congestion increases air pollution. Thus, investments that reduce congestion—including highway investments that directly speed up traffic and transit investments that indirectly speed up traffic by inducing people to switch from driving to using transit—can provide environmental benefits. However, to the extent that transportation investment induces additional travel by reducing expected travel time, the pollution resulting from these additional trips might offset the initial pollution-reducing effects of the investment. As another example, transportation investment that increases mobility for those who currently have limited access to the transportation network, and thus to jobs, schools, and other destinations, might provide social benefits that go beyond the benefits to the users themselves. 
Such investment could include both additional transit service and highways that connect residents of lower income areas with job sites to which service and roads do not currently exist. Another form of public benefit that may result from transportation investment, particularly for transit, is sometimes called option value: nontransit users, for example, might be willing to pay to provide transit service to retain the option to use it in the future. That is, for some people, having the option of transit service available in case circumstances—such as the weather or the price of gasoline—change could have some value, even if they do not currently plan to use it. The Transit Manual provides a methodology for estimating the value of this benefit. The direct user benefits of highway and transit improvements result in individuals, households, and firms acting to take advantage of those benefits. These actions can then lead to several types of indirect benefits, such as increased property values and new development, reduction in the costs associated with other public infrastructure (e.g., water and electricity) due to more compact development, reduction of production and logistics costs from improved freight efficiency, and overall increases in productivity and economic growth. As was discussed earlier in the report, these benefits largely represent capitalization of direct user benefits or transfers of economic activity from one area or group to another and, therefore, should not entirely be added to direct benefits. As transportation costs fall and access is improved, incentives are created for households and firms to relocate to areas where housing and land are less expensive or more desirable. 
This can result in new development and increases in land values of the areas made more accessible, although improvements can also result in land values falling in other locations, due to changes in relative access, and negative impacts from noise and emissions that may result from the improvement. Most studies show a positive effect on land values from highway improvements, although the effects of improvements to highways, as opposed to new roads, are more localized and tend to be smaller. For transit, several studies have documented that increases in land values and higher-density development can occur around rail transit stations, although these impacts depend highly on local conditions, such as the condition of the local economy and the extent to which complementary land-use policies exist. Residents of areas where new transit lines are constructed or where transit is improved may also value the type of urban development, i.e., high density or mixed use, that typically occurs around transit stations. However, increasing property values around transit stations can also displace low-income households, who may rely on transit. Transportation investments can also have an impact on how land is used in an urban area. How such changes are valued can depend in large part on individual preferences for more or less compact and dense development. Highways are generally thought to encourage development on the outskirts of urban areas, although transit investments that provide access to those areas can also encourage such development. However, some research indicates that transit-served sites require less public capital than sites on the edges of urban areas. Nonetheless, while investments in transportation infrastructure have had major effects on development and land use in the past, research indicates that future effects are likely to be much weaker due to the already extensive amount of connectivity that exists and shifts in the nature of the U.S. 
economy from manufacturing to service orientation. Transportation investments can also reduce freight transportation costs and increase freight reliability, which allows firms not only to move to more desirable locations, but also to reorganize their warehousing and production processes to take advantage of those benefits. This reorganization can result in lower production and inventory costs for firms. Research on this relationship has estimated the benefits on a national level and found that, while the relationship is positive, the returns have been diminishing over time. While diminishing returns are to be expected as the highway and road network becomes more interconnected, the authors of one study postulate that returns may also be diminishing because highways are inefficiently priced and highway investment policies do not target the most efficient investments. While investment in highways has a more direct relationship to this benefit, transit investment can also result in such benefits to the extent that it improves conditions on nearby roadways. Transportation improvements also lead to increased productivity and economic growth through improving access to goods and services for businesses and individuals and increasing the geographic size of potential labor pools for employers and potential jobs for individuals. Recent research into the relationship between productivity, economic growth, and highway investment shows average annual returns on investment of 13.6 percent between 1990 and 2000, slightly greater than the return on private capital investment. However, this research also supports the notion that returns on highway investment have been declining over time. Transit can also lead to economic growth through encouraging the concentration of economic activity and the clustering of offices, shops, entertainment centers, and other land uses around transit stops, particularly rail transit stops. 
This concentration of activity leads to more efficient economic interactions, which results in higher productivity and can stimulate economic growth. One study has estimated that a 10 percent increase in transit presence would raise economic growth by about 0.2 percent. Another study on the rate of return of several investments in new transit capacity suggests that these returns can be substantial, ranging from 11.8 percent to 92 percent depending on the project. In addition to those named above, Mark Braza, Jay Cherlow, Steve Cohen, Sharon Dyer, Sarah Eckenrod, Scott Farrow, Libby Halperin, Jessica Kaczmarek, Terence Lam, Heather MacLeod, Sara Ann Moessbauer, Stan Stenersen, Stacey Thompson, Andrew Von Ah, and Susan Zimmerman made key contributions to this report.
Mobility is critical to the nation's economy. Projections of future passenger and freight travel suggest that increased levels of investment may be needed to maintain the current levels of mobility provided by the nation's highway and transit systems. However, calls for greater investment in transportation come amid growing concerns about fiscal imbalances at all levels of the government. As a result, careful decisions will need to be made to ensure that transportation investments maximize the benefits of each federal dollar invested. In this report GAO identifies (1) the categories of benefits and costs that can be attributed to new highway and transit investments and the challenges in measuring them; (2) how state, local, and regional decision makers consider the benefits and costs of new highway and transit investments when comparing alternatives; (3) the extent to which investments meet their projected outcomes; and (4) options to improve the information available to decision makers. To address these objectives, we convened an expert panel, surveyed state departments of transportation and transit agencies, and conducted site visits to five metropolitan areas that had both a capacity-adding highway project and transit project completed within the last 10 years. DOT generally agreed with the report's findings and offered technical comments, which were incorporated as appropriate. A range of direct and indirect benefits, such as savings in travel time and positive land-use changes, and costs can result from new highway and transit investments. The extent to which any particular highway or transit investment will result in certain benefits and costs, however, depends on the nature of the project and the local economic and transportation conditions where the investment is being made. In addition, measuring project benefits and costs can be challenging and is subject to several sources of error. 
For example, some benefit-cost analyses may omit some benefits or double-count benefits as they filter through the economy. Officials we surveyed and visited said they considered a project's potential benefits and costs when considering project alternatives but often did not use formal economic analyses to systematically examine the potential benefits and costs. Even when economic analyses are performed, the results are not necessarily the most important factor considered in investment decision making. Rather, our survey responses indicate that a number of factors, such as public support or the availability of funding, shape transportation investment decisions. Officials we interviewed indicated that they often based their decision to select a particular alternative on indirect benefits that were often not quantified in any systematic manner, such as desirable changes in land use or increasing economic development. Available evidence indicates that highway and transit projects do not achieve all projected outcomes; in addition, our case studies and survey show that evaluations of the outcomes of completed projects are not frequently conducted. A number of outcomes and benefits are often projected for highway and transit investments, including positive changes to land use and increased economic development. These projected outcomes were often cited as reasons why the projects were pursued. However, because evaluations of the outcomes of completed highway and transit projects are not typically conducted, officials have only limited or anecdotal evidence as to whether the projects produced the intended results. Several options exist to improve the information available to decision makers about new highway and transit investments and to make analytic information more integral to decision making. 
These options, such as improving modeling techniques and evaluating the outcomes of completed projects, focus on improving the value this information can have to decision makers and holding agencies accountable for results. Even if steps are taken to improve the analytic information available to decision makers, however, overarching issues, such as the structure of the federal highway and transit programs, will affect the extent to which this information is used. Nevertheless, the increased use of economic analysis, such as benefit-cost analysis, could improve the information available, and ultimately, lead to better-informed transportation investment decision making.
Under the U.S. National Drug Control Strategy, the United States has established domestic and international efforts to reduce the supply of and demand for illegal drugs. The strategy includes five goals intended to integrate the budgets and activities of all organizations involved in counterdrug efforts. The goals focus on education, law enforcement, and treatment in the United States and on eradication, alternative development, interdiction, support for host nations, money laundering, and other issues outside the United States. Goal four of the National Drug Control Strategy is “to shield America’s air, land, and sea frontiers from the drug threat.” To achieve this goal, the United States has efforts under way to detect, monitor, and interdict illegal narcotics moving through the transit zone. From 1986 to 1996, the United States spent about $103 billion on efforts to reduce drug supply and demand. About $20 billion of this amount was expended on international counternarcotics efforts, including $4.1 billion to support crop eradication, alternative development, and increased foreign law enforcement capabilities and $15.6 billion for interdiction activities. Funding for drug interdiction in the transit zone declined from about $1 billion in 1992 to $600 million in 1996. In 1988, the Office of National Drug Control Policy (ONDCP) was established to set priorities and objectives for national drug control, develop an annual drug control strategy, and oversee the strategy’s implementation. The U.S. Interdiction Coordinator, who reports to ONDCP, is responsible for coordinating the efforts of all U.S. agencies involved in drug interdiction. The Department of State coordinates U.S. efforts in host countries, including training provided by U.S. agencies. The Department of Defense (DOD) supports U.S. law enforcement agencies by tracking and monitoring suspected drug-trafficking activities. Within the transit zone, the U.S. 
Coast Guard is the lead agency for maritime drug interdiction and co-lead with the U.S. Customs Service for air interdiction. These agencies provide aircraft and ships to assist with detection and monitoring activities. The Drug Enforcement Administration (DEA) is responsible for coordinating drug enforcement intelligence gathering overseas and conducting law enforcement operations. DEA’s activities in the Caribbean are managed by its Puerto Rico-based field division and in the Bahamas by its Miami field division. On September 17, 1997, the executive branch revised the National Interdiction Command and Control Plan, which calls for creating several joint interagency task forces intended to strengthen interagency coordination of U.S. drug control efforts. According to the revised plan, the Joint Interagency Task Force (JIATF)-East, located in Key West, Florida, is responsible for detection, monitoring, sorting, and handoff of suspect air and maritime drug-trafficking events in the Pacific Ocean east of 92 west longitude, the Gulf of Mexico, the Caribbean Sea, Central America north of Panama, and surrounding seas and the Atlantic Ocean. JIATF-East is composed of personnel from various defense and civilian law enforcement agencies. JIATF-South, located in Panama, focuses on source country initiatives and the detection and monitoring of suspect drug targets for either subsequent handoff to participating national law enforcement agencies or to JIATF-East for further monitoring. According to the Department of State’s 1997 International Narcotics Control Strategy Report, about 760 metric tons of cocaine were produced in South America in 1996. Of this amount, U.S. officials estimate that about 608 metric tons moved through the transit zone destined for U.S. markets and 40 metric tons transited to Europe. 
The officials acknowledge, however, that estimates of the amount of cocaine that enters the United States are based on limited intelligence and other information and may not reflect the actual cocaine flow. Maritime conveyances continue to be the predominant means for smuggling cocaine into the United States. The Eastern Pacific and Western and Eastern Caribbean are the principal cocaine smuggling routes from South America to the United States. U.S. interagency 1996 estimates indicated that of the 608 metric tons of cocaine destined for the United States, 234 metric tons flowed through the Eastern Pacific, 264 metric tons flowed through the Western Caribbean, and 110 metric tons flowed through the Eastern Caribbean. About 52 percent of the U.S.-bound cocaine transited Mexico and Central America. As shown in figure 1, the United States is the principal destination for cocaine smuggled in the transit zone. U.S. interagency estimates for the first 2 quarters of 1997 showed that cocaine continued to be moved mostly by maritime conveyances and Mexico was the principal destination. According to interagency estimates, Mexico and Central America received 59 percent of the cocaine during this period. The Caribbean countries accounted for 30 percent of the flow, and 11 percent was shipped directly to the United States from source countries. Also, U.S. law enforcement agencies noted some changes in trafficking patterns in and around Puerto Rico. They attributed this change to increased U.S. law enforcement efforts in this area. According to the DEA Caribbean Field Division, the Eastern Caribbean corridor remains a prolific drug-trafficking route. South American traffickers continue to use the Bahamas and islands in the Leeward, Windward, and Hispaniola routes as staging and transshipment areas. Cocaine loads originate in Colombia or Venezuela and are moved by air or large motherships to smaller vessels in the Eastern Caribbean. 
The many unguarded airstrips and coastlines of the Caribbean islands make it easy for traffickers to refuel or store cocaine for further shipment directly to the United States or through Puerto Rico. According to DEA, Puerto Rico is a popular gateway to the United States and a principal staging destination for South American drug traffickers. Haiti, the Dominican Republic, and Jamaica have also experienced increased drug-smuggling activity. For example, DEA has indicated that, in the past, the role of Dominicans in the drug business was limited to “pick-up crews” and couriers who assisted Puerto Rican smugglers. Today, Dominicans have sophisticated drug-smuggling operations and use advanced security systems and telephone communications to move and sell cocaine in the Caribbean and the United States, according to DEA. Even with heightened enforcement, offshore airdrops along the southern coast of the Dominican Republic and in the Mona Passage between Puerto Rico and the Dominican Republic continue. In addition, the Department of State has reported that thousands of kilograms of cocaine have been smuggled over the border from Haiti into the Dominican Republic, whose army has had little success in stopping the flow of drugs. Although the Eastern Caribbean remains a major route for illegal drug trafficking, the larger quantities of cocaine pass through Mexico via the Eastern Pacific and Western Caribbean corridors. U.S. interagency estimates show that, in 1996, 314 metric tons of cocaine reached Mexico for eventual movement to the United States through the Eastern Pacific and Western Caribbean channels. Of this amount, about 70 percent was shipped through the Eastern Pacific. Multiton shipments of cocaine depart Colombia, Panama, or Ecuador predominantly by noncommercial maritime means and travel north via the Pacific Ocean. Shipments reach Mexico directly or are unloaded at sites along the Central American coast to smaller vessels. The U.S. 
interagency also estimates that about 30 percent of the cocaine entering Mexico passed through the Western Caribbean. Overall cocaine seizures in the transit zone have increased from 61 metric tons in 1994 to 66 metric tons in 1995 and to 80 metric tons in 1996, according to JIATF-East. These amounts include seizures by U.S. agencies at sea and local law enforcement authorities in Mexico and Caribbean and Central American countries. Of the 1996 seizures, 53 metric tons were seized by transit zone countries and 27 metric tons were seized by the United States at sea. Seizures by Mexico and Central American countries accounted for about 40 of the 80 metric tons seized in 1996. Since 1993, cocaine traffickers have continued to increase their reliance on maritime vessels. According to JIATF-East, the number of known maritime drug-trafficking events has increased by 41 percent, from 174 events in 1993 to 246 in 1996. A “known event” is the confirmed movement of illegal drugs supported by seizure of drugs, observation of activities that can reasonably be attributed to drug smuggling, or reliable intelligence. Table 1 shows air and maritime known events for 1992-96. According to JIATF-East, the most significant maritime drug-smuggling modes of transportation involve “go-fast” boats in the Caribbean and fishing vessels in the Eastern Pacific. The “go-fast” boats are difficult to detect and interdict because they are small and capable of speeds that enable them to successfully evade law enforcement pursuits, according to the U.S. Coast Guard. The boats are between 25 and 45 feet in length, can routinely carry up to a ton of cocaine per trip, travel predominantly by night, and have refueling capability to complete cocaine runs in one day. In the first quarter of 1997, 38 of the 69 known maritime events, or 55 percent, in the transit zone involved “go-fast” boats. As illustrated in figure 2, maritime smuggling events by “go-fast” boats are on an upward trend. 
DEA has also reported an increase in the use of canoes by Jamaican smugglers and “yolas” by Dominican smugglers. The yola is an open vessel with twin motors for propulsion and the ability to refuel rapidly. DEA reported that Bahamian and Jamaican transportation groups use yolas to smuggle cocaine loads into the Bahamas from either airdrops or boat-to-boat transfers off the coast of Jamaica. These groups then use the territorial waters of Cuba to shield their movements for eventual unloading to pleasure craft, which can easily blend in with inter-island boat traffic. A U.S. Customs Service official told us that “go-fast,” recreational, and commercial fishing vessels move rapidly from the Bahamas to smuggle drugs into Florida. In the Eastern Pacific, cocaine traffickers use large fishing vessels that have been retrofitted with hidden compartments and have been known to carry as much as 11 metric tons of cocaine. The largest amount of cocaine is smuggled through the Eastern Pacific, and the vast majority is delivered into Mexico, where it continues northbound over land. JIATF-East reported that, between May and December of 1996, there were 27 known or possible events in the Eastern Pacific. Figure 3 shows typical maritime vessels most commonly used by drug traffickers in the Caribbean and Eastern Pacific corridors. As we reported in April 1996, host countries in the Caribbean continue to be hampered by inadequate counternarcotics capabilities. While the drug flow through the transit zone continues at about the same level, drug seizures by most countries in this region are minimal. During the last 2 years, the United States has continued its efforts to strengthen host countries’ capabilities to complement and support U.S. interdiction efforts. 
These efforts include commitments made at the May 1997 Caribbean/United States Summit in Barbados and new bilateral agreements that promote increased air and maritime cooperation with countries that have not yet signed agreements. The Department of State points out, however, that the best-trained, best-equipped antidrug units cannot succeed for long without the determined commitment of host government political authorities. As we reported in April 1996, many host nations have weak economies and insufficient resources for conducting law enforcement activities in their coastal waters. In the Caribbean, St. Martin has the most assets for antidrug activities, with three cutters, eight patrol boats, and two fixed-wing aircraft, whereas other Caribbean countries have far fewer. Nevertheless, the United States depends on support from host nations and several European countries to help stop the drug flow through the transit zone. In our 1996 report, we also noted that the need for law enforcement training for host governments had been evident for some time. In recognition of this need, the Department of State provided about $7 million for training 6,700 persons throughout the world in fiscal year 1996. Other U.S. agencies also have funded and conducted training for some host nations. In addition, at the Barbados Summit, the United States committed to continuing to provide technical assistance in such areas as law enforcement, judicial systems, anti-money laundering, and other counterdrug activities. To further assist in implementation of U.S. commitments, the Director of ONDCP issued budget guidance that tasks departments and agencies to implement commitments made at the summit. In March 1997, the Department of State reported corruption-related problems in various transit zone countries, including Antigua, Aruba, Belize, Dominica, the Dominican Republic, Jamaica, St. Kitts, St. Vincent, and others. 
Once the influence of drug trafficking becomes entrenched, corruption inevitably follows and democratic government may be placed in jeopardy. Even in countries where the political will to support antidrug activities exists, corruption can hinder counterdrug efforts. As we have previously reported, low salary levels for law enforcement officers and other public servants throughout much of the transit zone make them susceptible to accepting bribes. As shown in table 2, the amount of cocaine seized in eight Caribbean countries rose from 6.79 metric tons in 1995 to 14.16 metric tons in 1996. The Bahamas, the Cayman Islands, Cuba, Haiti, the Netherlands Antilles, and the United Kingdom Virgin Islands recorded some increase in 1996, while seizures in the Dominican Republic and Jamaica declined. Significantly, Cuba accounted for most of the 1996 increase, represented by one seizure of 6.2 metric tons from the disabled Honduran vessel Limerick that was boarded by the U.S. Coast Guard and later drifted into Cuban waters. One of JIATF-East’s goals for Caribbean countries is to increase their seizure rates to at least 15 percent of the cocaine estimated to be passing through their territories. In 1996, cocaine seizures in most Caribbean countries were much lower than 15 percent. In Jamaica, for example, the seizure rate was only 1.2 percent, while in Haiti it was 4.5 percent; the Dominican Republic, 3.6 percent; and Mexico, 7 percent. JIATF-East determined that, if these four countries had achieved a 15-percent seizure rate during 1996, seizures would have increased by 32 metric tons—from 26.12 metric tons to 58.65 metric tons. In its March 1997 International Narcotics Control Strategy Report, the Department of State noted that, during 1996, the U.S. government had negotiated an assortment of treaties and agreements designed to serve as important new tools in fighting drug trafficking. 
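The 15-percent seizure goal described above amounts to a simple calculation of each country's shortfall. The flow and seizure figures in this sketch are hypothetical placeholders, not the report's country-level estimates:

```python
# Sketch of JIATF-East's 15-percent seizure goal. For a country with an
# estimated cocaine flow F (metric tons) and actual seizures S, the
# shortfall against the goal is 0.15 * F - S. The figures below are
# hypothetical, chosen only to illustrate the arithmetic.

GOAL_RATE = 0.15

def seizure_shortfall(estimated_flow, actual_seizures):
    """Additional metric tons needed to reach the 15-percent goal."""
    return max(GOAL_RATE * estimated_flow - actual_seizures, 0.0)

# Hypothetical example: 100 metric tons estimated to pass through a
# country that seized 5 (a 5 percent rate) leaves a 10-metric-ton gap.
print(seizure_shortfall(100.0, 5.0))
```

Summing such gaps across countries is how one arrives at an aggregate figure like the 32-metric-ton increase JIATF-East calculated for the four countries cited.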
One type of bilateral agreement is the maritime counterdrug agreement, generally consisting of six parts and granting the United States full or partial permission for shipboarding, shiprider, pursuit, entry to investigate, overflight, and order to land. There are 12 countries in the region with which the United States currently has no formal counterdrug agreements: Barbados, Costa Rica, Cuba, Ecuador, El Salvador, French West Indies, Guatemala, Honduras, Jamaica, Mexico, Nicaragua, and Suriname. Department of State officials told us the U.S. government is undertaking efforts to obtain additional agreements but that progress has been impeded by host government concerns about sovereignty and legal issues. The United States currently has six-part agreements with 5—Antigua & Barbuda, Grenada, St. Kitts & Nevis, St. Lucia, and Trinidad & Tobago—of the 28 countries in the region and partial agreements (from one to four parts) with 11 other countries. See table 3 for information regarding U.S. bilateral counterdrug agreements with transit zone countries. Bilateral agreements are not uniform, and some provide very limited rights to U.S. law enforcement authorities. For example, a U.S.-Belize agreement allows U.S. Coast Guard personnel to board suspected Belizean-flagged vessels on the high seas without prior notification to the Government of Belize. Also, the U.S. shiprider agreement with Panama contains restrictions that require U.S. Coast Guard vessels operating in Panamanian territorial waters to be escorted by a Government of Panama ship. In contrast, other agreements do not include this restriction. Although budgets for most federal activities in the transit zone increased in fiscal year 1997, the number of JIATF-East maritime and air assets and resources for detection and monitoring has remained relatively unchanged since fiscal years 1995-96 (data are provided through August 1997). Seizures of drugs supported by JIATF-East have dropped. 
In addition, JIATF-East believes the Eastern Pacific merits greater attention because assets in that area have been inadequate. Although JIATF-East has asked DOD for additional resources to support an 18-month operation in the Eastern Pacific, DOD has not decided whether to grant this request. In the Caribbean, two agencies—the U.S. Coast Guard and the U.S. Customs Service—increased their counterdrug efforts, including conducting two “surge” operations in 1996 to seize cocaine and disrupt smuggling activity in and around Puerto Rico and the U.S. Virgin Islands. Furthermore, intelligence sharing among some U.S. agencies has been problematic. U.S. counternarcotics funding in the transit zone increased by about $33 million from fiscal year 1995 to fiscal year 1996 and is estimated to increase by an additional $97 million in fiscal year 1997, as shown in table 4. Most of the fiscal year 1997 increase is due to a onetime allotment to DOD to modify a P3 aircraft for interdiction activities. According to JIATF-East officials, the modifications have not yet been completed. The officials also noted that, when the P3 becomes operational, it is likely to be used in the transit zone as well as in other areas as needed. Between fiscal years 1995 and 1996, there was little change in JIATF-East maritime assets and flight hours devoted to interdiction in the Caribbean and the Eastern Pacific (see tables 5 and 6). However, a significant decline in DOD funding began in fiscal years 1993 and 1994, resulting in a 40-percent reduction in U.S. maritime assets by fiscal year 1994. As indicated in table 5, the number of shipdays in 1996 was 1,645 less than in 1993, when it was at its highest level. The reductions involved almost all classes of ships. Flight hours by JIATF-East air assets to support detection and monitoring declined by only 4 percent between 1995 and 1996. 
However, the drop between 1993 and 1994 was 27 percent, reflecting the decrease in DOD funding in fiscal year 1994. Flight hours for the P3C, a maritime patrol aircraft, have continued to decline from 1992 through August 1997. JIATF-East stated that maritime patrol aircraft are required to address both the “go-fast” boat threat in the Caribbean and fishing vessels in the Eastern Pacific. JIATF-East-supported seizures in 1996 were substantially lower than the peak of nearly 70 metric tons seized in 1992 and slightly lower than in 1995 (see fig. 4). Also, maritime seizures have continued to increase as a proportion of total seizures. JIATF-East believes its capabilities to detect and monitor maritime vessels in the Caribbean and the Eastern Pacific are restricted. Even when a drug-smuggling event is detected, U.S. capability to interdict the smugglers is limited. For example, only about 52 percent of the 246 known maritime events detected in 1996 resulted in an apprehension, seizure, or jettison. According to a JIATF-East official, detecting and monitoring “go-fast” boats is difficult because there is often little tactical intelligence on when these events will occur and there are limited maritime patrol aircraft equipped with radar and night-vision capability to deal with this problem. For example, during the first quarter of 1997, JIATF-East detected only 8 of 23 “go-fast” events that originated in South America. JIATF-East noted that U.S. law enforcement agencies detected six additional events without JIATF-East support. Of the eight detected events, six were detected by maritime patrol aircraft and resulted in three jettisons and three seizures. The remaining two events were detected by radar ships but escaped and presumably completed their delivery. In addition, U.S. 
Coast Guard officials told us that, other than visual identification of the wake of “go-fast” boats, detection is nearly impossible without more-advanced sensors, especially at night when most of these events occur. According to U.S. officials, the small number of air and maritime assets hinders U.S. interdiction efforts in the Eastern Pacific. In that area, U.S. air and maritime capability to interdict commercial and noncommercial fishing vessels is limited. In May 1996, JIATF-East initiated Operation Caper Focus to better define the nature of the cocaine smuggling threat in the Eastern Pacific. Before the operation began, JIATF-East had little intelligence on the smuggling methods used and the quantities being transported in this area. JIATF-East estimated that, from May 1996 through June 1997, 43 known or possible maritime smuggling events occurred in the Eastern Pacific, and only 4 resulted in seizures. These events were detected from among the hundreds of fishing vessels that routinely operate in the Eastern Pacific. JIATF-East officials indicated they have relatively little chance of detecting and monitoring these events because they currently have only 2 surface ships and about 200 flight hours of maritime patrol aircraft per month dedicated to the area. In 1996, JIATF-East reported that of 86 known air events, 26 resulted in an apprehension, seizure, or jettison. According to JIATF-East, a successful interdiction generally involves a number of steps. It may start with an initial detection by Relocatable Over the Horizon Radar (ROTHR) systems or a radar ship, followed by a handoff to an aircraft equipped with an Airborne Early Warning (AEW) system. That aircraft finds the detected suspect and hands it off to an interceptor aircraft, which monitors the suspect for ultimate handoff to a law enforcement agency. 
An analysis by JIATF-East showed that if any of the required assets, especially AEW-equipped assets, are not in place, the chances of a successful interdiction diminish. A JIATF-East analysis of known air events from October 1996 to May 1997 showed that something usually went wrong that prevented a successful interdiction. During this period, there were 27 known events in the Yucatan region. Of these, five were not known until after the drugs were delivered. Of the remaining 22 detected events, only 4 resulted in a successful seizure. The main reasons for the lack of success were limitations in ROTHR’s ability to hand off tracks, a lack of AEW-equipped assets, and inadequate response and apprehension capabilities on the part of host nation law enforcement agencies. For example, the analysis showed that without AEW-equipped assets, JIATF-East was able to track and hand off the suspect aircraft to law enforcement agencies only 18 percent of the time. Even when an AEW-equipped asset was available, JIATF-East was able to track the suspect aircraft to the handoff point only 45 percent of the time because either the AEW-equipped asset or the interceptor failed to pick up the track from ROTHR. Finally, the lack of available local law enforcement support further reduced the payoff from adequate tracking. For example, JIATF-East reported that of the seven adequately monitored tracks, three did not result in a seizure because local law enforcement authorities in Guatemala did not have the assets to respond. In 1994, the United States had 26 radar assets of various types that supported counterdrug efforts. Between 1994 and 1995, however, DOD deactivated nine radar assets. U.S. law enforcement officials told us that these reductions in radar capability hampered their operations. However, during 1994 and 1995, the United States activated two ROTHR systems to cover the Caribbean. 
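The compounding effect JIATF-East describes, in which each weak link in the detection, handoff, and response chain multiplies down the odds of success, can be sketched as a product of stage success rates, assuming the stages fail independently. The 45- and 18-percent handoff figures are the ones cited above; the 50-percent host nation response rate is an illustrative placeholder, not a reported figure.

```python
from math import prod

def end_to_end_rate(stage_rates):
    """Probability that every stage of an interdiction chain succeeds,
    assuming the stages fail independently."""
    return prod(stage_rates)

# Handoff rate of 45 percent with an AEW-equipped asset versus 18 percent
# without one, each followed by an assumed 50-percent host nation response:
with_aew = end_to_end_rate([0.45, 0.50])     # 0.225
without_aew = end_to_end_rate([0.18, 0.50])  # 0.09
```

Even under this generous placeholder response rate, the chain without an AEW-equipped asset succeeds less than one time in ten, which is consistent with the low seizure counts the analysis reports.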
Although the ROTHR systems provide a larger coverage footprint than microwave radars, ROTHR is less likely to support the handoff (as opposed to the detection) of an air event to law enforcement agencies because it is not as accurate as microwave radar in vectoring interceptors. (See app. I for an overview of radar coverage capability.) JIATF-East acknowledged that reduced radar capability continues to limit operational successes today. According to JIATF-East, radar assets have not changed since our prior report, and radar coverage problems have been further exacerbated by a long-term outage of the Guantanamo Bay radar caused by a lack of funding for operations and maintenance. In commenting on our draft report, U.S. Coast Guard officials told us the Guantanamo Bay radar is expected to be reactivated in the near future. Although JIATF-East acknowledges that U.S. capabilities to detect and monitor “go-fast” boats in the Caribbean and aircraft throughout the transit zone are limited, it currently believes that targeting multiton cargo vessels in the Eastern Pacific provides the richest opportunity to seize large quantities of cocaine. Accordingly, JIATF-East has requested from DOD two additional ships and an additional 450 aircraft surveillance flight hours per month. JIATF-East believes that with these assets the United States can increase annual cocaine seizures by an additional 40 metric tons. DOD, however, has indicated that it will not be able to fully support JIATF-East’s request because of other priorities. At present, two surface ships and about 200 flight hours per month are assigned to the Eastern Pacific. Most of the assigned U.S. air and maritime assets support Caribbean interdiction efforts. JIATF-East officials told us that, although the United States could seize larger amounts of cocaine in the Eastern Pacific, JIATF-East is unwilling to transfer assets from the Caribbean. 
The Director of JIATF-East told us that the Caribbean counternarcotics programs have more “voice and visibility” in terms of political support from both the United States and island nations. He acknowledged that the United States has worked with Caribbean nations to build continued support for fighting the war on drugs and that any movement of assets may have undesirable political consequences. Between May and December of 1996, JIATF-East temporarily shifted assets from the Caribbean to implement Operation Caper Focus in the Eastern Pacific. The temporary operation resulted in 27 metric tons of cocaine being either seized or jettisoned, as well as improved intelligence on smuggling methods and routes in the region. Before this operation, few seizures had occurred in this region, and none took place during the first quarter of 1997—after the operation ended. As previously discussed, JIATF-East has requested additional assets from DOD to support an 18-month Eastern Pacific operation. JIATF-East estimated such assets would double the seizures it supports from the current level of 40 metric tons annually to 80 metric tons. In June 1997, ONDCP provided interagency budget guidance that directed the agencies to expand operational support to Operation Caper Focus. In July, ONDCP informed high-level DOD officials that interdiction resources were inadequate to support the national strategy and that the interdiction effectiveness of Operation Caper Focus was at risk. According to DOD officials, a decision is pending as to what, if any, additional support will be allocated to the Eastern Pacific above current force levels. Since 1995, the U.S. Coast Guard and U.S. Customs Service have reported some increases in interdiction activities in the transit zone. Both agencies conduct counterdrug activities that are in addition to the shipdays and flight hours provided in support of JIATF-East. For example, in 1996, the U.S. Customs Service and U.S. 
Coast Guard launched Operations Gateway and Frontier Shield, respectively, to disrupt cocaine trafficking in and around Puerto Rico and the U.S. Virgin Islands. According to ONDCP officials, the operations have been successful, as indicated by increases in the street prices of cocaine in Puerto Rico. DEA officials noted that the wholesale price, considered a better indicator, doubled in San Juan, Puerto Rico, from April to late May 1997. As of September 1997, prices had declined but were still higher than in April. Seizures of cocaine during the first year of Operation Gateway increased by about 30 percent over seizures before the start of the operation: seizures in the year prior to the operation were about 11 metric tons, while seizures during the first year of the operation were about 14 metric tons. Actual fiscal year 1996 funding for Operation Gateway was $2.4 million, and planned funding for fiscal year 1997 was $30.1 million. In its One-Year Report on the Operation (covering the period ending February 28, 1997), the U.S. Customs Service noted a number of problems, including duplication and confusion in the procurement of equipment and services, a lack of public affairs coordination with other agencies, and lengthy delays in filling numerous authorized staff positions. The U.S. Customs Service noted, however, that some of these problems were not within its control and did not affect the overall performance of Operation Gateway. The first 3 months of the U.S. Coast Guard’s Operation Frontier Shield began with an initial “surge” of activity. Subsequently, operations were somewhat reduced but remained higher than before the operation began. According to the U.S. Coast Guard, shipdays and flight hours devoted to counterdrug missions increased to support the surge operation. During the first 10 months of the operation, cocaine seizures totaled 10.7 metric tons, an increase of 5.8 metric tons over the prior comparable 10-month period. 
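The percentage changes behind these seizure comparisons follow directly from the tonnage figures in the text; a small sketch of the arithmetic (all inputs are the report's own figures):

```python
def pct_increase(before, after):
    """Percentage increase from a before value to an after value."""
    return 100.0 * (after - before) / before

# Operation Gateway: about 11 metric tons in the year before the
# operation versus about 14 metric tons in its first year, roughly a
# 27-percent increase (the report rounds to "about 30 percent").
gateway = pct_increase(11, 14)

# Operation Frontier Shield: 10.7 metric tons in the first 10 months,
# up 5.8 metric tons from the prior comparable period (i.e., from 4.9).
frontier_shield = pct_increase(10.7 - 5.8, 10.7)
```

By this measure, the Frontier Shield increase is proportionally much larger than Gateway's, since its baseline period was so much lower.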
In July 1997, the Secretary of Transportation reported that “assessments show a decisive shift in drug traffic away from the Frontier Shield area of operation” and that “drug runners are being forced to move their operations elsewhere.” U.S. Coast Guard officials stated that the shift in trafficking routes was anticipated in planning Operation Frontier Shield and that additional efforts are planned or underway to address this trafficking shift. In 1996, we reported that intelligence sharing was a contentious issue among various collectors and users of such data, including most federal counterdrug agencies. Since then, some initiatives to improve intelligence sharing have begun. For example, an interagency review, initiated by ONDCP on September 18, 1997, is underway to look at the national drug intelligence architecture. The review will assess the drug intelligence missions, functions, and resources of the major federal counterdrug agencies. According to ONDCP, coordination among intelligence programs and between intelligence producers and consumers will be a focus of the review. In addition, the Federal Bureau of Investigation noted that other law enforcement agencies with jurisdiction in the Caribbean are developing a regional plan for law enforcement that calls for expanding intelligence coordination. A draft of the regional plan is scheduled to be completed by January 1998. JIATF-East officials told us that there are inherent barriers to sharing information among federal counternarcotics agencies. Specifically, law enforcement agencies are mainly focused on individuals and drug organizations in the context of building cases for arresting and prosecuting criminals. In contrast, JIATF-East is focused on tactical operations and on obtaining information in support of detecting and monitoring suspects and making successful and timely law enforcement interdictions. 
DEA officials referred to their comments in our prior report, which noted that there are legal limits on the intelligence DEA can provide to other federal agencies when that intelligence is developed from grand jury information, wiretaps, and materials under court sealing orders. They also noted that some intelligence is not released in order to protect sources and the integrity of ongoing investigations. DEA officials also stated that the El Paso Intelligence Center provides JIATF-East with the information necessary to track suspect aircraft and vessels until the respective U.S. and foreign authorities can take appropriate law enforcement action. In April 1996, we recommended that the Director of ONDCP develop a Caribbean plan of action that would, at a minimum, determine the resources and staffing needed and delineate a comprehensive strategy to improve host nation capabilities. In response to our recommendation, ONDCP told us it provided a framework for addressing strategic objectives in the transit zone in the classified annex to its National Drug Control Strategy issued in February 1997. ONDCP officials noted that the operational tasks for implementing the framework, including targets for measuring performance in that area, are still under development. They also noted that implementation of the strategy for the transit zone is the responsibility of other federal agencies such as the U.S. Coast Guard, DEA, and DOD. According to ONDCP, its performance measurement system will provide policymakers with new insights about which programs are effective and which are not. The system is intended to help guide adjustments to the National Drug Control Strategy as conditions change, expectations are met, or failure is noted. The goals of the strategy form the foundation of a 10-year plan, supported by a 5-year budget. Each goal is to be related to a measurable objective, further defined by an outcome that the objective is to produce by 2002. 
Each outcome is to be quantified by performance targets that identify measurable attainments. ONDCP has instructed agencies to develop plans, without being constrained by budget considerations, identifying how they will make meaningful progress toward achieving a drug control mission. As of October 1, 1997, the performance measurement system remains incomplete because the proposed measurable targets, the core of ONDCP’s system, are still under review. Until these measurable targets are developed, agencies cannot be held accountable for their performance. In April 1994, ONDCP and the participating agencies approved the National Interdiction Command and Control Plan. This plan provided for establishing three geographically oriented counterdrug joint interagency task forces. The task forces were to be led and staffed by DOD, the U.S. Customs Service, and the U.S. Coast Guard. A major premise of the plan was that the full-time personnel assigned to the task forces would become stakeholders in their operations. It was anticipated that this would ensure close planning and operational coordination; the availability of federal assets; and a seamless handoff of suspected air, sea, or land targets. Other agencies that either had an interest in or were affected by the operations were to provide liaison personnel. Nevertheless, participating agencies have not provided the required staffing to JIATF-East, and thus it has been dominated by DOD personnel and has not achieved the intended interagency composition. According to ONDCP, the revised September 17, 1997, plan places the task forces more under DOD sponsorship than under interagency control. As of July 1997, little had changed since our last report; many of the key civilian positions remained unfilled. Thus, JIATF-East is still predominantly staffed by DOD personnel and has not achieved the interagency mix initially hoped for at its creation. 
Of the 184 authorized permanent positions, 132 were DOD positions, 21 were U.S. Coast Guard positions, and 31 were other agencies’ positions. Thirty-seven authorized positions were vacant. DOD had filled 117 of its 132 authorized positions, and the U.S. Coast Guard had filled 17 of 21. However, various other agencies had assigned staff to only 13 of 31 authorized positions, compared with 11 of 27 authorized positions as of November 1995. The U.S. Customs Service, for example, had filled only 7 of 22 positions, compared with the 8 positions it had filled in 1995; the 1 authorized Department of State position has never been filled. JIATF-East has periodically requested that the civilian agencies staff these positions, but the agencies have not done so. Since our last report, JIATF-East has more clearly identified the problem of “go-fast” boats in the Caribbean and fishing vessels in the Eastern Pacific, identified shortcomings in its detection and monitoring capabilities, and requested additional resources to address the problem. Also, the U.S. Coast Guard and the U.S. Customs Service have launched two surge operations in and around Puerto Rico and the U.S. Virgin Islands that have resulted in seizures of additional quantities of cocaine and changes in drug-trafficking patterns. By identifying the anticipated benefits of providing additional resources to the transit zone, JIATF-East has taken an important step in adding accountability to the drug-interdiction effort. Notwithstanding these efforts, the overall amount of cocaine seized in the transit zone has not disrupted the flow and availability of cocaine in the United States. We believe ONDCP has not fully addressed our prior recommendation to develop a regional plan. We continue to believe that a transit zone plan with quantitative goals and objectives that allow policymakers to determine resource requirements and evaluate the potential benefits of regional interdiction efforts is essential. 
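The JIATF-East staffing figures reported above can be cross-checked arithmetically; a small sketch using the authorized and filled counts for July 1997:

```python
# Authorized and filled permanent positions at JIATF-East, July 1997,
# as reported above: (authorized, filled) per staffing source.
staffing = {
    "DOD": (132, 117),
    "U.S. Coast Guard": (21, 17),
    "Other agencies": (31, 13),
}

authorized = sum(auth for auth, _ in staffing.values())
vacant = sum(auth - filled for auth, filled in staffing.values())
print(authorized, vacant)  # 184 authorized positions, 37 vacant
```

The totals reproduce the report's 184 authorized positions and 37 vacancies, with the other agencies accounting for 18 of the 37.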
An effective transit zone operation is an integral part of the U.S. strategy to limit drug availability in the United States, but it alone will not solve the drug problem. A regional plan should be a subset of ONDCP’s overall strategy. Therefore, it is important that ONDCP develop a comprehensive plan that provides a blueprint for how efforts in the transit zone countries are going to reduce the flow of cocaine. Without such a plan, it is not possible to judge the merits of individual activities in terms of which measures are the most cost-effective in the counterdrug effort. ONDCP, in its written response (see app. II), stated that it is in general agreement with this report. It provided clarification and information on (1) its future plans, (2) the importance of source country efforts in complementing those in the transit zone, (3) activities of the U.S. Coast Guard, (4) results from the Bridgetown Summit, and (5) the changes to the revised National Interdiction Command and Control Plan. ONDCP recognized that current assets in the transit zone are inadequate to accomplish the strategy and identified recent initiatives to implement the strategy. These initiatives included a Caribbean action plan, a short-term enhancement to transit zone interdiction, a 5-year asset plan for the transit zone, intelligence sharing efforts, and a deterrence study. These initiatives are expected to provide linkage among planning, the development of performance measures, and the assets required to implement the strategy. However, ONDCP did not provide time frames for completing these initiatives. We obtained oral comments on a draft of this report from the Department of State, DOD, DEA, the U.S. Coast Guard, the U.S. Customs Service, and the Federal Bureau of Investigation. None of these agencies disagreed with our principal findings or our conclusions. However, several of these agencies indicated we had not provided enough information about U.S. 
counterdrug activities in the transit zone other than those directed by JIATF-East. In response to their concerns, we have expanded our discussion of these other activities, particularly those of the U.S. Coast Guard and the U.S. Customs Service, and have added data on overall drug seizures in the region. Each of the commenting agencies suggested points of clarification, and we have incorporated them into the report where appropriate. In addition, we modified the report to include some updated information provided by DOD and the U.S. Coast Guard on operational capabilities in the transit zone. To determine the nature of drug trafficking in the transit zone, we obtained reports from the U.S. Coast Guard, DEA, the U.S. Customs Service, JIATF-East, and ONDCP, as well as interagency assessments of cocaine smuggling activities. We analyzed and compared the data from these reports with data from prior years to assess the degree of change in the cocaine flow to the United States and in drug traffickers’ methods, routes, and modes of transportation. We also obtained assessment briefings about the cocaine threat from DEA headquarters in Washington, D.C.; the U.S. Coast Guard Seventh District and U.S. Customs Service in Miami, Florida; and JIATF-East in Key West, Florida. To obtain information on host nation capabilities and impediments to their counternarcotics efforts, we reviewed various Department of State cables and other relevant documents, including the annual International Narcotics Control Strategy Reports. We interviewed officials from DEA, the U.S. Coast Guard, the U.S. Customs Service, the Department of State, and JIATF-East and obtained information on (1) host nation counterdrug capabilities, (2) the amount of cooperation with U.S. counterdrug activities, and (3) the extent of corruption in host nations. We also interviewed officials from the U.S. Coast Guard, the Department of State, and JIATF-East for information on the status of U.S. 
bilateral maritime agreements with host countries. To assess U.S. counterdrug capabilities, we reviewed program and budget documents and related information from numerous federal agencies, including ONDCP, the Departments of Defense and State, the U.S. Coast Guard, the U.S. Customs Service, and the U.S. Interdiction Coordinator. We obtained summary reports from the U.S. Customs Service on Operation Gateway and from the U.S. Coast Guard on Operation Frontier Shield to determine their impact on reducing the flow of cocaine to the United States. We interviewed and obtained information from JIATF-East and other federal agencies on the resources and capabilities of U.S. interdiction efforts in the transit zone. We also documented U.S. funding trends from 1991 to 1998 and assets devoted to detection and monitoring through 1996. We interviewed key officials from DOD and U.S. law enforcement agencies to determine the extent of U.S. planning, coordination, and implementation of counterdrug programs in the transit zone. We met with ONDCP officials on the status of its implementation of our recommendation to develop a plan of action for the Caribbean and on its efforts to develop performance measures for U.S. counternarcotics initiatives. To obtain information on interagency staffing at JIATF-East, we reviewed documents, obtained briefings, and interviewed cognizant JIATF-East officials. We conducted our review between April and September 1997 in accordance with generally accepted government auditing standards. As arranged with you, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to other interested congressional committees, the Director of ONDCP, the Secretaries of State and Defense, the U.S. Attorney General, the Commissioner of the U.S. Customs Service, the Commandant of the U.S. Coast Guard, the U.S. 
Interdiction Coordinator, the Administrator of DEA, and the Director of the Federal Bureau of Investigation. We will make copies of this report available to others upon request. If you or your staff have any questions concerning this report, please call me at (202) 512-4268. The major contributors to this report were Janice Villar Morrison, George Taylor, and Louis Zanardi. In 1994, the United States had 26 radar assets of various types that supported counterdrug efforts in the Caribbean and Eastern Pacific. Between 1994 and 1995, DOD deactivated nine radar assets. However, during 1994 and 1995, the United States activated two Relocatable Over the Horizon Radar (ROTHR) systems to cover the Caribbean, as shown in figure 1.1. According to the Joint Interagency Task Force (JIATF)-East, the radar surveillance capabilities illustrated for 1995 remained current as of September 1997. The following are GAO’s comments on the Office of National Drug Control Policy’s letter dated October 3, 1997. 1. We provided additional information concerning this point in the report. 2. We recognize that the overall funding levels are not a reflection of the assets allocated to JIATF-East. Accordingly, we have added several footnotes to make this distinction in the report. 
Pursuant to a congressional request, GAO reviewed drug trafficking lanes into the United States from the Caribbean Sea, Gulf of Mexico, and eastern Pacific Ocean, focusing on: (1) the nature of drug trafficking activities through the transit zone; (2) host nation efforts, capabilities, and impediments to an effective counternarcotics program; (3) U.S. agencies' capabilities, including funding, in interdicting drug trafficking activities in the region; and (4) the status of U.S. agencies' efforts to plan, coordinate, and implement U.S. interdiction activities. GAO noted that: (1) since its April 1996 report, the amount of drugs smuggled and the counternarcotics capabilities of host countries and the United States have remained largely unchanged; (2) cocaine trafficking through the Caribbean and Eastern Pacific regions continues, and drug traffickers are still relying heavily on maritime modes of transportation; (3) recent information shows that traffickers are using "go-fast" boats, fishing vessels, coastal freighters, and other vessels in the Caribbean and fishing and cargo vessels with multi-ton loads in the Eastern Pacific; (4) recent estimates indicate that, of all cocaine moving through the transit zone, 38 percent (234 metric tons) is being shipped through the Eastern Pacific; (5) although the United States has continued to provide technical assistance and equipment to many Caribbean and other transit zone countries, the amount of cocaine seized by most of the countries is small relative to the estimated amounts flowing through the area; (6) the counterdrug efforts of many transit zone countries continue to be hampered by limited resources and capabilities; (7) the United States does not have bilateral maritime agreements with 12 transit zone countries to facilitate interdiction activities; (8) the United States has increased funding but has had limited success in detecting, monitoring, and interdicting air and maritime trafficking in the transit zone; (9) 
Joint Interagency Task Force (JIATF)-East assets devoted to these efforts have stayed at almost the same level; (10) JIATF-East has requested additional resources from the Department of Defense to address Eastern Pacific drug trafficking; (11) Office of National Drug Control Policy (ONDCP) officials told GAO that ONDCP developed an overall strategy that identifies agency roles, missions, and tasks to execute the drug strategy and establish task priorities; (12) according to ONDCP, its performance measurement system remains incomplete; (13) until measurable targets are developed, it will not be possible to hold agencies with jurisdiction in the Caribbean accountable for their performance; and (14) law enforcement agencies with Caribbean jurisdiction are developing a regional plan, to be completed by January 1998, led by the Drug Enforcement Administration, the Federal Bureau of Investigation, and the U.S. Customs Service.
Given the condition and needs of the transportation system and the federal government’s fiscal outlook, DOT faces several challenges in leveraging investment in surface transportation networks to further national interests. More specifically, DOT faces challenges related to (1) transitioning to a goal-oriented, performance-based approach, (2) targeting funds to national priorities such as our freight network, (3) effectively managing discretionary grant and credit assistance programs, and (4) effectively overseeing programs and spending. Since I testified on this topic last year, there has been progress in clarifying federal goals and roles and linking federal programs to performance, as GAO has recommended. In past work, we reported that many federal transportation programs do not effectively address key challenges, have unclear federal goals and roles, and lack links to performance. As a result, we made several recommendations and raised matters for congressional consideration to address these findings. In July 2012, the President signed into law the Moving Ahead for Progress in the 21st Century Act (MAP-21), which included provisions to move toward a more performance-based highway and transit program. For highways, for example, the act identified seven national performance goals for areas including pavement and bridge conditions, fatalities and injuries, and traffic congestion. MAP-21 also provides for the creation of performance measures and targets and links funding to performance, thus enhancing accountability for results. Successfully implementing a performance-based approach entails new responsibilities for DOT and its operating administrations. For example, MAP-21 requires that the Secretary of Transportation initiate a rulemaking to establish the required performance measures for highways in consultation with states and others.
After performance measures are set, states and other grantees must establish performance targets for those measures and report their progress to the Secretary. While some operating administrations, such as the National Highway Traffic Safety Administration (NHTSA), have been working toward such a performance-based framework for several years, the work to implement MAP-21 requirements will require collaborating with multiple nonfederal partners over several years. DOT also faces institutional challenges in implementing performance-based programs. First, its administration and oversight of programs have tended to be process-oriented, rather than outcome-oriented. For example, we have reported that the Federal Highway Administration’s (FHWA) and the Federal Transit Administration’s (FTA) oversight of statewide and metropolitan planning focuses on process rather than specific transportation outcomes, making it unclear whether states’ investment decisions are improving the condition and performance of the nation’s transportation system. For FTA’s triennial review program, which evaluates grantee adherence to federal requirements, we found that FTA evaluates the process—specifically, the timeliness of steps in the process—but not the outcome and quality of the program. Second, our work on FHWA’s oversight of the federal-aid highway program indicates that, to move to a more performance-based approach, FHWA will have to overcome risks related to its partnership approach with the states in monitoring states’ progress and holding them accountable for meeting performance targets. We found advantages to FHWA’s partnership approach with the states but also identified risks such as lax oversight, a reluctance to take corrective action, and a lack of independence in decision making. Recent actions, including those set forth in MAP-21, also provide opportunities to better align investments in areas of national interest—such as the freight network—to national goals.
The movement of freight over highways, railroads, and waterways is critical to the economy and the livelihood of Americans who rely on freight transportation for food, clothing, and other essential commodities. We have previously reported that the fragmented federal approach to freight surface transportation has resulted in programs having different oversight and funding requirements and a lack of coordination. Last year, MAP-21 established a national freight policy and mandated that DOT develop a National Freight Strategic Plan including national goals and performance targets, as GAO has recommended. In order to implement this more holistic, performance-based approach, DOT will have to effectively coordinate transportation agencies at the federal, state, and local levels and the private sector entities that play a role in freight mobility. These entities and agencies have not necessarily worked in a coordinated manner in the past. DOT will also have to work with the U.S. Army Corps of Engineers (Corps), the lead federal agency responsible for maintaining and improving navigable waterways. DOT and the Corps signed a memorandum of understanding in March 2012 to identify and capitalize on opportunities to improve the nation’s marine transportation infrastructure investments. Specifically, DOT and the Corps agreed to develop project prioritization criteria and coordinate project evaluation and selection processes as they relate to DOT grant programs and the Corps’ project prioritization. Historically, however, there has been limited coordination between the two agencies. Involving the Corps is essential, since the vast majority of the nation’s freight is imported and exported via navigable waterways through our nation’s ports. Beyond challenges associated with implementing these changes driven primarily by MAP-21, DOT also faces challenges effectively managing existing discretionary grant programs.
Most federal surface-transportation funding has been delivered through formula grant programs that have only an indirect relationship to needs and allow states and other grantees considerable flexibility in selecting projects to fund. Meritorious projects of national or regional significance, in particular those that connect transportation modes or cross geographic boundaries, may not compete well for these formula grants. Therefore, we have recommended allocating some portion of federal surface transportation funds on a competitive basis—as is done in many discretionary programs—particularly for projects of national or regional significance, to more effectively address the nation’s surface transportation challenges. Below we highlight key issues based on our work on two DOT discretionary programs. Transportation Investment Generating Economic Recovery (TIGER) program: The TIGER program represented an important step toward investing in projects of regional and national significance on a merit-based, competitive basis. Since 2009, DOT has held four rounds of competition and awarded more than $3 billion in grants to highway, transit, rail, port, and other projects. In March 2011 we reported that while DOT developed a sound set of criteria to evaluate applications and select grantees, there was a lack of documentation of final award decisions. As a result, we recommended that DOT better document these decisions. DOT has not implemented this recommendation. In its work on the TIGER program, DOT’s Office of the Inspector General (OIG) found that while grantees had developed performance measures, as required, these measures were generally not outcome-based and thus could not be used to assess whether projects were meeting the expected outcomes articulated in their applications, such as improving the state of infrastructure and enhancing safety.
Going forward, documenting key decisions for all major steps in the review of competitive grant applications will help improve transparency and help to ensure the credibility of DOT’s award decisions. In addition, establishing a process for evaluating program performance based on project outcomes will be important for DOT to be able to measure the impacts of these investments. High Speed Intercity Passenger Rail (HSIPR) grant program: The HSIPR program, administered by the Federal Railroad Administration (FRA), provides funds to states and others to develop high-speed rail and inter-city passenger-rail corridors and projects. Congress appropriated $8 billion for high-speed rail and inter-city passenger rail in the American Recovery and Reinvestment Act of 2009 (Recovery Act) and $2.5 billion in the fiscal year 2010 DOT Appropriations Act. As of October 2012, about $9.9 billion has been obligated for 150 projects in 34 states and the District of Columbia—with more than one-third designated for a single project in California. While most of the program’s funds have been obligated, we have highlighted key challenges that FRA faces in managing this program. In 2009, we recommended that FRA develop guidelines and methods for ensuring reliability of ridership and other forecasts used to determine the viability of high-speed rail projects. According to FRA, this recommendation is in the process of being implemented, and FRA officials stated that the agency is working to develop a comprehensive approach for improving the reliability of ridership forecasts. The DOT OIG reported that FRA faces substantial challenges to ensure the HSIPR program meets reporting, transparency, and program and financial management requirements under the Recovery Act and that Recovery Act funding that has been obligated for HSIPR projects is not wasted.
In addition, FRA will have to transition from its role of awarding grants to overseeing the implementation of HSIPR-funded projects, including overseeing the implementation of the California High Speed Rail project, which has a current cost estimate of $68.4 billion. In addition, DOT faces challenges implementing and managing changes to the Transportation Infrastructure Finance and Innovation Act (TIFIA) program, which provides direct loans, loan guarantees, and lines of credit to surface transportation projects. MAP-21 made several changes to the TIFIA program, including a dramatic increase in the funding available for the program. Such changes—coupled with TIFIA’s already complex mission to leverage limited federal resources and stimulate private capital investment in transportation infrastructure by providing credit assistance to projects of national or regional significance—constitute new challenges. MAP-21 authorized $750 million for fiscal year 2013 and $1 billion for fiscal year 2014 to pay the subsidy cost of credit assistance, compared to $122 million in authorized budget authority in previous years. MAP-21 also made changes to the process DOT uses to select projects and increased the portion of project costs TIFIA loans can cover from 33 to 49 percent. As we reported in 2012, with the increase in budget authority, DOT will likely have a higher number of applications to review and credit agreements to negotiate. DOT faces challenges implementing these changes—including updating guidance, issuing new regulations, and ensuring that adequate staff and expertise exist to efficiently manage the expanded program—all while TIFIA credit assistance remains in high demand. 
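The scale of the MAP-21 changes to TIFIA can be put in perspective with the figures quoted above; the sketch below is a rough illustration only, and the 10 percent subsidy rate used to translate budget authority into loan volume is an assumption for illustration, not a stated program parameter.

```python
# Scale of the TIFIA expansion under MAP-21, using the figures above.
prior_budget_authority = 122e6       # annual authorized budget authority before MAP-21
fy2013_budget_authority = 750e6      # MAP-21 authorization for FY2013
fy2014_budget_authority = 1e9        # MAP-21 authorization for FY2014

increase_fy2013 = fy2013_budget_authority / prior_budget_authority
print(f"FY2013 authority is roughly {increase_fy2013:.1f}x the prior annual level")

# Under federal credit-reform accounting, budget authority covers only the
# subsidy cost of credit assistance, so it supports a much larger loan
# volume. The 10 percent subsidy rate below is purely an assumption.
assumed_subsidy_rate = 0.10
supportable_loans = fy2013_budget_authority / assumed_subsidy_rate
print(f"At a {assumed_subsidy_rate:.0%} subsidy rate, FY2013 authority could "
      f"support about ${supportable_loans / 1e9:.1f} billion in loans")

# Maximum TIFIA share of project costs rose from 33 to 49 percent.
max_share_old, max_share_new = 0.33, 0.49
print(f"Maximum project cost share rose by {max_share_new - max_share_old:.0%}")
```

Even under conservative subsidy-rate assumptions, a roughly sixfold increase in annual budget authority implies a far larger pipeline of applications and credit agreements for DOT to process, which is the workload challenge described above.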
Moreover, as the TIFIA portfolio grows, now totaling more than $10 billion in loans and other assistance, DOT will have to monitor an increasing number of projects as they proceed through what is expected to be decades of loan repayment in order to manage current and future risk from potential nonrepayment. DOT also faces challenges overseeing other programs going forward. The federal-aid highway program, and thus FHWA’s oversight role, has expanded over the years to encompass broader goals, more responsibilities, and a variety of approaches. FHWA has taken steps to improve its approach to managing this program’s risks by, for example, requiring field offices to identify risks, assess them based on their potential impact and the likelihood they will occur, and develop response strategies in their planned oversight activities. However, beyond the risks associated with FHWA’s partnership with the states, opportunities for improvement remain in other areas. In 2011, for this Subcommittee, we reviewed FHWA’s Emergency Relief Program, which provides funds to states to repair roads damaged by natural disasters and catastrophic failures, and were unable to determine the basis on which FHWA made many eligibility determinations because of missing or incomplete documentation. Without clear and standardized procedures for FHWA officials to make and document eligibility decisions, FHWA lacks assurance that only eligible projects are approved to receive scarce relief funds. In June 2012, in response to a GAO recommendation, FHWA began reviewing each state’s balance of unused emergency relief funds on a monthly basis so that unused funding could be more easily identified and withdrawn. This resulted in savings of about $231 million in unused allocations in fiscal year 2012, which was made available to other priority Emergency Relief Program projects.
In addition, FTA is implementing a new Public Transportation Emergency Relief Program established in July 2012 in MAP-21, for which Congress recently appropriated its first funds—$10.9 billion—to restore transit services affected by Hurricane Sandy. As FTA implements this new program and distributes funds, assurance that only eligible projects receive funds and that processes support effective and efficient delivery of relief services is of particular importance. Another challenge that DOT and the states continue to face is improving safety. The vast majority of transportation-related fatalities and injuries occur on our roadways, involving drivers and passengers in cars and large trucks, motorcyclists, pedestrians, and cyclists. We have seen a remarkable decline in traffic fatalities and injuries in recent years. Specifically, traffic fatalities and injuries decreased nearly 24 percent over the last decade, from about 43,000 fatalities and 2.9 million injuries in 2002 to about 32,000 fatalities and 2.2 million injuries in 2011. (See fig. 1.) While these trends are encouraging, NHTSA’s early estimates of traffic fatalities for the first 9 months of 2012 project a 7 percent increase in fatalities, which would be the first increase since 2005. Continued federal and state efforts to reduce traffic fatalities and injuries are needed, particularly in areas where the risks of crashes, fatalities, and injuries are high, such as motorcyclist, teen-driver, and distracted-driving crashes. While other surface transportation modes—such as rail, transit, and pipeline—are relatively safe when compared to roadways, accidents can and do occur. For example, a natural gas pipeline explosion in San Bruno, California, in September 2010 killed 8 people and damaged or destroyed over 100 homes, and a hazardous liquid pipeline rupture near Marshall, Michigan, in July 2010 spilled over 840,000 gallons of crude oil into a wetland area.
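The decade-long roadway-safety trend cited above can be checked directly from the rounded counts quoted; the sketch below uses those approximate figures as inputs, so the results differ slightly from percentages computed on the precise underlying data.

```python
# Percent declines in roadway fatalities and injuries, 2002 to 2011,
# using the rounded counts quoted above.
fatalities_2002, fatalities_2011 = 43_000, 32_000
injuries_2002, injuries_2011 = 2_900_000, 2_200_000

def pct_decline(old, new):
    """Percent decrease from old to new."""
    return 100 * (old - new) / old

print(f"Fatalities: down {pct_decline(fatalities_2002, fatalities_2011):.1f}%")
print(f"Injuries:   down {pct_decline(injuries_2002, injuries_2011):.1f}%")
```

With these rounded inputs the declines come out near 24 to 26 percent; the "nearly 24 percent" figure in the text reflects the exact underlying counts rather than the rounded approximations used here.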
Likewise, although 2012 was the safest year in rail industry history, three notable freight rail accidents occurred during the summer of 2012—including the derailment of a freight train in Columbus, Ohio, which caused the evacuation of homes in the area because of a fire caused by exploding ethanol tank cars. In addition, while the nation’s aviation system is one of the safest in the world, with air travel projected to increase over the next 20 years, efforts to ensure continued safety are increasingly important. To enhance safety, the Federal Aviation Administration (FAA) is shifting to a data-driven, risk-based safety oversight approach—called a safety management system (SMS) approach. Implementation of SMS is intended to allow FAA to proactively identify system-wide trends in aviation safety and manage emerging hazards before they result in incidents or accidents. Our recent work on transportation safety across all modes has highlighted the need for improvement in data and oversight. With the move toward a more performance-based approach in MAP-21, high-quality data are essential to identify progress and ensure accountability. As DOT moves closer to a data-driven, performance-based structure, a robust oversight approach is critical to ensure that states are establishing appropriate goals and making sufficient progress toward those goals. For traffic safety data, states maintain six core types of data systems that are used to identify priorities for highway and traffic safety programs. In 2010, we reported that NHTSA’s periodic assessments designed to help states evaluate the quality of their data systems were in some cases incomplete or inconsistent. We recommended actions for DOT to make those assessments more useful for states, and DOT plans to complete implementation of those actions this spring.
Data are also critical for the Federal Motor Carrier Safety Administration (FMCSA) to target resources and identify which of the hundreds of thousands of commercial motor vehicles operating on our nation’s roads pose the highest safety concerns. For example, we recently reported that FMCSA examines only about 2 percent of new motor carrier applicants that register annually to identify carriers operating illegally under new identities. We recommended that FMCSA develop a data-driven approach to target new carriers attempting to disguise their former identities and expand this new approach to examine all motor carriers. FMCSA is currently developing a plan to enhance its ability to identify unsafe motor carriers that try to disguise their former identities and expects to complete the development of a data-driven approach by February 2013. Further, industry representatives, shippers and brokers, and other stakeholders are questioning certain aspects of FMCSA’s Compliance, Safety, Accountability (CSA) initiative—a data-driven approach to selecting the highest-risk carriers for intervention—such as the accuracy and consistency of data inputs and the reliability of the resulting carrier performance scores. We are currently evaluating this approach and plan to report on this and other aspects of the CSA program later this year. FRA is responsible for overseeing efforts made by railroads in developing positive train control (PTC), a communications-based system designed to prevent some serious train accidents; progress in these efforts has been a concern. Federal law requires major freight and passenger railroads to implement this system on most major routes by the end of 2015. In 2010, we reported that delays in developing some system components as well as costs that publicly funded commuter railroads would incur to implement the system raised the risk that railroads would not meet the 2015 deadline.
In 2012, in response to our recommendation, FRA reported to Congress on the railroads’ progress in implementing PTC, noting that it was unlikely that most railroads would be able to meet the 2015 deadline. Further, FRA identified obstacles and recommended factors to consider in developing additional legislation. We are currently reviewing how FRA estimated the costs and benefits of PTC in its rulemaking process and to what extent railroads will be able to leverage PTC technology to achieve benefits in addition to the anticipated safety improvements. MAP-21 authorized FTA to establish and enforce basic safety standards for transit rail systems and required the agency to develop a new safety oversight program, with a continued role for state safety oversight offices that meet certain requirements. We have noted that FTA would face challenges in building up its internal capability to develop and carry out such a program, and that state safety oversight agencies would face similar challenges. As FTA moves forward, reliable rail-transit safety data as well as clear and specific goals and measures based on these data will be essential in allowing FTA to monitor safety trends, determine whether safety programs are achieving their intended purposes, target resources, and make informed decisions about the safety strategy. In 2011, we recommended improvements in FTA’s rail-transit safety database and related goals and measures. FTA officials have informed us that they have taken steps to improve this database, including establishing the appropriate internal controls over their data collection process to prevent data-reporting errors. FTA officials have also informed us that, as part of their efforts to develop their new safety strategy, they are working on developing new goals and measures for the agency’s rail-transit safety efforts. Data collection and oversight for the safety of our nation’s 2.5-million-mile pipeline network can also be improved.
For example, while the Pipeline and Hazardous Materials Safety Administration (PHMSA) requires pipeline operators to develop incident response plans to minimize the risks of leaks and ruptures, PHMSA has not linked performance measures or targets to measurable response-time goals and does not collect reliable data on actual incident response times. In January 2013, we recommended that PHMSA improve incident response data and use these data to evaluate whether to implement a performance-based framework for incident response times. In addition, part of the nation’s pipeline network consists of more than 200,000 miles of onshore “gathering” pipelines, many of which are not federally regulated because they have generally been located away from populated areas and operate at relatively low pressures. However, urban development is encroaching on these pipelines, and the increased extraction of oil and natural gas from shale deposits is resulting in new gathering pipelines that can be larger in diameter and operate at higher pressures. Thus, in March 2012, we recommended that PHMSA collect data on these pipelines to assess their safety risks. In response, PHMSA has initiated a rulemaking to collect data on gathering pipelines. Our work has found that FAA continues to experience data-related challenges that affect oversight efforts, including limitations with the analysis it conducts and the data it collects, as well as the absence of data in some areas. For example, we reported that several challenges remain that may affect FAA’s ability to implement SMS in an efficient and timely manner, including challenges related to data sharing and data quality, capacity to conduct SMS-based analyses and oversight, and standardization of policies and procedures. As a result, in September 2012 we made several recommendations to FAA regarding the implementation of SMS that FAA is working to address.
We also identified data and oversight concerns in FAA’s efforts to reduce the general aviation accident rate. For example, while we can draw some conclusions about general aviation accident characteristics, limitations in flight activity data (e.g., flight hours) and other data preclude a complete assessment of general aviation safety. GAO has recommended, among other things, that FAA require the collection of general aviation aircraft flight-hour data in ways that minimize the impact on the general aviation community, set safety improvement goals for individual general aviation-industry segments, and develop performance measures for significant activities that aim to improve general aviation safety. FAA is currently working to implement these recommendations. FAA’s data-related challenges are affecting other efforts, such as the development of standards for unmanned aerial systems (UAS) operations, a key step in the integration of these systems into the national airspace system. The standards-development process has been hindered, in part, by FAA’s inability to use safety, reliability, and performance data from the Department of Defense, by the need for additional data from other sources, and by the complexities of UAS issues in general. FAA is working to address these data limitations; its success in doing so is important in moving forward with the standards-development process as well as supporting research and development efforts needed to address the obstacles affecting safe integration of UAS operations. Another area that I would like to address is the implementation of NextGen. This complex multiagency undertaking is intended to transform the current radar-based system into an aircraft-centered, satellite navigation-based system and is estimated to cost between $15 billion and $22 billion through 2025.
FAA has taken several steps to improve NextGen implementation and is continuing to address critical issues that we, stakeholders, and others have identified, including three key challenges that affect NextGen implementation: delivering and demonstrating NextGen benefits, keeping key NextGen acquisitions within cost estimates and on schedule, and balancing NextGen implementation with maintaining and operating the current air traffic control system during the transition. FAA must deliver systems, procedures, and capabilities that provide aircraft operators with a return on their investments in NextGen avionics. For example, a large percentage of the current fleet is equipped to fly more precise performance-based navigation (PBN) procedures, which use satellite-based guidance to route aircraft and improve approaches at airports, and can save operators money through reduced fuel use and shorter flight time. However, operators have expressed concern that FAA, to date, has not produced the most useful or beneficial PBN routes and procedures, and therefore, operators do not yet see benefits resulting from their investments in advanced avionics systems. As a means to leverage existing technology, to provide immediate benefit to the industry, and to respond to industry advisory group recommendations, FAA began an initiative to better use PBN procedures to resolve airspace problems in, and provide benefits to, 13 selected areas around multiple busy airports, known as “metroplexes.” FAA is working to design its metroplex and other PBN initiatives to avoid some of the challenges—such as lack of air traffic controller involvement—that have limited the use of PBN procedures and, in turn, limited the potential benefits of existing PBN procedures. If operators cannot realize benefits from existing equipment investments, they may be hesitant to invest in the new technologies necessary to fully realize NextGen benefits.
While some operational improvements can be made with existing aircraft equipment, realizing more significant NextGen benefits requires a critical mass of properly equipped aircraft. Reaching that critical mass is a significant challenge because the first aircraft operators to purchase and install NextGen-capable technologies will not obtain a return on their investment until many other operators also adopt NextGen technologies. FAA estimates that the NextGen avionics needed on aircraft to realize significant midterm NextGen capabilities will cost private operators about $6.6 billion from 2007 through 2018. However, aircraft operators may be hesitant to make these investments if they do not have confidence that benefits will be realized from their investments. The FAA Modernization and Reform Act of 2012 created a program to facilitate public-private financing for equipping general-aviation and air-carrier aircraft with NextGen technologies. According to FAA, the goal for such a program would be to encourage deployment of NextGen-capable aircraft sooner than would have occurred without such funding assistance in place. FAA is soliciting industry input about how to design and implement a loan guarantee program but has yet to decide on how to incentivize this transition. As we have previously reported, FAA should regularly provide stakeholders, interested parties, Congress, and the American people with a clear picture of where NextGen’s implementation stands, and whether the capabilities being implemented are resulting in positive outcomes and improved performance for operators and passengers. We have recommended that FAA develop a timeline and action plan to work with industry and federal partner agencies to develop an agreed-upon list of outcome-based performance metrics, as well as goals for NextGen both at a broad level and in specific NextGen improvement areas. 
In addition, the FAA Modernization and Reform Act of 2012 requires FAA to report on measures of the agency’s progress in implementing NextGen capabilities and operational results. FAA has taken steps to establish NextGen metrics, but much work remains, including finalizing agency targets for specific improvement areas and making a link between NextGen performance goals and metrics and NextGen improvements. For example, publicly available information about FAA’s plans for implementing additional capabilities through 2018 lacks specifics about the timing and locations of implementation; this lack of detail has been cited as an obstacle to incentivizing aircraft operators to equip with new technologies. Measuring performance of near-term NextGen improvements will be critical for FAA management and stakeholders to assess impacts, make investment decisions, and monitor NextGen progress. We will report on this issue in more detail as part of our ongoing near-term NextGen implementation work for the Congress. NextGen has significantly increased the number, cost, and complexity of FAA’s acquisition programs; it is imperative that these programs remain on time and within budget, particularly given current budget constraints and the interdependencies of many NextGen acquisitions. Since our February 2012 report on major air traffic control acquisition programs, the key NextGen-related acquisition programs have generally continued to proceed on time and on budget. However, past delays with the En Route Automation Modernization (ERAM) program—a critical program for NextGen—illustrate how delays can affect overall acquisition and maintenance costs as well as time frames for other programs.
As we previously reported, ERAM’s delayed implementation from December 2010 to August 2014 and cost increase of $330 million were associated with insufficient testing to identify software issues before deployment at key sites and insufficient stakeholder involvement during system development and deployment. The delays with ERAM added an estimated $18 million per year to the costs of maintaining the system that ERAM was meant to replace and delayed other key NextGen acquisitions. Since new budget and schedule baselines for the ERAM program were established in June 2011, according to FAA reports, the program has made progress toward its deployment goals. The successful implementation of NextGen—both in the midterm (through 2020) and in the long term (beyond 2020)—will be affected by how well FAA manages such program interdependencies. Particularly in light of constrained budget resources, FAA will have to balance its priorities to help ensure that NextGen implementation stays on course. Sustaining the current legacy equipment and facilities remains critical, as these will continue to be the core of the national airspace system for a number of years, and some of the components will be part of NextGen. For example, while FAA transitions to satellite-based aircraft surveillance through the deployment of Automatic Dependent Surveillance-Broadcast Out (ADS-B Out) technology, the agency expects to continue to operate and maintain current radar technology through at least 2020. At that time, FAA is scheduled to make decisions about which radar systems the agency will decommission and which will be maintained as the back-up system for ADS-B. If either ADS-B’s deployment or airlines’ efforts to purchase and install this technology are delayed, then FAA may have to maintain and operate some of its radars longer than expected. In addition, to fully realize NextGen’s capabilities, facilities that handle air traffic control must be reconfigured.
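A rough cost-of-delay figure can be derived from the ERAM numbers above; the sketch below is an approximation that treats the reported $18 million per year as a constant rate over the slip from December 2010 to August 2014 (the month-end dates are assumptions made for the calculation).

```python
from datetime import date

# ERAM implementation slipped from December 2010 to August 2014,
# adding an estimated $18 million per year to keep the legacy
# system running (figures as reported above).
planned_completion = date(2010, 12, 31)   # assumed month-end date
revised_completion = date(2014, 8, 31)    # assumed month-end date
extra_maintenance_per_year = 18e6

slip_years = (revised_completion - planned_completion).days / 365.25
extra_maintenance = slip_years * extra_maintenance_per_year

print(f"Schedule slip: about {slip_years:.1f} years")
print(f"Added legacy-maintenance cost: about ${extra_maintenance / 1e6:.0f} million")
```

On top of the $330 million acquisition cost increase, the slip implies roughly $66 million in added legacy-maintenance cost alone, before accounting for the knock-on delays to other NextGen acquisitions that depend on ERAM.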
In November 2011, FAA approved an initial plan to consolidate en route centers and terminal radar approach-control facilities (TRACONs) into large, integrated facilities over the next two decades. However, FAA has yet to make key decisions on how to proceed with this consolidation, and has delayed its decision on where to build the first integrated facility until June 2013. While FAA develops its facilities plan, it faces the immediate task of maintaining and repairing existing facilities so that the current air traffic control system continues to operate safely and reliably during the NextGen transition. According to FAA, in 2011, 65 percent of its terminal facilities and 74 percent of its en route facilities were in either poor or fair condition, with a total deferred-maintenance backlog of $310 million for these facilities. Once FAA develops and implements a facility consolidation plan, it can identify which legacy facilities to repair and maintain and, in doing so, potentially reduce overall facility repair and maintenance costs. FAA has acknowledged the need to keep long-term plans in mind so that it does not invest unnecessarily in facilities that will not be used for NextGen. Although NextGen is projected to keep delays at many airports from getting worse than would be expected without these improvements, NextGen alone is not likely to sufficiently expand the capacity of the national airspace system. For example, FAA's NextGen modeling indicates that even if all ongoing and planned NextGen technologies are implemented, 14 airports—including some of the 35 busiest—may not be able to meet the projected increases in demand (table 1). The transformation to NextGen will also depend on the ability of airports to handle greater capacity. For example, decisions regarding using existing capacity more efficiently include certifying and approving standards that allow the use of closely spaced parallel runways. 
At some airports, policies may need to be developed to address situations where demand exceeds capacity (e.g., through pricing, administrative rules, or service priorities). Infrastructure projects to increase capacity, such as building additional runways, can be lengthy processes and will require substantial advance planning as well as safety and cost analyses. Also, the improved efficiency in runway and airspace use that should result from some NextGen technologies may exacerbate other airport capacity constraints, such as taxiways, terminal gates, or parking areas. Finally, increasing capacity must be handled within the context of limiting increases in emissions and noise that can affect the communities around airports. DOT relies extensively on more than 400 computerized information systems to carry out its financial and mission-related operations. Effective information security controls are required to ensure that financial and sensitive information is adequately protected from inadvertent or deliberate misuse, fraudulent use, and improper disclosure, modification, or destruction. Ineffective controls can also impair the accuracy, completeness, and timeliness of information used by management. The need for effective information security is further underscored by the evolving and growing cyber threats to federal systems and the increase in the number of security incidents reported by DOT and other federal agencies. DOT has been challenged to effectively protect its computer systems and networks. Our analysis of Office of Management and Budget (OMB), Office of Inspector General (OIG), and GAO reports shows that the department has not consistently implemented effective controls in accordance with National Institute of Standards and Technology (NIST) and OMB guidance in response to the Federal Information Security Management Act (FISMA). For example, in March 2012, OMB reported that DOT had a 44.2 percent compliance rate with certain FISMA requirements. 
Although this is a 14.4 percent increase from fiscal year 2010, it is still below the rates reported by many other major federal agencies. In addition, OMB reported that DOT's implementation of automated continuous-monitoring capabilities for asset management and for configuration management each covered less than 50 percent of the agency's information technology assets. Further, we have reported on the need for federal agencies, including DOT, to improve their workforce planning, hiring, and development activities for cybersecurity personnel. We recommended that DOT, among other things, update its departmentwide cybersecurity workforce plan or ensure that departmental components have plans that fully address gaps in critical skills and competencies and that support requirements for its cybersecurity workforce strategies. The department neither agreed nor disagreed with our recommendations. In summary, as the principal agency responsible for implementing national transportation policy and administering most federal transportation programs, DOT faces several key challenges going forward in leveraging surface transportation investments, improving surface and aviation transportation safety, effectively implementing NextGen, and improving information security. Addressing these challenges in an environment of increasing need and increasing fiscal challenges will require looking at the entire range of federal activities and reexamining federal spending and tax expenditures to improve and enhance these systems that are vital to the nation's economy. Chairman Latham, Ranking Member Pastor, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. For further information on the statement, please contact Phillip R. Herr at (202) 512-2834 or [email protected]. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. 
Individuals making key contributions to this statement were Melissa Bodeau, Jonathan Carver, Steve Cohen, Matthew Cook, Gerald Dillingham, Susan Fleming, Judy Guilliams-Tapia, Brandon Haller, Nicole Jarvis, Heather Krause, Hannah Laufe, Edward Laughlin, Joanie Lofgren, Maureen Luna-Long, Heather MacLeod, Maria Mercado, SaraAnn Moessbauer, Sara Vermillion, Dave Wise, Gregory Wilshusen, and Susan Zimmerman.

High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 2013.

Unmanned Aircraft Systems: Continued Coordination, Operational Data, and Performance Standards Needed to Guide Research and Development. GAO-13-346T. Washington, D.C.: February 15, 2013.

Pipeline Safety: Better Data and Guidance Needed to Improve Pipeline Operator Incident Response. GAO-13-168. Washington, D.C.: January 23, 2013.

Highway Trust Fund: Pilot Program Could Help Determine Viability of Mileage Fees for Certain Vehicles. GAO-13-77. Washington, D.C.: December 13, 2012.

The Federal Government's Long-Term Fiscal Outlook, Fall 2012 Update. GAO-13-148SP. Washington, D.C.: December 3, 2012.

Maritime Infrastructure: Opportunities Exist to Improve the Effectiveness of Federal Efforts to Support the Marine Transportation System. GAO-13-80. Washington, D.C.: November 13, 2012.

General Aviation Safety: Additional FAA Efforts Could Help Identify and Mitigate Safety Risks. GAO-13-36. Washington, D.C.: October 4, 2012.

Next Generation Air Transportation System: FAA Faces Implementation Challenges. GAO-12-1011T. Washington, D.C.: September 12, 2012.

Surface Transportation: Financing Program Could Benefit from Increased Performance Focus and Better Communication. GAO-12-641. Washington, D.C.: June 21, 2012.

Highway Infrastructure: Federal-State Partnership Produces Benefits and Poses Oversight Risks. GAO-12-474. Washington, D.C.: April 26, 2012.

Pipeline Safety: Collecting Data and Sharing Information on Federally Unregulated Gathering Pipelines Could Help Enhance Safety. GAO-12-388. Washington, D.C.: March 22, 2012.

Motor Carrier Safety: New Applicant Reviews Should Expand to Identify Freight Carriers Evading Detection. GAO-12-364. Washington, D.C.: March 22, 2012.

2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012.

Air Traffic Control Modernization: Management Challenges Associated with Program Costs and Schedules Could Hinder NextGen Implementation. GAO-12-223. Washington, D.C.: February 16, 2012.

Cybersecurity Human Capital: Initiatives Need Better Planning and Coordination. GAO-12-8. Washington, D.C.: November 29, 2011.

Highway Emergency Relief: Strengthened Oversight of Eligibility Decisions Needed. GAO-12-45. Washington, D.C.: November 8, 2011.

Aviation Safety: Enhanced Oversight and Improved Availability of Risk-Based Data Could Further Improve Safety. GAO-12-24. Washington, D.C.: October 5, 2011.

Highway Trust Fund: All States Received More Funding Than They Contributed in Highway Taxes from 2005 to 2009. GAO-11-918. Washington, D.C.: September 8, 2011.

NextGen Air Transportation System: Mechanisms for Collaboration and Technology Transfer Could Be Enhanced to More Fully Leverage Partner Agency and Industry Resources. GAO-11-604. Washington, D.C.: June 30, 2011.

Surface Transportation: Competitive Grant Programs Could Benefit from Increased Performance Focus and Better Documentation of Key Decisions. GAO-11-234. Washington, D.C.: March 30, 2011.

Intercity Passenger Rail: Recording Clearer Reasons for Awards Decisions Would Improve Otherwise Good Grantmaking Practices. GAO-11-283. Washington, D.C.: March 10, 2011.

Rail Transit: FTA Programs Are Helping Address Transit Agencies' Safety Challenges, but Improved Performance Goals and Measures Could Better Focus Efforts. GAO-11-199. Washington, D.C.: January 31, 2011.

Statewide Transportation Planning: Opportunities Exist to Transition to Performance-Based Planning and Federal Oversight. GAO-11-77. Washington, D.C.: December 15, 2010.

NextGen Air Transportation System: FAA's Metrics Can Be Used to Report on Status of Individual Programs, but Not of Overall NextGen Implementation or Outcomes. GAO-10-629. Washington, D.C.: July 27, 2010.

High Speed Rail: Learning From Service Start-ups, Prospects for Increased Industry Investment, and Federal Oversight Plans. GAO-10-625. Washington, D.C.: June 17, 2010.

Aviation Safety: Improved Data Quality and Analysis Capabilities Are Needed as FAA Plans a Risk-Based Approach to Safety Oversight. GAO-10-414. Washington, D.C.: May 6, 2010.

Traffic Safety Data: State Data System Quality Varies and Limited Resources and Coordination Can Inhibit Further Progress. GAO-10-454. Washington, D.C.: April 15, 2010.

Rail Transit: Observations on FTA's State Safety Oversight Program and Potential Change in Its Oversight Role. GAO-10-314T. Washington, D.C.: December 10, 2009.

Metropolitan Planning Organizations: Options Exist to Enhance Transportation Planning Capacity and Federal Oversight. GAO-09-868. Washington, D.C.: September 9, 2009.

Federal-Aid Highways: FHWA Has Improved Its Risk Management Approach, but Needs to Improve Its Oversight of Project Costs. GAO-09-751. Washington, D.C.: July 24, 2009.

Public Transportation: FTA's Triennial Review Program Has Improved, But Assessments of Grantees' Performance Could Be Enhanced. GAO-09-603. Washington, D.C.: June 30, 2009.

High Speed Passenger Rail: Future Development Will Depend on Addressing Financial and Other Challenges and Establishing a Clear Federal Role. GAO-09-317. Washington, D.C.: March 19, 2009.

Surface Transportation: Clear Federal Role and Criteria-Based Selection Process Could Improve Three National and Regional Infrastructure Programs. GAO-09-219. Washington, D.C.: February 6, 2009.

Surface Transportation: Restructured Federal Approach Needed for More Focused, Performance-Based, and Sustainable Programs. GAO-08-400. Washington, D.C.: March 6, 2008.

Freight Transportation: National Policy and Strategies Can Help Improve Freight Mobility. GAO-08-287. Washington, D.C.: January 7, 2008.

Railroad Bridges and Tunnels: Federal Role in Providing Safety Oversight and Freight Infrastructure Investment Could Be Better Targeted. GAO-07-770. Washington, D.C.: August 6, 2007.

Intermodal Transportation: DOT Could Take Further Actions to Address Intermodal Barriers. GAO-07-718. Washington, D.C.: June 20, 2007.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The nation's transportation system--including highways, airways, pipelines, and rail systems that move both people and freight--is critical to the economy and affects the daily lives of most Americans. However, this system is under growing strain, and estimates of the cost to repair and upgrade the system to meet current and future demands are in the hundreds of billions of dollars. At the same time, traditional funding sources--in particular motor fuel and truck-related taxes--are eroding, and the federal government faces long-term fiscal challenges. Addressing these challenges will require looking across federal activities and reexamining all types of federal spending and tax expenditures. DOT is the principal agency responsible for implementing national transportation policy and administering most federal transportation programs. This statement discusses four key management challenges facing DOT: (1) leveraging surface transportation investments to further national interests, (2) improving surface and aviation transportation safety, (3) effectively implementing the Next Generation Air Transportation System, and (4) improving information security. This statement is based on GAO's previous reports and testimonies, which are listed at the end of the statement. GAO has made a number of recommendations to DOT to more effectively leverage the department's investments and enhance the safety of the traveling public, among other areas. DOT actions underway to address these recommendations are described in this statement. Leveraging surface transportation investments to further national interests: The Department of Transportation (DOT) faces several challenges leveraging investment in surface transportation networks to meet national goals and priorities. For example, DOT has to transition to a goal-oriented, performance-based approach for highway and transit programs, as required by the Moving Ahead for Progress in the 21st Century Act (MAP-21). 
Successfully implementing a performance-based approach entails new responsibilities for DOT since, as GAO has previously reported, its program oversight has generally been process-oriented rather than outcome-oriented. DOT also faces challenges related to targeting funds to priorities like the nation's freight network, effectively managing discretionary grant and credit assistance programs, and effectively overseeing other programs, such as the federal-aid highway program. Improving surface and aviation transportation safety: GAO's recent work on safety across all modes has highlighted the need for improved data reliability and oversight. For example, data are critical for identifying commercial motor vehicles that pose the highest safety concerns. In 2012, GAO recommended that the Federal Motor Carrier Safety Administration (FMCSA) develop a data-driven approach to target carriers operating illegally by attempting to disguise their former identities and expand this approach to examine all new motor carriers. FMCSA is currently working to develop such a data-driven approach. Aviation safety data collection and oversight can also be improved. For example, limitations in flight activity (e.g., flight hours) and other data preclude a complete assessment of general aviation safety. GAO recommended, among other things, that the Federal Aviation Administration (FAA) require the collection of general aviation aircraft flight-hour data in ways that minimize the impact on the general aviation community and set safety improvement goals for individual general aviation industry segments, which FAA is working to address. Effectively implementing the Next Generation Air Transportation System (NextGen): NextGen is intended to transform the current radar-based system to an aircraft-centered, satellite navigation-based system. FAA faces three key challenges going forward. 
One challenge is delivering procedures and capabilities that provide aircraft operators with a return on investment in NextGen avionics to incentivize further investments. FAA also faces challenges keeping key NextGen acquisitions within cost estimates and on schedule. NextGen implementation will be affected by how well FAA manages the program's interdependencies, as delays in one program can affect time frames for other programs and overall acquisition and maintenance costs. Finally, FAA faces challenges managing the transition to NextGen. FAA will have to balance its priorities to ensure that NextGen implementation stays on course while continuing to maintain current equipment and facilities. FAA's modeling indicates that even if all NextGen technologies are implemented, 14 airports--including some of the 35 busiest--may not be able to meet projected increases in demand. Improving information security: DOT faces challenges effectively protecting its computer systems and networks. GAO and others have found that DOT has not consistently implemented effective controls to ensure that financial and sensitive information is adequately protected from unauthorized access and other risks.
DI and SSI provide cash benefits to people with long-term disabilities. While the definition of disability and the process for determining disability are the same for both programs, the programs were initially designed to serve different populations. The DI program, enacted in 1954, provides monthly cash benefits to disabled workers—and their dependents or survivors—whose employment history qualifies them for disability insurance. These benefits are financed through payroll taxes paid by workers and their employers and by the self-employed. In fiscal year 2001, more than 6 million individuals received more than $59 billion in DI benefits. SSI, on the other hand, was enacted in 1972 as an income assistance program for aged, blind, or disabled individuals whose income and resources fall below a certain threshold. SSI payments are financed from general tax revenues, and SSI beneficiaries are usually poorer than DI beneficiaries. In 2001, more than 6 million individuals received almost $28 billion in SSI benefits. The process to obtain SSA disability benefits is complex and fragmented; multiple organizations are involved in determining whether a claimant is eligible for benefits. As shown in figure 1, the current process consists of an initial decision and up to three levels of administrative appeals if the claimant is dissatisfied with SSA's decision. Each level of appeal involves multistep procedures for evidence collection, review, and decision-making. Generally, a claimant applies for disability benefits at one of SSA's 1,300 field offices across the country, where a claims representative determines whether the claimant meets financial and other program eligibility criteria. If the claimant meets these eligibility criteria, the claims representative forwards the claim to the state disability determination service (DDS). DDS staff then obtain and review evidence about the claimant's impairment to determine whether the claimant is disabled. 
Once the claimant is notified of the medical decision, the claim is returned to the field office for payment processing or file retention. This completes the initial claims process. Claimants who are initially denied benefits can ask to have the DDS reconsider its initial denial. If the decision at this reconsideration level remains unfavorable, the claimant can request a hearing before a federal administrative law judge (ALJ) at an SSA hearings office, and, if still dissatisfied, the claimant can request a review by SSA's Appeals Council. Upon exhausting these administrative remedies, the individual may file a complaint in federal district court. Given its complexity, the disability claims process can be confusing, frustrating, and lengthy for claimants. Many individuals who appeal SSA's initial decision will wait a year or longer for a final decision on their benefit claims. In fact, the commissioner recently testified that claimants can wait as long as 1,153 days from initial claim through a decision from the Appeals Council. Moreover, the claims process can also result in inconsistent assessments of whether claimants are disabled; specifically, the DDS may deny a claim that is later allowed upon appeal. For example, in fiscal year 2000, about 40 percent of claimants denied at the initial level filed an appeal, and about two-thirds of those appeals resulted in awards. This inconsistency calls into question the fairness, integrity, and cost of SSA's disability decisions. Program rules, such as claimants' ability to submit additional evidence and to allege new impairments upon appeal, as well as the worsening of some claimants' conditions over time, can explain some, but not all, of the overturned cases. Other overturned cases may be due to inaccurate decisions by the DDSs or ALJs or to other unexplained factors. 
In response to these problems, SSA first announced an ambitious plan to redesign the disability claims process in 1994, after a period of rapid growth in the number of people applying for disability benefits. This plan represented the agency's first effort to significantly revise its procedures for deciding disability claims since the DI program began in the 1950s. The overall purpose of the redesign was to ensure that decisions are made quickly, ensure that the disability claims process is efficient, award legitimate claims as early in the process as possible, ensure that the process is user friendly for claimants, and provide employees with a satisfying work environment. The agency's initial plan entailed a massive effort to redesign the way it made disability decisions. SSA had high expectations for its redesign effort. Among other things, SSA planned to develop a streamlined decision-making and appeals process, more consistent guidance and training for decision makers at all levels of the process, and an improved process for reviewing the quality of eligibility decisions. In our reviews of SSA's efforts after 2 and 4 years, we found that the agency had accomplished little. In some cases, the plans were too large and too complex to keep on track. In addition, the results of many of the initiatives that were tested fell far short of expectations. Moreover, the agency was not able to garner consistent stakeholder support and cooperation for its proposed changes. In 1999, we recommended that SSA focus attention and resources on those initiatives that offer the greatest potential for achieving the most critical redesign objectives, such as quality assurance, computer support systems, and initiatives that improve consistency in decision-making. 
In addition, because implementing process changes can be even more difficult than testing them, we recommended that SSA develop a comprehensive and meaningful set of performance measures that help the agency assess and monitor the results of changes in the claims process on a timely basis. We have also pointed out the need for effective leadership and sustained management attention to maintain the momentum needed to effect change in such a large and complex system. SSA's five most recent initiatives were designed to improve claims processing at all levels of the service delivery system. These redesign initiatives continue to experience only limited success. A brief summary of the status, results, and problems experienced in implementing each of the five initiatives follows. The Disability Claim Manager initiative, which began in November 1997 and ended in June 2001, was designed to make the claims process more user friendly and efficient by eliminating steps resulting from numerous employees handling discrete parts of the claim. It did so by having one person—the disability claim manager—serve as the primary point of contact for claimants until initial decisions were made on their claims. The managers assumed responsibilities normally divided between SSA's field office claims representatives and state DDS disability examiners. After an initial training phase, SSA tested the concept in 36 locations in 15 states from November 1999 through November 2000. While the test resulted in several benefits, such as improved customer and employee satisfaction and quicker claims processing, the increased costs of the initiative and other concerns convinced SSA not to implement the initiative. The Prototype changed the way state DDSs process initial claims, with the goal of ensuring that legitimate claims are awarded as early in the process as possible. This initiative makes substantial changes to the way the DDS processes initial claims. 
The Prototype requires disability examiners to more thoroughly document and explain the basis for their decisions, and it gives them greater decisional authority for certain claims. The Prototype also eliminates the DDS reconsideration step. It has been operating in 10 states since October 1999 with mixed results. Interim results show that the DDSs operating under the Prototype are awarding a higher percentage of claims at the initial decision level without compromising accuracy, and that claims are reaching hearing offices faster because the Prototype eliminates DDS reconsideration as the first level of appeal. However, interim results also indicate that more denied claimants would appeal to ALJs at hearings offices, which would increase both administrative and program costs (benefit payments) and lengthen the wait for final agency decisions for many claimants. As a result, SSA decided that the Prototype would not continue in its current form. In April, the commissioner announced her "short-term" decisions to revise certain features of the Prototype in order to reduce processing time while it continues to develop longer-term improvements. It remains to be seen whether these revisions will retain the positive results from the Prototype while also controlling administrative and program costs. The Hearings Process Improvement initiative is an effort to overhaul operations at hearings offices in order to reduce the time it takes to issue decisions on appealed claims. This was to be accomplished by increasing the level of analysis and screening done on a case before it is scheduled for a hearing with an ALJ; by reorganizing hearing office staff into small "processing groups" intended to enhance accountability and control in handling each claim; and by launching automated functions that would facilitate case monitoring. 
The initiative was implemented in phases without a test beginning in January 2000 and has been operating in all 138 hearings offices since November 2000. The initiative has not achieved its goals. In fact, decisions on appealed claims are taking longer to make, fewer decisions are being made, and the backlog of pending claims is growing and approaching crisis levels. The initiative's failure can be attributed primarily to SSA's decision to implement large-scale changes too quickly without resolving known problems. For example, problems with process delays, poorly timed and insufficient staff training, and the absence of the planned automated functions all surfaced during the first phase of implementation and were not resolved before the last two phases were implemented. Instead, the pace of implementation was accelerated when the decision was made to implement the second and third phases at the same time. Additional factors, such as a freeze on hiring ALJs and the ALJs' mixed support for the initiative, may also have contributed to the initiative's failure to achieve its intended results. SSA has recently made some decisions to implement changes that can be made relatively quickly in order to help reduce backlogs and to streamline the hearings process, and it is preparing to negotiate some of these changes with union officials before implementing them. These changes include creating a law clerk position, allowing ALJs to issue decisions from the bench immediately after a hearing, and including ALJs in the early screening of cases for on-the-record decisions. They also include decisions to enhance the use of technology in the hearings process, as well as other refinements. The Appeals Council Process Improvement initiative combined temporary staff support with permanent case processing changes in an effort to process cases faster and to reduce the backlog of pending cases. The initiative was implemented in fiscal year 2000 with somewhat positive results. 
The initiative has slightly reduced both case processing time and the backlog of pending cases, but the results fall significantly short of the initiative's goals. The temporary addition of outside staff to help process cases did not fulfill expectations, and automation problems, along with policy changes that made cases with certain characteristics more difficult to resolve, hindered the initiative's success. However, SSA officials believe that recent management actions to resolve these problems should enhance future progress. Improving or revamping its quality assurance system has been an agency goal since 1994, yet it has made very little progress in this area, in part because of disagreement among stakeholders on how to accomplish this difficult objective. In March 2001, a contractor issued a report assessing SSA's existing quality assurance practices and recommended a significant overhaul to encompass a more comprehensive view of quality management. We agreed with this assessment and in our recent report to this subcommittee recommended that SSA develop an action plan for implementing a more comprehensive and sophisticated quality assurance program. Since then, the commissioner has signaled the high priority she attaches to this effort by appointing to her staff a senior manager for quality who reports directly to her. The senior manager, in place since mid-April, is responsible for developing a proposal to establish a quality-oriented approach to all SSA business processes. The manager is currently assembling a team to carry out this challenging undertaking. SSA's slow progress in achieving technological improvements has contributed, at least in part, to SSA's lack of progress in achieving results from its redesign initiatives. As originally envisioned, SSA's plan to redesign its disability determination process was heavily dependent upon these improvements. 
The agency spent a number of years designing and developing a new computer software application to automate the disability claims process. However, SSA decided to discontinue the initiative in July 1999, after about 7 years, citing software performance problems and delays in developing the software. In August 2000, SSA issued a new management plan for the development of the agency's electronic disability system. SSA expects this effort to move the agency toward a totally paperless disability claims process. The strategy consists of several key components, including (1) an electronic claims intake process for the field offices, (2) enhanced state DDS claims processing systems, and (3) technology to support the Office of Hearings and Appeals' business processes. The components are to be linked to one another through the use of an electronic folder that is being designed to transmit data from one processing location to another and to serve as a data repository, storing documents that are keyed in, scanned, or faxed. SSA began piloting certain components of its electronic disability system in one state in May 2000 and has expanded this pilot test to one more state since then. According to agency officials, SSA has taken various steps to increase the functionality of the system; however, the agency still has a number of remaining issues to address. For example, SSA's system must comply with privacy and data protection standards required under the Health Insurance Portability and Accountability Act, and the agency will need to effectively integrate its existing legacy information systems with new technologies, including interactive Web-based applications. SSA is optimistic that it will achieve a paperless disability claims process. The agency has taken several actions to ensure that its efforts support the agency's mission. 
For example, to better ensure that its business processes drive its information technology strategy, SSA has transferred management of the electronic disability strategy from the Office of Systems to the Office of Disability and Income Security Programs. In addition, SSA hired a contractor to independently evaluate the electronic disability strategy and recommend options for ensuring that the effort addresses all of the business and technical issues required to meet the agency’s mission. More recently, the commissioner announced plans to accelerate implementation of the electronic folder. In spite of the significant resources SSA has dedicated to improving the disability claims process since 1994, the overall results have been disappointing. We recognize that implementing sweeping changes such as those envisioned by these initiatives can be difficult to accomplish successfully, given the complexity of the decision-making process, the agency’s fragmented service delivery structure, and the challenge of overcoming an organization’s natural resistance to change. But the factors that led SSA to attempt the redesign—increasing disability workloads in the face of resource constraints—continue to exist today and will likely worsen when SSA experiences a surge in applications as more baby boomers reach their disability-prone years. Today, SSA management continues to face crucial decisions on its initiatives. We agree that SSA should not implement the Disability Claim Manager at this time, given its high costs and other practical barriers to implementation. We also agree that the Appeals Council Process Improvement initiative should continue, but with increased management focus and commitment to achieve the initiative’s performance goals. Deciding the future course of action on each of the remaining three initiatives presents a challenge to SSA. For example, SSA continues to face decisions on how to proceed with the Prototype initiative. 
Although SSA has recently decided to revise some features of the Prototype in the near term, it also is considering long-term improvements. As such, SSA continues to face the challenge of ensuring that the revisions it makes retain the Prototype’s most positive elements while also reducing its impact on costs. We are most concerned about the failure of the Hearings Process Improvement initiative to achieve its goals. Hearing office backlogs are fast approaching the crisis levels of the mid-1990s. We have recommended that the new commissioner act quickly to implement short-term strategies to reduce the backlog and develop a long-term strategy for a more permanent solution to the backlog and efficiency problems at the Office of Hearings and Appeals. The new commissioner responded by announcing her decisions on short-term actions intended to reduce the backlogs, and the agency is preparing to negotiate with union officials on some of these planned changes. It is too early to tell if these decisions will have their intended effect, and the challenge to identify and implement a long-term strategy for a more permanent solution remains. It is especially crucial that the Office of Hearings and Appeals make significant headway in reducing its backlog quickly, as it faces in the next several months a potentially significant increase in Medicare appeals due to recent legislative changes in that program. In addition to the changes the agency is currently considering, it may be time for the agency to step back and reassess the nature and scope of its basic approach. SSA has focused significant energy and resources over the past 7 years on changing the steps and procedures of the process and adjusting the duties of its decision makers, yet this approach has not been effective to date. A new analysis of the fundamental issues impeding progress may help SSA identify areas for future action. 
Experts, such as members of the Social Security Advisory Board, have raised concerns about certain systemic problems that can undermine the overall effectiveness of SSA’s claims process, which in turn can also undermine the effectiveness of SSA’s redesign efforts. The Board found that SSA’s fragmented disability administrative structure, created nearly 50 years ago, is ill-equipped to handle today’s workload. Among other problems, it identified the lack of clarity in SSA’s relationship with the states and an outdated hearing process fraught with tension and poor communication. As the new commissioner charts the agency’s future course, she may need to consider measures to address these systemic problems as well. Regardless of the choices the agency makes about which particular reform initiatives to pursue, SSA’s experience over the past 7 years offers some important lessons. For example, sustained management oversight is critical, particularly in such a large agency and with such a complex process. We have found that perhaps the single most important element of successful management improvement initiatives is the demonstrated commitment of top leaders to change. In addition, some initiatives have not enjoyed stakeholder support or have contributed to poor morale in certain offices, both of which may undermine the chances for success. While it is probably not possible for the agency to fully please all of its stakeholders, it will be important for the agency to involve stakeholders in planning for change, where appropriate, and to communicate openly and often the need for change and the rationale for agency decisions. Moreover, because SSA has experienced problems implementing its process changes, the agency will need to continue to closely monitor the results of its decisions and watch for early signs of problems. 
An improved quality assurance process and a more comprehensive set of performance goals and measures can help the agency monitor its progress and hold different entities accountable for their part in implementing change and meeting agency goals. Thus, we are concerned about SSA’s lack of progress in revamping its quality assurance system. Without such a system, it is difficult for SSA to ensure the integrity of its disability claims process. Finally, because SSA has had mixed success in implementing information technology initiatives in the past, it is vital that the agency look back at its past problems and take the necessary steps to make sure its electronic disability system provides the needed support to the disability claims process. It is imperative that the agency effectively identify, track, and manage the costs, benefits, schedule, and risks associated with the system’s full development and implementation. Moreover, SSA must ensure that it has the right mix of skills and capabilities to support this initiative and that desired end results are achieved. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have. For further information regarding this testimony, please contact Robert E. Robertson, Director, or Kay E. Brown, Assistant Director, Education, Workforce, and Income Security at (202) 512-7215. Ellen Habenicht and Angela Miles made key contributions to this testimony on the status of the five initiatives, and Valerie Melvin was the key contributor to the section on information technology.
This testimony discusses Social Security Administration (SSA) improvements in the claims process for its two disability programs, Disability Insurance (DI) and Supplemental Security Income (SSI). Managing its disability caseloads with fair, consistent, and timely eligibility decisions in the face of resource constraints has become one of SSA's most pressing management challenges. SSA has spent more than $39 million over the past 7 years to test and implement initiatives designed to improve the timeliness, accuracy, and consistency of its disability decisions and to make the process more efficient and understandable for claimants. These have included efforts to improve the initial claims process as well as the handling of appeals of denied claims. The results to date have been disappointing. SSA's two tests to improve the initial claims process produced some benefits; however, both initiatives as tested would have significantly raised costs, and one would have lengthened the wait for final decisions for many claimants. As a result, SSA is considering additional changes to one of these initiatives and has shelved the other. One initiative to change the process for handling appealed claims in SSA's hearing offices has resulted in even slower case processing and larger backlogs of pending claims. A second initiative has reduced the processing times for a separate group of appealed claims, though far less than expected. Moreover, a cross-cutting initiative to update SSA's quality assurance program--a goal SSA has held since 1994--is still in the planning stage. Finally, SSA's plans to improve its disability claims process relied upon hoped-for technological improvements. However, SSA abandoned a 7-year effort to design and develop a software application to automate the disability claims process.
A U.S. passport is not only a travel document but also an official verification of the bearer’s origin, identity, and nationality. Under U.S. law, the Secretary of State has the authority to issue passports. Only U.S. nationals may obtain a U.S. passport, and evidence of citizenship or nationality is required with every passport application. For individuals 16 or older, a regular U.S. passport issued on or after February 1, 1998, is valid for 10 years from the date of issue; it is valid for 5 years for younger applicants. Federal regulations list those who do not qualify for a U.S. passport, including those who are subjects of a federal felony warrant. The Deputy Assistant Secretary for Passport Services oversees the Passport Services Office, within State’s Consular Affairs Bureau. Passport Services, the largest component of Consular Affairs, consists of three headquarters offices: Policy Planning and Legal Advisory Services; Field Operations; and Information Management and Liaison. The Office of Consular Fraud Prevention addresses passport, visa, and other types of consular fraud. The Consular Systems Division is responsible for the computer systems involved in passport services and other consular operations. The Office for American Citizens Services handles most issues relating to passport cases at overseas posts. The Bureau of Diplomatic Security is responsible for investigating individual cases of suspected passport and visa fraud. The State Department Office of the Inspector General (OIG) also has some authority to investigate passport fraud. Figure 1 shows the key State Department units involved in passport-related operations. State operates 16 passport-issuing offices in Boston; Charleston, South Carolina; Chicago; Honolulu; Houston; Los Angeles; Miami; New Orleans; New York; Norwalk, Connecticut; Philadelphia; Portsmouth, New Hampshire; San Francisco; Seattle; and Washington, D.C. 
These 16 offices employ the approximately 480 passport examiners who are responsible for approving and issuing most of the U.S. passports that are printed each year. The number of passports issued by domestic passport offices has risen steadily in recent years, increasing from about 7.3 million in fiscal year 2000 to 8.8 million in fiscal year 2004. Overseas posts deal with a much lower volume of passports by comparison, handling about 300,000 worldwide in fiscal year 2004. With only a few exceptions, applications submitted and approved overseas are transmitted electronically to a domestic passport office to be printed. The majority of passport applications are submitted by mail or in person at one of almost 7,000 passport application acceptance facilities nationwide. Passport acceptance facilities are located at certain U.S. post offices, courthouses, and other institutions and do not employ State Department personnel. The passport acceptance agents at these facilities are responsible for, among other things, verifying whether an applicant’s identification document (such as a driver’s license) actually matches that applicant. After their information is entered and payments are processed by a State Department contractor, Mellon Bank, in Pennsylvania, applications go to a passport office to be examined. Through a process called adjudication, passport examiners determine whether they should issue each applicant a passport. Adjudication requires the examiner to scrutinize identification and citizenship documents presented by applicants to verify their identity and U.S. citizenship. It also includes the examination of an application to detect potential indicators of passport fraud and the comparison of the applicant’s information against databases that help identify individuals who may not qualify for a U.S. passport. When passport applications are submitted by mail or through acceptance facilities, examiners adjudicate the applications at their desks. 
A relatively small percentage of the total number of passport applications are submitted directly by applicants at one of State’s domestic passport-issuing offices. Applicants are required to demonstrate imminent travel plans to set an appointment for such services at one of the issuing office’s public counters. “Counter” adjudication allows examiners to question applicants directly or request further information on matters related to the application, while “desk” adjudication requires telephoning or mailing the applicants in such cases. Figure 2 depicts the typical passport application and adjudication process. The passport adjudication process is facilitated by computer systems, including the Travel Document Issuance System (TDIS), which appears on passport examiners’ screens when the adjudication begins. Figure 3 identifies the key computer databases available to help examiners adjudicate passport applications and detect potential fraud. TDIS automatically checks the applicant’s name against several databases. These include State’s Consular Lookout and Support System (CLASS), which contains information provided by various offices within State and information on outstanding criminal warrants provided by the U.S. Marshals Service, the FBI, and other state and federal agencies, as well as a Health and Human Services database, which identifies parents who have been certified by a state agency as owing more than $5,000 in child support and who therefore are not eligible for a passport. If TDIS indicates the applicant may have applied for a passport at another agency or been issued a U.S. passport within the last 10 years, it prompts the examiner to reference computer databases outside of TDIS to determine whether the prompt refers to the applicant or to someone who merely resembles the applicant. In addition, examiners scrutinize paper documents and other relevant information during the fraud detection process. 
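The screening steps just described (a CLASS lookout check, the HHS child-support certification, and a prompt when a prior passport record may match) amount to a sequence of database lookups that an examiner must resolve before issuance. The sketch below is a minimal illustration of that flow, not State's actual TDIS logic; every name, list, and rule here is an invented placeholder.

```python
# Hypothetical sketch of the automated screening TDIS performs during
# adjudication. Real record formats, databases, and rules differ.

CLASS_LOOKOUTS = {"DOE, JOHN"}               # lookout entries (warrants, etc.)
CHILD_SUPPORT_CERTIFIED = {"ROE, RICHARD"}   # certified as owing > $5,000
PRIOR_PASSPORT_HOLDERS = {"SMITH, JANE"}     # passport issued in last 10 years

def screen_applicant(name: str) -> list[str]:
    """Return the flags an examiner would have to resolve before issuance."""
    flags = []
    if name in CLASS_LOOKOUTS:
        flags.append("CLASS lookout hit")
    if name in CHILD_SUPPORT_CERTIFIED:
        flags.append("HHS child-support certification (> $5,000)")
    if name in PRIOR_PASSPORT_HOLDERS:
        # In practice this prompts the examiner to consult databases outside
        # TDIS to decide whether the prior record is really the same person.
        flags.append("possible prior passport within 10 years")
    return flags

print(screen_applicant("DOE, JOHN"))   # ['CLASS lookout hit']
print(screen_applicant("LEE, ANNA"))   # []
```

An empty result means no automated flag was raised; as the testimony notes, examiners still scrutinize the paper documents themselves.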
Examiners compare the application submitted by the applicant to the information on the screen to make sure the information was entered properly, check for missing information, and analyze application documentation for various types of fraud indicators. In addition, examiners watch for suspicious behavior and travel plans. Examiners and acceptance agents are instructed to request additional proof of identification if they feel the documents presented are insufficient. State officials said that in such cases, some individuals abandon the application, and the names of those who do are placed in State’s name-check system and are more stringently scrutinized if they apply again. When examiners detect potentially fraudulent passport applications, they send the applications to their local fraud prevention office for review and potential referral to State’s Bureau of Diplomatic Security for further investigation. Using the stolen identities and documentation of U.S. citizens is the primary tactic of those fraudulently applying for U.S. passports. Applicants also commit fraud through other means. Passport fraud is often linked to other crimes. State’s Bureau of Diplomatic Security investigators stated that imposters’ use of assumed identities, supported by genuine but fraudulently obtained identification documents, was a common and successful way to fraudulently obtain a U.S. passport. This method accounted for 69 percent of passport fraud detected in fiscal year 2004. Investigators found numerous examples of aliens and U.S. citizens obtaining U.S. passports using a false identity. One example identified by Diplomatic Security investigators involved an alien using another person’s identity to obtain a U.S. passport. In 2003, a woman using a fraudulent identity claimed to be born in Puerto Rico and provided a Puerto Rican birth certificate when applying for a passport at a clerk of the court office in Florida. She also provided a Florida driver’s license. 
Diplomatic Security investigators also found cases of U.S. citizens using the documentation of others to hide their true identity. In 1997, a naturalized U.S. citizen born in Cuba stole a Lear jet and transported it to Nicaragua for use in charter services. At the time of his arrest in 2003, he was using an assumed identity and possessed both false and legitimate but fraudulently obtained identification documents, including a U.S. passport in the name he used while posing as a certified pilot and illegally providing flight instruction. Seized at his residence when he was arrested were two Social Security cards, four driver’s licenses, three Puerto Rican birth certificates, one U.S. passport, one pilot identification card, numerous credit cards and checking account cards, and items used to make fraudulent documents. In October 2004, he pled guilty to knowingly possessing five or more “authentication devices” and false identification documents, for which he was sentenced to 8 months’ confinement. In another case, a man wanted for murdering his wife obtained a Colorado driver’s license and a passport using a friend’s Social Security number and date and place of birth. Three and four years later he obtained renewal and replacement passports, respectively, in the same assumed identity. He was later arrested and pled guilty to making a false statement in an application for a passport. He was sentenced to time served (about 7 months) and returned to California to stand trial for murdering his wife. In a third example, a woman obtained a U.S. passport for herself and her daughter using the assumed identity of a friend and that friend’s daughter. The individual fled the country, but was eventually caught, returned to the United States, and tried for forgery, criminal impersonation, and child abduction. 
Applicants commit passport fraud through other means, including submitting false claims of lost, stolen, or mutilated passports; child substitution; and counterfeit citizenship documents. Some fraudulently obtain new passports by claiming to have lost their passport or had it stolen or that it was damaged. For example, one individual who used another person’s Social Security number and Ohio driver’s license to report a lost passport obtained a replacement passport through the one-day expedited service. This fraudulently obtained passport was used to obtain entry into the United States 14 times in less than 3 years. Diplomatic Security officials told us that another means of passport fraud is when individuals obtain replacement passports by using expired passports containing photographs of individuals they closely resemble. This method of fraud is more easily and commonly committed with children, with false applications based on photographs of children who look similar to the child applicant. Assuming the identity of a deceased person is another means of fraudulently applying for a passport. Diplomatic Security investigated an individual who had been issued a passport in the identity of a deceased person and was receiving Social Security benefits in the deceased person’s name. The individual was charged with making false statements on a passport application. According to State Bureau of Diplomatic Security documents, passport fraud is often committed in connection with other crimes, including narcotics trafficking, organized crime, money laundering, and alien smuggling. According to Diplomatic Security officials, concerns exist within the law enforcement and intelligence communities that passport fraud could also be used to help facilitate acts of terrorism. Using a passport with a false identity helps enable criminals to conceal their movements and activities, according to a State Department document. U.S. 
passports provide their holders free passage into our country with much less scrutiny than is given to foreign citizens. U.S. passports also allow visa-free passage into many countries around the world, providing obvious benefits to criminals operating on an international scale. According to State officials, the most common crime associated with passport fraud is illegal immigration. For example, one woman was recently convicted of organizing and leading a large-scale passport fraud ring that involved recruiting American women to sell their children’s identities, so that foreign nationals could fraudulently obtain passports and enter the United States illegally. According to the Department of State, the woman targeted drug-dependent women and their children, paying them about $300 for each identity and then using the identities to apply for passports. The woman then sold the fraudulently obtained passports to illegal aliens for as much as $6,000 each. Other leaders of alien smuggling rings have also been recently convicted. One such ring had been smuggling hundreds of undocumented aliens from Ecuador and other parts of South America into the United States for fees of $12,000 to $14,000 each. State faces a number of challenges to its passport fraud detection efforts. Limited interagency information sharing between State and law enforcement and other agencies makes it more difficult to protect U.S. citizens from terrorists, criminals, and others who would harm the United States. Intra-agency information sharing between passport-issuing offices and headquarters, and between offices, is also limited because State lacks a centralized and up-to-date fraud library accessible by all staff. Additionally, insufficient fraud prevention staffing, training, and oversight have resulted in reduced fraud detection capabilities at the issuing offices. 
Finally, overstretched investigative resources within State’s Bureau of Diplomatic Security and Office of Inspector General have prevented investigators from devoting adequate time and continuity to passport fraud investigations. One of the key challenges to State’s fraud detection efforts is limited interagency information sharing. State does not have access to certain information in the Terrorist Screening Center’s (TSC) consolidated watch list database. Additionally, State’s CLASS name-check system does not include the names of all criminals wanted by federal and state law enforcement authorities. Further, access to information from other agencies varies. State currently lacks access to the names of U.S. citizen “persons of interest” in TSC’s consolidated terrorist watch list database. TSC was created in 2003 to improve information sharing among government agencies. By consolidating terrorist watch lists, TSC is intended to enable federal agencies to access critical information quickly when a suspected terrorist is encountered or stopped within the United States, at the country’s borders, or at embassies overseas. Because State’s CLASS name-check database for passports does not contain the TSC information, U.S. citizens with possible ties to terrorism could potentially obtain passports and travel internationally without the knowledge of appropriate authorities. Although TSC has been operational since December 1, 2003, State and TSC did not begin exploring the possibility of systematically uploading data from the TSC database into passport CLASS until December 2004. State initiated discussions with TSC after an official in State’s Passport Services Office attended an interagency meeting and became aware that information on certain U.S. citizens was available in the TSC database. A TSC official told us that the center had devoted substantial effort in the first 16 months of its operation to reaching out to federal agencies that could benefit from TSC information. 
However, efforts to prevent the entry of or locate foreign citizens who would do harm to the United States had been a higher immediate priority in the early stages of operation than information-sharing efforts involving U.S. citizens. The official also noted that, while TSC plays an outreach role with other agencies, it is up to the individual agencies involved to define their own specific information requirements. State and TSC have not reached an agreement on information sharing, although State sent an official proposal to TSC in January 2005. TSC has noted that it is in the process of addressing certain legal questions relating to privacy. A TSC official told us that she does not foresee any technical limitations because TSC already has an “elaborate interface” with State’s CLASS system for visas. She added that TSC agrees that it is important to work out an agreement with State. Because the FBI and other law enforcement agencies do not currently provide State with the names of all individuals wanted by federal law enforcement authorities, State’s CLASS name-check system does not contain the names of many federal fugitives, some wanted for murder and other violent crimes; these fugitives could therefore obtain passports and potentially flee the country. The subjects of federal felony arrest warrants are not entitled to a U.S. passport. According to FBI officials, FBI databases contain the names of approximately 37,000 individuals wanted on federal charges. State Department officials acknowledge that many of these individuals are not listed in CLASS. We tested the names of 43 different federal fugitives and found that just 23 were in CLASS; therefore, passport examiners would not be alerted about the individuals’ wanted status if any of the other 20 not in CLASS applied for a passport. One of these 20 did obtain a U.S. passport 17 months after the FBI had listed the individual in its database as wanted. 
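The test described here, checking whether each name on a wanted list actually appears in CLASS, is at bottom a simple set-membership audit. The sketch below illustrates the idea with entirely fictitious placeholder names, sized only to mirror the 43-name federal test; it is not the actual methodology or data used in the review.

```python
# Illustrative coverage audit: how many names from a wanted list are
# actually present in a lookout database. All data here is fictitious.

wanted_federal = ["FUGITIVE_%02d" % i for i in range(43)]  # 43 test names
class_entries = set(wanted_federal[:23])                   # only 23 made it in

hits = [name for name in wanted_federal if name in class_entries]
misses = [name for name in wanted_federal if name not in class_entries]

print(len(hits), len(misses))  # 23 20
```

The same audit run against a state-fugitive sample would reproduce the 24/7/17 split reported later in the testimony.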
A number of the 20 federal fugitives who were included in our test and were found not to be in CLASS were suspected of serious crimes, including murder. Table 1 lists the crimes of which the federal fugitives in our test were suspected. Fourteen were wanted by the FBI—including one on its Ten Most Wanted list (the names of all 14 were posted on the FBI’s Web site). Six other fugitives not in CLASS were wanted by other federal agencies—two by the U.S. Marshals Service; two by the Bureau of Alcohol, Tobacco, Firearms, and Explosives; and two by the U.S. Postal Service. State officials told us they had not initiated efforts to improve information sharing with the FBI on passport-related matters until the summer of 2004 because they had previously been under the impression that the U.S. Marshals Service was already sending to CLASS the names of all fugitives wanted by federal law enforcement authorities. The officials noted that the U.S. Marshals Service had been cooperative in providing names to CLASS from its main database of fugitives. However, prior to the summer of 2004, State officials were not aware that the information in the U.S. Marshals Service database was not as comprehensive as that contained in the FBI-operated National Crime Information Center database. State officials became aware of this situation when the union representing passport examiners brought to their attention that a number of individuals on the FBI’s Ten Most Wanted list were not in CLASS. In the summer of 2004, State requested, and the FBI agreed, to provide the names from the FBI’s Ten Most Wanted list, though State officials told us they often obtain this information by periodically checking the FBI’s Web site. As part of these discussions, State and FBI explored other information-sharing opportunities as well, and FBI headquarters officials sent a message instructing agents in its field offices how to provide names of U.S. 
citizens who are FBI fugitives (other than those from the Ten Most Wanted list) to State on a case-by-case basis. Additionally, State began discussions with the FBI about receiving information on individuals with FBI warrants on a more routine and comprehensive basis. During the most recent negotiations, in December 2004, FBI officials told State officials that they would need a written proposal outlining State’s specific technical and information needs, following which negotiations could begin to develop a formal agreement. One possibility that was discussed for additional name sharing was for the FBI to send State weekly extracts from its databases, while another possibility would be to give State officials the ability to access the FBI’s wanted-persons database. According to State, it sent a written request to the FBI outlining its needs in April 2005. State also noted that it had reached agreement in principle with the FBI on information-sharing efforts related to FBI fugitives. According to FBI officials, State requested that the FBI provide only the names of FBI fugitives and not those of individuals wanted by other federal law enforcement entities. A State official told us that the information provided by the U.S. Marshals Service together with that to be requested from the FBI would enable State to meet its regulatory requirement that it not issue passports to subjects of federal felony arrest warrants. However, we noted that the limited information State was receiving on fugitives wanted by the U.S. Marshals Service was not as comprehensive or up to date as State officials believed: two of nine individuals wanted by the U.S. Marshals Service were not in CLASS at the time of our test. The FBI is the only law enforcement agency that systematically compiles comprehensive information on individuals wanted by all federal law enforcement agencies, and, according to FBI officials, it is the logical agency to provide such comprehensive information to State. 
The FBI is also the only law enforcement agency that compiles comprehensive information on individuals wanted by state and local authorities. According to FBI officials, FBI databases contain the names of approximately 1.2 million individuals wanted on state and local charges nationwide. FBI officials told us they believed it would be more useful for State to have a more comprehensive list of names that included both federal and state fugitives. These officials pointed out that some of the most serious crimes committed often involve only state and local charges. We tested the names of 24 different state fugitives and found that just 7 were in CLASS; therefore, the CLASS system would not flag any of the other 17, were they to apply for a passport. Table 2 lists the crimes of which the 17 state fugitives not in CLASS were suspected. State Department officials told us that having a comprehensive list of names that included both federal and state fugitives could “clog” State’s CLASS system and slow the passport adjudication process. They also expressed concern that the course of action required of State would not always be clear for cases involving passport applicants wanted on state charges. The officials pointed out that, though the law is specific about denying passports to individuals wanted on federal felony charges, the law was not as clear cut about doing so in the case of state fugitives. However, FBI officials told us that, at a minimum, State could notify law enforcement authorities that such individuals were applying for a passport. Then, the relevant law enforcement authorities could make their own determination about whether to obtain a court order that would provide a legal basis for denying the passport or to simply arrest the individual or take some other action. 
State officials noted that, to work effectively, such an arrangement would require the FBI to establish some sort of liaison office that State could contact in such instances. State receives varying degrees of information from several other agencies, including the Department of Health and Human Services, the Department of Homeland Security (DHS), the Social Security Administration (SSA), and individual state departments of motor vehicles. Health and Human Services provides names of parents who have been certified by a state agency as owing more than $5,000 in child support payments and are therefore not eligible for a U.S. passport. According to State officials, this information-sharing arrangement has been very successful in preventing such individuals from obtaining passports. State is negotiating with DHS to gain access to naturalization records to verify applicants’ citizenship. According to Passport Services officials, State currently relies on ad hoc information from DHS colleagues and on irregular notifications from DHS when fraudulent passports are confiscated. State also uses limited information from Social Security records that SSA provided on a one-time basis in 2002 and that is quickly becoming outdated. Though State and SSA signed an April 2004 memorandum of understanding giving State access to SSA’s main database to help verify passport applicants’ identities, the memorandum had not been implemented as of March 2005 because the system was still being tested to ensure compliance with SSA privacy standards. The agreement will not include access to SSA death records, though State officials said they are exploring the possibility of obtaining these records in the future. Issuing office officials have contact with officials in individual state departments of motor vehicles to confirm, for example, the physical characteristics of individuals presenting driver’s licenses as identification. 
However, these are informal contacts cultivated by individual State officials. State does not maintain a centralized and up-to-date electronic fraud prevention library, which would enable passport-issuing office personnel in the United States and overseas to efficiently share fraud prevention information and tools. As a result, fraud prevention information is provided inconsistently to examiners among the 16 domestic offices. Though offices share information through local fraud prevention files or by e-mailing relevant fraud updates, the types and amount of information shared with passport examiners in each office vary widely. For example, at some offices, examiners maintain individual sets of fraud prevention materials. Some print out individual fraud alerts and other related documents and file them in binders. Others archive individual e-mails and other documents electronically. Some examiners told us that the sheer volume of fraud-related materials they receive makes it impossible to maintain and use these resources in an organized and systematic way. In contrast, the issuing office in Seattle developed its own online fraud library that contained comprehensive information and links on fraud alerts nationwide. Some information was organized by individual state, including information such as the specific serial numbers of blank birth certificates that were stolen. The library contained sections on Social Security information, government documents related to U.S. territories, recent fraud updates, and fraud-related news, among other information. It included examples of legitimate as well as counterfeit naturalization certificates, false driver’s licenses, fraud prevention training materials, and a host of other fraud prevention information resources and links. Seattle offered a static version of its library on CD-ROM to other issuing offices at an interoffice fraud prevention conference in 2003. 
A few of the other offices used this resource to varying degrees, but their versions have not been regularly updated since 2003. An Office of Consular Fraud Prevention official told us that the office had uploaded at least some of the information onto its Web site, but that material has not been regularly updated, either. The developer of the library has since been reassigned. Most of the 16 fraud prevention managers we talked to believed that the Bureau of Consular Affairs should maintain one centralized, up-to-date fraud prevention library, similar to the Seattle-developed model, that serves offices nationwide. Consular Affairs’ Office of Consular Fraud Prevention maintains a Web site and “e-room” with some information on fraud alerts, lost and stolen state birth documents, and other resources related to fraud detection, though fraud prevention officials told us the Web site is not kept up to date, is poorly organized, and is difficult to navigate. Fraud prevention officials also told us that most of the information on the site relates to visas rather than U.S. passports. We directly observed information available on this Web site and in the “e-room” during separate visits to State’s passport-issuing offices and noted that some of the material was outdated. For example, in September 2004, we noted that certain information on state birth and death records had not been updated since September 2003 and that information on fraudulent U.S. passports had not been updated in more than a year. In addition to limited information sharing, State’s fraud prevention support services are not closely coordinated with the passport-issuing offices. Multiple headquarters offices, including the Office of Consular Fraud Prevention and the Office of Passport Policy Planning and Legal Advisory Services, claim some responsibility for fraud trend analysis and fraud prevention support, but fraud detection personnel in issuing offices are unclear as to which offices provide which services. 
Most of the 16 fraud prevention managers we interviewed said they do not clearly understand the respective roles of these headquarters offices in helping them with their fraud detection efforts. Also, while officials in these two offices said they are responsible for analyzing fraud-related data to identify national or region-specific trends on factors such as the types, methods, and perpetrators of fraud, most fraud prevention managers told us they could not recall having received much analysis on fraud trends from these offices beyond individual fraud alerts. We noted that the Office of Passport Policy Planning and Legal Advisory Services only recently began to perform some basic fraud trend analysis on a systematic basis. Office of Consular Fraud Prevention officials told us they spend most of their time on visa fraud because each domestic agency has its own fraud detection apparatus. While this office provides some training services, these are limited, and much other training is provided by issuing offices and is not coordinated with headquarters. Limited fraud prevention staffing, training, oversight, and investigative resources pose additional challenges to fraud detection efforts. A staffing realignment reduced the time available to Fraud Prevention Managers to review cases and make decisions on fraud referrals. Additionally, interoffice transfers of passport adjudications have, in some cases, led to fewer fraud referrals back to the originating offices. Further, State’s lack of a standard refresher training curriculum and schedule has led to uneven provision of such training. Moreover, sporadic training and limited oversight of passport application acceptance agents constitute a significant fraud vulnerability. Finally, overstretched investigative resources hinder fraud detection efforts. 
In January 2004, State eliminated the assistant fraud prevention manager position that had existed at most of its domestic passport-issuing offices, and most Fraud Prevention Managers believe that this action was harmful to their fraud detection program, in part by overextending their own responsibilities. State eliminated the permanent role of assistant primarily to expand participation of senior passport examiners serving in that role on a rotational basis; the purpose was to help the examiners gain a deeper knowledge of the subject matter and, in turn, enhance overall fraud detection efforts when the examiners returned to adjudicating passport applications. Prior to the permanent position being abolished, 12 of the 16 passport issuing offices had at least one assistant manager (2 offices had two). Of the 4 offices that did not have permanent assistants, 3 did not have them because they had relatively low workloads and 1 had been in operation only for a few years and had not yet filled the position. Managers at 10 of the 12 offices that had assistants told us that the loss of this position had been harmful to their fraud detection program. In particular, managers indicated that the loss of their assistant impacted their own ability to concentrate on fraud detection by adding to their workload significant additional training, administrative, and networking responsibilities. Fraud Prevention Managers also said that taking on their assistant’s tasks had diverted their attention from their fraud trend analysis as well as their preparation of reports to Washington, D.C., and cases for referral to Diplomatic Security. Some managers said they are now performing more case work than before because they lack an experienced assistant and do not always believe they can rely on rotating staff to do this work unsupervised. Fraud Prevention Managers and other State officials have linked declining fraud referrals to the loss of the assistant fraud prevention manager position. 
In the 12 offices that previously had permanent assistants, fraud referral rates from the managers to Diplomatic Security decreased overall by almost 25 percent from fiscal year 2003 through 2004, the period during which the position was eliminated, and this percentage was much higher in some offices. Fraud Prevention offices screen fraud referrals received from examiners, perform certain checks on applicant information, and assess the examiner’s rationale for making the referral before the fraud prevention manager can determine whether to refer the case to Diplomatic Security for further investigation. Without their assistants helping them with these and other duties, managers said they are making fewer fraud referrals to Diplomatic Security because they lack the time and do not believe they can fully rely on new rotational staff to take on this responsibility. In one issuing office where referrals to Diplomatic Security were down 41 percent, the manager indicated that loss of his assistant had slowed his ability to get cases to Diplomatic Security because he had to perform many of the assistant’s duties. A Diplomatic Security agent in another issuing office, where the fraud referral rate was down by 55 percent, said the overall effect of eliminating the assistant manager position had been harmful to fraud detection efforts at least in part because the permanently assigned assistants had developed valuable personal contacts and cooperative arrangements over time with state and local law enforcement authorities, department of motor vehicle officials, and others, and that such relationships could not be easily developed or maintained by rotating staff. Most Fraud Prevention Managers acknowledged the value of having senior examiners rotate into the fraud prevention office for temporary assignments; however, the managers said that rotating staff should augment the efforts of a permanent assistant and not serve in place of that role. 
Passport Services management told us they were not planning to re-establish the permanent assistant role, but that they are in the process of filling one to two additional fraud prevention manager positions at each of the 2 offices with the largest workloads nationwide. Both of these offices operate multiple shifts each workday, and the new managers are intended to provide more comprehensive fraud prevention support for all of the shifts. State also plans to establish one additional fraud prevention manager position at another issuing office with a large workload. There are no current plans for additional positions at any of the other 13 offices. As adjudication workload and production capacity fluctuate at individual passport-issuing offices, State routinely transfers adjudication cases among the different offices to keep workload and capacity in balance at each location. Fraud Prevention Managers at a number of issuing offices said they had noticed that a lower percentage of fraud referrals are being returned to them from the 3 offices that were assigned the bulk of workload transfers from other offices. The Fraud Prevention Managers noted that, over the course of a year, the many thousands of passport applications originating from one particular region should generally be expected to generate a consistent rate of fraud referrals. In fiscal year 2004, 28 percent of passport applications were transferred to 1 of these 3 offices for adjudication, while other issuing offices adjudicated 72 percent. Although these 3 offices received 28 percent of the applications, they provided only 11 percent of total fraud referrals to the originating agencies; the other 89 percent were provided by regional agency passport examiners (74 percent) and others, including acceptance agents (15 percent). For fiscal year 2003, the 3 processing centers adjudicated 26 percent of the applications but provided only 8 percent of the fraud referrals. 
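The disparity described above can be summarized as the ratio of an office group's share of fraud referrals to its share of adjudicated applications; a ratio of 1.0 would indicate referrals proportional to workload. The sketch below is illustrative only; the helper function and its name are our own, and only the percentage shares come from this report.

```python
# Illustrative sketch of the referral-rate disparity described above.
# Only the percentage shares come from the report; the helper function
# and its name are hypothetical.

def referral_share_ratio(share_of_apps: float, share_of_referrals: float) -> float:
    """Referrals per application, relative to an even distribution (1.0)."""
    return share_of_referrals / share_of_apps

# FY2004: the 3 transfer offices adjudicated 28% of applications but
# produced only 11% of fraud referrals -- about 0.39x the even rate.
fy2004 = referral_share_ratio(0.28, 0.11)

# FY2003: 26% of applications, 8% of referrals -- about 0.31x.
fy2003 = referral_share_ratio(0.26, 0.08)

print(round(fy2004, 2), round(fy2003, 2))  # 0.39 0.31
```

On this measure, the transfer offices generated fraud referrals at well under half the rate that their workload share would predict in both fiscal years.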
In 2004, 1 of the issuing offices transferred 63 percent of its applications (about 287,000) to processing centers but received back from those centers only 2 percent of the fraud referrals it generated that year. In 2003, this office transferred out 66 percent of its workload, while receiving back only 8 percent of its total fraud referrals. Fraud Prevention Managers and other officials told us that one reason fewer fraud referrals return from these 3 offices is that passport examiners handling workload transfers from a number of different regions are not as familiar with the demographics, neighborhoods, and other local characteristics of a particular region as are the examiners who live and work there. For example, some officials noted that, in instances when they suspect fraud, they might telephone the applicants to ask for additional information so they can engage in polite conversation and ask casual questions, such as where they grew up, what school they attended, and other information. The officials noted that, since they are familiar with at least some of the neighborhoods and schools in the area, applicants’ answers to such questions may quickly indicate whether their application is likely to be fraudulent. One examiner in an office that handled workload transfers from areas with large Spanish-speaking populations said that the office had an insufficient number of examiners who were fluent in Spanish. She and other officials emphasized the usefulness of that skill in detecting dialects, accents, handwriting, and cultural references that conflict with information provided in passport applications. Moreover, some officials added that passport examiners at centers handling workload transfers are not always well trained in region-specific fraud indicators and do not have the same opportunity to interact directly with applicants as do the examiners working at public counters in regional offices. 
State has not established a core curriculum and ongoing training requirements for experienced passport examiners, and thus such training is provided unevenly at different passport-issuing offices. While State recently developed a standardized training program for new hires that was first given in August 2004, the Fraud Prevention Managers at each passport-issuing office have developed their own fraud detection refresher training approaches and materials. We reviewed the training programs and materials at all 7 issuing offices we visited and discussed the programs and materials at other offices with the remaining 9 Fraud Prevention Managers by telephone and found that the topics covered and the amount and depth of training varied widely by office. Some had developed region-specific materials; others relied more heavily on materials that had been developed by passport officials in Washington, D.C., much of which was outdated. Some scheduled more regular training sessions, and others held training more sporadically. Several examiners told us they had not received any formal, interactive fraud prevention training in at least 4 years. Some Fraud Prevention Managers hold brief discussions on specific fraud cases and trends at monthly staff meetings, and they rely on these discussions to serve as refresher training. Some Fraud Prevention Managers occasionally invite officials from other government agencies, such as the Secret Service or DHS, to share their fraud expertise. However, these meetings take place when time is available and may be canceled during busy periods. For example, officials at one issuing office said the monthly meetings had not been held for several months because of high workload; another manager said he rarely has time for any monthly meetings; and two others said they do not hold such discussions but instead e-mail recent fraud trend alerts and information to examiners. 
Numerous passport-issuing agency officials and Diplomatic Security investigators told us that the acceptance agent program is a significant fraud vulnerability. Examples of acceptance agent problems that were brought to our attention include important information missing from documentation, such as evidence that birth certificates and parents’ affidavits concerning permission for children to travel had been received, and identification photos that did not match the applicant presenting the documentation. Officials at one issuing office said that their office often sees the same mistakes multiple times from the same agency. These officials attributed problems with applications received through acceptance agents to the sporadic training provided for and limited oversight of acceptance agents. State has almost 7,000 passport acceptance agency offices, and none of the 16 issuing offices provide comprehensive annual training or oversight to all acceptance agency offices in their area. Instead, the issuing offices concentrate their training and oversight visits on agency offices geographically nearest to the issuing offices, those in large population centers, those where examiners and Fraud Prevention Managers had reported problems, and those in high fraud areas. Larger issuing offices in particular have trouble reaching acceptance agency staff. At one larger issuing office with about 1,700 acceptance facilities, the Fraud Prevention Manager said he does not have time to provide acceptance agent training and that it is difficult for issuing office staff to visit many agencies. A manager at another large issuing office that covers an area including 11 states said she does not have time to visit some agencies in less populated areas and concentrates her efforts in higher fraud areas, which tend to be in the larger cities. Officials at one issuing agency noted that State had worked together with the U.S. 
Postal Service to develop CD-ROM training for use at Postal Service acceptance facilities. The officials noted that, while they believed the training had been well designed, State does not have any way of tracking whether all postal employees responsible for accepting passport applications actually receive the training. Additionally, issuing office officials said that acceptance agent staff should receive training from outside agencies such as state departments of motor vehicles, local police, and the FBI on document authenticity and fraud. Other issuing office officials said acceptance agents should also receive interview training. Finally, while State officials told us it is a requirement that all acceptance agency staff be U.S. citizens, issuing agency officials told us they have no way of verifying that all of them are. Management officials at one passport-issuing office told us that, while their region included more than 1,000 acceptance facilities, the office did not maintain records of the names of individuals accepting passport applications at those facilities and did not keep track of how many individuals acted in this capacity at those facilities. Although State’s Bureau of Diplomatic Security has provided additional resources for investigating passport fraud in recent years, its agents must still divide their time among a number of competing demands, some of which are considered a higher priority than investigating passport fraud. A Diplomatic Security official told us that, after the September 11 terrorist attacks, the bureau hired about 300 additional agents, at least partially to reduce investigative backlogs. Diplomatic Security and passport officials told us that, while the increased staff resources had helped reduce backlogs to some degree, agents assigned to passport fraud investigations are still routinely pulled away for other assignments. 
For example, a significant number of agents from field offices across the country are required to serve on “protective detail” in New York when the United Nations General Assembly convenes and at various other diplomatic events. We found that at most of the offices we visited during our fieldwork, few of the agents responsible for investigating passport fraud were physically present. At one office, all of the agents responsible for investigating passport fraud were on temporary duty elsewhere, and the one agent who was covering the office in their absence had left his assignment at the local Joint Terrorism Task Force to do so. A number of agents were on temporary assignments overseas in connection with the 2004 Summer Olympics in Greece. Agents at one office said that 5 of the 8 agents involved in passport fraud investigations there were being sent for temporary duty in Iraq, as were many of their colleagues at other offices. Agents at all but 2 of the 7 bureau field offices we visited said they are unable to devote adequate time and continuity to investigating passport fraud because of the competing demands on their time. The agents expressed concerns about the resulting vulnerability to the integrity of the U.S. passport system. We noted that the number of new passport fraud investigations had declined by more than 25 percent over the last 5 years, though Diplomatic Security officials attributed this trend, among other factors, to refined targeting of cases that merit investigation. A number of Diplomatic Security agents pointed out that passport fraud investigations are often “time sensitive” and that opportunities to solve cases are often lost when too much time elapses before investigative efforts are initiated or when such efforts occur in fits and starts. The rotation of Diplomatic Security agents to new permanent duty stations every 2 or 3 years also makes it more difficult to maintain continuity for individual investigations. 
Passport-issuing office officials told us that cases referred to Diplomatic Security sometimes take a year or more to investigate. The officials also said that the investigating agents often do not have time to apprise passport-issuing offices of the status of individual investigations and, thus, that the opportunity to convey valuable “real-time” feedback on the quality of fraud referrals was lost. The Special Agent in Charge of a large Diplomatic Security field office in a high fraud region expressed serious concern that, in 2002, the Bureau of Diplomatic Security began requiring that most cases be closed after 12 months, whether or not the investigations were complete. This requirement was meant to reduce the backlog of old cases. The agent said that about 400 cases at his office were closed before the investigations were complete and that this action had taken place over his strenuous objection. A Diplomatic Security official in Washington, D.C., told us that, while field offices had been encouraged to close old cases that were not likely to be resolved, there had not been a formal requirement to close all cases that had remained open beyond a specific time limit. State officials agreed that Diplomatic Security agents are not currently able to devote adequate attention to investigating passport fraud. State officials told us that the Bureau of Diplomatic Security plans to hire 56 new investigative agents over the next few years to augment passport fraud investigation resources at each Diplomatic Security field office nationwide. According to State officials, these new investigators will be solely dedicated to investigating passport and visa fraud and will not participate in protective details or other temporary duties that would distract them from their investigative work. 
The new hires are to be civil service employees and will not be subject to the frequent rotations to new duty stations that regular Diplomatic Security agents experience as Foreign Service officers. State Department OIG officials told us that the OIG also has authority to investigate passport fraud. However, OIG officials told us that budgetary constraints and related staffing reductions in recent years had severely restricted its ability to investigate such fraud. The OIG has invested more resources in efforts to pursue visa fraud, primarily because visa fraud is more prevalent than passport fraud. The OIG has focused most of its more recent passport-related efforts on assessing systematic weaknesses in fraud detection efforts. The idea was that such broad findings would be of greater benefit than the individual passport fraud investigations that could be expected from the limited staff resources available. Although State’s approach to developing new nationwide passport examiner production standards, which were implemented in January 2004, raises a number of methodological concerns, subsequent changes to the standards make an assessment of their impact on fraud detection premature. State intended that the new nationwide standards would make performance expectations and work processes more uniform among its 16 issuing offices. State tested examiner production capabilities before standardizing the passport examination process and used the test results in conjunction with old standards to set new nationwide standards. The new standards put additional emphasis on achieving quantitative targets. Responding to concerns about their fairness, State made a number of modifications to the production standards during the year, making it unclear what impact the standards have had on passport fraud detection. 
Consular Affairs officials stated that they created nationwide production standards to make performance expectations of examiners and the passport examination process as similar as possible at all domestic passport offices. Though the issuing offices already had production standards for their examiners, the average number of cases examiners were expected to adjudicate per hour varied from office to office, creating confusion and raising questions about equity among passport examiners, according to State officials. In an effort to identify reasonable production standards that would be applicable nationwide, State tested examiner production capabilities at all of its domestic passport-issuing offices. Issuing office management in each of the 16 offices measured the number of passport cases completed by each examiner over a two-week period in April 2003 and computed the hourly average for their office. Management did not inform examiners that their production rates were being measured for the test. Passport Services officials set the new performance standards, but they did not fully standardize some work processes and methods of counting production until after the new numbers were set. After considering nationwide test results, offices’ old standards, and passport office partnership council feedback, headquarters officials at Passport Services decided on the new standards for both desk and counter adjudication. The new standards were implemented in January 2004 and varied by pay grade level. After deciding on the production standards, State standardized the work processes and counting methods that were the basis of examiners’ production averages. For example, State officials encouraged all domestic passport offices to include expedited cases in examiners’ production averages to make examiners’ work more comparable nationwide, though some examiners and issuing office managers said expedited cases take longer to complete because they require additional steps. 
Also, State’s decision to base examiners’ production averages on a 7-hour day starting in January 2004 marked a change for offices that previously had measured production based on a 6½- or 7½-hour day. State’s decision to measure and compile nationwide production averages before fully standardizing the application examination process and the way completed cases are counted at the passport-issuing offices limited the validity of State’s test results. GAO has reported that consistency is a key element for data reliability and that the data obtained and used must be clear and well-defined enough to yield similar results in similar analyses. However, we found that State had attempted to uniformly measure production capacity at its 16 issuing offices when the individual offices were still using differing work processes and methods of counting production. Upon visiting 7 of the 16 offices after the new standards had been implemented, we found that several differences in passport adjudication practices and methods for counting production still existed. For example, at some offices, the more complex and time-consuming cases were included in examiner production averages, while at other offices they were not. In addition, issuing office and headquarters management told us that contract staff at some issuing offices performed certain tasks that helped speed up examiner production, while such tasks were performed by examiners at other offices. State told us, 5 months after it had implemented the standards, that domestic offices’ requirements for counting completed cases still varied and that this situation could make it easier for examiners at some offices to meet their production standard. State officials acknowledged that these processes and procedures should be standardized to ensure that the standards are fair. 
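The sensitivity of the hourly production average to the daily-hours divisor can be shown with simple arithmetic. The sketch below is illustrative only: the 140-case daily count is a hypothetical figure of our own, while the 6½-, 7-, and 7½-hour day lengths come from the offices' differing practices described above.

```python
# Hypothetical illustration of how the daily-hours divisor affects an
# examiner's hourly production average. The 140-case figure is invented;
# the 6.5/7.0/7.5-hour day lengths reflect the differing office practices.

def hourly_average(cases_per_day: float, hours_per_day: float) -> float:
    """Average cases adjudicated per hour for a given workday length."""
    return cases_per_day / hours_per_day

# The same daily output yields different hourly averages depending on
# whether the office counts a 6.5-, 7-, or 7.5-hour day.
for hours in (6.5, 7.0, 7.5):
    print(f"{hours}-hour day -> {hourly_average(140, hours):.1f} cases/hour")
```

Under these assumptions the same examiner would appear to average 21.5, 20.0, or 18.7 cases per hour depending solely on the counting convention, which is why measuring production before standardizing the divisor limited the validity of the test results.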
If State had standardized its work processes and procedures for measuring performance before testing production, the test would presumably have produced more valid results that could have been used to set uniformly attainable production standards. Because State adjusted the impact of production on examiners’ evaluations, the production standards implemented in 2004 placed increased pressure on examiners to focus on production numbers. Before State introduced the new standards, quantitative production requirements were grouped together with qualitative measures for performance ratings. For example, one former evaluative element paired the requirement that a GS-11 examiner adjudicate 25 to 28 cases per hour with a qualitative assessment of the examiner’s overall knowledge of the adjudication process. Supervisors and management officials at some of the domestic passport offices we visited said that if in previous years examiners failed to meet their minimum production number, the supervisor could still rate the examiner fully successful based on better performance against qualitative standards within the same rating element. However, the 2004 standards separated the quantitative performance measures—production numbers and error rates—from qualitative elements. Examiners not meeting the minimum hourly production average for the year were to receive an unsuccessful rating on that performance element, regardless of qualitative performance. An unsuccessful rating in one element results in an unsuccessful rating overall, even if the examiner rates outstanding in the other three skill elements. State officials said they made this change to clarify the criteria on which examiners were rated. Since the new production standards were set, State has incorporated computer upgrades and process changes that have enhanced fraud detection, but may have slowed the examination process. 
For example, at passport office counters, State upgraded the computer system to allow examiners to perform cashiering functions and to produce a receipt immediately for all financial transactions, thus adding time to each case requiring a fee. In addition, State added a page to the standard passport application in March 2005, thus requiring more information from each new applicant. Headquarters officials, regional office managers, and examiners agreed that these changes enhance fraud prevention efforts. But while they were pleased with the enhanced fraud detection capabilities, some examiners and examiners’ union representatives told us the changes may slow production now that examiners are required, for example, to scrutinize longer applications. Passport examiners and union officials argue that the new standards’ emphasis on production, combined with changes to the examination process, has made it more difficult to meet the new production standards without shortcutting fraud detection efforts. Some examiners we talked to said changes to annual evaluation criteria and to the examination process put additional pressure on them to focus on their numbers more than on their efforts to detect fraud. Some also told us they believe the new standards were evidence that management prioritizes quantity of work over quality. Union representatives said examiners frequently complain that, to achieve their number targets, they have to skip required steps in the examination process or scrutinize applications less thoroughly than necessary to adequately detect fraud. A number of examiners at each domestic office we visited either stated they take shortcuts themselves or know colleagues who do. Some said, for example, they do not thoroughly check Social Security information provided on the computerized examination software against the information on the individual’s application. 
Others reported they do not thoroughly check all “hits” generated by the computer software—information that may help identify applicants flagged as fugitives or raise other concerns in one of State’s passport-related databases. An examiner noted that most hits, when further scrutinized, prove to be invalid, and thus the chances of missing a valid hit were low. Union representatives said they are hesitant to share such examples with passport management because examiners fear negative repercussions. Headquarters and regional office management said it is difficult to assess the number or magnitude of shortcuts being taken and the impact of shortcuts on fraud detection. Management officials at some of the offices we visited said supervisors audit only a limited percentage of cases after they are examined and that audits would not necessarily reveal that examiners had taken shortcuts. The impact of shortcuts on fraud detection is also difficult to assess because the overall incidence of detected fraud is low. One examiner noted that if she failed to check any fraud indicators at all and granted a passport to every applicant, she would be right more than 99 percent of the time. State data show that less than one-half of 1 percent of applications in 2004 were identified as potential frauds. Because State has modified the 2004 production standards in response to management, union, and examiner concerns, the standards’ effect on fraud detection is unclear. State officials told us that, from the outset of implementing the production standards, they had planned to reassess the standards regularly and to adjust them as necessary. In July 2004, State responded to a union suggestion to reduce by one-half hour the number of daily work hours used to calculate the hourly production average, thus acknowledging that time examiners spend doing essential tasks, such as reading e-mail updates, should not be factored into their hourly production averages. 
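The effect of that half-hour adjustment is simple arithmetic: the same daily case count divided by fewer countable hours yields a higher computed hourly average, so an examiner’s measured rate rises without any change in output. A minimal sketch of that calculation follows; the daily case count used is illustrative, not a figure from the report.

```python
# Hedged sketch of the hourly-production-average arithmetic.
# 154 cases per day is an illustrative assumption, not report data.
cases_per_day = 154

rate_before = cases_per_day / 7.0  # original 7-hour basis
rate_after = cases_per_day / 6.5   # after excluding a half hour of
                                   # nonproduction time (e-mail, etc.)

print(round(rate_before, 1))  # 22.0 cases per hour
print(round(rate_after, 1))   # 23.7 cases per hour
```

The same reasoning works in reverse: holding the required hourly rate fixed while shrinking the divisor lowers the number of cases an examiner must actually complete in a day to meet the standard.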
Both desk and counter examination production standards were lowered during 2004, and certain offices were exempted from either desk or counter measurement due to regional workload variations. One such exemption occurred in September 2004, when State informed regional management that neither New York nor Hawaii should rate examiners on desk examination production averages in 2004 because their desk workloads were too low to enable a fair rating of examiners. Also, headquarters passport management lowered the counter production requirement for GS-9 and GS-11 passport examiners retroactive to January 1, 2004. While about 63 examiners were not achieving the required production rate after the first quarter of 2004, all but 18 of State’s approximately 480 examiners nationwide had met the standards by the end of the year. Because State’s changes to the production standards continued throughout 2004, the standards’ net effect on fraud detection efforts remains unclear. Maintaining the integrity of the U.S. passport is an essential component of State’s efforts to help protect U.S. citizens from those who would harm the United States. The steadily increasing volume of passports issued each year underscores the importance of this task. State has a range of tools and resources at its disposal to help detect passport fraud, and it has taken a number of important measures in recent years to enhance its efforts in this area. However, State still faces a number of key challenges. Included among them is limited information sharing with TSC, the FBI, and other agencies, making it more difficult to protect the United States from terrorists, criminals, and others. State has begun working with these agencies to address this problem and is dependent on their cooperation to remedy it. Limited intra-agency information sharing and insufficient fraud prevention staffing, training, oversight, and investigative resources also make fraud detection more difficult. 
Together, these challenges constitute a serious concern to the overall effort to secure the borders of the United States and protect its citizens. To improve the coordination and execution of passport fraud detection efforts, we recommend the Secretary of State take the following six actions:

- Expedite, in consultation with the U.S. Attorney General, Director of the Federal Bureau of Investigation, and Secretary of Homeland Security, arrangements to enhance interagency information sharing, and reach agreement on a plan and timetable for doing so, to ensure that State’s CLASS system for passports contains a more comprehensive list of individuals identified in the Terrorist Screening Center database as well as state and federal fugitives and that such information is made available to State in an efficient and timely manner.
- Establish and maintain a centralized and up-to-date electronic fraud prevention library that would enable passport agency personnel at different locations across the United States to efficiently access and share fraud prevention information and tools.
- Consider designating additional positions for fraud prevention coordination and training in some domestic passport-issuing offices.
- Assess the extent to which and reasons why workload transfers from one domestic passport-issuing office to another were, in some cases, associated with fewer fraud referrals, and take any corrective action that may be necessary.
- Establish a core curriculum and ongoing fraud prevention training requirements for all passport examiners, and program adequate time for such training into the staffing and assignment processes at passport-issuing offices.
- Strengthen fraud prevention training efforts and oversight of passport acceptance agents.

State provided written comments on a draft of this report (see app. II). State generally concurred with our findings, conclusions, and recommendations. 
State indicated that it had agreed in principle with the FBI on information-sharing arrangements concerning subjects of federal felony arrest warrants and planned to establish an automated mechanism for obtaining information from the FBI on the subjects of state warrants. State said that it was designing a centralized passport “knowledgebase” for passport examiners that includes information on fraud prevention resources. It said it would consider rotating GS-12 Adjudication Supervisors through local fraud prevention offices to relieve Fraud Prevention Managers of some of their training responsibilities. State is also establishing a standardized national training program for passport examiners, instituting a regular nationwide quality review program for passport acceptance agent work, and adapting and expanding computer-based training for U.S. Postal Service acceptance facilities for more widespread use among acceptance agents nationwide. State did not address our recommendation that it assess the extent to which and reasons why workload transfers from one domestic passport-issuing office to another were, in some cases, associated with fewer fraud referrals and to take any corrective action that may be necessary. The FBI also reviewed a draft of this report for technical accuracy. The FBI’s comments have been incorporated into the report, as appropriate. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. We will then send copies to interested congressional committees and the Secretary of State. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or [email protected]. Additional GAO contacts and staff acknowledgments are listed in appendix III. 
To examine how passport fraud is committed, we reviewed State Department Bureau of Diplomatic Security closed passport fraud case files and Bureau of Consular Affairs statistics on passport fraud. We also met with officials at State’s Diplomatic Security Headquarters Criminal Division and at Diplomatic Security’s Field Office in Miami and conducted telephone interviews with Diplomatic Security officials at field offices in Chicago and San Francisco. To identify and assess the key challenges State faces in detecting passport fraud, we directly observed State’s fraud detection efforts at 7 of its 16 domestic passport-issuing offices; tested State’s use of electronic databases for fraud detection; analyzed fraud referral statistics from the Bureaus of Consular Affairs and Diplomatic Security; and interviewed cognizant officials in both of these bureaus. We visited State’s passport-issuing offices in Charleston, South Carolina; Los Angeles; Miami; New Orleans; New York; Seattle; and Washington, D.C. We chose these fieldwork locations to gain an appropriate mix of geographic coverage, workload, levels and types of passport fraud, and counter-to-desk adjudication ratios. In addition, we chose the Charleston office because it is one of the two passport “megacenters” responsible for adjudicating applications from other regions. To test the electronic databases that State uses to help detect fraud, we ran the names of 67 different federal and state fugitives against State’s CLASS name-check system. Our test was not intended to employ a representative sample, and we did not generalize our results to the universe of wanted U.S. citizens. The test results were intended to provide a firsthand illustration of a problem that State and Federal Bureau of Investigation (FBI) officials acknowledge exists. We selected names of individuals with federal and state warrants from a variety of government agencies and offices—the FBI; U.S. 
Marshals Service; Bureau of Alcohol, Tobacco, Firearms, and Explosives; Drug Enforcement Administration; U.S. Postal Service; and various state and local law enforcement offices. Many of the names were taken from publicly available Internet sites, including those operated by the FBI and Department of Justice. We verified that all of the individuals were listed as “wanted” in the FBI’s national criminal database as of the date of our test in December 2004. In December 2004, we supervised the entry of the name and date of birth of each of the 67 fugitives into State’s system by an Office of Passport Services official. Three GAO employees verified each entry’s accuracy. For each entry, we recorded whether State’s system contained a record of that fugitive and, if it did, we noted the type of “lookout” that had been entered, such as wanted person information from the U.S. Marshals Service or a child support “lookout” from Health and Human Services. We analyzed fraud referral statistics from the Consular Affairs Office of Passport Services and the Bureau of Diplomatic Security for fiscal years 2000 through 2004. We reviewed the statistics and verified their accuracy by comparing select data with the individual issuing offices’ monthly reports that are State’s original source for compiling these data. Together with Passport Services officials, we identified the methods used to capture and compile the data and determined that the data were sufficiently reliable and generally usable for the purposes of our study. We did not use data elements that we did not deem reliable. At each of the 7 offices we visited, we conducted interviews with officials such as the Regional Director, Assistant Regional Director, Fraud Prevention Manager, Adjudication Manager, Customer Service Manager, supervisors, and certain passport examiners. We interviewed some examiners who expressed an interest in meeting with us and chose others at random. 
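The record-keeping step in the name-check test described above — noting, for each entry, whether CLASS held a record of the fugitive and, if so, what type of lookout had been entered — amounts to a simple tally. A minimal sketch follows; the sample records and lookout labels are hypothetical, not data from the test.

```python
# Hedged sketch (hypothetical data) of tallying name-check test results.
# Each tuple pairs a test entry with the lookout type found in CLASS,
# or None when the system held no record of the fugitive.
from collections import Counter

results = [
    ("entry 1", "wanted person"),
    ("entry 2", None),
    ("entry 3", "child support"),
    ("entry 4", None),
    ("entry 5", None),
]

lookout_types = Counter(r for _, r in results if r is not None)
misses = sum(1 for _, r in results if r is None)

print(dict(lookout_types))  # {'wanted person': 1, 'child support': 1}
print(misses)               # 3 entries with no CLASS record
```

A tally of this shape yields both figures the report cites: the count of fugitives absent from CLASS and the breakdown of lookout types among those found.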
We conducted telephone interviews with the Fraud Prevention Managers at all 9 of the offices that we did not visit, using a list of questions identical to that used in interviews with their counterparts at offices we visited. We also met with Diplomatic Security agents attached to field offices responsible for investigating fraud suspected at the offices we visited. In addition, we interviewed cognizant officials in Consular Affairs’ Office of Passport Services, Office of Consular Fraud Prevention, and Consular Systems Division; the Bureau of Diplomatic Security; and the Office of the Inspector General. We also met with FBI and Terrorist Screening Center officials, including attorneys, to discuss technical and potential legal issues that might affect interagency information sharing arrangements with State. To assess the potential effect of new performance standards, which were implemented in January 2004, on State’s fraud detection efforts, we reviewed the methodology and criteria that State used in developing the new standards. We compared the adjudication processes that were in place when State tested examiner production capacity with those in place when State implemented the new standards and against which it applied them. We interviewed passport examiner union representatives and select examiners at the 7 offices we visited and the fraud prevention managers at all 16 domestic passport-issuing offices to obtain their views on the potential effect of the new standards on fraud detection efforts. We also obtained views on the same subject from the Consular Affairs Passport Services officials who oversaw the development and implementation of and ongoing adjustments to the new standards. We conducted our work from May 2004 to March 2005 in accordance with generally accepted government auditing standards. In addition to the individual named above, Jeffrey Baldwin-Bott, Joseph Carney, Paul Desaulniers, and Edward Kennedy made key contributions to this report. 
Martin de Alteriis, Etana Finkler, and Mary Moutsos provided technical assistance.
Maintaining the integrity of the U.S. passport is essential to the State Department's efforts to protect U.S. citizens from terrorists, criminals, and others. State issued about 8.8 million passports in fiscal year 2004. During the same year, State's Bureau of Diplomatic Security arrested about 500 individuals for passport fraud, and about 300 persons were convicted. Passport fraud is often intended to facilitate other crimes, including illegal immigration, drug trafficking, and alien smuggling. GAO examined (1) how passport fraud is committed, (2) what key fraud detection challenges State faces, and (3) what effect new passport examiner performance standards could have on fraud detection. Using the stolen identities of U.S. citizens is the primary method used by those fraudulently applying for U.S. passports. False claims of lost, stolen, or damaged passports and child substitution are among the other tactics used. Fraudulently obtained passports can help criminals conceal their activities and travel with less scrutiny. Concerns exist that they could also be used to help facilitate terrorism. State faces a number of challenges to its passport fraud detection efforts, and these challenges make it more difficult to protect U.S. citizens from terrorists, criminals, and others. Information on U.S. citizens listed in the federal government's consolidated terrorist watch list is not systematically provided to State. Moreover, State does not routinely obtain from the Federal Bureau of Investigation (FBI) the names of other individuals wanted by federal and state law enforcement authorities. We tested the names of 67 federal and state fugitives and found that 37, over half, were not in State's Consular Lookout and Support System (CLASS) database for passports. One of those not included was on the FBI's Ten Most Wanted list. State does not maintain a centralized and up-to-date fraud prevention library, hindering information sharing within State. 
Fraud prevention staffing reductions and interoffice workload transfers resulted in fewer fraud referrals at some offices, and insufficient training, oversight, and investigative resources also hinder fraud detection efforts. Any effect that new passport examiner performance standards may have on State's fraud detection efforts is unclear because State continues to adjust the standards. State began implementing the new standards in January 2004 to make work processes and performance expectations more uniform nationwide. Passport examiner union representatives expressed concern that new numerical production quotas may require examiners to "shortcut" fraud detection efforts. However, in response to union and examiner concerns, State eased the production standards during 2004 and made a number of other modifications and compromises.
PACE integrates Medicare and Medicaid financing to provide comprehensive delivery of those programs’ services to individuals age 55 and older who have been certified as eligible for nursing home care by a state under Medicaid. PACE providers are, or are separate parts of, government entities or not-for-profit private or public entities that provide PACE services to eligible individuals. PACE services include, but are not limited to, all Medicare services and all Medicaid services as specified in the state plan. Adult day care, medical treatment, home health and personal care, prescription drugs, social services, restorative therapies, respite care, and hospital and nursing home care when necessary are all required services under PACE. For most individuals, the comprehensive services offered by PACE allow them to live in their homes. CMS requires that each PACE provider operate an adult day center for its beneficiaries. WPP is a state-sponsored program that integrates Medicare and Medicaid financing to provide comprehensive delivery of those programs’ services to individuals age 55 and older and individuals age 18 and older with physical disabilities who have been certified by Wisconsin as eligible for nursing home care. To deliver WPP services, the state contracts with organizations to provide eligible individuals with primary, acute, and long-term care services; prescription drugs; rehabilitation services and physical therapy; adult day care; nursing home care; durable medical equipment and supplies; and other services such as meal delivery and transportation to medical appointments. The comprehensive services provided by WPP are intended to allow individuals to live in the setting of their choice. While similar to PACE, WPP does not require that providers operate an adult day center. 
ALTCS, the long-term care division of the Arizona Medicaid program, serves individuals who are age 65 and over, blind, or disabled and who need ongoing services at a nursing home level of care. Arizona provides all its Medicaid services through a Medicaid waiver, which allows some flexibility in the design and administration of the program. In Arizona, ALTCS contracts with providers to deliver acute medical care services, institutional care, hospice, behavioral health services, home health, homemaker services, personal care, respite care, transportation, adult day care, and home delivered meals. Many ALTCS participants are able to live in their own homes or in assisted living facilities and receive in-home services. Palliative care programs are operated by a variety of health care entities, including hospitals, health care systems, and hospices. These programs generally do not receive federal or state funding and may rely on private grants or charitable support. Palliative care programs are designed to improve the quality of a seriously ill individual’s life and support the individual and his or her family during and after treatment. Services provided by palliative care programs vary and may include pain and symptom management, assistance with planning for additional services, and psychosocial and spiritual support and can be provided in conjunction with curative care. The IOM and AHRQ studies identified the following key components in providing care to individuals nearing the end of life: care management; supportive services for individuals; pain and symptom management; family and caregiver support; communication among the individuals, families, and program staff; and assistance with advance care planning. Care management, also referred to as case management, interdisciplinary care, or care coordination, is the coordination and facilitation of service delivery and can be provided by a team or a single manager. 
Supportive services include personal care services, home delivered meals, transportation to medical appointments, and other services that assist individuals who reside in noninstitutional settings. Pain and symptom management consists of pharmacological and nonpharmacological therapies, such as massage therapy, used to treat pain and other symptoms of an individual who is seriously ill. Family and caregiver support consists of services that assist those caring for an individual nearing the end of life in his or her home and can include respite care and bereavement counseling. Communication among individuals, families, and program staff includes discussions regarding end-of-life issues with individuals and their family members and the use of various tools to foster communication among program staff. Advance care planning is the process by which individuals make decisions about their future care and may include the completion of written documents, such as advance directives. Specifically, IOM reported that for individuals nearing the end of life, care systems should ensure that the following are provided: interdisciplinary care management; home care or personal care services, which we refer to as supportive services; pain and symptom management; supportive care for caregivers and family through services such as respite care or housekeeping services; and communication. The IOM report also identified advance care planning as a key component of end-of-life care. The IOM report recommended that people nearing the end of life should receive supportive services managed by those involved in their care and that health care organizations should facilitate advance care planning. In addition, the IOM report recommended that health care professionals improve care for individuals nearing the end of life by providing pain and symptom management. 
The AHRQ report focused on identifying outcomes that can indicate the quality of the end-of-life experience and identifying the patient, family, and health care system factors that are associated with better or worse outcomes at the end of life. The AHRQ report identified continuity of health care, such as that provided through care management; supportive services, such as home care services; pain and symptom management; support for families and caregivers; and effective communication among program staff, which could include improved medical record documentation, as core components of end-of-life care. The programs we identified in four states that incorporate key components of end-of-life care described in the IOM and AHRQ reports are PACE, WPP, ALTCS, and palliative care programs. These programs use care management to ensure continuity of care and supportive services, such as personal care services, to assist individuals nearing the end of life. These programs also integrate pain and symptom management into their services; provide family and caregiver support; foster communication among the individuals, families, and program staff; and initiate or encourage advance care planning. Care management is used by all of the programs we identified to ensure continuity of care for individuals nearing the end of life. Most of these programs provide care management through interdisciplinary care teams. The interdisciplinary care teams of PACE providers include a primary care physician, nurse, social worker, physical therapist, occupational therapist, recreational therapist or activity coordinator, dietitian, PACE adult day center manager, health care aides, and transportation providers. PACE beneficiaries attend a PACE adult day center where they receive services from the interdisciplinary care team. The WPP providers use an interdisciplinary care team approach similar to PACE, although the teams are generally smaller. 
Representatives of two WPP providers we interviewed stated that care management reduces hospitalizations. Representatives of one of these providers stated that care management ensures that individuals admitted to a hospital are discharged to an appropriate setting to avoid unnecessary readmission. Representatives of the second WPP provider stated that care management improves the medical care of individuals by providing physicians with an accurate picture of individuals’ health status and assisting individuals with accessing physicians in a timely manner. Representatives of both PACE and WPP providers stated that the interdisciplinary care teams meet to exchange information, ensure that individuals’ needs are being met, and address changes in the health status of individuals. The four hospital-based palliative care programs we identified use interdisciplinary care teams to coordinate services. These programs’ teams include medical directors, social workers, chaplains, nurses, psychologists, and case managers. Two of the hospice-based palliative care programs developed partnerships with local hospitals and use interdisciplinary care teams to assist individuals. Two other hospice-based palliative care programs use interdisciplinary care teams of health care professionals to coordinate medical, nursing, social work, and spiritual services. Staff from one of these programs told us that because case managers facilitate communication among different medical providers and ensure that tests performed have a clear purpose, unnecessary or duplicate tests are avoided. One hospice-based palliative care program’s interdisciplinary care team consists of a nurse, social worker, and palliative care physician who coordinate care and monitor the quality of care provided. The two palliative care programs operated by health care systems use interdisciplinary care teams composed of nurses, social workers, chaplains, and pharmacists. 
The care team of one of these palliative care programs makes treatment recommendations and enhances coordination among medical staff. The other of these palliative care programs provides social and psychological support and assists individuals with transitioning between the hospital and their homes. One hospice-based palliative care program uses a single case manager to assist individuals with coordinating services. In the ALTCS program, each Medicaid beneficiary is assigned a case manager. The case manager aids the beneficiary in obtaining necessary services, coordinates service delivery, and consults with other providers as needed. ALTCS case managers refer beneficiaries to other social service agencies when additional services are needed. ALTCS officials noted that a unique feature of the program is that it provides institutional, supportive, and all other medical and long-term care services under one agency and under the supervision of a single case manager for each beneficiary. An official also noted that the ALTCS program fosters continuity of care and care coordination at the end of life through the case manager and the integrated delivery of services from a single agency. The programs we identified provide a variety of supportive services to assist individuals near the end of life. The PACE providers we interviewed are required to deliver supportive services such as personal care services, adult day care, social work services, and meal delivery. Representatives of one PACE provider stated that one strength of PACE is the integration of all Medicare- and Medicaid-covered services, which includes the supportive services, such as personal care services, covered by Medicaid. Representatives of a PACE provider reported that when individuals become too frail to come to the day center, a designated team visits individuals in their homes to provide personal care, nursing, and physician services. 
Representatives of this provider also described how they assist individuals residing in residential care facilities and adult foster homes who are nearing the end of life by providing additional staff support and visits from the primary care physician. The supportive services offered by WPP providers include social services, personal care services, adult day care, environmental adaptations, meal delivery, and transportation to medical appointments. Representatives of a WPP provider stated that they also involve local community resources such as religious institutions and friends to ensure that individuals receive the assistance they need in their homes and communities. Representatives of another WPP provider stated that the most common supportive services they provide are home care, transportation, and day center activities. Representatives of this provider noted that as individuals get closer to the end of life, additional home care support can be provided. Supportive services provided by the ALTCS program include home health services, homemaker services, personal care, transportation, adult day care, and home delivered meals. ALTCS officials stated that the type of supportive services provided can vary significantly depending on a beneficiary’s level of functioning and the level of support provided by the family. A CMS official noted that two-thirds of ALTCS beneficiaries receive supportive services in their homes or communities, which the official cited as being above the national average. The palliative care programs we identified either provide supportive services directly to individuals nearing the end of life or assist individuals with obtaining such services. One hospice-based palliative care program provides individuals telephone calls and visits and assists individuals with applying for other benefits. Another hospice-based palliative care program provides supportive care that includes nursing and social work services and spiritual counseling. 
A palliative care program that is operated by a health care system provides individuals with 24-hour nursing support and pastoral services. A palliative care program operated by a hospital helps individuals establish supportive services, such as personal care services, at the time of discharge from the hospital. All the programs we identified provide pain and symptom management or assist with the coordination of such services. Representatives of the WPP and PACE providers we interviewed incorporate pain and symptom management into the care they provide. For example, representatives of a provider of both PACE and WPP described how individuals they serve are able to receive pain and symptom management services whenever they feel such services are necessary. One PACE provider we interviewed offers pain and symptom management to individuals nearing the end of life, and a palliative care team visits individuals in the home when they are unable to attend the PACE day center. Other providers of PACE and WPP obtain assistance from local hospices to help provide pain and symptom management services, such as overnight nursing, spiritual care, or pain management. ALTCS provides pain and symptom management to individuals when such services are needed. Representatives of the 12 palliative care providers we interviewed provide or assist with coordinating pain and symptom management for individuals in either the home or hospital setting. Programs we identified offer family and caregiver support through a variety of services. PACE and WPP providers offer family and caregiver support through personal care services, which can help alleviate demands on a caregiver, and respite services provided in the home. In addition, the adult day centers operated by the PACE providers we visited offer respite opportunities for the caregivers of the individuals who attend the day care programs. One WPP provider also operates a day center to provide caregivers with respite. 
The ALTCS program provides support for caregivers through personal care, respite, and adult day care services. Most of the palliative care programs we identified also provide support to family members and caregivers. They provide this support in a variety of ways. Two hospice-based palliative care programs use social workers to assist families and caregivers with end-of-life decision making and accessing community agencies and resources. Another hospice-based palliative care program uses an interdisciplinary care team to assist families in making end-of-life decisions. One hospital-based palliative care program, two hospice-based palliative care programs, and one palliative care program operated by a health care system provide bereavement support to family members. One health care system’s palliative care program provides 24-hour nursing support for individuals in their homes, which assists caregivers, and another palliative care program operated by a hospice assists family members with coordinating in-home support services. Two hospital-based palliative care programs assist families with coordinating care upon an individual’s discharge from the hospital. Officials from one hospice-based palliative care program and a palliative care program operated by a hospital both stated that they provide education about end-of-life care to family members. The programs we identified communicate frequently with individuals and their families regarding end-of-life issues. Representatives of the PACE, WPP, and palliative care providers and ALTCS officials we interviewed stated that they work with individuals and their families to develop a plan of care that reflects each individual’s choices. For example, a representative of a PACE provider described how the interdisciplinary care team fosters communication with the individual about what type of care he or she wants to receive at the end of life, including pain and symptom management. 
Representatives of a provider of both PACE and WPP described how the interdisciplinary care team establishes goals with the individual and includes a physician and social worker to facilitate discussions involving end-of-life issues. A hospital-based palliative care program’s interdisciplinary care team holds meetings with family members to discuss an individual’s health status, prognosis, and end-of-life wishes, and another palliative care program has discharge coordinators follow up with individuals for as long as services are required. An ALTCS official stated that case managers discuss with beneficiaries what their needs are and what care they want to receive. Representatives of palliative care, PACE, and WPP providers informed us that they develop close, trusting relationships with individuals through their frequent communication to facilitate discussions about end-of-life care. Representatives of PACE, WPP, and palliative care providers we interviewed stated that communicating with individuals and their families about end-of-life issues earlier, rather than later, in the individual’s illness makes it easier for both the individual and family to manage the decisions they face when the individual is closer to death. Representatives of a WPP provider stated that they have continuous conversations with individuals and families about plans for the end of life, and representatives of a PACE provider noted that they have these discussions early because such discussions become more challenging when someone is very near the end of life. Representatives of another WPP provider stated that they have monthly conversations with individuals about which life-saving measures they would like implemented as their condition worsens. A PACE provider’s staff visits an individual nearing the end of life every other day to ensure that the individual’s and family’s needs are being met. 
Representatives of a palliative care provider described how they repeatedly discuss with individuals near the end of life the availability of other services such as hospice. Programs we identified use a variety of tools to foster communication among the members of the care team concerning individuals’ needs as they near the end of life. Staff members of a provider of both PACE and WPP use a checklist to identify changes in an individual’s condition. The checklist is completed at an individual’s periodic review or whenever there is a change in health status and helps inform the care team about the need to discuss end-of-life planning with the individual. Representatives of providers described the benefits of electronic medical records in promoting communication among members of the care team. Representatives of a PACE provider and a palliative care program stated that creating an electronic medical record accessible to all members of the care team facilitates communication among the team regarding the condition of each beneficiary and increases the quality of care. A palliative care provider distributed laptop computers and handheld wireless devices to all clinical staff. Using these devices, clinical staff can both access and input information when they visit an individual’s home, which keeps all staff who interact with the individual informed. Another palliative care provider shares clinicians’ notes and correspondence electronically, which enhances communication. Representatives of a hospice-based palliative care provider in Oregon stated that the physicians they work with are more comfortable discussing end-of-life issues with their patients since the 1997 enactment in Oregon of the Death with Dignity Act, which focused attention in the state on end-of-life care and the options available to individuals. 
Representatives of a palliative care program operated by a health care system we interviewed stated that passage of this act helped create an environment in Oregon where end-of-life issues are discussed more openly. The WPP, PACE, and palliative care providers initiate or encourage advance care planning to assist individuals with planning for the end of life, making decisions about future medical care, and sharing information with family members. Representatives of all the PACE providers stated that they assist individuals with advance care planning tasks, such as completing advance directives and identifying health care proxies, that is, those who can make health care decisions on behalf of the individuals. Representatives of a provider of PACE and WPP stated that each individual begins the advance care planning process as soon as he or she is admitted to the program. This provider’s staff members work with individuals to identify health care proxies and persuade individuals to communicate their decisions to family members. Representatives of a WPP provider stated that the staff have monthly conversations with individuals about their end-of-life choices, such as do-not-resuscitate orders. Representatives of another WPP provider stated that the care team encourages individuals and their families to plan for the end of life, and representatives of a provider of both PACE and WPP discuss with individuals all the medical services and interventions they wish to receive. Officials of palliative care programs stated that they offer assistance to individuals enrolled in their programs in completing advance directives and informing their families of any decisions they have made about their end-of-life care. One palliative care program operated by a hospice assists individuals with completing advance directives and informing family members of their decisions for the end of life. 
Palliative care programs operated by hospitals assist individuals with advance care planning tasks such as completing advance directives and making medical decisions. Representatives of a PACE provider in Oregon stated that they use Physician Orders for Life-Sustaining Treatment (POLST) forms to assist all individuals in their program with advance care planning. The POLST form is a physician’s order that communicates which medical interventions should be performed in the event of a health emergency. Similar to other advance directives, the POLST form allows individuals to document their choices regarding the use of life-sustaining procedures; a representative in Oregon stated that, unlike other advance directives, POLST forms are physician orders, which are more effective at communicating an individual’s preferences to providers, particularly when the individual is transferred across health care settings. A representative of an Oregon PACE provider stated that the POLST form makes an individual’s wishes clear and, because it is in the form of a physician’s order, legally protects medical personnel, including emergency medical technicians, when they carry out an individual’s documented choices during an emergency. Representatives of providers we interviewed described challenges they encounter to delivering some of the key components of end-of-life care. They described difficulties delivering supportive services and family and caregiver support to rural residents because of travel distances, fewer community-based service options, and an inability to hire adequate numbers of staff in rural areas. Representatives of providers also stated that they believe physician training and practices can inhibit the provision of pain and symptom management and advance care planning to individuals nearing the end of life. 
Representatives of providers we interviewed described difficulties delivering supportive services and family and caregiver support to rural residents because of travel distances, lack of community-based services, and insufficient numbers of nursing and personal care staff in rural areas. Representatives of providers we interviewed stated that significant distances between residents in rural areas make it difficult to provide family and caregiver support, such as respite care, and supportive services, such as personal care services. The length of time it takes for personal care staff to travel between individuals in rural areas decreases the number of services the providers can deliver in a day. In addition, representatives of providers told us that increases in fuel costs have affected how many services they can provide. Representatives of providers we interviewed also described how unpaved roads and inclement weather can increase travel time or prevent travel entirely when serving rural residents. Representatives of one provider stated that the challenge of providing transportation in rural areas is one of the barriers that has prevented the provider’s expansion into rural areas of the state. Representatives of providers we interviewed also cited the limited availability of certain services in rural areas as a challenge to serving individuals nearing the end of life who reside in those areas. Representatives of providers described difficulties in delivering supplies and medications to rural residents. For example, representatives of a hospice-based palliative care provider noted that the pharmacy service it contracts with to provide home delivery of medications cannot provide daily delivery in very rural areas and inclement weather may further delay deliveries. 
To address the problem, this provider has contracted with local rural pharmacies to provide emergency medication; however, in a two-county area, only one pharmacy is open 24 hours a day, making it difficult for individuals to access medications in an emergency. Representatives of another hospice-based palliative care provider and a WPP provider stated that they are sometimes unable to coordinate supportive services, such as meal delivery and personal care services, for individuals in rural areas because they are unable to locate providers of these services in these regions. In addition, representatives of providers noted that a lack of transit services makes it difficult to provide individuals living in rural areas with transportation to medical appointments or day centers. Representatives of providers stated that an insufficient number of nursing staff and personal care workers in rural areas makes it difficult to provide end-of-life care to those residents. For example, representatives of a provider of both PACE and WPP noted that it is often difficult to hire staff to work in more remote geographic areas, and they cited this as a barrier to expanding the provider’s services into additional rural areas of the state. In addition, representatives of one hospice-based palliative care provider that serves a remote rural area stated that they have been unable to maintain adequate numbers of health care workers to provide services to its patients because such workers are increasingly choosing to relocate to urban areas. Representatives of another hospice-based palliative care provider in a rural region also stated that they have difficulty finding qualified staff to fill these positions. State officials we interviewed said that PACE is not a feasible option in rural areas because of the requirement that the providers operate an adult day center. 
Representatives of a provider that formerly participated in PACE stated that it was difficult to remain financially solvent as a PACE provider in a rural community; the provider ultimately ended its participation because the community it served did not have enough eligible individuals to justify the expense of an adult day center. Also, in rural areas, the distance to the PACE adult day center from the residences of individuals enrolled in the program can be a challenge for the PACE program’s transportation services. Representatives of providers we interviewed described how they believe physician training and practices may present challenges to providing pain and symptom management and advance care planning to individuals nearing the end of life. Representatives of providers stated that physicians often do not receive adequate training in pain and symptom management. A physician we interviewed who is the director of a hospital-based palliative care program stated that he believes that, because physicians lack training to recognize the need for pain and symptom management, individuals nearing the end of life often have difficulty accessing such services. Representatives of other palliative care providers we interviewed agreed that a lack of physician training in pain and symptom management is a challenge to its provision. Representatives of a hospital-based palliative care provider believe that many medical schools do not provide sufficient training for physicians in pain and symptom management. Representatives of a palliative care provider operated by a health care system stated that they believe most physicians are not trained to provide pain and symptom management to individuals nearing the end of life. 
A recent article in the New England Journal of Medicine (NEJM) has also noted that physicians receive little or no training in the use of medications for pain and symptom management. Representatives of providers we interviewed also cited physician practices as challenges to individuals receiving pain and symptom management services as they near the end of life. A representative of a hospital-based palliative care provider stated that some physicians are reluctant to refer individuals to the program so that they can receive pain and symptom management because these physicians do not understand or recognize the need for such care. Representatives of providers we interviewed also described how, in their experience, physicians may fail to address pain in a timely manner. A representative of a hospital-based palliative care provider stated that patients’ severe pain may go untreated while physicians, intent on finding the cause of the pain, order extensive diagnostic testing. Representatives of a palliative care program operated by a health care system stated that some physicians perform aggressive medical procedures on individuals nearing the end of life. These representatives stated that they believe some physicians view providing pain and symptom management as “giving up” on a patient. Representatives of providers we interviewed described how physicians often do not engage in advance care planning with individuals nearing the end of life. For example, representatives of a hospice-based palliative care provider stated that they believe physicians do not spend enough time talking with individuals about end-of-life care options such as hospice. As was recently reported in NEJM, physicians receive little training in the compassionate discussion of end-of-life issues. Furthermore, ALTCS officials stated that, in their experience, physicians often do not inform individuals about advance directives. 
Representatives of a hospice-based palliative care provider stated that physicians sometimes provide individuals with incorrect information about care options for the end of life. Representatives of a PACE provider told us that some physicians resist ending curative care to allow individuals nearing the end of life to receive only supportive care services, and the article published in NEJM reported that some physicians regard the death of a patient as a professional failure. In commenting on a draft of this report, CMS stated that the report is a useful description of a diverse set of provider types in very different settings, each of which provides useful services to persons coming to the end of life. CMS noted that the report is especially helpful as a time approaches when more Americans will be living with serious and eventually fatal chronic conditions. CMS also stated that it was useful that our report included individuals living with serious chronic conditions who might live for some years. However, CMS suggested that we avoid using the term terminal illness when referring to such individuals. We note that, in our draft report, we used this term only in the context of discussing the Medicare hospice benefit, which is, by definition, a benefit for individuals with terminal conditions. CMS also stated that we should mention other important components of end-of-life care, including, for example, having the appropriate medical diagnosis and having all possible opportunities for a meaningful life. However, these issues are beyond the scope of our report. CMS also provided technical comments, which we incorporated where appropriate. CMS’s comments are reprinted in appendix I. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. At that time, we will send copies of this report to the Administrator of CMS and to other interested parties. 
We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, key contributors to this report were Nancy A. Edwards, Assistant Director; Beth Cameron Feldpush; Krister Friday; John Larsen; and Andrea E. Richardson.
Approximately 28 percent of all Medicare spending in 1999 was used to provide care for beneficiaries in the last year of their lives. The Medicare hospice benefit is specifically designed for end-of-life care but is an elected benefit for individuals who have a terminal diagnosis with a prognosis of 6 months or less if the disease runs its normal course. GAO was asked to identify examples of programs that provide key components of end-of-life care. Specifically, GAO (1) identified key components of end-of-life care, (2) identified and described how certain programs incorporate key components of end-of-life care, and (3) described the challenges program providers have identified to delivering the key components of end-of-life care. To identify the key components of end-of-life care, GAO relied on studies by the Institute of Medicine (IOM) and the Agency for Healthcare Research and Quality (AHRQ). To identify and describe programs that implement these key components and describe the challenges providers of these programs face, GAO conducted site visits to four states, Arizona, Florida, Oregon, and Wisconsin, that, in addition to other criteria, demonstrated a high use of end-of-life services. We interviewed officials of federal, state, and private programs in these four states that provide care to individuals nearing the end of life. The IOM and AHRQ studies identified the following key components in providing care to individuals nearing the end of life: care management to coordinate and facilitate service delivery; supportive services, such as transportation, provided to individuals residing in noninstitutional settings; pain and symptom management; family and caregiver support such as respite care; communication among the individuals, families, and program staff; and assistance with advance care planning to aid individuals with making decisions about their future care. 
The programs GAO identified in the four states incorporate key components of end-of-life care when delivering services to individuals nearing the end of life. These programs use care management, either through a case manager or an interdisciplinary care team of health care professionals, to ensure continuity of care and the delivery of appropriate services. The programs also provide supportive services, such as personal care services or meal delivery, to assist individuals in their homes. Pain and symptom management is provided by these programs to treat pain and other symptoms of an individual who is seriously ill. These programs provide family and caregiver support through services that alleviate demands on the caregiver and by providing bereavement support for family members. The programs foster communication with individuals and family members to plan care that reflects each individual's choices. In addition, these programs use tools such as electronic medical records to facilitate communication among staff members. The programs GAO identified initiate and encourage advance care planning for the end of life and assist individuals with making decisions about future medical care, such as completing advance directives and identifying health care proxies, that is, those who can make health care decisions on behalf of the individual. Providers of the programs GAO identified described challenges they encounter to delivering some of the key components of end-of-life care. Providers described difficulties delivering supportive services and family and caregiver supports to rural residents because of travel distances, fewer community-based service options, and an inability to hire adequate numbers of staff in rural areas. Providers also stated that, in their experience, physician training and practices can inhibit the provision of pain and symptom management and advance care planning to individuals nearing the end of life. 
A recent article published in a medical journal GAO reviewed identified similar issues with physician training and practices. The Centers for Medicare & Medicaid Services (CMS), the agency that administers Medicare and Medicaid, commented that the report is a useful description of diverse provider types that deliver services to persons coming to the end of life. CMS noted that the report is especially helpful as a time approaches when more Americans will be living with serious and eventually fatal chronic conditions.
Plasma is the liquid portion of blood, containing nutrients, electrolytes (dissolved salts), gases, albumin, clotting factors, hormones, and wastes. Many components of plasma are used therapeutically, including treatments for the trauma of burns and surgery and for replacing blood elements that are lacking as a result of disease, such as hemophilia. Table 1 lists the plasma components that are currently available in the United States and their primary uses. The various plasma-derived products are purified from the plasma pool by a process known as fractionation. This process separates plasma proteins based on the inherent differences of each protein. Fractionation involves changing the conditions of the pool (for example, the temperature or the acidity) so that proteins that are normally dissolved in the plasma fluid become insoluble, forming large clumps called precipitate. The insoluble protein can be collected by spinning the solution at high speeds or through filtration. One of the most effective ways of carrying out this process is the addition of alcohol to the plasma pool while simultaneously cooling the pool. For this reason, the process is sometimes called cold alcohol fractionation or ethanol fractionation. This procedure is carried out in a series of steps so that a single pool of plasma yields several different protein products such as albumin and immune globulins. It is estimated that each year, as many as a million patients rely on products manufactured from human plasma: more than 400,000 are given albumin, 15,000 to 18,000 are given factor VIII, 3,000 to 5,000 receive factor IX, greater than 20,000 receive immune globulin intravenous (IGIV), and an estimated 100,000 to 500,000 receive immune globulin intramuscular (IGIM). Additional patients receive a variety of hyperimmune globulins and other specialized products. 
Plasma used for plasma-derived products manufactured and distributed in the United States can only be collected at facilities registered with the FDA. Centers require donors to provide proof that they are legally in the United States and have a local permanent residence. About 85 percent of plasma is collected from paid donors in a commercial setting and is known as source plasma. Through a process known as plasmapheresis, the plasma is removed and the red cells are reinfused into the donor. The remaining 15 percent of plasma is collected from volunteer donors and is known as recovered plasma. From the whole blood, plasma is “recovered”—that is, the red cells, platelets, and cryoprecipitate are separated for transfusion and the unused plasma is either transfused as plasma or sent for further manufacturing into plasma products. On the basis of a European Union policy position, many European countries are working toward self-sufficiency in plasma products using an all-volunteer system, although most countries continue to depend on U.S. products made from paid donors and on source plasma obtained from U.S. donations. Units of plasma collected as source plasma contain approximately 825 milliliters, while units of recovered plasma from whole blood donations contain approximately 250 milliliters. Thus, more than three times as many donated units of recovered plasma are required to make up a pool of equal volume to one made up of only source plasma. Approximately 370 paid plasma collection centers annually collect about 11 million liters of plasma from 1.5 million donors, involving a total of approximately 13 million separate donations each year. The industry, through its trade organization, the American Blood Resources Association, maintains a limited national donor deferral registry that is checked for each first-time donor. This is a list of known donors who are unsuitable for further donations because of positive test results. 
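The unit-volume comparison above follows from simple arithmetic. As an illustrative check only (the constant and function names below are ours; the per-unit volumes are the approximate figures cited in this report), this sketch computes how many donated units each collection method requires to assemble a pool of a given volume.

```python
import math

# Approximate per-unit volumes cited in this report, in milliliters.
SOURCE_PLASMA_ML = 825     # paid donation collected via plasmapheresis
RECOVERED_PLASMA_ML = 250  # plasma recovered from a whole blood donation

def units_needed(pool_liters, unit_ml):
    """Whole donated units required to assemble a pool of the given volume."""
    return math.ceil(pool_liters * 1000 / unit_ml)

# More than three times as many recovered units are needed per pool:
ratio = SOURCE_PLASMA_ML / RECOVERED_PLASMA_ML  # 3.3
```

For a hypothetical 1,000-liter pool, the sketch gives 1,213 source plasma units versus 4,000 recovered plasma units, consistent with the report's "more than three times" figure.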
Repeat donors’ records are checked at the plasmapheresis center where the plasma is removed. Most of these centers also ensure that donors are not migrating from one center to another over the 48-hour minimum donation interval. The vast majority of source plasma is processed by four companies: Alpha Therapeutic Corporation, Baxter Healthcare Corporation, Bayer Corporation, and Centeon LLC. An additional 1.8 million liters of plasma are collected annually from approximately 8 million volunteer (not paid) donors who contribute 12 to 13 million whole blood donations. Volunteer donors give blood at American Red Cross blood centers and independent blood centers represented by America’s Blood Centers; the plasma is recovered for further manufacturing. Plasma collected by the American Red Cross is fractionated under contract by Baxter Healthcare and the Swiss Red Cross and returned to the American Red Cross for distribution. Plasma collected at member facilities of America’s Blood Centers is currently sold only to the Swiss Red Cross, which manufactures the various plasma products and sells them through U.S. distributors. Paid donors typically receive between $15 and $20 for the 2 hours required to remove whole blood, separate the plasma from the cells and serum, and reinfuse the latter back into the donor. Source plasma donors may donate once every 48 hours but no more than twice a week. Whole blood donors can donate only once every 56 days since their red cells are not reinfused as is done with the paid donor. Donor screening is designed to prevent the donation of blood by persons who have known risk factors or other conditions such as low blood pressure. All prospective donors, both paid and volunteer, are screened for medical history and risk behaviors. High-risk donors, those whose blood or plasma may pose a health hazard, are encouraged to exclude themselves. Everyone who seeks to donate plasma must answer a series of behavioral and medical questions. 
If the answers indicate high risk, the prospective donor is deferred from donating. The screening requirements are completed before the donor is allowed to give plasma. Additionally, paid donors must pass an annual physical examination and a brief medical examination each time they donate. Similarly, volunteer donors undergo a brief medical examination each time they donate. The American Blood Resources Association’s National Donor Deferral Registry is one method by which the plasma industry has attempted to ensure that donors who are presenting to donate for the first time at a plasma center are checked for past deferrals at other centers. The American Red Cross has a similar system that is a national list of those deferred through their blood collection system. Each member facility of America’s Blood Centers maintains its own donor deferral list against which donors are checked. All donors are tested for certain viruses known to be transmissible through blood, including HBV, HCV, and HIV. The specific screening tests check for the presence of hepatitis B surface antigen (HBsAg), antibodies to hepatitis C (anti-HCV), HIV-1 antigen (Ag), and antibodies to HIV types 1 and 2 (anti-HIV). Donors with repeatedly reactive test results are rejected from further donations. (See app. II for more information on testing procedures.) For units found to be reactive on HIV tests, the positive units and all previously donated plasma units not pooled for manufacture in the preceding 6 months are retrieved, and the professional services that received the plasma products are notified according to federal regulations (21 C.F.R. 610.46). All of the plasma fractionation companies have also received permission from the FDA to begin clinical trials of the polymerase chain reaction (PCR) technique, a more sensitive test that is now available, to detect viral material for HIV, HBV, and HCV. 
PCR is used to amplify the number of copies of a specific region of DNA or RNA in order to produce enough DNA or RNA to be adequately tested. This technique appears to be able to identify, with a very high probability, disease-causing viruses such as HIV, HBV, and HCV. Because PCR testing detects virus particles at the genetic level, infected donors can be identified days or even months sooner than if only traditional antibody or antigen testing is performed, thus shortening the window period. PCR testing is being investigated using minipools that can combine over 500 individual donations. All plasma used in the manufacturing process that undergoes PCR testing must be nonreactive for that specific test. We calculated the risk of incorporating an infectious unit of plasma into a plasma pool for HIV, HBV, and HCV for both volunteer and paid plasma donors. Overall viral marker rates for HIV, HBV, and HCV are higher among individuals who present themselves to donate at paid plasma centers than among those who come to volunteer blood centers. This is due to higher HCV rates among paid donors. Units that test positive are excluded. The incidence rate of collecting infectious units from donors who are in the window period between the time they become infected and the time they test positive is much higher among paid plasma donors than among volunteer donors. However, a number of safety initiatives have been instituted by paid plasma centers that greatly reduce the likelihood of infectious units being pooled for manufacturing. Nevertheless, the final—or residual—risk of an infectious unit entering a plasma pool remains somewhat higher for paid donors than for volunteer donors. 
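The arithmetic behind minipool screening can be illustrated with elementary probability. This is a sketch under our own assumptions, not the industry's validated protocol: if each unit is positive with probability p, a pool of N independently drawn units tests positive with probability 1 - (1 - p)^N, so when infections are rare most large pools test negative and only the occasional positive pool must be resolved into subpools.

```python
def pool_positive_probability(p, pool_size):
    """Probability that a minipool of `pool_size` independently drawn
    donations contains at least one positive unit, given per-unit rate p."""
    return 1 - (1 - p) ** pool_size

# Illustrative per-unit rate only (not a figure from this report):
# one positive unit per 100,000 donations, pooled 512 at a time.
p = 1 / 100_000
prob_512 = pool_positive_probability(p, 512)  # about 0.005, so roughly
                                              # 99.5% of pools test negative
```

This is why a highly sensitive test such as PCR is a prerequisite: the pool must still detect a single positive unit diluted across hundreds of donations.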
There are at least four potential ways in which viral agents may go undetected during donation and thus be transmitted through blood products. First, there exists a very rare chronic carrier state in which a clinically asymptomatic, yet infectious, donor will persistently test negative on a donation screening test. Second, a viral agent may have a large degree of genetic diversity so that laboratory screening tests fail to identify some infectious donors who harbor an atypical genetic variant. Third, laboratory error in performing screening tests may occur, allowing positive units to be made available for transfusion. Finally, the donor may have a negative laboratory test during the window period before the virus is detected by currently licensed screening tests. Most cases in which an infectious donation is included in a plasma pool are the result of this last circumstance. Consequently, modeling techniques have been developed to estimate the risk of incorporating these infectious window period units into the blood supply. To determine the marker rate for HIV in plasma donations, we obtained data from California’s Department of Health Services (DHS), which collects information on these rates for volunteer blood donors and paid donors at plasma collection facilities. We obtained information on HIV, HBV, and HCV viral marker rates from the American Red Cross for donors who donate at its centers. The American Blood Resources Association provided us with data on repeatedly reactive test results for paid donors who donate at its centers. We adjusted these data to obtain the viral marker rates. In addition, we obtained information on incidence rates among American Red Cross and American Blood Resources Association donors to adjust for the effect of such variables as first-time versus repeat donors, the length of the interdonation interval (the time period between donations), and the number of seroconverters found among plasma donors. 
We also compared the residual risk that a potentially infectious plasma donation from a volunteer versus a paid plasma donor would actually enter a plasma pool, examining the effect of the length of the window period as well as the use of only “qualified donors” and the 60-day inventory hold program instituted by the paid plasma industry. (See app. I for the calculations we used to derive our risk estimates.) Although it has been difficult to obtain data on viral marker rates among paid plasma donors, data collected by California highlight differences between paid plasma and volunteer whole blood donors. As shown in table 2, among the 833,178 units tested, 89 units (.0107 percent) tested positive for HIV-1. Donations at plasma centers showed a higher rate of testing positive for HIV-1 than did donations at blood banks. Plasma centers had an HIV-1 rate of .0266 percent (26.6 per 100,000 units tested), while units collected at blood banks had a rate of .0032 percent (3.2 per 100,000 units tested). Thus, California plasma centers had over an eight-fold higher rate of HIV-1-positive donations than blood banks had among their volunteer donors. For both blood banks and plasma centers in California, the seroprevalence rates for HIV have decreased significantly over time. More than 7 million units were tested at California blood banks between 1990 and 1996. Over this period, HIV-1 seroprevalence among donors declined from .015 percent to .003 percent. Over 4.5 million units were tested at California plasma centers during this same time frame. The HIV-1 seroprevalence among plasma donors declined during this period from .056 percent in 1990 to .027 percent in the second half of 1996. However, while the rates of HIV are dropping in both groups, there is a consistent pattern of higher marker rates among paid donors than among volunteer donors. (See fig. 1.) 
Although the California data were based on similar reporting requirements and time frames for paid and volunteer donors, they only examined HIV marker rates. There was also some question as to whether multiple counting of donors may have skewed the results of the reporting. Because of these concerns, we also obtained information on marker rates from the American Red Cross and American Blood Resources Association for donors who presented themselves to donate at their respective collection centers. We obtained data from the American Red Cross on 2,954,773 volunteer whole blood donations from donors less than 60 years old (whose donations are used for plasma products) between January 1, 1996, and June 30, 1997. This includes donations that have occurred since the introduction of the HIV-1 antigen screening test, implemented on March 15, 1996. As shown in table 3, these data showed that 6.9 out of every 100,000 donations were found to test positive for HIV, while the rates were 33.4 per 100,000 for HBsAg and 112.4 per 100,000 for HCV (results from confirmatory testing). Assuming that no donation is positive for more than one virus, then 1 of every 6,549 volunteer donations is potentially infectious for HIV, HBV, or HCV. We obtained data from the American Blood Resources Association on 4.6 million paid plasma donations in the second half of 1994. These data included only repeatedly reactive test results—confirmatory testing was not performed. We have therefore adjusted these data by the rate at which repeatedly reactive donations confirm positive based on the rates seen in American Red Cross whole blood donations. As shown in table 4, these data showed that approximately 3.7 out of every 100,000 donations were positive for HIV, while the rates were 30.9 per 100,000 for HBsAg and 226.2 per 100,000 for HCV. 
Assuming that no donation is positive for more than one virus, then 1 of every 3,834 paid donations is potentially infectious for HIV, HBV, or HCV. These three data sets show differing viral marker rates for volunteer and paid plasma donors. The California data show much higher HIV-1 marker rates for paid plasma donors than volunteer donors. However, the data from the American Blood Resources Association show lower rates for HIV, similar rates for HBV, and higher rates for HCV than data obtained from the American Red Cross. Overall, the rates for paid donors are 1.7 times higher than the rates for volunteer donors, driven by the higher HCV rates among paid donors. The source plasma industry has recently introduced voluntary standards aimed at reducing the viral risks posed by two categories of paid plasma donations: donations from one-time donors and donations from donors who may be in the window period. One-time donors are a concern because some data show that the rates of viral infection are much higher among such donors. These individuals may not be aware that they are infected or may be test-seeking. Donors in the window period are a concern because they may not be aware of their infection and the screening tests will not detect the infection. The first voluntary initiative, implemented in July 1997, eliminates the use of plasma from one-time donors. This standard requires that no units of plasma can be accepted for further processing unless the donor has successfully passed at least two health history interviews and two panels of all required screening tests within a 6-month period. Qualified donors are those who have met these criteria. Applicant donors, on the other hand, are individuals presenting themselves who have not been qualified as donors in the past 6 months. This standard on first-time donors does not apply to volunteer donors. 
Neither the American Red Cross nor America’s Blood Centers imposes such a requirement for the use of plasma recovered from whole blood donations. Because the patterns of donation are very different for volunteer whole blood donors (who can donate no more frequently than once every 8 weeks) compared with paid plasma donors (who can donate as often as twice a week), the volunteer sector does not view a restriction that would require holding plasma until a donor returns as a practical requirement. In fact, the average interval between donations for an American Red Cross donor is about 5 months. A second industry initiative is an inventory hold program that holds source plasma donations for 60 days. During this time, if a donor seroconverts and subsequently tests positive—or is otherwise disqualified—the earlier donation can be retrieved from inventory and destroyed. This standard, however, does not establish a true quarantine program that would exclude units from donors in the window period of infection, when viral infection cannot be determined. A donor who was within the window period could return 2 days after the initial donation, pass both health history interviews and screening tests, and contribute infected units that would be used after 60 days, if the donor were not tested again at a time outside the window period. The data provided to us for estimating risks for source plasma donations take this possibility into account. Furthermore, the 60-day inventory hold period does not appear to be adequate for all viruses under consideration, based on published data. The window periods for HIV, HBV, and HCV, using detection of seroconversion as an end point, are approximately 22 days, 59 days, and 82 days, respectively. Thus, the 60-day hold period does not encompass the window period for HCV and is barely within the limit for HBV. However, the majority of window period units would be interdicted, as most would fall within the 60-day hold period. 
PCR testing would shorten the window periods for these viruses to approximately 11 days for HIV, 34 days for HBV, and 23 days for HCV. As a result, if such testing becomes available for mass screening, the 60-day inventory hold would cover the window period for these three viruses. We found the incidence rates of HIV, HBV, and HCV infection to be much higher for paid donors than for volunteer donors. These rates include donors who pass the initial screening and donate but subsequently seroconvert and are detected at a later donation. As a result, potentially infectious units from these donors may have been incorporated into a plasma pool for manufacturing. Since prevalence rates of viral markers merely indicate the proportion of infected persons in the population at a given time, independent of when infection occurred, they do not accurately portray the chances of incorporating an infectious window period unit into a plasma pool. Thus, to calculate the risk of collecting potentially infectious units—the incidence rate—the number of individuals who are seroconverting and the time between donations for such individuals (the interdonation interval) need to be taken into account. The data used to calculate incidence rates among volunteer donors are based on approximately 1 million donations from repeat donors under the age of 60 for the American Red Cross between July 1, 1996, and June 30, 1997. The interdonation interval for these donors averaged 154 days. However, repeat donors account for only 80 percent of volunteer blood donations. Thus, incidence calculations from first-time donors also need to be taken into account to obtain an overall risk estimate of collecting an infectious window period unit. A modified screening test was used to determine incidence rates among first-time donors; it showed that first-time whole blood donors have a rate of prevalent HIV infections 2.4 times higher than that of repeat donors. 
This information is combined to estimate the total incidence among volunteer blood donors. (See table 5.) We also obtained data from the American Blood Resources Association that were based on all of the approximately 4 million donations at the American Blood Resources Association-member centers over a 4-month period in the second half of 1997. The average interdonation interval among these donors was 5.3 days. The American Blood Resources Association’s qualified donor program does not collect plasma from first-time donors; therefore, no adjustment is needed for first-time donors. Table 6 shows the incidence rates among qualified source plasma donors for this period. When comparing the incidence rates between paid and volunteer plasma donors, we found that the incidence rates for HIV, HBV, and HCV were much higher for paid donors. HIV incidence rates were 19 times higher among paid donors (61.8 versus 3.3 for volunteer donors), while HBV and HCV rates were 31 times (245.5 versus 8.0) and 4 times higher (63.5 versus 14.9), respectively. Calculating the chances that an infectious unit will be made available for pooling includes factoring in the length of the window period expressed as a fraction of a year. Calculating this residual risk is a more statistically appropriate way to determine the true impact of window period donations. We calculated the residual risk of a potentially infectious unit being made available for pooling for units collected from volunteer donors. These estimates are shown in table 7. The estimated adjusted risk per million donations—that is, the residual risk—represents the incidence rate multiplied by the window period for each virus. The resulting point estimate for the risk of pooling an HIV seronegative unit from a window period donation is 1 in 689,655. For HBV and HCV, the corresponding estimates are 1 in 77,220 and 1 in 29,850, respectively. 
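As a rough check on this arithmetic, the residual-risk calculation described above can be sketched in a few lines. This is a simplification of the full model in app. I (it omits the first-time-donor and nonreturning-donor adjustments), using the volunteer-donor incidence rates from table 5 and the seroconversion window periods cited earlier:

```python
def one_in_n(incidence_per_100k_person_years, window_days):
    """Return N such that the residual risk is roughly 1 in N donations:
    risk per donation ~= incidence rate x window period as a fraction of a year."""
    risk = (incidence_per_100k_person_years / 100_000) * (window_days / 365)
    return 1 / risk

# Volunteer-donor inputs: HBV incidence 8.0 per 100,000 person-years with a
# 59-day window; HCV incidence 14.9 with an 82-day window.
hbv = one_in_n(8.0, 59)    # ~1 in 77,300 (report: 1 in 77,220)
hcv = one_in_n(14.9, 82)   # ~1 in 29,900 (report: 1 in 29,850)
```

Note that the HIV point estimate of 1 in 689,655 is not reproduced by plugging in the 22-day seroconversion window cited earlier; it reflects the shorter window achieved once HIV-1 antigen screening is taken into account.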
When combined, we calculated the risk of incorporating an infectious HIV, HBV, or HCV window period unit into a plasma pool from volunteer donors at 1 in every 20,872 units. Some researchers believe that an additional factor should be taken into account when determining the risks associated with HBV. This is because individuals who become infected with HBV show different patterns of response over time on the HBsAg test. (See app. I for a more complete discussion.) If such an adjustment is taken into account, the estimated total incidence per 100,000 person-years for HBsAg would be 17.9, with an estimated adjusted risk per million donations of 28.9 and a point estimate of 1 in 34,614. This would yield an overall risk of incorporating an infectious window period HIV, HBV, or HCV unit into a plasma pool of 1 in every 15,662 units (instead of 1 in 20,872 without the adjustment). We also calculated the estimated residual risk for paid plasma donors, with and without a 60-day inventory hold. (See table 8.) The point estimate for the risk of collecting an HIV window period unit at a paid plasma donation center is 1 in 26,846. For HBV and HCV, the corresponding estimates are 1 in 2,520 and 1 in 7,008, respectively. Overall, the risk of incorporating an infectious HIV, HBV, or HCV window period unit into a plasma pool without taking into account the 60-day inventory hold program was 1 in 1,765 for paid plasma donors—12 times the risk for volunteer donors. To obtain an overall residual risk of incorporating a potentially infectious window period unit into a plasma pool from paid donors, the American Blood Resources Association data also took into account the effect of the 60-day inventory hold program for source plasma. This resulted in an overall risk estimate that would allow for the interdiction of numerous infectious window period units captured by the 60-day hold program. 
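The combination step for the volunteer-donor estimates can be checked the same way: convert each per-virus point estimate to a risk per million donations, sum, and invert. A minimal sketch using the report's figures:

```python
def per_million(one_in_n):
    # Convert a "1 in N" point estimate to a risk per million donations.
    return 1e6 / one_in_n

# Volunteer-donor point estimates for HIV, HBV, and HCV.
hiv = per_million(689_655)   # ~1.45 per million
hbv = per_million(77_220)    # ~12.95 per million
hcv = per_million(29_850)    # ~33.50 per million

overall = 1e6 / (hiv + hbv + hcv)        # ~1 in 20,900 (report: 1 in 20,872)

# With the HBsAg adjustment, the HBV risk becomes 28.9 per million.
overall_adj = 1e6 / (hiv + 28.9 + hcv)   # ~1 in 15,700 (report: 1 in 15,662)
```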
The resulting point estimate for the risk of pooling an HIV seronegative unit that is from a window period donation is 1 in 680,272. For HBV and HCV, the corresponding estimates are 1 in 18,574 and 1 in 27,824, respectively. Thus, the overall residual risk for paid plasma for HIV, HBV, and HCV is 1 in 10,959, compared with 1 in 20,872 for volunteer donors. This would mean that approximately 5.5 infectious units would be included in every 60,000 paid donations, whereas about 2.9 infectious units would be included in every 60,000 volunteer donations. Using the estimates based on the adjustment for HBV among volunteer donors (1 in 15,662) would mean that 3.8 infectious units would be included in every 60,000 volunteer donations. When comparing the overall residual risk of incorporating an infectious window period unit into a plasma pool for each of the three viruses examined in this study, the rates for HIV for volunteer and paid plasma donors are virtually identical (1 in 689,655 and 1 in 680,272, respectively); the rates for HCV are also similar (1 in 29,850 to 1 in 27,824). The major difference can be found for donors infected with HBV, where the residual risk for volunteer plasma donors is 1 in 77,220 compared with 1 in 18,574 for paid plasma donors. But taking into account the adjustment factor for HBV in volunteer plasma donors, the adjusted HBV estimate for volunteer donors becomes 1 in 34,614. Thus, while the risk for HBV transmission is greater for paid donors, the overall residual risks for the three viruses are closer once the 60-day hold is taken into account (1 in 15,662 for volunteer plasma donors versus 1 in 10,959 for paid plasma donors). This difference in the overall residual risk is statistically significant. Thus, the data suggest that the current risks of incorporating an infectious unit into a plasma pool remain somewhat higher for paid donors. (See table 9.) 
Concerns have been raised about the size of plasma pools: the larger the pool, the more donors a recipient of a product is exposed to, and the greater the chance that a potentially infectious unit is included. In response to these concerns, manufacturers have recently taken steps to reduce the size of the plasma pools they use for producing plasma derivatives. Modeling techniques indicate that this effort can have an impact on infrequent users by minimizing their exposure to a certain number of donors. However, for frequent users of plasma products, such as hemophilia patients, this limit has a negligible impact due to the large number of different pools to which they are exposed throughout their lifetime. The different proteins that make up the various components of plasma are present in only minute quantities in a single donation of plasma. Therefore, most plasma product manufacturing facilities have been designed to work at large scales, using large plasma pools made up of donations from numerous donors, in order to permit manufacturing of sufficient quantities of products. The number of units combined into a common mixture for processing is known as the pool size. There has been discussion by the plasma industry, FDA, consumer groups, and some Members of Congress regarding the potential benefits of reducing the sizes of pools used by manufacturers to produce finished plasma products. While no units of plasma known to be positive for viruses are combined in plasma pools for production, infectious units may escape detection. A single unit has the potential to contaminate an entire pool. The larger the number of donors who contribute plasma to a pool, the greater the possibility that there will be at least one infectious unit included. Based on the estimates we calculated above, a pool of as few as 11,000 donations can be expected, on average, to include one infectious unit. 
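The 11,000-donation figure is an expected-value statement. A sketch, using the paid-donor residual risk of 1 in 10,959 from the preceding section and treating donations as independent, of both the expected count and the chance that such a pool contains at least one infectious unit:

```python
p = 1 / 10_959                       # residual risk per donation (paid donors)
pool = 11_000                        # donations combined into one pool

expected = pool * p                  # ~1.0 infectious unit expected per pool
at_least_one = 1 - (1 - p) ** pool   # probability of >= 1 infectious unit, ~63%
```

The expected count reaches 1 at a pool of roughly 11,000 donations, even though any individual pool of that size still has about a one-in-three chance of containing no infectious unit at all.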
As recently as a year ago, FDA believed that initial fractionation pools contained 1,000 to 10,000 source plasma units or as many as 60,000 recovered plasma units. However, in response to a congressional inquiry, FDA obtained information from plasma manufacturers showing that, after adjusting for the combination of intermediates, pooling of material from several hundred thousand donors for single lots of some products sometimes occurred. For example, albumin can be added during intermediate processing steps or to a final product, such as factor VIII, for use as an excipient or stabilizer. This albumin often has been derived from another plasma pool that contains donations from donors who are not part of the original pool. As a result of the concerns raised about pool size, the four major plasma fractionators have voluntarily committed to reducing the size of plasma pools (measured by total number of donors) to 60,000 for all currently licensed U.S. plasma products, including factor VIII, factor IX, albumin, and IGIV. This measurement takes into account the composition of starting pools, the combining of intermediates from multiple pools, and the use of plasma derivatives as additives or stabilizers in the manufacturing process. However, prior production streams are still being processed and distributed; as a result, products distributed through the end of 1998 may have been produced from pools that exceeded the 60,000-donor limit. The American Red Cross has also chosen to voluntarily reduce the size of the plasma pools from which its products are manufactured. As a policy, the American Red Cross has a 60,000-donor limit for plasma products that are further manufactured by Baxter Healthcare. Seventy-five percent of all American Red Cross plasma manufactured by the Swiss Red Cross is presently at the 60,000-donor limit, with plans to have all production at that level in the near future. 
Modeling techniques have been used to determine the degree of infectivity present in plasma pools of varying sizes. One major study using such a technique found that limiting the number of donors in a pool may only be beneficial for infrequent recipients. For example, the researchers calculated that for an infectious agent with a prevalence of 1 in 500,000 (such as a rare or emerging virus), a pool made up of 10,000 donations would yield a 2 in 100 chance of exposure to that agent for a one-time recipient. For frequent users of plasma products (100 infusions), this same pool size of 10,000 would yield an 86 in 100 chance of exposure to that agent, based on an assumption that the products would come from different pools. This effect is not significantly decreased by reducing the number of donors in a pool. Table 10 shows the effect of manufacturing scale on risk of exposure. These modeling data suggest that smaller plasma pool sizes will reduce the likelihood of transmission of viral agents to infrequent users of plasma products but will not have a major effect on those who are frequent recipients of such products. It is also important to note that risk of exposure does not always equate with risk of infection. In fact, risk of exposure is always greater than or equal to risk of infection. For example, the recent transmission of HCV by a plasma derivative that had not undergone viral inactivation procedures showed that the risk of seroconversion of recipients of this product increased with the number of positive HCV lots infused and the quantity of HCV viral material infused. However, not all recipients were infected; the highest percentage of seroconversions seen with the highest levels of HCV virus infused did not exceed 30 percent. 
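The exposure figures from the modeling study above can be reproduced with a standard independent-donation assumption; this is a sketch, and the published model may differ in detail:

```python
prevalence = 1 / 500_000   # assumed prevalence of a rare infectious agent
pool = 10_000              # donations combined into each pool

# One-time recipient: chance that the single pool contains the agent.
p_single = 1 - (1 - prevalence) ** pool    # ~0.02, i.e., 2 in 100

# Frequent user: 100 infusions, each assumed to come from a different pool.
p_frequent = 1 - (1 - p_single) ** 100     # ~0.86, i.e., 86 in 100
```

Because the frequent user's cumulative exposure is driven by the number of independent pools encountered, shrinking each pool does little to change `p_frequent`, which matches the study's conclusion that smaller pools mainly benefit infrequent recipients.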
Seroconversion may not have been observed in all recipients for two reasons: (1) the recipient’s dose and (2) the reduction of infectiousness resulting from steps in the manufacture of the product in addition to viral removal and inactivation, such as duration of storage. Since it is possible that certain infectious units could make it through the donor screening, deferral, and testing process, manufacturers have introduced additional steps in the fractionation process to inactivate or remove viruses and bacteria that may have made their way into plasma pools. These techniques virtually eliminate enveloped viruses, such as HIV, HBV, and HCV. However, they are only partially effective against nonenveloped viruses, such as HAV and human parvovirus. All plasma components listed in table 1 undergo viral inactivation or removal steps during the manufacturing process. To be effective, inactivation techniques must disrupt the virus, rendering it noninfectious. The two main inactivation techniques are heat treatment and solvent-detergent treatment. Heat treatment is accomplished either by exposing the freeze-dried product to dry heat or by heating it while suspended in a solution. A variant heats the completely soluble liquid product with the addition of various stabilizers, such as sucrose and glycine. The second technique, solvent-detergent washing, exposes the product to an organic solvent to dissolve the lipid coat of viruses, rendering them inactive without destroying the plasma-derived products. The lipid membrane contains critical viral proteins needed for infection of host cells. Disrupting the viral lipid envelope renders the virus noninfectious. However, solvent-detergent inactivation is only partially effective in eliminating non-lipid-coated viruses, such as HAV or human parvovirus. To disable the virus without inactivating plasma derivatives, a delicate balance in these procedures must be maintained. 
Heat and chemicals are particularly damaging to plasma proteins. A number of potentially safer methods are in use or under investigation. These include the use of filters to remove virus particles on the basis of the size of the virus; antibodies to capture the desired protein, while the viruses and unwanted components are washed away; and irradiation to inactivate viruses. Virucidal agents that can be removed during further manufacturing and exposure to ultraviolet light may also be safer methods for disabling viruses. Genetic engineering techniques are also being used to produce recombinant factors VIII and IX—that is, the genes for these proteins have been cloned so that the proteins can be produced in the laboratory. These products have, so far, been found free of human viruses. However, manufacturing of these recombinant products may include the use of human-derived products during production or as excipients in the final container. FDA has approved recombinant factors VIII and IX. Determining the effectiveness of these different procedures is accomplished by assessing the amount of viral clearance obtained through a particular inactivation or removal process. It is based on the amount of virus that is killed or removed and, therefore, the extent to which the manufacturing process eliminates viruses. Individual manufacturing steps can be specifically designed for viral clearance or they may be intended primarily as a purification process that will also assist in killing or removing viral agents. To meet FDA approval of their particular inactivation or removal technique, manufacturers must separately validate each clearance step. The viral inactivation and removal steps currently in use have all been demonstrated to reduce the levels of virus and, in many cases, likely eliminate them. (See app. III for a more complete discussion of viral clearance.) Even when the virus is not completely eliminated, a significant reduction in viral load is of value. 
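Clearance across separately validated, independent steps is conventionally expressed in log10 reduction factors that are summed. A sketch with purely illustrative values (the report gives no specific clearance figures here; both the step values and the starting load below are hypothetical):

```python
# Hypothetical log10 reduction factors for two independent, validated steps
# (e.g., solvent-detergent treatment and dry heat).
steps_log10 = [4.0, 5.0]
total_log10 = sum(steps_log10)        # 9 logs of combined clearance

initial_load = 1e6                    # hypothetical infectious doses per unit
residual_load = initial_load / 10 ** total_log10   # 1e-3 doses per unit
```

On these assumed numbers, a starting load of a million infectious doses is reduced to far less than one dose per unit, which is why even partial clearance steps are worth validating individually.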
While theoretically even a single virus is capable of causing infection, research has shown that infection is much more likely to occur with higher concentrations of virus. As a result of these techniques, there have been no documented cases of HIV, HCV, or HBV transmission since 1988 for plasma products that were properly inactivated. Although viral inactivation and removal techniques have been shown to be highly effective, they are only useful if the steps in the manufacturing process are carried out properly. Recent FDA inspections of plasma fractionation facilities have found numerous violations of current good manufacturing practices. Without strict adherence to these practices, the safety of plasma products could be compromised. The objective of good manufacturing practices is to ensure that plasma products are safe, effective, adequately labeled, and possess the quality purported. To achieve this goal, plasma manufacturers should operate in compliance with applicable regulations and principles of quality assurance. To ensure that manufacturing processes, including inactivation procedures, follow current good manufacturing practices, FDA is authorized to inspect plasma fractionation establishments. If well-documented deficiencies show that the manufacturer does not conform to the standards in its license or the regulations, such that the safety and purity of the product is not ensured and there is a danger to health necessitating immediate corrective action, FDA may pursue an action to suspend the facility’s license. When deficiencies are noted during an inspection, FDA may also issue a warning letter to the facility. A warning letter does not suspend operations but rather gives the facility an opportunity to correct deviations. 
A warning letter acts as notification to a firm that FDA considers its activities to be in violation of statutory or regulatory requirements and that failure to take appropriate and prompt corrective action may result in further action by FDA. Recent inspections conducted at the four major fractionation companies found numerous deficiencies in each company’s adherence to current good manufacturing practices and resulted in consent decrees with two of the companies. (See table 11.) Many of the facilities slowed production as the firms reallocated resources to work on their corrective actions. The consent decree with Centeon required the company to cease distribution of all but two of its products while it brought its manufacturing standards into compliance with FDA statutes and regulations. In May 1997, FDA authorized the distribution of Centeon’s products from the facility. In a subsequent inspection, completed in July 1998, FDA found that Centeon had failed to fully comply with the consent decree and was notified to immediately cease manufacturing, processing, packing, holding, and distributing all biological and drug products manufactured at its facility. However, exceptions could be made for products deemed medically necessary. Examples of observations found by FDA inspectors during inspections of various plasma fractionation facilities included the following:
- In-house developed software that had not been validated was being used for finished product testing.
- Calibration and preventive maintenance records were incomplete and sometimes inaccurate.
- Reports of problems with plasma products after distribution were not reviewed and investigated in a timely manner.
- Viral inactivation processes used on several lots of factor VIII had deviations that went undetected or uncorrected.
- Albumin product lots that failed final container testing for sterility were reprocessed by repooling, and these reprocessing steps were not validated.
- The cleaning process and removal of cleaning agent residues from fractionation kettles, bulk tanks, buffer tanks, or centrifuge bowls were not validated.
- Albumin manufacturing processes were not validated, and final products did not consistently conform to release specifications. (In 1997, 54 percent of albumin lots for one company failed final container inspection due to visible evidence of proteinaceous material.)
To overcome these problems, the major fractionation companies have taken steps such as adding quality assurance, quality control, and production staff; increasing training; making capital investments at the fractionation facilities; and validating equipment and processes. FDA has also taken several actions within the last year to better ensure manufacturer compliance with current good manufacturing practices. In a previous study examining the safety of the blood supply, we had found inconsistencies in FDA inspection practices. As a result of this and another study examining FDA’s regulatory role in the field of biologics, a new inspection program was adopted. Under this program, FDA has designated two groups of investigators: one to focus on blood banks and plasmapheresis centers and another to focus on plasma fractionation and manufacturers of allergenic products, therapeutics, licensed in-vitro diagnostics, and vaccines. This approach is intended to ensure that all FDA current good manufacturing practice inspections are conducted by a single agency unit using consistent methods. If properly implemented, these actions by plasma manufacturers and FDA should help alleviate the problems related to adherence to current good manufacturing practices and quality assurance. We provided copies of a draft of this report to FDA and the Centers for Disease Control and Prevention for their review. Both generally agreed with our findings. They provided technical comments, which we incorporated as appropriate. 
We also provided copies of the draft report to the American Red Cross, the American Blood Resources Association, and the International Plasma Products Industry Association. Each provided technical comments, which we incorporated as appropriate. The American Blood Resources Association provided additional data on viral marker rates, which we have included. We will send copies of this report to the Secretary of Health and Human Services, the Lead Deputy Commissioner of FDA, and others who are interested. If you have any questions or would like additional information, please call me at (202) 512-7119 or Marcia Crosse, Assistant Director, at (202) 512-3407. Other contributors to this report were Kurt Kroemer, Project Manager, and Richard Weston, Senior Social Science Analyst. Our analysis of viral risks from volunteer and paid plasma donors included calculations for the three major viruses known to be transmissible through plasma products—HIV, HBV, and HCV—and is based on a model that calculated similar estimates for whole blood donations. We did not estimate risks associated with nonenveloped viruses, for which current removal or inactivation techniques are only effective to a limited extent, because no screening tests are currently used for these viruses. The nonenveloped viruses currently known to be transmitted through plasma, primarily HAV and human parvovirus, are generally not life threatening. The infectious window period could be shorter than the conventional window period if there is a lag between the acquisition time of infection and the donor’s ability to transmit the infection to others by blood transfusion. Theoretically, such a lag would exist if, on initial exposure to the virus, the donor were able to sequester the virus in the organs of the immune system before becoming infectious. Experimental animal evidence suggests that the difference between the conventional and infectious windows for retroviruses, such as HIV, may range from 2 to 14 days.
Two ways of measuring risk of infection from blood transfusions are to examine prevalence and incidence of disease. Prevalence indicates the overall proportion of infected persons in the population at a given time, independent of when the infection occurred. Incidence is the proportion of persons newly infected in the population during the period of time under study, or the rate of new infections. As such, incidence is calculated as the number of seroconverters divided by the person-time of observation, where the person-time of observation equals the number of donations multiplied by the mean time between donations (the interdonation interval). To calculate the overall residual risk from window period donations, the incidence rate is multiplied by the length of the window period from seroconverting (repeat) donors. Adjustment factors can also be used to incorporate the effect of first-time donors and probability estimates for donors who do not return but may be in the infectious window period when they donate. Information on viral marker rates among volunteer and paid plasma donors in California was obtained from California’s Department of Health Services, Office of AIDS, HIV/AIDS Epidemiology Branch. Data illustrated in figure 1 are for HIV-1 confirmed positive test results. Starting in the second quarter of 1991, the totals do not include autologous donations. Data for the second half of 1996 were obtained from 49 blood banks and 15 plasma centers, representing approximately 75 percent of the California facilities required to report HIV antibody test results to DHS. Table I.1 outlines the calculations for the viral marker rates for volunteer plasma donors. This information was obtained from 19 American Red Cross regions and was based on 2,954,773 donations from donors under age 60.
This is approximately 33 percent of the total number of donations made to the American Red Cross during the reporting period of this data collection effort (January 1, 1996, to June 30, 1997).

Table I.1 calculations (positive donations per 100,000 volunteer donations):
HIV: 100,000 x (205 ÷ 2,954,773) = 6.93
HBV: 100,000 x (987 ÷ 2,954,773) = 33.40
HCV: 100,000 x (3,320 ÷ 2,954,773) = 112.36

There were 205 confirmed positive HIV donations found among the 19 regions reporting for the Infectious Disease Data Center. The number of positive donations per 100,000 is derived by dividing these 205 cases by the number of total donations and then multiplying the resulting figure by 100,000. For HIV, this calculation yielded an estimated 7 positive donations per 100,000 given at American Red Cross centers. Similar calculations can be used to obtain estimates for HBV and HCV. To obtain our estimate of 1 in every 6,549 volunteer donations as potentially infectious for HIV, HBV, or HCV, we added the positive donations per 100,000 for each virus (6.93 + 33.40 + 112.36) and divided 1 million by this amount. Table I.2 outlines the calculations for the viral marker rates for paid plasma donations. The calculations are based on 4,600,000 donations for HIV and HBV and 2,500,000 donations for HCV made in the second half of 1994 to 340 American Blood Resources Association collection centers. The number of confirmed positive donations is obtained by multiplying the number of units found to be repeatedly reactive by the rate at which units are confirmed positive in volunteer whole blood donations for the specific virus in question. (See table I.1 for these confirmed positive rates for each virus.) The number of positive donations per 100,000 is derived by dividing the number of confirmed positive donations by the total number of donations and multiplying by 100,000. Similar calculations can be used to obtain estimates for HBV and HCV.
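The marker-rate arithmetic above, together with the corresponding paid-donation figures from table I.2, can be restated as a short script. This is simply a sketch of the report's stated calculation, with results rounded as in the text.

```python
# Restating the viral marker rate arithmetic for volunteer (table I.1)
# and paid (table I.2) plasma donations, as described in the text.
VOLUNTEER_DONATIONS = 2_954_773
volunteer_confirmed = {"HIV": 205, "HBV": 987, "HCV": 3_320}
volunteer_per_100k = {v: 100_000 * n / VOLUNTEER_DONATIONS
                      for v, n in volunteer_confirmed.items()}
# roughly 6.93 (HIV), 33.40 (HBV), 112.36 (HCV) per 100,000 donations

# The report's combined figure: add the per-100,000 rates for the three
# viruses and divide 1 million by the sum.
volunteer_one_in = 1_000_000 / sum(volunteer_per_100k.values())  # ~6,549

# Paid donations: confirmed positives = repeatedly reactive units x the
# confirmation rate observed in volunteer whole blood donations.
paid = {
    "HIV": (2_116 * 0.080, 4_600_000),   # 169.28 confirmed positives
    "HBV": (2_024 * 0.703, 4_600_000),   # 1,422.87
    "HCV": (9_750 * 0.580, 2_500_000),   # 5,655.00
}
paid_per_100k = {v: 100_000 * confirmed / donations
                 for v, (confirmed, donations) in paid.items()}
paid_one_in = 1_000_000 / sum(paid_per_100k.values())            # ~3,834
```

Run as-is, this reproduces the 1-in-6,549 volunteer and 1-in-3,834 paid figures quoted in the text.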
To obtain our estimate of 1 in every 3,834 paid donations as potentially infectious for HIV, HBV, or HCV, we added the positive donations per 100,000 for each virus (3.67 + 30.93 + 226.20) and divided 1 million by this amount.

Table I.2 calculations (confirmed positive paid donations):
HIV: 2,116 repeatedly reactive units x .080 = 169.28 confirmed positives
HBV: 2,024 repeatedly reactive units x .703 = 1,422.87 confirmed positives
HCV: 9,750 repeatedly reactive units x .580 = 5,655.00 confirmed positives

Table I.3 outlines the incidence rates among repeat volunteer plasma donors, while table I.4 outlines the corresponding overall incidence rates for volunteer donors, taking into account first-time donations. These calculations are drawn from 1 year of donations for which the American Red Cross had the most recently available data (1,098,942 donations from July 1, 1996, to June 30, 1997).

Table I.3 calculations (Incidence Rates Among Repeat Volunteer Plasma Donations):
HIV: seroconverters per 100,000 donations = 100,000 x (12 ÷ 1,098,942); incidence per 100,000 person-years = (12 x 100,000) ÷ (1,098,942 x interdonation interval in years) = 2.59; risk per million donations = 10 x (2.59 x 16 ÷ 365)
HBV: seroconverters per 100,000 donations = 100,000 x (29 ÷ 1,098,942); incidence per 100,000 person-years = (29 x 100,000) ÷ (1,098,942 x interdonation interval in years) = 6.26, or 18.59 adjusted; risk per million donations = 10 x (6.26 x 59 ÷ 365) unadjusted, or 10 x (18.59 x 59 ÷ 365) adjusted
HCV: seroconverters per 100,000 donations = 100,000 x (54 ÷ 1,098,942); incidence per 100,000 person-years = (54 x 100,000) ÷ (1,098,942 x interdonation interval in years) = 11.65; risk per million donations = 10 x (11.65 x 82 ÷ 365)

HBsAg correction factor for HBV: (.70 x 41) + (.25 x 0) + (.05 x 100) = 33.7 percent; 1 ÷ .337 = 2.97 (correction factor); because only 33.7 percent of donors seroconverting for HBV are likely identified with the HBsAg test, the observed incidence rate of HBsAg is multiplied by 1 ÷ 0.337, or 2.97; and 6.26 x 2.97 = 18.59, where 6.26 is the incidence rate per 100,000 person-years for HBsAg without the adjustment factor.

To obtain an incidence rate for repeat donors, we multiplied the number of seroconverters (12) by 100,000 and divided the resulting number by the total number of donations times the interdonation interval as a fraction of a year. We calculated an incidence rate per 100,000 person-years for HIV at 2.59. Taking this rate and multiplying it by the window period (as a fraction of a year) resulted in a risk per million of 1.1. Similar calculations can be used to obtain estimates for HBV and HCV.
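The table I.3 arithmetic can be restated as a short script. The mean interdonation interval is not given in the text, so the incidence helper below takes it as an argument (exercised here only with clearly hypothetical values), and the residual-risk step starts from the reported incidence rates.

```python
# Sketch of the table I.3 formulas described above.

def incidence_per_100k_person_years(seroconverters, donations, interval_years):
    """Seroconverters divided by person-time, where person-time equals
    donations x mean interdonation interval (as a fraction of a year)."""
    return seroconverters * 100_000 / (donations * interval_years)

def risk_per_million(incidence_per_100k, window_days):
    """Incidence x window period (as a fraction of a year); the factor
    of 10 rescales per-100,000 person-years to per-million donations."""
    return 10 * incidence_per_100k * window_days / 365

# Residual risks from the reported repeat-donor incidence rates.
hiv_risk = risk_per_million(2.59, 16)    # ~1.1 per million, as reported
hbv_risk = risk_per_million(18.59, 59)   # HBsAg-adjusted incidence
hcv_risk = risk_per_million(11.65, 82)

# HBsAg correction factor for HBV, restated with the percentages as
# fractions: only ~33.7 percent of HBV seroconverters are detected.
detected = 0.70 * 0.41 + 0.25 * 0.0 + 0.05 * 1.0   # 0.337
correction = 1 / detected                           # ~2.97
```

Calling the incidence helper with hypothetical inputs, say 5 seroconverters among 200,000 donations and a half-year interval, gives 5.0 per 100,000 person-years, illustrating the person-time denominator.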
Table I.4: Total Incidence Rates and Residual Risk Estimates Among Volunteer Plasma Donations (see table I.3 for the repeat-donor incidence rates):
HIV: estimated total incidence (per 100,000 person-years) = (2.59 x .8) + (6.22 x .2) = 3.32; estimated adjusted risk (per million donations) = 10 x [3.32 x (16 ÷ 365)]
HBV: estimated total incidence (per 100,000 person-years) = (6.26 x .8) + (15.02 x .2) = 8.01 unadjusted, or (18.59 x .8) + (15.02 x .2) = 17.87 adjusted; estimated adjusted risk (per million donations) = 10 x [8.01 x (59 ÷ 365)] = 12.95 unadjusted, or 10 x [17.87 x (59 ÷ 365)] = 28.89 adjusted; point estimates = 1,000,000 ÷ 12.95 and 1,000,000 ÷ 28.89
HCV: estimated total incidence (per 100,000 person-years) = (11.65 x .8) + (27.96 x .2); estimated adjusted risk (per million donations) is computed the same way

Since approximately 80 percent of whole blood donations are collected from repeat donors, a correction factor is made taking into account the weighted average of first-time donations to ascertain the estimated total residual risk (for HIV, this is 3.32 incident cases per 100,000 person-years). To determine the risk that a donor was already infected and in the infectious, seronegative window period, the adjusted incidence rate for HIV of 3.32 was multiplied by .044 (the 16-day window period for antigen expressed as a fraction of a year), yielding a residual risk of 1.5 per million donations. Our point estimate was calculated by dividing 1 million by this residual risk. Similar calculations can be used to obtain estimates for HBV and HCV. Table I.5 highlights the corresponding incidence rates and residual risk for paid plasma donors without taking into account the 60-day hold program. This information was obtained from the American Blood Resources Association and was based on 4,011,449 donations from 370 collection centers from July 1997 through October 1997. The confirmed positive donations were analyzed to ensure that they were, in fact, from qualified donors.
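Table I.4's weighted-average and residual-risk arithmetic, together with the overall one-in-N figures reported for volunteer donors, can be sketched in a few lines; small differences from the published 20,872 and 15,662 reflect rounding of intermediate values.

```python
# Sketch of the table I.4 arithmetic: total incidence is a weighted
# average of repeat (80 percent) and first-time (20 percent) donor
# incidence, and residual risk applies the window period as a fraction
# of a year (x10 rescales per-100,000 person-years to per-million).
REPEAT_WEIGHT, FIRST_TIME_WEIGHT = 0.8, 0.2

def total_incidence(repeat_rate, first_time_rate):
    return repeat_rate * REPEAT_WEIGHT + first_time_rate * FIRST_TIME_WEIGHT

def risk_per_million(incidence, window_days):
    return 10 * incidence * window_days / 365

hiv = risk_per_million(total_incidence(2.59, 6.22), 16)        # ~1.5
hbv = risk_per_million(total_incidence(6.26, 15.02), 59)       # ~12.95
hbv_adj = risk_per_million(total_incidence(18.59, 15.02), 59)  # ~28.89
hcv = risk_per_million(total_incidence(11.65, 27.96), 82)      # ~33.5

# Combining the per-virus risks yields the overall "1 in N" figures
# for volunteer donors (1 in ~20,872, or 1 in ~15,662 with the HBsAg
# adjustment for the transient nature of HBV).
one_in_unadjusted = 1_000_000 / (hiv + hbv + hcv)
one_in_adjusted = 1_000_000 / (hiv + hbv_adj + hcv)
```

The paid-donor figure of 1 in 10,957 rests on the table I.6 probability estimates for the 60-day hold program, which are not itemized in the text, so it is not recomputed here.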
Additionally, donation histories were examined for approximately 16,000 nonreactive donors (representing 300,288 donations) to obtain probability estimates for the effect of donors who did not return but may have donated a seronegative, but infectious, window period unit at their last donation. Calculations made above for volunteer donors were done in a similar fashion for paid plasma donors to obtain incidence rates, risks per million donations, and a point estimate. Table I.6 outlines the overall residual risk of incorporating an infectious window period unit from a paid plasma donor into a plasma pool. This table takes into account the effect of the 60-day inventory hold program to interdict window period units. The residual risk per million in table I.6 was obtained from the American Blood Resources Association and included several probability estimates for window period donations when the last donation was positive and for window period donations when the last donation was nonreactive. These latter probability estimates were performed for the approximately 300,000 nonreactive donations that made up the American Blood Resources Association’s data set. The residual risk per million of approximately 1.0 for HIV is based on a 16-day antigen window period. This was calculated from information obtained from the American Blood Resources Association, which indicated that PCR testing would reduce the residual risk to .49 per million donations (11-day window period). Thus, the 1.0 used in our calculations to estimate the 16-day antigen window period is simply the midpoint between 1.47 for anti-HIV (22-day window period) and .49 using PCR testing. When final comparisons are made, the overall risk of incorporating an infectious HIV, HBV, or HCV window period unit into a plasma pool was 1 in 20,872 for volunteer plasma donors (or 1 in 15,662 taking into account the transient nature of HBV) and 1 in 10,957 for paid plasma donors.
FDA’s protocols for viral testing stipulate that if the initial test for viruses is reactive, then a retest should be performed to verify the initial result. If the retest is also reactive, the blood facility should perform a second, more specific test to confirm the presence of the viral marker. Deciding whether a donation is or is not positive is also affected by the sensitivity and specificity of the viral tests. Initial tests are fast, usually automated, and able to screen large numbers of samples. They are extremely sensitive in order to minimize the number of false-negative outcomes. Confirmatory tests are more time consuming and usually less sensitive than initial tests but are very specific. Table II.1 outlines the different types of viral test results and the consequent actions:
- Initial test is reactive: a retest in duplicate is performed.
- One or both duplicate tests are reactive: a confirmatory test is performed (this test is not always required); the prospective donor is deferred, and the collected unit is discarded.
- Initial test is negative, or initial test is reactive but both duplicate tests are negative: no action; the donor is not deferred.
- Duplicate tests are repeatedly reactive and the confirmatory test is neither positive nor negative: the donor is deferred and the collected unit is discarded.
- Duplicate tests are repeatedly reactive and the confirmatory test is positive: the donor is deferred and the collected unit is discarded.

Any unit that is repeatedly reactive is considered positive even if confirmatory tests determine that the testing procedure produced a false-positive result. Such results require that the donor be deferred. FDA recommends but does not require that donors who are repeatedly reactive but indeterminate or negative by a confirmatory test be notified and placed on donor deferral registries. As an added precaution against the inclusion of any plasma that may contain undetectable HIV virus, one company performs additional tests for the HIV antibody.
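The disposition logic in table II.1 can be sketched as a small function. The function shape and labels are illustrative only, not FDA terminology; note that once a unit is repeatedly reactive, the confirmatory result affects donor notification but not the disposition of the unit or donor.

```python
# Illustrative sketch of the table II.1 screening logic. The labels
# and function signature are ours, not FDA terminology.
def disposition(initial_reactive, duplicate_reactives=0):
    """duplicate_reactives: how many of the two duplicate retests
    were reactive (0, 1, or 2)."""
    if not initial_reactive:
        # Initial test negative: no action, donor not deferred.
        return "accept unit; donor not deferred"
    if duplicate_reactives == 0:
        # Initial test reactive but both duplicate retests negative.
        return "accept unit; donor not deferred"
    # One or both duplicates reactive: the unit is "repeatedly
    # reactive." Whatever the confirmatory test shows (positive,
    # indeterminate, or even a suspected false positive), the donor
    # is deferred and the collected unit is discarded.
    return "discard unit; defer donor"
```

For example, `disposition(True, 0)` accepts the unit (both duplicate retests negative), while `disposition(True, 1)` discards it and defers the donor.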
Each donation is tested according to the standards noted above, with supplementary testing that uses a different antibody test than the one used in the initial screening procedure. The testing uses “minipools” derived from samples of 64 donations. Units corresponding to test samples that are confirmed reactive for anti-HIV on individual retesting are then rejected, and the donor is deferred. Only nonreactive donations are considered acceptable for further manufacture. PCR testing—which is more sensitive than the licensed antigen or antibody detection methods currently used to screen collected plasma—will be done on pools of plasma rather than single donations. This approach is being pursued because nucleic acid diagnostics are evolving rapidly and pool testing is more cost-effective. FDA has noted that it considers pool testing an interim step, but the agency believes that testing of plasma pools has public health benefits and should be implemented. Consistent with this position, tests for plasma pools are now under “investigational new drug” status and are planned to be used by all fractionators to test all units of donated plasma in minipools. Some companies have also determined that every product lot to be released should be tested one more time to ensure that there were no errors during the testing of the plasma, the testing of the pools, and the manufacturing of the product. Final testing of lots for some companies includes tests for HBsAg, while other companies test for HIV using antibody testing and for HIV, HBV, HCV, and HAV using PCR tests. Heating and chemical inactivation are the two main methods in use today to inactivate viruses. Heating in solution, terminal dry heat, vapor heating, and dry heat under solvent are commonly used. For chemical inactivation, manufacturers typically use solvent-detergent techniques, ethanol (during fractionation), and low pH. Viral removal steps include partitioning and nanofiltration.
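Minipool screening is a form of group testing: one assay covers a pool of 64 donations, and only the members of a reactive pool are retested individually. A minimal sketch, with a purely illustrative `is_reactive` predicate standing in for the actual PCR or antibody assay:

```python
# Group-testing sketch of minipool screening. The pool size of 64 is
# from the text; the `is_reactive` predicate and sample IDs below are
# purely illustrative stand-ins for a real assay.
POOL_SIZE = 64

def screen(donations, is_reactive):
    """Return the set of individual donations to reject; all other
    donations are acceptable for further manufacture."""
    rejected = set()
    for start in range(0, len(donations), POOL_SIZE):
        pool = donations[start:start + POOL_SIZE]
        if is_reactive(pool):                       # one test per pool
            # Resolve a reactive pool to individual donations.
            rejected |= {d for d in pool if is_reactive([d])}
    return rejected

# Hypothetical example: among 256 donations, numbers 70 and 200 are
# infectious; 4 pool tests plus 128 individual retests find them,
# versus 256 tests for donation-by-donation screening.
donations = list(range(256))
infectious = {70, 200}
hits = screen(donations, lambda pool: any(d in infectious for d in pool))
```

When reactive pools are rare, this design cuts the number of assays dramatically, which is one reason pool testing is more cost-effective.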
Partitioning during purification includes ethanol fractionation and chromatography, whereas nanofiltration can be accomplished through adsorption or through filters that discriminate to 15 to 100 nanometers. To be effective, viral inactivation techniques must destroy at least one of the essential elements of viral replication. These techniques work in different ways to accomplish this task. Photosensitizing techniques use light-activated dyes that are irradiated, causing the dyes to convert to molecules that can destroy DNA or membrane lipoproteins. Heat treatment denatures viral proteins and nucleic acids, rendering them incapable of viral replication. Irradiation processes inhibit viral DNA by inducing breaks and linkages. Solvent-detergent techniques destroy the viral envelope in lipid-enveloped viruses. Viral removal methods, including chromatography and filtration, physically separate virus particles and other impurities from the desired plasma proteins. Validation of viral clearance steps is accomplished by scaling the production method down to a laboratory model. Material is spiked with a marker virus (such as bovine viral diarrhea virus for HCV or duck hepatitis B virus for HBV); titers are then compared in the starting and ending material after performing the operations dictated by the laboratory model. This scaled-down model must maintain the physical parameters that will replicate the production method, including time, temperature, pressure, concentration, flow rates, and pH. It must also maintain the physical dimensions of volume, load, surface area, and column dimensions. These validation models cannot demonstrate complete elimination of a virus, but they can highlight the difference in titers at the beginning and end of the production model. This modeling highlights the actual viral kill that has been accomplished through inactivation, removal, or both.
The effect of multiple clearance steps may be combined if each step is independently validated and each is based on a mechanism that is different from other clearance steps. Units that have been tested for HIV that were in the window period show a range of genome copies per milliliter of 10 to 10 (with occasional spikes to 10 range), while seropositive units are in the range of 10. For albumin, the viral log reduction factor (LRF) using pasteurization has been shown to be greater than 7, while partitioning during fractionation shows LRFs greater than 6. Additionally, there have been no cases of HIV, HCV, or HBV transmission through albumin since the initiation of heating (at 60 degrees Celsius for 10 hours) of the final containers. For IGIM, the cumulative LRF for HIV in one model was greater than 10.9 (6.2 using ethanol fractionation and 4.7 using solvent-detergent techniques). For IGIV, the cumulative LRF for one process was greater than 17.5 (5.9 using ethanol fractionation, 5.2 using solvent-detergent techniques, and 6.4 using pH 4). Processes for IGIV from another model show LRFs of 13.2 (ethanol fractionation and heat treating) and 11.4 (ethanol fractionation and a pH of 4 using pepsin). For antihemophilic factor, the cumulative LRF for one process was greater than 15.7 (5.2 using purification and 10.5 using heat treating at 60 degrees Celsius for 10 hours), while another company’s procedure showed LRFs of greater than 12 (2 using affinity chromatography and greater than 10 using solvent-detergent techniques). Similar reductions are found for coagulation factor IX. Thus, these LRFs for HIV are well above the levels of genome copies per milliliter found in units that are from window period and seropositive donations. For HCV, genome copies per milliliter found in window period units range from 10 to 10.
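Because a log reduction factor is the log10 ratio of virus titer before and after a clearance step, independently validated steps with different mechanisms simply add. A short sketch, using a hypothetical titer pair alongside the per-step LRFs quoted above:

```python
import math

# An LRF is the log10 ratio of virus titer before and after a
# clearance step. The titers here are hypothetical: spiking 1e8
# infectious units and recovering 1e2 gives an LRF of 6.
lrf = math.log10(1e8 / 1e2)

# Independently validated steps based on different mechanisms add on
# the log scale; these per-step values are the ones quoted in the text.
igim_hiv = 6.2 + 4.7          # IGIM: ethanol fractionation + solvent-detergent
igiv_hiv = 5.9 + 5.2 + 6.4    # IGIV: + low pH step
ahf_hiv = 5.2 + 10.5          # antihemophilic factor: purification + heat
```

A cumulative LRF of, say, 15.7 means the process reduces infectious virus by more than 15 orders of magnitude, which is why these values sit far above the viral loads seen in individual window period or seropositive units.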
The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on blood plasma safety, focusing on: (1) comparing the risk of incorporating an infectious unit of plasma into further manufacturing from volunteer versus paid plasma donors for human immunodeficiency virus (HIV), hepatitis B (HBV), and hepatitis C (HCV); (2) examining the impacts on frequent and infrequent plasma users when pooling large numbers of plasma donations into manufactured plasma products; (3) assessing the safety of end products from plasma after they have undergone further manufacturing and inactivation steps to kill or remove viruses; and (4) examining the recent regulatory compliance history of plasma manufacturers. GAO noted that: (1) viral clearance techniques have made the risks of receiving an infected plasma product extremely low when manufacturers follow all the procedures in place to ensure safety; (2) while paid plasma donors are over one and a half times more likely to donate potentially infectious units (1 in every 3,834 units), a number of recent initiatives by the source plasma industry greatly reduce the chances of these units being pooled for manufacturing (to 1 in every 10,959 units); (3) even with these initiatives in place, the risks are still somewhat higher from plasma units donated by paid donors than from volunteer donors; (4) limiting the number of donors whose plasma is pooled for production into plasma products helps to reduce the risks of viral transmission for those receiving these products; (5) presently, a 60,000-donor limit has been established for each individual plasma product; (6) this effort has an impact on infrequent users by minimizing their exposure to a certain number of donors for the few times they would be infused with a plasma product; (7) for frequent users of plasma products, this donor limit has a negligible impact because of the large number of infusions that they receive and, thus, the large number of pools that they would be exposed to in the 
course of their lifetime; (8) a more significant step in reducing risk of infection occurs in manufacturing--where all plasma products for intravenous use undergo viral removal, inactivation procedures, or both--which virtually eliminates enveloped viruses such as HIV, HBV, and HCV; (9) this is supported by epidemiological data on the transmission of viruses through plasma products since the introduction of adequate viral removal and inactivation procedures in the late 1980s as well as laboratory data that characterize the effectiveness of viral clearance through these procedures; (10) certain advances are only effective if the processes used to produce finished plasma products adhere to current good manufacturing practices; (11) this, however, has not been the case with all of the major manufacturing companies that produce plasma products; and (12) without strict adherence to current good manufacturing practices related to the efficacy of viral removal and inactivation procedures, the safety of these plasma products could be compromised.
Visa applicants, including science students and scholars, generally begin the visa process by scheduling an interview at a consular post. On the day of the appointment, a consular officer reviews the application, interviews the applicant, and checks the applicant’s name in the Consular Lookout and Support System (CLASS). The consular officer then decides if the applicant will need a Security Advisory Opinion, which provides an opinion or clearance from Washington on whether to issue a visa to the applicant and may include a Visas Mantis check. In deciding if a Visas Mantis check is needed, the consular officer determines whether the applicant’s background or proposed activity in the United States could involve exposure to technologies on the Technology Alert List, which identifies science and technology fields in which knowledge, if used against the United States, could be potentially harmful. After a consular officer decides that a Visas Mantis security check is necessary for an applicant, several steps are taken to resolve the case. The consular officer prepares a Visas Mantis cable, which contains information on the applicant, and then transmits the information to Washington for an interagency security check. The State Department’s Bureau of Nonproliferation, the FBI, and other agencies review the information contained in the cable and then provide a response on the applicant to the Consular Affairs section of State headquarters. The Bureau of Nonproliferation and other agencies are given 15 working days to respond to State with any objections. However, State has agreed to wait for a response from the FBI before proceeding with each Visas Mantis case. Once State headquarters receives all the information pertaining to an applicant, Consular Affairs summarizes the information and transmits a response to the consular post.
A consular official at post reviews the response and decides, based on the information from Washington, whether to issue the visa to the applicant. State cannot readily identify the total length of time it takes for a science student or scholar to obtain a visa. However, in discussions with State officials, we learned that a key factor that contributes to the length of time is whether an applicant must undergo a Visas Mantis. To obtain visa data on science students and scholars, and to determine how long the visa process takes, we reviewed all Visas Mantis cables received from posts between April and June 2003, which totaled approximately 5,000. Of these cases, 2,888 pertained to science students and scholars, of which approximately 58 percent were sent from China, about 20 percent from Russia, and less than 2 percent from India. We drew a random sample of 71 cases from the 2,888 science student and scholar visa applications to measure the length of time taken at various points in the visa process. The sample of 71 cases is a probability sample, and results from the data in this sample project to the universe of the 2,888 science visa applications. We found that visas for science students and scholars took on average 67 days from the date the Visas Mantis cable was submitted from post to the date State sent a response to the post. This is slightly longer than 2 months per application, on average. In the sample, 67 of the visa applications completed processing and approval by December 3, 2003. In addition, 3 of the 67 completed applications had processing times in excess of 180 days. Four of the cases in our sample of 71 remained pending as of December 3, 2003. Of the 4 cases pending, 3 had been pending for more than 150 days and 1 for more than 240 days. In addition to our sample of 71 cases, State provided us with data on two samples it had taken of Visas Mantis case processing times. 
Data on the first sample covered 40 visa cases taken from August to October 2003; data on the second sample covered 50 Visas Mantis cases taken in November and December 2003. State indicated that both samples show improvements in processing times compared with earlier periods in 2003. However, based on the documentation of how these cases were selected, we were unable to determine whether these were scientifically valid samples, and therefore we could not validate that processing times have improved. For the first sample, the data show that 58 percent of the cases were completed within 30 days; for the second sample, the data show that 52 percent were completed within this time frame. In addition, the data for both samples show that lengthy waits remain in some cases. For example, 9 of the 40 cases had been outstanding for more than 60 days as of December 3, 2003, including 3 cases that had been pending for more than 120 days. Also, 9 of the 50 cases were still pending as of February 13, 2004, including 6 that had been outstanding for more than 60 days. State officials commented that most of the outstanding cases from both samples were still being reviewed by the agencies. During our fieldwork at posts in China, India, and Russia in September 2003, we also obtained data indicating that 410 Visas Mantis cases submitted in fiscal year 2003 were still outstanding more than 60 days at the end of the fiscal year. In addition, we found numerous cases—involving 27 students and scholars from Shanghai—that were pending more than 120 days as of October 16, 2003. We found that several factors, including interoperability problems among the systems that State and FBI use, contribute to the time it takes to process a Visas Mantis case.
Because many different agencies, bureaus, posts, and field offices are involved in processing Visas Mantis security checks, and each has different databases and systems, we found that Visas Mantis cases can get delayed or lost at different points in the process. We found that in fiscal year 2003, some Visas Mantis cases did not always reach their intended recipient and, as a result, some of the security checks were delayed. For example, we followed up with the FBI on 14 outstanding cases from some of the posts we visited in China in September 2003 to see if it had received and processed the cases. FBI officials provided information indicating that they had no record of receiving three of the cases, they had responded to State on eight cases, and they were still reviewing three cases. FBI officials stated that the most likely reason they did not have a record of the three cases from State was cable formatting errors. State did not comment on the status of the 14 cases we provided to the FBI for review. However, a Consular Affairs official told us that in fall 2003, there were about 700 Visas Mantis cases sent from Beijing that did not reach the FBI for the security check. The official did not know how the cases got lost but told us that it took Consular Affairs about a month to identify this problem and provide the FBI with the cases. As a result, several hundred visa applications were delayed for another month. Figure 1 illustrates some of the time-consuming factors in the Visas Mantis process for our sample of 71 cases. While the FBI received most of the cases from State within a day, seven cases took a month or more, most likely because they had been improperly formatted and thus were rejected by the FBI’s system. In more than half of the cases, the FBI was able to complete the clearance process the same day, but some cases took more than 100 days.
These cases may have taken longer because (1) the FBI had to investigate the case or request additional information from State; (2) the FBI had to locate files in field offices, because not all of its files are in an electronic format; or (3) the case was a duplicate, which the FBI’s name check system also rejects. In most of the cases, the FBI was able to send a response—which it generally does in batches of name checks, not by individual case—to State within a week. The FBI provides the results of name checks for Visas Mantis cases to State on computer compact disks (CDs), a step that could cause delays. In December 2003, an FBI official told us that these CDs were provided to State twice a week. However, in the past, the CDs were provided to State on a less frequent basis. In addition, it takes time for data to be entered in State’s systems once State receives the information. In the majority of our sample cases, it took State 2 weeks or longer to inform a post that it could issue a visa. State officials were unable to explain why it took State this long to respond to post. Officials told us that the time frame could be due to a lack of resources at headquarters or because State was waiting for a response from agencies other than the FBI. However, the data show that only 5 of the 71 cases were pending information from agencies other than the FBI. During our visits to posts in September 2003, officials told us they were unsure whether they were adding to the wait time because they did not have clear guidance on when to apply the Visas Mantis process and were not receiving feedback on the amount of information they provided in their Visas Mantis requests. According to the officials, additional information and feedback from Washington agencies regarding these issues could help expedite Visas Mantis cases. Consular officers told us that they would like the guidance to be simplified—for example, by expressing some scientific terms in more easily understood language.
Several consular officers also told us they had only a limited understanding of the Visas Mantis process, including how long the process takes. They told us they would like to have better information on how long a Visas Mantis check is taking so that they can more accurately inform the applicant of the expected wait. Consular officers at most of the posts we visited told us they would like more feedback from State on whether the Visas Mantis cases they are sending to Washington are appropriate, particularly whether they are sending too many or too few Visas Mantis requests. They said they would like to know if including more information in the security check request would reduce the time to process an application in Washington. Moreover, consular officers indicated they would like additional information on some of the outstanding Visas Mantis cases, such as where the case is in the process. State confirmed that it has not always responded to posts’ requests for feedback or information on outstanding cases. Aside from the time it takes to process Visas Mantis checks, an applicant also has to wait for an interview. State does not have data or criteria for the length of time applicants at its overseas posts wait for an interview, but at the posts we visited in September 2003, we found that it generally took 2 to 3 weeks. Furthermore, post officials in Chennai, India, told us that the interview wait time was as long as 12 weeks during the summer of 2003 when the demand for visas was greater than the resources available at post to adjudicate a visa. Officials at some of the posts we visited indicated they did not have enough space and staffing resources to handle interview demands and the new visa requirement that went into effect on August 1, 2003. That requirement states that, with a few exceptions, all foreign individuals seeking to visit the United States need to be interviewed prior to receiving a visa. 
Factors such as the time of year an applicant applies for a visa, the appointment requirements, and the staffing situation at posts generally affect how long an applicant will have to wait for an interview. State and FBI officials acknowledged that visa waits have been a problem but said they are implementing improvements to the process and working to decrease the number of pending Visas Mantis cases. For example, State and FBI officials told us that the validity of Visas Mantis checks for students and scholars has been extended to 12 months for applicants who are returning to a program or activity and will perform the same functions at the same facility or organization that was the basis for the original Visas Mantis check. FBI officials said that to address delays stemming from problems with lost case files or systems that are not interoperable, the FBI is working on automating its files and setting up a common database between the field offices and headquarters. They also told us they have set up a tracking system within the FBI for all Security Advisory Opinions, including Visas Mantis cases. Consular Affairs officials told us that State has invested about $1 million on a new information management system that it said would reduce the time it takes to process Visas Mantis cases. They described the new system as a mechanism that would help strengthen the accountability of Visas Mantis clearance requests and responses, establish consistency in data collection, and improve data exchange between State and other agencies involved in the clearance process. In addition, officials said the system would allow them to improve overall visa statistical reporting capabilities and data integrity for Mantis cases. The new system will be paperless, which means that the current system of requesting Visas Mantis clearances by cable will be eliminated. 
State officials told us that the system is on schedule for release early this year and that the portion relating to Security Advisory Opinions will be operational sometime later this year. However, challenges remain. FBI officials told us that the name check component of the FBI’s system would not immediately be interoperable with State’s new system but that they are actively working with State to seek solutions to this problem. Nonetheless, the FBI and State have not determined how the information will be transmitted in the meantime. We were not able to assess the new system since it was not yet functioning at the time of our review. Officials from Consular Affairs and the FBI told us they are coordinating efforts to identify and resolve outstanding Visas Mantis cases. For example, they have been working together on a case-by-case basis to make sure that cases outstanding for several months to a year are completed. However, State officials said they do not have a target date for completion of all the outstanding cases, which they estimated at 1,000 in November 2003. In addition to improvements to the Visas Mantis process, State officials told us that they are monitoring post resource needs and adding staff as needed. These officials also told us that State added 66 new officers in 2003 and plans to add an additional 80 in 2004. In conclusion, Mr. Chairman, agency officials recognize that the process for issuing a visa to a science student or scholar can be an important tool to control the transfer of sensitive technology that could put the United States at risk. They also acknowledge that if the process is lengthy, students and scholars with science backgrounds might decide not to come to the United States, and technological advancements that serve U.S. and global interests could be jeopardized. Our analysis of a sample of Visas Mantis cases from April to June 2003 shows that some applicants faced lengthy waits. 
While the State Department and the FBI report improvements in Visas Mantis processing times, our analysis of data from the posts we visited in September 2003 and our contact with post officials in January 2004 show that there are still some instances of lengthy waits. State’s and the FBI’s implementation of the Visas Mantis process still has gaps that contribute to lengthy waits for visas. State’s new information management system could improve the Visas Mantis process. Nevertheless, it is unclear whether the new system will address all the current issues with the process. To help improve the process and reduce the length of time it takes for a science student or scholar to obtain a visa, we are recommending that the Secretary of State, in coordination with the Director of the FBI and the Secretary of Homeland Security, develop and implement a plan to improve the Visas Mantis process. In developing this plan, the Secretary should consider actions to: establish milestones to reduce the current number of pending Visas Mantis cases; develop performance goals and measurements for processing Visas Mantis cases; provide additional information through training or other means to consular posts that clarifies guidance on the overall operation of the Visas Mantis program, when Mantis clearances are required, what information consular posts should submit to enable the clearance process to proceed as efficiently as possible, and how long the process takes; and work to achieve interoperable systems and expedite transmittal of data between agencies. In commenting on our draft report, State said it had taken some actions to improve the Visas Mantis process and would study our recommendation to make further improvements. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you or other members of the committee may have. For future contacts regarding this testimony, please call Jess Ford or John Brummet at (202) 512-4128. 
Individuals making key contributions to this testimony included Jeanette Espinola, Heather Barker, Janey Cohen, and Andrea Miller. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Each year thousands of international science students and scholars apply for visas to enter the United States to participate in education and exchange programs. They offer our country diversity and intellectual knowledge and are an economic resource. At the same time, the United States has important national security interests in screening these individuals when they apply for a visa. At a House Committee on Science hearing in March 2003, witnesses raised concern about the length of time it takes for science students and scholars to obtain a visa and about losing top international students to other countries due to visa delays. GAO reviewed (1) how long it takes a science student or scholar from another country to obtain a visa and the factors contributing to the length of time, and (2) what measures are under way to improve the process and decrease the number of pending cases. The State Department (State) cannot readily identify the time it takes for a science student or scholar to obtain a visa. State has not set specific criteria or time frames for how long the visa process should take, but its goal is to adjudicate visas as quickly as possible, consistent with immigration laws and homeland security objectives. GAO found that the time it takes to adjudicate a visa depends largely on whether an applicant must undergo an interagency security check known as Visas Mantis, which is designed to protect against sensitive technology transfers. Based on a random sample of Visas Mantis cases for science students and scholars sent from posts between April and June 2003, GAO found it took an average of 67 days for the security check to be processed and for State to notify the post. In addition, GAO's visits to posts in China, India, and Russia in September 2003 showed that many Visas Mantis cases had been pending 60 days or more. 
GAO also found that the way in which Visas Mantis information was disseminated at headquarters level made it difficult to resolve some of these cases expeditiously. Furthermore, consular staff at posts GAO visited said they were unsure whether they were contributing to lengthy waits because they lacked clear guidance on when to apply Visas Mantis checks and did not receive feedback on whether they were providing enough information in their Visas Mantis requests. Another factor that may affect the time taken to adjudicate visas for science students and scholars is the wait for an interview. While State and Federal Bureau of Investigation (FBI) officials acknowledged there have been lengthy waits for visas, they report having measures under way that they believe will improve the process and that they are collaborating to identify and resolve outstanding Visas Mantis cases. In addition, State officials told GAO they have invested about $1 million to upgrade the technology for sending Visas Mantis requests. According to State officials, the new system will help to reduce the time it takes to process Visas Mantis cases.
For our 2013 high risk update, we determined that two areas warranted removal from the High Risk List due to the progress that had been made—Management of Interagency Contracting and IRS Business Systems Modernization. Additional details for both areas can be found in Appendix I. A brief summary follows. Interagency contracting—where one agency either places an order using another agency’s contract or obtains contracting support services from another agency—can help streamline the procurement process, take advantage of unique expertise in a particular type of procurement, and achieve savings. While this method of contracting can save the government money and effort when properly managed, it also poses a variety of risks. In 2005, we designated the management of interagency contracting as high risk due in part to unclear lines of accountability between customer and assisting agencies and the potential for improper use, including out-of-scope work and noncompliance with competition requirements. We identified the continuing need for additional management controls and guidance and clearer definitions of roles and responsibilities as keys to addressing these issues. We also highlighted challenges agencies faced in fully realizing the benefits of interagency contracts, including the lack of data and the risk of potential duplication when new contracting vehicles are created. To address these issues, we identified the need for a policy framework and business case analysis requirements to support the creation of certain new contracts and improved data on existing interagency contracts. 
As detailed in our 2013 high risk update report, we are removing the management of interagency contracting from the High Risk List based on: (1) continued progress made by agencies in addressing identified deficiencies, (2) establishment of additional management controls, (3) creation of a policy framework for establishing new interagency contracts, and (4) steps taken to address the need for better data on these contracts. Specifically, most agencies have taken steps to implement and reinforce interagency contracting policies to address prior concerns about the improper use of these contracts. For example, we have noted improvements in procedures used in making purchases on behalf of the Department of Defense (DOD)—the largest user of interagency contracts. These included better defined roles and responsibilities and enhanced controls over funding procedures. Additionally, the DOD Inspector General has reported a significant decrease in problems with DOD procurements through other federal agencies in congressionally mandated reviews of interagency acquisitions. With respect to management controls, Federal Acquisition Regulation (FAR) provisions on interagency acquisitions were revised to require that agencies make a best procurement approach determination to justify the use of an interagency contract and prepare written interagency agreements outlining the roles and responsibilities of customer and assisting organizations. As we recently reported, OMB analyzed reports from the 24 agencies that account for almost all contract spending government-wide and found that most had implemented management controls to reinforce the new FAR requirements and strengthen the management of interagency acquisitions. All 24 agencies also reported having oversight mechanisms to ensure their internal controls were operating properly. 
In response to congressional direction and our prior recommendation, OMB established a policy framework in September 2011 to govern the creation of new interagency contract vehicles. The framework addresses concerns about potential duplication by requiring agencies to develop a thorough business case prior to establishing certain contract vehicles. Finally, in response to our recommendations, OMB and the General Services Administration have taken a number of steps to address the need for better data on interagency contract vehicles. These efforts should enhance both government-wide efforts to manage interagency contracts and agency efforts to conduct market research and negotiate better prices. Importantly, congressional oversight sustained over several years has been vital in addressing the issues that led this area to be designated high risk. Removing the management of interagency contracting from the High Risk List does not mean that the federal government’s use of these contracts is without challenges. But, we believe there are mechanisms in place that OMB and federal agencies can use to identify and address interagency contracting issues before they put the government at significant risk for waste, fraud, or abuse. We also will continue to monitor developments in this area. Internal Revenue Service (IRS) Business Systems Modernization (BSM) is a multi-billion dollar, highly complex effort that involves the development and delivery of a number of modernized tax administration and internal management systems as well as core infrastructure projects that are intended to replace the agency’s aging business and tax processing systems. In 1995, we identified serious management and technical weaknesses in IRS’s modernization program that jeopardized its successful completion. We recommended many actions to fix the problems, and added IRS’s modernization to GAO’s High Risk List. 
In 1995, we also added IRS’s financial management to GAO’s High Risk List, due to long-standing and pervasive problems that hampered the effective collection of revenues and precluded the preparation of auditable financial statements. We combined the two issues into one high-risk area in 2005 since resolution of the most serious financial management problems depended largely on the success of the business systems modernization program. Throughout the years, Congress conducted oversight of the BSM program by, among other things, requiring that IRS submit annual expenditure plans that needed to meet certain conditions, including a review by GAO. Because of the significant progress made in addressing the high-risk area, starting in fiscal year 2012, Congress did not require the submission of an annual expenditure plan (see GAO, Investment Management: IRS Has a Strong Oversight Process But Needs to Improve How It Continues Funding Ongoing Investments, GAO-11-587 (Washington, D.C.: July 20, 2011)). IRS has also pursued the Software Engineering Institute’s Capability Maturity Model Integration (CMMI), which calls for disciplined software development and acquisition practices that are considered industry best practices. In September 2012, IRS’s application development organization reached CMMI maturity level 3, a high achievement by industry standards. As with all areas removed from the High Risk List, we will continue to monitor how future events unfold both with the IRS modernization efforts and in the Enforcement of Tax Laws, which remains on the High Risk List. This year, we added two new areas to the High Risk List—Limiting the Federal Government’s Fiscal Exposure by Better Managing Climate Change Risks and Mitigating Gaps in Weather Satellite Data. Additional details for both areas can be found in Appendix II. A brief summary follows. 
Climate change is a complex, crosscutting issue that poses risks to many environmental and economic systems—including agriculture, infrastructure, ecosystems, and human health—and presents a significant financial risk to the federal government. Among other impacts, climate change could threaten coastal areas with rising sea levels, alter agricultural productivity, and increase the intensity and frequency of severe weather events. As observed by the United States Global Change Research Program, the impacts and costliness of weather disasters—resulting from floods, drought, and other events such as tropical cyclones—are expected to increase in significance as what are considered “rare” events become more common and intense due to anticipated changes in the global climate system. Moreover, according to the National Oceanic and Atmospheric Administration’s National Climatic Data Center (NCDC), the United States has sustained 144 weather and climate-related disasters since 1980, in which overall damages reached or exceeded $1 billion each, with 14 events in 2011 and 11 events in 2012. NCDC estimates that 2012 will surpass 2011 in terms of aggregate costs for annual billion-dollar disasters, even with fewer disasters. The federal government owns extensive infrastructure, such as defense installations; manages 29 percent of the land in the United States; and insures property through the National Flood Insurance Program and crops through the Federal Crop Insurance Corporation. As of November 2012, the Federal Emergency Management Agency (FEMA) owed the Treasury approximately $20 billion—up from $17.8 billion before Superstorm Sandy—and had not repaid any principal on the loan since 2010. 
Further, the federal government’s crop insurance costs have increased in recent years—rising from an average of $3.1 billion per year from fiscal years 2000 through 2006, to an average of $7.6 billion per year from fiscal years 2007 through 2012—and, according to the Congressional Budget Office, are projected to increase further. The federal government also provides emergency aid in response to natural disasters. For example, we reported in September 2012 that major disaster declarations have increased over recent decades to a record of 98 in fiscal year 2011 compared with 65 in 2004. Had FEMA adjusted the indicator on which it principally relies to determine whether to recommend that a jurisdiction receive public assistance funding, to reflect changes in personal income and inflation, 44 percent and 25 percent fewer disaster declarations, respectively, would have met the threshold for public assistance during fiscal years 2004 through 2011. Over that period, FEMA obligated more than $80 billion in federal assistance for major disasters. The federal government’s exposure to major disasters continues to pose risks. Most recently, Congress provided more than $60 billion in budget authority for disaster assistance in the wake of Superstorm Sandy. We have found that the federal government is not well positioned to address the fiscal exposure presented by climate change, and needs a government-wide strategic approach with strong leadership to manage related risks. 
We reported in 2009 that while policymakers increasingly viewed climate change adaptation—defined as adjustments to natural or human systems in response to actual or expected climate change—as a risk-management strategy to protect vulnerable sectors and communities that might be affected by changes in the climate, the federal government’s emerging adaptation activities were carried out in an ad hoc manner and were not well coordinated across federal agencies, let alone with state and local governments. Subsequently, in May 2011, we reported that there was no coherent strategic government-wide approach to climate change funding and that federal officials do not have a shared understanding of strategic government-wide priorities. At that time, we recommended that the appropriate entities within the Executive Office of the President clearly establish federal strategic climate change priorities, including the roles and responsibilities of the key federal entities, taking into consideration the full range of climate-related activities within the federal government. The relevant federal entities have not directly addressed this recommendation. Federal agencies have made some progress toward better organizing across agencies, within agencies, and among different levels of government; however, the increasing fiscal exposure for the federal government calls for more comprehensive and systematic strategic planning, including, but not limited to, the following:

- A government-wide strategic approach with strong leadership and the authority to manage climate change risks that encompasses the entire range of related federal activities and addresses all key elements of strategic planning. Federal agencies recently released draft climate change adaptation plans. While individual agency actions are necessary, a centralized strategy driven by a government-wide plan is also needed to reduce the federal fiscal exposure to climate change, maximize investments, achieve efficiencies, and better position the government for success.
- More information to understand and manage federal insurance programs’ long-term exposure to climate change and to analyze the potential impacts of an increase in the frequency or severity of weather-related events on their operations.
- A government-wide approach for providing (1) the best available climate-related data for making decisions at the state and local level and (2) assistance for translating available climate-related data into information that officials need to make decisions. Potential gaps in satellite data also need to be effectively addressed.
- Improved criteria for assessing a jurisdiction’s capability to respond to and recover from a disaster without federal assistance, and better application of lessons from past experience when developing disaster cost estimates.

Potential gaps in environmental satellite data beginning as early as 2014 and lasting as long as 53 months have led to concerns that future weather forecasts and warnings—including warnings of extreme events such as hurricanes, storm surges, and floods—will be less accurate and timely. A number of decisions are needed to ensure contingency and continuity plans can be implemented effectively. We and others—including an independent review team reporting to the Department of Commerce and the department’s Inspector General—have raised concerns that problems and delays on environmental satellite acquisition programs will result in gaps in the continuity of critical satellite data used in weather forecasts and warnings. The importance of such data was recently highlighted by the advance warnings of the path, timing, and intensity of Superstorm Sandy. 
Since the 1960s, the United States has used both polar-orbiting and geostationary satellites to observe the earth and its land, oceans, atmosphere, and space environments. Polar-orbiting satellites constantly circle the earth in an almost north-south orbit providing global coverage of environmental conditions that affect the weather and climate. As the earth rotates beneath it, each polar-orbiting satellite views the entire earth’s surface twice a day. In contrast, geostationary satellites maintain a fixed position relative to the earth from a high-level orbit of about 22,300 miles in space. Used in combination with ground, sea, and airborne observing systems, both types of satellites have become an indispensable part of monitoring and forecasting weather and climate. Polar-orbiting satellites provide the data that go into numerical weather prediction models, which are a primary tool for forecasting weather days in advance—including forecasting the path and intensity of hurricanes and tropical storms. Geostationary satellites provide frequently updated graphical images that are used to identify current weather patterns and provide short-term warnings. With regard to polar satellites, the National Oceanic and Atmospheric Administration (NOAA) must make decisions about (1) whether and how to extend support for legacy satellite systems so that their data might be available if needed, (2) how much time and resources to invest in improving satellite models so that they assimilate data from alternative sources, (3) whether to pursue international agreements for access to additional satellite systems and how best to resolve any security issues with the foreign data, (4) when and how to test the value and integration of alternative data sources, and (5) how these preliminary mitigation plans will be integrated with NOAA’s broader end-to-end plans for sustaining weather forecasting capabilities. NOAA must also identify time frames for when these decisions will be made. 
We have ongoing work assessing NOAA’s efforts to limit and mitigate potential polar satellite data gaps. For the geostationary satellites, NOAA must demonstrate its progress in conducting training and simulations for contingency scenarios, evaluating the status of viable foreign satellites, and working with the user community to account for differences in product coverage under contingency scenarios. These steps are critical for NOAA to move forward in documenting the processes it will take to implement its contingency plans. Once these activities are completed, NOAA should update its contingency plan to provide more details on its contingency scenarios, associated time frames, and any preventive actions it is taking to minimize the possibility of a gap. We have ongoing work assessing NOAA’s actions to ensure that its plans are viable and that continuity procedures are in place and have been tested. One area—Modernizing the Outdated U.S. Financial Regulatory System—has been modified due to changing circumstances to include the Federal Housing Administration (FHA). To reflect these changes, the name of the area has been changed to Modernizing the U.S. Financial Regulatory System and Federal Role in Housing Finance. We first designated this area as high risk in 2009 due to the urgent need to reform the fragmented and outdated U.S. financial regulatory system. As detailed in our 2013 high risk update report, many actions are under way to implement oversight by new regulatory bodies and new requirements for market participants, although many rulemakings remain unfinished. Among the additional actions needed is resolving the role of the two housing-related government-sponsored enterprises—Fannie Mae and Freddie Mac—that continue operating under government conservatorships. 
However, a new challenge for the markets has also evolved as the decline in private sector participation in housing finance that began with the 2007-2009 financial crisis has resulted in much greater activity by FHA, whose single-family loan insurance portfolio has grown from about $300 billion in 2007 to more than $1.1 trillion in 2012. Although required to maintain capital reserves equal to at least 2 percent of its portfolio, FHA’s capital reserves have fallen below this level, due partly to increases in projected defaults on the loans it has insured. As a result, we are modifying this high-risk area to include FHA and acknowledging the need for actions beyond those already taken to help restore FHA’s financial soundness and define its future role. One such action would be to determine the economic conditions that FHA’s primary insurance fund would be expected to withstand without drawing on the Treasury. Recent events suggest that the 2-percent capital requirement may not be adequate to avoid the need for Treasury support under severe stress scenarios. Additionally, actions to reform the government-sponsored enterprises and to implement mortgage market reforms in the Dodd-Frank Act will need to consider the potential impacts on FHA’s risk exposure. Additional information on this area is provided on page 81 of our 2013 high risk update. Since our 2011 update, sufficient progress has been made to narrow the scope of three areas, including Strengthening the Department of Homeland Security Management Functions. In 2003, we designated implementing and transforming the Department of Homeland Security (DHS) as high risk because DHS had to transform 22 agencies—several with major management challenges—into one department. Further, failure to effectively address DHS’s management and mission risks could have serious consequences for U.S. national and economic security. 
Given the significant effort required to build and integrate a department as large and complex as DHS, our initial high-risk designation addressed the department’s initial transformation and subsequent implementation efforts, to include associated management and programmatic challenges. At that time, we reported that the creation of DHS was an enormous undertaking that would take time to achieve, and that the successful transformation of large organizations, even those undertaking less strenuous reorganizations, could take years to implement. Over the past 10 years, the focus of this high-risk area has evolved in tandem with DHS’s maturation and evolution. The overriding tenet has consistently remained the department’s ability to build a single, cohesive and effective department that is greater than the sum of its parts—a goal that requires effective collaboration and integration of its various components and management functions. In 2007, in reporting on DHS’s progress since its creation, as well as in our 2009 high risk update, we reported that DHS had made more progress in implementing its range of missions rather than its management functions, and that continued work was needed to address an array of programmatic and management challenges. DHS’s initial focus on mission implementation was understandable given the critical homeland security needs facing the nation after the department’s establishment, and the challenges posed by its creation, integration, and transformation. As DHS continued to mature, and as we reported in our assessment of DHS’s progress and challenges 10 years after 9/11, we found that the department implemented key homeland security operations and achieved important goals in many areas to create and strengthen a foundation to reach its potential. However, we also identified that more work remained for DHS to address weaknesses in its operational and implementation efforts, and to strengthen the efficiency and effectiveness of those efforts. 
We further reported that continuing weaknesses in DHS’s management functions had been a key theme impacting the department’s implementation efforts. Recognizing DHS’s progress in transformation and mission implementation, our 2011 high risk update focused on the continued need to strengthen DHS’s management functions (acquisition, information technology, financial management, and human capital) and integrate those functions within and across the department, as well as the impact of these challenges on the department’s ability to effectively and efficiently carry out its missions. While challenges remain for DHS to address across its range of missions, the department has made considerable progress in transforming its original component agencies into a single cabinet-level department and positioning itself to achieve its full potential. As a result, we narrowed the scope of the high-risk area and changed the name from Implementing and Transforming the Department of Homeland Security to Strengthening the Department of Homeland Security Management Functions. Since our last high risk update in January 2011, we have regularly met with senior DHS officials to discuss the department’s progress in addressing this high-risk area and written letters summarizing our feedback on DHS’s progress and work remaining to address the high-risk designation, most recently in December 2012. Our ongoing dialogue with DHS at the most senior levels has enabled us to understand DHS’s perspectives and provided an opportunity for us to consistently communicate our views on DHS’s progress and work remaining. DHS has made important progress in implementing, transforming, strengthening, and integrating its management functions, including taking numerous actions specifically designed to address our criteria for removing areas from the High Risk List; however, this area remains high risk because the department has significant work ahead. Leadership commitment. 
The Secretary, Deputy Secretary, and Under Secretary for Management of Homeland Security and other senior officials have continued to demonstrate commitment and top leadership support for addressing the department’s management challenges. They have also taken actions to institutionalize this commitment to help ensure the long-term success of the department’s efforts. For example, in May 2012, the Secretary of Homeland Security modified the delegations of authority between the Management Directorate and its counterparts at the component level to clarify and strengthen the authorities of the Under Secretary for Management across the department. Senior DHS officials have also periodically met with us over the past 4 years to discuss the department’s plans and progress in addressing this high-risk area, during which we provided feedback on the department’s efforts. According to these officials, and as demonstrated through their progress, the department is committed to demonstrating measurable, sustained progress in addressing this high-risk area. Corrective action plan. DHS has established a plan for addressing this high-risk area. Specifically, in a September 2010 letter to DHS, we identified and DHS agreed to achieve 31 actions and outcomes that are critical to addressing the challenges within the department’s management areas and in integrating those functions across the department. These key actions and outcomes include, among others, validating required acquisition documents in accordance with a department-approved, knowledge-based acquisition process, and obtaining and then sustaining unqualified audit opinions for at least 2 consecutive years on the department-wide financial statements. In January 2011, DHS issued its initial Integrated Strategy for High Risk Management, which included key management initiatives and related corrective action plans for addressing its management challenges and the outcomes we identified. 
DHS provided updates of its progress in implementing these initiatives and corrective actions in its later versions of the strategy—June 2011, December 2011, June 2012, and September 2012. The comprehensive strategy, if implemented and sustained, provides a path for DHS to be removed from GAO’s High Risk List. Framework to monitor progress. DHS has established a framework for monitoring its progress in implementing its corrective actions and addressing the 31 actions and outcomes. In the June 2012 update to the Integrated Strategy for High Risk Management, DHS included, for the first time, performance measures to track its progress in implementing all of its key management initiatives. Additionally, the Under Secretary for Management holds quarterly internal progress review meetings with senior officials from each management function to discuss progress toward achieving milestones and meeting performance goals. It will be important for DHS to continue to track progress toward achieving its goals and monitor and refine its measures and corrective actions, as needed. Capacity. In June 2012, DHS identified the resources needed to implement most (154 of 173) of its corrective actions, but needs to continue to identify resources for the remaining corrective actions; determine that sufficient resources and staff are committed to initiatives; work to mitigate shortfalls and prioritize initiatives, as needed; and communicate to senior leadership critical resource gaps. DHS also identified ways in which it is leveraging resources to implement corrective actions, which is particularly important in light of constrained budgets. For example, in October 2012, DHS reported that it is pooling resources and working across functional lines to create cross functional, matrixed teams and executive steering committees to ensure timely implementation of the strategy. 
However, it is too soon to determine whether this approach is a sustainable way for DHS to address the resource challenges and capacity gaps that have affected its implementation efforts at the department and component levels. Demonstrated, sustained progress. DHS has made important progress in implementing corrective actions across its management functions, but it has not yet demonstrated sustainable, measurable progress in addressing key challenges that remain within these functions and in the integration of those functions. DHS has implemented a number of actions demonstrating the department’s progress in improving its management functions. For example, DHS established the Office of Program Accountability and Risk Management in October 2011 to be responsible for the department’s overall acquisition governance process. DHS also established a formal IT Program Management Development Track and staffed Centers of Excellence with subject matter experts. We reported that as of March 2012, approximately two-thirds of the department’s major IT investments we reviewed (47 of 68) were meeting current cost and schedule commitments (i.e., goals). Additionally, in the financial management area, DHS has reduced the number of material weaknesses in internal controls and obtained a qualified audit opinion on its fiscal year 2012 financial statements. DHS has also implemented common policies, procedures, and systems, such as those related to human capital, across its management functions. However, DHS still has considerable work ahead in many areas. For example, in September 2012, we reported that most of DHS’s major acquisition programs continue to cost more than expected, take longer to deploy than planned, or deliver less capability than promised. We identified 42 programs that experienced cost growth or schedule slips, or both, with 16 of the programs’ costs increasing from a total of $19.7 billion in 2008 to $52.2 billion in 2011—an aggregate increase of 166 percent. 
Further, while DHS has defined and begun to implement a vision for a tiered governance structure to improve information technology (IT) management, we reported in July 2012 that the governance structure covers less than 20 percent (about 16 of 80) of DHS’s major IT investments and 3 of its 13 portfolios. DHS has also been unable to obtain an audit opinion on its internal controls over financial reporting, and needs to obtain and sustain unqualified audit opinions for at least two consecutive years on the department-wide financial statements. Finally, federal surveys have consistently found that DHS employees are less satisfied with their jobs than the government-wide average. Key to addressing the department’s management challenges is DHS demonstrating the ability to achieve sustained progress across the 31 actions and outcomes we identified as needed to address the high-risk designation, to which DHS agreed. As shown in table 1, we believe DHS has fully addressed 6, mostly addressed 2, partially addressed 16, and initiated 7 of the 31 key actions and outcomes. To more fully address our high-risk designation, DHS needs to continue implementing its Integrated Strategy for High Risk Management and show measurable, sustainable progress in implementing its key management initiatives and corrective actions and achieving outcomes. 
In doing so, it will be important for DHS to: make continued progress in addressing the 31 actions and outcomes and demonstrate that systems, personnel, and policies are in place to ensure that progress can be sustained over time; maintain its current level of top leadership support and sustained commitment to ensure continued progress in executing its corrective actions through completion; continue to implement its plan for addressing this high-risk area and periodically report its progress to Congress and GAO; closely track and independently validate the effectiveness and sustainability of its corrective actions and make midcourse adjustments, as needed; and monitor the effectiveness of its efforts to establish reliable resource estimates at the department and component levels, address and work to mitigate any resource gaps, and prioritize initiatives as needed to ensure it has the capacity to implement and sustain its corrective actions. We will continue to monitor DHS’s efforts in this high-risk area to determine if the actions and outcomes are achieved and sustained. Additional information on this area is provided on page 161 of our 2013 high risk update. Overall, the government continues to take high-risk problems seriously and is making long-needed progress toward correcting them. Congress has acted to address several individual high-risk areas through hearings and legislation. Our high risk update and high risk website, http://www.gao.gov/highrisk/, can help inform the oversight agenda for the 113th Congress and guide efforts of the administration and agencies to improve government performance and reduce waste and risks. In support of Congress, and to further progress on high-risk issues, we continue to review agencies’ efforts and make recommendations to address problems in high-risk areas. Continued perseverance in addressing high-risk areas will ultimately yield significant benefits. 
In that regard, the Government Performance and Results Act (GPRA) Modernization Act of 2010 (GPRAMA) provides the Executive Branch and Congress with new tools to identify and address management weaknesses that are undermining agencies’ capacity to achieve results. For example, the act requires agencies, in their annual performance plans, to describe the major management challenges they face—which, by definition, cover issues we have identified as high risk—as well as the actions they plan to take to address these challenges. In addition, agencies are to identify performance goals, performance measures, and milestones to gauge progress toward resolving these challenges. OMB is also required to develop long-term goals to improve management functions across the government. The act specifies that these goals should include five areas: financial management, human capital management, information technology management, procurement and acquisition management, and real property management. We have identified these areas as key management challenges for the government. Moreover, some aspects of these areas have warranted our designation as high risk, either government-wide or at certain agencies. OMB is required to provide clear milestones and periodic status reports on progress being made and actions needed for additional progress. Over the years, the Committee on Homeland Security and Governmental Affairs and its predecessors have done commendable work focusing attention on improving government management and performance—by reporting out legislation, such as the original GPRA and GPRAMA, and through hearings, such as this one. Moving forward, congressional oversight and sustained attention by top administration officials will be essential to ensure further improvement in the management and performance of federal programs and operations and to address high-risk areas. Thank you, Mr. Chairman, Ranking Member Coburn, and Members of the Committee. This concludes my testimony. 
I would be pleased to answer questions. For further information on GAO’s high risk program, contact J. Christopher Mihm at (202) 512-6806 or [email protected]. For information on DHS, contact Cathleen A. Berrick at (202) 512-3404 or [email protected]. Contact points for the individual high-risk areas are listed in GAO-13-283 and on our high-risk website, http://www.gao.gov/highrisk. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. We are removing the management of interagency contracting from the High Risk List based on (1) continued progress made by agencies in addressing previously identified deficiencies, (2) establishment of additional management controls, (3) creation of a policy framework for establishing new interagency contracts, and (4) steps taken to address the need for better data on these contracts. Congressional oversight and the leadership of the Office of Management and Budget’s (OMB) Office of Federal Procurement Policy (OFPP)—which provides direction on government-wide procurement policies—have been vital in addressing the issues that led this area to be designated high risk. Interagency contracting—where one agency either places an order using another agency’s contract or obtains contracting support services from another agency—can help streamline the procurement process, take advantage of unique expertise in a particular type of procurement, and achieve savings. Interagency contracts are designed to leverage the government’s buying power and allow agencies to meet the demand for goods and services at a time when the federal government is focused on achieving efficiencies in the acquisition process. While this method of contracting can save the government money and effort when properly managed, it also poses a variety of risks. 
In 2005, we designated the management of interagency contracting as high risk due in part to unclear lines of accountability between customer and assisting agencies and the potential for improper use, including out-of-scope work and noncompliance with competition requirements. In our 2007 high risk update, we identified the continuing need for (1) additional management controls and guidance and (2) clearer definitions of roles and responsibilities as the keys to addressing these issues. In our 2011 high risk update, we highlighted additional challenges agencies faced in fully realizing the benefits of interagency contracts, including the lack of data and the risk of potential duplication when new contracting vehicles are created. Duplication among interagency contracts can result in missed opportunities to leverage the government’s buying power and may adversely affect the administrative efficiencies and cost savings expected with their use. To address these issues, our prior work identified the need for (1) a policy framework and business case analysis requirements to support the creation of certain new contracts and (2) improved data on existing interagency contracts. The federal government has made significant progress in reducing the interagency contracting risks that led to our high-risk designation. In our 2009 and 2011 high risk updates we noted improvements in procedures used in making purchases on behalf of the Department of Defense (DOD)—the largest user of interagency contracts. These included better defined roles and responsibilities and enhanced controls over funding procedures. Additionally, the DOD Inspector General has reported a significant decrease in problems with DOD procurements through other federal agencies in congressionally mandated reviews of interagency acquisitions. We also noted that the General Services Administration (GSA) and OMB have established corrective action plans to implement our prior recommendations. 
Since our last update, as discussed in the following sections, federal agencies have continued to address weaknesses related to the use, creation, and oversight of interagency contracting vehicles. Strengthened management controls for the use of interagency contracts. Most agencies have taken steps to implement and reinforce interagency contracting policies to address prior concerns about the improper use of these contracts. In response to congressional direction, Federal Acquisition Regulation (FAR) provisions on interagency acquisitions were revised to require that agencies make a best procurement approach determination to justify the use of an interagency contract and prepare written interagency agreements outlining the roles and responsibilities of customer and assisting organizations. The best procurement approach determination ensures that the requesting agency considers factors such as the suitability of the contract vehicle and compliance with laws and policies. Congress also strengthened requirements for interagency acquisitions performed on behalf of DOD, as well as the competition rules for placing orders on multiple-award contracts, which are commonly used in interagency acquisitions. As we recently reported, OMB’s October 2012 analysis of reports from the 24 agencies that account for almost all contract spending government-wide found that most had implemented management controls to reinforce the new FAR requirements and strengthen the management of interagency acquisitions. All 24 agencies also reported having oversight mechanisms to ensure their internal controls were operating properly. New controls over creation of new interagency contract vehicles. In response to congressional direction and our prior recommendation, OMB established a policy framework in September 2011 to govern the creation of new interagency contract vehicles. 
The framework addresses concerns about potential duplication by requiring agencies to develop a thorough business case prior to establishing certain contract vehicles. The guidance further requires senior agency officials to approve the business cases and post them on an OMB website to provide interested federal stakeholders an opportunity to offer feedback. OMB then is able to conduct follow-up with sponsoring agencies if significant questions, including ones related to duplication, are raised during the vetting process. OMB also has established a new strategic sourcing governance council, which is expected to examine how to use existing interagency contract vehicles to support government-wide strategic sourcing efforts. Improved data on interagency contracts. In response to our recommendations, OMB and GSA have taken a number of steps to address the need for better data on interagency contract vehicles. These efforts should enhance both government-wide efforts to manage interagency contracts and agency efforts to conduct market research and negotiate better prices. To promote better and easier access to data on existing contracts, OMB has made improvements to its Interagency Contract Directory, a searchable online database of indefinite-delivery vehicles available for interagency use. It has also posted information on government-wide acquisition contracts and blanket purchase agreements available for use under the Federal Strategic Sourcing Initiative on an OMB website, accessible by federal agencies. Improving the availability of data is also a key facet of GSA’s Schedules Modernization initiative, launched in June 2012. GSA has several pilot projects underway to collect and share data on its Multiple Award Schedules program, with the goal of improving pricing. GSA also has assembled a data team to improve access to comprehensive and reliable data across GSA contracting programs. 
Removing the management of interagency contracting from the High Risk List does not mean that the federal government’s use of these contracts is without challenges. For example, we and the DOD Inspector General have found instances in which DOD did not complete best procurement approach determinations as required, so continued attention to compliance is necessary. See GAO-13-133R and Department of Defense, Inspector General, Contracting Improvements Still Needed in DOD’s FY 2011 Purchases Made Through the Department of Veterans Affairs, DODIG-2013-028 (Alexandria, VA: Dec. 7, 2012). But we believe there are mechanisms in place that OMB and federal agencies can use to identify and address interagency contracting issues before they put the government at significant risk for waste, fraud, or abuse. For example, the revised FAR rules on interagency acquisitions require senior procurement executives to submit an annual report on interagency acquisitions to OMB, which can use these reports to identify issues and risks at the agency level as well as government-wide trends. In addition, many agencies have reported building interagency contracting into internal reviews. Finally, we plan to continue to monitor the management of interagency contracts in our reviews of federal contracting. We are removing the Internal Revenue Service’s (IRS) Business Systems Modernization (BSM) program from the High Risk List because of IRS’s progress in addressing the significant weaknesses in information technology (IT) and financial management capabilities that led to the high-risk designation, and its commitment to sustaining progress in the future. As we have with other areas we have removed, we will continue to monitor this area, as appropriate, to ensure that the improvements we have noted are sustained. 
BSM is a multi-billion dollar, highly complex effort that involves the development and delivery of a number of modernized tax administration and internal management systems, as well as core infrastructure projects that are intended to replace the agency’s aging business and tax processing systems. It is critical to providing improved and expanded service to taxpayers, achieving internal business efficiencies for IRS, and producing the reliable and timely financial management information needed to better enable the agency to justify its resource allocation decisions and funding requests. IRS began modernizing its timeworn, paper-intensive approach to tax returns processing in the mid-1980s. In 1995, we identified serious management and technical weaknesses in the modernization program that jeopardized its successful completion. We recommended many actions to fix the problems, and added IRS’s modernization to our High Risk List. In 1995, we also added the agency’s financial management to our High Risk List due to long-standing and pervasive problems which hampered the effective collection of revenues and precluded the preparation of auditable financial statements. We combined the two issues into one high-risk area in 2005 since resolution of the most serious financial management problems depended largely on the success of the business systems modernization program. See GAO, High-Risk Series: An Update, GAO-09-271 (Washington, D.C.: Jan. 22, 2009), and GAO-07-310. Since then, IRS has made progress in addressing these weaknesses; for example, it established policies, procedures, and tools for developing and managing project requirements. 
IRS also implemented the initial phase of several key automated financial management systems, including a cost accounting module that it populated with data; developed a methodology to allocate costs to its business units; improved the reliability of its property and equipment records; and made significant progress in addressing long-standing deficiencies in controls over tax revenue collections, tax refund disbursements, and hard-copy tax receipts and related data. In addition, IRS completed several pilot projects to demonstrate its ability to determine the full cost of its programs and activities. However, we kept BSM on the High Risk List because many challenges remained, including (1) improving processes for delivering modernized IT systems within cost and schedule estimates, (2) developing the cost and revenue information needed to support day-to-day decision making, and (3) addressing outstanding weaknesses in information security. Throughout those years, Congress conducted oversight of the BSM program by, among other things, requiring that IRS submit annual expenditure plans that needed to meet certain conditions, including a review by GAO (GAO-11-278). Since 2011, IRS has worked to address these issues. For example, the agency delivered the initial phase of CADE 2 and began the daily processing and posting of individual taxpayer accounts in January 2012, enhancing tax administration and improving service by enabling faster refunds for more taxpayers, more timely account updates, and faster issuance of taxpayer notices. Also, in March 2012, IRS established the database housing all individual taxpayer account data and has plans underway to gradually increase its use for customer service and compliance purposes. 
Further, in May 2012, IRS initiated plans for phase 2 of CADE 2, which is in large part intended to address the unpaid assessment financial material weakness we have reported on in the past. See GAO, Financial Audit: IRS’s Fiscal Years 2012 and 2011 Financial Statements, GAO-13-120 (Washington, D.C.: Nov. 9, 2012). As IRS progresses with this planning effort, it will be important for the agency to identify functionality it can deliver early on so it can begin reaping benefits for its employees and taxpayers and making progress towards retiring the legacy Individual Master File. In the information security area, IRS (1) upgraded application software so that the versions in use are now supported by vendors, (2) improved the auditing and monitoring capabilities of a general support system, and (3) tested its general ledger system for tax transactions in its current operating environment. Further, IRS funded critical software upgrades for some of its key financial reporting systems, including its administrative accounting system and its procurement system, which was an important step toward addressing its information system issues. These improvements led us to conclude that IRS’s remaining deficiencies in internal controls over information security no longer constitute a material weakness for financial reporting as of September 30, 2012. However, IRS still needs to strengthen its program for monitoring the effectiveness of corrective actions taken in response to our information security recommendations. IRS also took additional steps to strengthen its IT management capabilities. For example, in July 2011, we noted that IRS had in place close to 80 percent of the practices needed for an effective investment management process, including all of the practices needed for effective project oversight. 
In October 2011, we also reported that IRS had embarked on an effort to improve its software development practices using the Carnegie Mellon University Software Engineering Institute’s Capability Maturity Model Integration (CMMI), which calls for disciplined software development and acquisition practices that are considered industry best practices. In September 2012, IRS’s application development organization reached CMMI maturity level 3, a high achievement by industry standards. Finally, in October 2011, we highlighted CADE 2 as one of seven successful acquisitions in the federal government because, up to that point, it had achieved cost, schedule, scope, and performance goals through the use of critical success factors, including program staff actively engaged with stakeholders, program staff having the right knowledge and skills, agency executives engaged in the program, and streamlined and targeted governance. IRS officials are also applying these critical success factors to other programs. Because of the significant progress made in addressing this high-risk area over the years, starting in fiscal year 2012, Congress did not require the submission of an annual expenditure plan. While we are removing IRS’s BSM program from the High Risk List, we will nonetheless continue to closely monitor the agency’s efforts because the modernization program is complex and critical to administering and enforcing tax laws. In addition, the remaining recurring deficiencies in information security, along with new deficiencies we identified during our audit of IRS’s fiscal year 2012 financial statements, merit continued and consistent commitment and attention from IRS management. 
Specifically, IRS will need to continue to take steps to (1) improve its testing and monitoring capabilities, (2) ensure that policies and procedures are updated, and (3) address unresolved and newly identified control deficiencies, to sustain progress in improving its information system controls and have greater assurance that financial and taxpayer data will not remain vulnerable to inappropriate use, modification, or disclosure, possibly without being detected. We currently have a mandate to perform annual reviews of IRS’s major information technology programs, and we also perform the annual audit of IRS’s financial statements, including the effectiveness of internal controls over financial reporting systems. We plan to continue to monitor IRS’s BSM program through these reviews. Climate change poses risks to many environmental and economic systems—including agriculture, infrastructure, ecosystems, and human health—and presents a significant financial risk to the federal government. The United States Global Change Research Program (USGCRP) has observed that the impacts and costliness of weather disasters will increase in significance as what are considered “rare” events become more common and intense due to climate change. Among other impacts, climate change could threaten coastal areas with rising sea levels, alter agricultural productivity, and increase the intensity and frequency of severe weather events such as floods, drought, and hurricanes. Weather-related events have cost the nation tens of billions of dollars in damages over the past decade. For example, in 2012, the administration requested $60.4 billion for Superstorm Sandy recovery efforts. These impacts pose significant financial risks for the federal government, which owns extensive infrastructure, insures property through federal flood and crop insurance programs, provides technical assistance to state and local governments, and provides emergency aid in response to natural disasters. 
However, the federal government is not well positioned to address this fiscal exposure, partly because of the complex, cross-cutting nature of the issue. Given these challenges and the nation’s precarious fiscal condition, we have added Limiting the Federal Government’s Fiscal Exposure to Climate Change to our 2013 list of high-risk areas. Climate change adaptation—defined as adjustments to natural or human systems in response to actual or expected climate change—is a risk-management strategy to help protect vulnerable sectors and communities that might be affected by changes in the climate. For example, adaptation measures may include raising river or coastal dikes to protect infrastructure from sea level rise, building higher bridges, and increasing the capacity of storm water systems. Policymakers increasingly view adaptation in these terms, but, as we reported in 2009, the federal government’s emerging adaptation activities were carried out in an ad hoc manner and were not well coordinated across federal agencies, let alone with state and local governments. The federal government has a number of efforts underway to decrease domestic greenhouse gas emissions, but decreasing global emissions depends in large part on cooperative international efforts. Further, according to the National Research Council (NRC) and USGCRP, greenhouse gases already in the atmosphere will continue altering the climate system for many decades. As such, the impacts of climate change can be expected to increase fiscal exposure for the federal government in many areas: Federal government as property owner. The federal government owns and operates hundreds of thousands of buildings and facilities that could be affected by a changing climate. In addition, the federal government manages about 650 million acres––29 percent of the 2.27 billion acres of U.S. 
land—for a wide variety of purposes, such as recreation, grazing, timber, and fish and wildlife. In 2007, we recommended that the Secretaries of Agriculture, Commerce, and the Interior develop guidance for resource managers that explains how they are expected to address the effects of climate change, and the three departments generally agreed with the recommendation. We have ongoing work related to adapting infrastructure and the management of federal lands to a changing climate.

Federal insurance programs. Two important federal insurance efforts—the National Flood Insurance Program (NFIP) and the Federal Crop Insurance Corporation—are based on conditions, priorities, and approaches that were established decades ago and do not account for climate change. NFIP has been on our High Risk List since March 2006 because of concerns about its long-term financial solvency and related operational issues. In March 2007, we reported that both of these insurance programs' exposure to weather-related losses had grown substantially, and that the agencies responsible for them had done little to develop the information necessary to understand their long-term exposure to climate change. We recommended that the responsible agencies analyze the potential long-term fiscal implications of climate change and report their findings to Congress. The agencies agreed with the recommendation and contracted with experts to study their programs' long-term exposure to climate change, but the results of the work have not yet been reported to Congress. In addition, in June 2011, we reported that external factors continue to complicate the administration of NFIP and affect its financial stability. In particular, the Federal Emergency Management Agency (FEMA), which administers NFIP, has not been authorized to account for long-term erosion when updating the flood maps used to set premium rates for NFIP, increasing the likelihood that premiums would not cover future losses. 
We suggested that Congress consider authorizing NFIP to account for long-term flood erosion in its flood maps, and the Biggert-Waters Flood Insurance Reform Act of 2012 requires FEMA to use information on topography, coastal erosion areas, changing lake levels, future changes in sea levels, and the intensity of hurricanes in updating its flood maps. While these provisions respond to our suggestion to Congress, their ultimate effectiveness will depend on their implementation by FEMA. It is too early to evaluate such efforts, but we plan to examine NFIP in the near future.

Technical assistance to state and local governments. The federal government invests billions of dollars annually in infrastructure projects that state and local governments prioritize and supervise. These projects have large up-front capital investments and long lead times that require decisions about how to address climate change to be made well before its potential effects are discernible. We reported in October 2009 that insufficient site-specific data—such as local temperature and precipitation projections—make it hard for state and local officials to justify the current costs of adaptation efforts for potentially less certain future benefits. We recommended that the appropriate entities within the Executive Office of the President develop a strategic plan for adaptation that, among other things, identifies mechanisms to increase the capacity of federal, state, and local agencies to incorporate information about current and potential climate change impacts into government decision making. USGCRP's 2012-2021 strategic plan for climate change science, released in April 2012, recognizes this need by identifying enhanced information management and sharing as a key objective, and USGCRP is undertaking several actions designed to better coordinate the use and application of federal climate science. We have ongoing work related to these issues. 
In addition, gaps in satellite coverage, which could occur as soon as 2014, are expected to affect the continuity of climate and space weather measurements important to developing the information needed by state and local officials. According to National Oceanic and Atmospheric Administration program officials, a satellite data gap would result in less accurate and timely weather forecasts and warnings of extreme events—such as hurricanes, storm surges, and floods. We have concluded that the potential gap in weather satellite data is a high-risk area and added it to the High Risk List this year.

Disaster aid. In the event of a major disaster, federal funding for response and recovery comes from the Disaster Relief Fund managed by FEMA and the disaster aid programs of other participating federal agencies. The federal government does not budget for these costs and runs the risk of facing a large fiscal exposure at any time. We reported in September 2012 that disaster declarations have increased over recent decades, to a record 98 in fiscal year 2011 compared with 65 in 2004. Over that period, FEMA obligated over $80 billion in federal assistance for disasters. We found that FEMA has had difficulty implementing longstanding plans to assess national preparedness capabilities and that FEMA's indicator for determining whether to recommend that a jurisdiction receive disaster assistance does not accurately reflect the ability of state and local governments to respond to disasters. In September 2012, we recommended, among other things, that FEMA develop a methodology to more accurately assess a jurisdiction's capability to respond to and recover from a disaster without federal assistance. FEMA concurred with this recommendation. In addition, although USGCRP coordinates the activities of relevant agencies and programs, it has no mechanisms for making or enforcing important decisions and priorities. 
In May 2011, we found no coherent strategic government-wide approach to climate change funding and that federal officials do not have a shared understanding of strategic government-wide priorities. At that time, we recommended that the appropriate entities within the Executive Office of the President clearly establish federal strategic climate change priorities, including the roles and responsibilities of the key federal entities, taking into consideration the full range of climate-related activities within the federal government. The relevant federal entities have not directly addressed this recommendation. Federal agencies have made some progress toward better organizing across agencies, within agencies, and among different levels of government; however, the increasing fiscal exposure for the federal government calls for more comprehensive and systematic strategic planning including, but not limited to, the following:

- A government-wide strategic approach with strong leadership and the authority to manage climate change risks that encompasses the entire range of related federal activities and addresses all key elements of strategic planning.
- More information to understand and manage federal insurance programs' long-term exposure to climate change and analyze the potential impacts of an increase in the frequency or severity of weather-related events on their operations.
- A government-wide approach for providing (1) the best available climate-related data for making decisions at the state and local level and (2) assistance for translating available climate-related data into information that officials need to make decisions.
- Actions to address potential gaps in satellite data.
- Improved criteria for assessing a jurisdiction's capability to respond to and recover from a disaster without federal assistance, and to better apply lessons from past experience when developing disaster cost estimates. 
Additional information on this area is provided on page 61 of our 2013 high risk update.

For 2013, we are designating a new high-risk area—Mitigating Gaps in Weather Satellite Data. We and others—including an independent review team reporting to the Department of Commerce and the department's Inspector General—have raised concerns that problems and delays on environmental satellite acquisition programs will result in gaps in the continuity of critical satellite data used in weather forecasts and warnings. The importance of such data was recently highlighted by the advance warnings of the path, timing, and intensity of Superstorm Sandy. Since the 1960s, the United States has used both polar-orbiting and geostationary satellites to observe the Earth and its land, oceans, atmosphere, and space environments. Polar-orbiting satellites constantly circle the Earth in an almost north-south orbit, providing global coverage of environmental conditions that affect the weather and climate. As the Earth rotates beneath it, each polar-orbiting satellite views the entire Earth's surface twice a day. In contrast, geostationary satellites maintain a fixed position relative to the Earth from a high-level orbit of about 22,300 miles in space. Used in combination with ground, sea, and airborne observing systems, both types of satellites have become an indispensable part of monitoring and forecasting weather and climate. For example, polar-orbiting satellites provide the data that go into numerical weather prediction models, which are a primary tool for forecasting weather days in advance, including forecasting the path and intensity of hurricanes and tropical storms. Geostationary satellites provide frequently updated graphical images that are used to identify current weather patterns and provide short-term warnings. 
For more than 40 years, the United States has operated two separate operational polar-orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellite series, which is managed by the National Oceanic and Atmospheric Administration (NOAA)—a component of the Department of Commerce; and the Defense Meteorological Satellite Program (DMSP), which is managed by the Air Force. The government also relies on data from a European satellite program, called the Meteorological Operational (MetOp) satellite series. These satellites are positioned so that they cross the Equator in the early morning, midmorning, and early afternoon in order to obtain regular updates throughout the day. With the expectation that combining the two separate U.S. polar satellite programs would result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and the Department of Defense (DOD) to converge the two programs into a single new satellite acquisition, which became the National Polar-orbiting Operational Environmental Satellite System (NPOESS). However, in the years that followed, NPOESS encountered significant technical challenges in sensor development and experienced program cost growth and schedule delays, in part due to problems in the program's management structure. After several restructurings and recurring challenges, in February 2010, the Executive Office of the President's Office of Science and Technology Policy announced that NOAA and DOD would no longer jointly procure NPOESS; instead, each agency would plan and acquire its own satellite system. Specifically, NOAA, with support from the National Aeronautics and Space Administration (NASA), would be responsible for the afternoon orbit, and DOD would be responsible for the early morning orbit. The U.S. partnership with the European satellite agency for data from the midmorning orbit would continue as planned. Subsequently, NOAA initiated its replacement program, the Joint Polar Satellite System (JPSS). 
JPSS consists of a demonstration satellite—called the Suomi National Polar-orbiting Partnership (NPP)—launched in October 2011; two satellites, with at least five instruments planned for each, to be launched by March 2017 and December 2022, respectively; two stand-alone satellites to accommodate three additional instruments; and ground systems for the entire program. The program is currently estimated to cost $12.9 billion. In June 2012, we reported that NOAA and NASA made progress in establishing the JPSS program and in launching and operating the demonstration satellite, but noted that program officials expect there to be a gap in satellite observations before the first JPSS satellite is launched. Specifically, NOAA officials anticipate a gap in the afternoon orbit of 18 to 24 months between the time that NPP reaches the end of its lifespan and when the first JPSS satellite is fully ready for operational use. We identified other scenarios where the gap could last from 17 to 53 months. For example, the gap would be 17 months if NPP lasts 5 years until October 2016 and JPSS is launched as planned in March 2017 and undergoes a 12-month on-orbit checkout before it is fully operational. Alternatively, if NPP lasts only 3 years—which NASA engineers consider possible due to poor workmanship in the fabrication of the instruments—and JPSS launches 1 year later than currently planned, the gap in satellite observations could reach 53 months. Figure 1 depicts a potential gap in the afternoon orbit.

After NPOESS was disbanded, DOD also began planning its own follow-on polar satellite program. However, it halted work in early 2012 because it still has two legacy DMSP satellites in storage that will be launched as needed to maintain observations in the early morning orbit. The agency currently plans to launch its two remaining satellites in 2014 and 2020. Moreover, DOD is working to identify alternatives to meet its future environmental satellite requirements. 
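The afternoon-orbit gap scenarios described above reduce to simple date arithmetic. A minimal sketch follows; the dates are the scenario assumptions stated in the report (NPP end of life, JPSS-1 launch, plus the 12-month on-orbit checkout in both cases), and the helper function is illustrative, not part of any GAO or NOAA tool:

```python
from datetime import date

def months_between(start, end):
    """Whole calendar months from start to end."""
    return (end.year - start.year) * 12 + (end.month - start.month)

# Best case: NPP lasts 5 years (to October 2016); JPSS-1 launches on
# schedule in March 2017 and completes a 12-month on-orbit checkout,
# becoming fully operational in March 2018.
best_gap = months_between(date(2016, 10, 1), date(2018, 3, 1))
print(best_gap)   # 17 months

# Worst case: NPP lasts only 3 years (to October 2014); JPSS-1 launches
# a year late (March 2018) and completes the same 12-month checkout,
# becoming fully operational in March 2019.
worst_gap = months_between(date(2014, 10, 1), date(2019, 3, 1))
print(worst_gap)  # 53 months
```

Both figures match the 17- to 53-month range the report identifies for the afternoon orbit.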
However, in June 2012, we reported that there is a possibility of satellite data gaps in DOD's early morning orbit. The two remaining DMSP satellites may not work as intended because they were built in the late 1990s and will be quite old by the time they are launched. If the satellites do not perform as expected, a data gap in the early morning orbit could occur as early as 2014. Satellite data gaps in the morning or afternoon polar orbits would lead to less accurate and timely weather forecasting; as a result, advance warning of extreme events would be affected. Such extreme events could include hurricanes, storm surges, and floods. For example, the National Weather Service performed case studies to demonstrate how its forecasts would have been affected if there were no polar satellite data in the afternoon orbit, and noted that its forecasts for the "Snowmaggedon" winter storm that hit the Mid-Atlantic coast in February 2010 would have predicted a less intense storm farther east, with about half of the precipitation at 3, 4, and 5 days before the event. Specifically, the models would have under-forecasted the amount of snow by at least 10 inches. Similarly, a European weather organization recently reported that NOAA's forecasts of Superstorm Sandy's track could have been hundreds of miles off without polar-orbiting satellites—rather than identifying the New Jersey landfall within 30 miles 4 days before landfall, the models would have shown the storm remaining at sea. In June 2012, we reported that while NOAA officials communicated publicly and often about the risk of a polar satellite data gap, the agency had not established plans to mitigate the gap. At the time, NOAA officials stated that the agency would continue to use existing satellites as long as they provide data and that there were no viable alternatives to the JPSS program. 
However, our report noted that a more comprehensive mitigation plan was essential since it is possible that other governmental, commercial, or foreign satellites could supplement the polar satellite data. For example, other nations continue to launch polar-orbiting weather satellites to acquire data such as sea surface temperatures, sea surface winds, and water vapor. Also, over the next few years, NASA plans to launch satellites that will collect information on precipitation and soil moisture. Because it could take time to adapt ground systems to receive, process, and disseminate an alternative satellite's data, we noted that any delays in establishing mitigation plans could leave the agency little time to leverage its alternatives. We recommended that NOAA establish mitigation plans for pending satellite gaps in the afternoon orbit as well as potential gaps in the early morning orbit. In September 2012, the Under Secretary of Commerce for Oceans and Atmosphere (who is also the NOAA Administrator) reported that NOAA had several actions under way to address polar satellite data gaps, including (1) an investigation into how to maximize the life of the demonstration satellite, (2) an investigation into how to accelerate the development of the second JPSS satellite, and (3) the development of a mitigation plan to address potential data gaps until the first JPSS satellite becomes operational. The Under Secretary also directed NOAA's Assistant Secretary to, by mid-October 2012, establish a contract to conduct an enterprise-wide examination of contingency options and to develop a written, descriptive, end-to-end plan that considers the entire flow of data from possible alternative sensors through data assimilation and on to forecast model performance. In October 2012, NOAA issued a mitigation plan for a potential 14- to 18-month gap in the afternoon orbit, between the current polar satellite and the first JPSS satellite. 
The plan identifies and prioritizes options for obtaining critical observations, including alternative satellite data sources and improvements to data assimilation in models. It also lists technical, programmatic, and management steps needed to implement these options. However, these plans are only the beginning. The agency must make difficult decisions on which steps it will implement to ensure that its mitigation plans are viable when needed. For example, NOAA must make decisions about (1) whether and how to extend support for legacy satellite systems so that their data might be available if needed, (2) how much time and resources to invest in improving satellite models so that they assimilate data from alternative sources, (3) whether to pursue international agreements for access to additional satellite systems and how best to resolve any security issues with the foreign data, (4) when and how to test the value and integration of alternative data sources, and (5) how these preliminary mitigation plans will be integrated with the agency’s broader end-to-end plans for sustaining weather forecasting capabilities. NOAA must also identify time frames for when these decisions will be made. We have ongoing work assessing NOAA’s efforts to limit and mitigate potential polar satellite data gaps. Geostationary environmental satellites transmit frequently updated images of the weather currently affecting the United States to every national weather forecast office in the country. These are the satellite images that the public often sees on television news programs. NOAA plans to have its $10.9 billion Geostationary Operational Environmental Satellite-R (GOES-R) series replace the current fleet of geostationary satellites, which will begin to reach the end of their useful lives in 2015. The GOES-R program has undergone a series of changes since 2006 and now consists of four geostationary satellites and a ground system. 
However, problems with instrument and ground system development caused a 19-month delay in completing the program's preliminary design review, which occurred in February 2012. In June 2012, we reported that GOES-R schedules were not fully reliable and that they could contribute to delays in satellite launch dates. Program officials acknowledged that the likelihood of meeting the October 2015 launch date was 48 percent. While NOAA's policy is to have two operational satellites and one backup satellite in orbit at all times, continued delays in the launch of the first GOES-R satellite could lead to a gap in satellite coverage. This policy proved useful in December 2008 and again in September 2012, when the agency experienced problems with one of its operational satellites but was able to move its backup satellite into place until the problems were resolved. However, beginning in April 2015, NOAA expects to have only two operational satellites and no backup satellite in orbit until GOES-R is launched and completes an estimated 6-month post-launch test period. As a result, there could be a gap of a year or more during which a backup satellite would not be available. If NOAA were to experience a problem with either of its operational satellites before GOES-R is in orbit and operational, it would need to rely on older satellites that are beyond their expected operational lives and may not be fully functional. Any further delays in the launch of the first satellite in the GOES-R program would likely increase the risk of a gap in satellite coverage. In September 2010, we reported that NOAA had not established adequate continuity plans for its geostationary satellites. Specifically, in the event of a satellite failure, with no backup available, NOAA planned to reduce its operations to a single satellite and, if available, rely on a satellite from a foreign nation. 
However, the agency did not have plans that included processes, procedures, and resources needed to transition to a single or foreign satellite. Without such plans, there would be an increased risk that users would lose access to critical data. We recommended that NOAA develop and document continuity plans for the operation of geostationary satellites that included implementation procedures, resources, staff roles, and timetables needed to transition to a single satellite, foreign satellite, or other solution. In September 2011, NOAA developed an initial continuity plan that generally includes these elements. Specifically, NOAA’s plan identified steps it would take in transitioning to a single or foreign satellite; the amount of time this transition would take; roles of product area leads; and resources such as imaging product schedules, disk imagery frequency, and staff to execute the changes. In December 2012, NOAA issued an updated plan that provides additional contingency scenarios. However, it is not evident that critical steps have been implemented, including simulating continuity situations and working with the user community to account for differences in various continuity scenarios. These steps are critical for NOAA to move forward in documenting the processes it will take to implement its contingency plans. Once these activities are completed, NOAA should update its contingency plan to provide more details on its contingency scenarios, associated time frames, and any preventative actions it is taking to minimize the possibility of a gap. We have ongoing work assessing NOAA’s actions to ensure that its plans are viable and that continuity procedures are in place and have been tested. Additional information on this area is provided on page 155 of our 2013 high risk update. 
Progress has been made in one of the three areas we identified in our 2011 High Risk List—the Department of the Interior's (Interior) reorganization of its oversight of offshore oil and gas activities.

Reorganization. In October 2011, following the transfer of the Minerals Management Service's oil and gas revenue collection functions to the newly created Office of Natural Resources Revenue, Interior established two new bureaus to provide oversight of offshore resources and operational compliance with environmental and safety requirements. The new Bureau of Ocean Energy Management (BOEM) is responsible for leasing and approval of offshore development plans, while the new Bureau of Safety and Environmental Enforcement (BSEE) is responsible for lease operations, safety, and enforcement. Because the responsibilities of these two bureaus are closely interconnected and depend on effective coordination, Interior developed memoranda and standard operating procedures to define roles and responsibilities and to facilitate and formalize coordination. Interior also made numerous policy changes intended to improve its oversight of offshore oil and gas activities, such as new requirements and policies designed to mitigate the risk of a subsea well blowout or spill. In July 2012, we concluded that Interior had fundamentally completed the reorganization of its oversight of offshore oil and gas activities. In ongoing and future reviews, our primary focus will be to assess Interior's remaining challenges in managing oil and gas resources—revenue collection and human capital. In so doing, we will also continue to consider Interior's reorganization and its effect on the agency's ability to oversee federal lands and waters. Revenue collection. 
In 2008, we reported that Interior collected lower levels of revenues for oil and gas production than all but 11 of 104 oil and gas resource owners whose revenue collection systems were evaluated in a comprehensive industry study—these resource owners included many other countries as well as some states. We recommended that Interior (1) undertake a comprehensive reassessment of its revenue collection policies and processes and (2) establish a balance between collecting revenues and ensuring that public lands and waters remain an attractive option for oil and gas development. In response to our recommendation, Interior contracted for a study called "Comparative Assessment of the Federal Oil and Gas Fiscal System" with the goal of informing decisions about federal lease terms, such as royalties, by consistently comparing the federal oil and gas fiscal systems with those of other countries and identifying ways to increase revenues and improve diligent development. Interior completed this study in October 2011 but is still deciding if and how to use the results of the study to alter its lease terms. In addition, Interior continues to work to implement a number of our recommendations directed at improving its ability to conduct oil and gas production verification inspections. Finally, Interior is working to implement our recommendations to correct numerous problems with its efforts to collect data on oil and gas produced on federal lands, including missing data, errors in company-reported data on oil and gas production, sales data that did not reflect prevailing market prices for oil and gas, and a lack of controls over changes to the data that companies reported. We are currently engaged in a review of Interior's revenue collection practices that will evaluate, among other things, Interior's progress in addressing our recommendations. Human capital. 
We have reported that the bureaus responsible for oversight and management of federal oil and gas resources on federal lands and in federal waters—the Bureau of Land Management (BLM) and the Minerals Management Service (the predecessor to BOEM and BSEE)—have encountered persistent problems in hiring, training, and retaining staff. For example, in 2010, we found that both BLM and the Minerals Management Service experienced high turnover rates in key oil and gas inspection and engineering positions, potentially affecting their oversight of oil and gas development on federal leases. For fiscal years 2012 and 2013, Congress provided funds to BOEM and BSEE in the Gulf of Mexico to establish higher minimum rates of pay for key positions—chiefly geophysicists, geologists, and petroleum engineers—of up to 25 percent above the usual minimum rates of pay. BOEM and BSEE officials in the Gulf of Mexico told us that the pay increase reduced attrition rates for these positions. However, it is uncertain how Interior will address staffing shortfalls to oversee offshore resources in the long term. In July 2012, we reported that Interior was creating a new training program for its inspection staff (such as BSEE's National Offshore Training Program to train inspectors and engineers), but that it may take up to 2 years before new inspection staff are fully trained. Further, human capital issues also exist at BLM in its management of onshore oil and gas. For example, BLM faces similar challenges in hiring, training, and retaining staff for key positions, but Interior has not received congressional approval or funds to establish higher minimum rates of pay for these positions as it did for BOEM and BSEE. We are currently engaged in a review of Interior's efforts to meet its human capital challenges. As part of this effort, we will focus on the causes of Interior's human capital challenges, actions taken, and how Interior plans to measure the effectiveness of corrective actions. 
Additional information on this area is provided on page 76 of our 2013 high risk update.

To recognize progress at the Department of Energy (DOE) on the National Nuclear Security Administration's (NNSA) and Office of Environmental Management's (EM) execution of nonmajor projects—projects with values of less than $750 million—we are shifting the focus of this high-risk designation to major contracts and projects executed by NNSA and EM, those contracts and projects with values of $750 million or greater. Two of our reviews completed in 2012 that focused on nonmajor projects found that these projects were largely being completed, although additional and sustained attention by DOE is needed to adequately set and document performance baselines and further demonstrate that these actions result in improved performance. These reports included recommendations to DOE to clearly define, document, and track the scope, cost, and completion date targets for each of its projects, as required by DOE's project management order. DOE agreed with these recommendations and plans to apply lessons learned from successful EM projects to its broader portfolio of projects and activities. With further monitoring of this area to ensure that progress is sustained, coupled with continued efforts and commitment by top leadership to address contract and project management weaknesses, nonmajor project performance issues should be sufficiently addressed. DOE continues to demonstrate strong commitment and top leadership support for improving contract and project management in EM and NNSA, building on its corrective action plan developed in 2008. In December 2010, the Deputy Secretary convened a DOE Contract and Project Management Summit to discuss strategies for additional improvement in contract and project management. The participants identified six barriers to improved performance and reported in April 2012 on the status of initiatives to address these barriers. 
In addition, DOE has continued to release guides for implementing its revised order for Program and Project Management for the Acquisition of Capital Assets (DOE O 413.3B), such as for cost estimating, earned value management, and forming project teams. Further, DOE has taken steps to enhance project management and oversight by requiring peer reviews and independent cost estimates for projects with values over $100 million and by improving the accuracy and consistency of data in DOE's central repository for project data. Challenges remain for the successful execution of major projects. NNSA and EM are currently managing 10 major projects with combined estimated costs totaling as much as $65.7 billion. We have continued to document significant cost increases and schedule delays as well as technical challenges affecting project design. NNSA is tasked with modernizing the nation's aging nuclear weapons production facilities, a challenging effort that will take years and cost billions of dollars. EM faces ongoing complex and long-term challenges in removing radioactive and hazardous chemical contaminants—left over from decades of weapons production—from soil, groundwater, and facilities. Billions of dollars have already been spent, and billions more will be spent over the coming decades, to treat and dispose of this waste. In recognition of the significance of these challenges, particularly in a time of fiscal constraint, in 2012, multiple committees of the Senate and House of Representatives held oversight hearings focused on needed improvements to DOE contract management and project performance. 
Further, the National Defense Authorization Act for Fiscal Year 2013 includes provisions significant to considerations about NNSA contract and project management, such as cost containment provisions for two of NNSA’s largest construction projects, both of which have experienced cost increases and schedule delays; a requirement that NNSA submit to Congress reports including expected cost savings associated with the award of contracts to manage and operate NNSA facilities; and creation of an advisory panel to make recommendations on revising the governance of the nuclear security enterprise. Until DOE can consistently demonstrate that recent changes to policies and processes are resulting in improved performance on major projects, NNSA and EM will remain on the High Risk List. Additional information on this area is provided on page 218 of our 2013 high risk update.
The federal government is a large and complex entity, with about $3.5 trillion in outlays in fiscal year 2012 funding a broad array of programs and operations. GAO maintains a program to focus attention on government operations that it identifies as high risk due to their greater vulnerabilities to fraud, waste, abuse, and mismanagement or the need for transformation to address economy, efficiency, or effectiveness challenges. Since 1990, more than one-third of the areas previously designated as high risk have been removed from the list because sufficient progress was made to address the problems identified. The biennial high risk update describes the status of high-risk areas listed in 2011 and identifies any new high-risk area needing attention by Congress and the executive branch. Solutions to high-risk problems offer the potential to save billions of dollars, improve service to the public, and strengthen the performance and accountability of the U.S. government. In the past 2 years, notable progress has been made in the vast majority of areas that were on GAO's 2011 High Risk List. Congress passed several laws and took oversight actions to help address high-risk areas. Top administration officials at the Office of Management and Budget and the individual agencies have continued to show their commitment to ensuring that high-risk areas receive attention and action. Additional progress is both possible and needed in all the high-risk areas on GAO's 2013 list. Sufficient progress has been made to remove the high-risk designation from two high-risk areas on the 2011 list: Management of Interagency Contracting and Internal Revenue Service Business Systems Modernization. While these two areas have been removed from the list, GAO will continue to monitor them. This year, GAO also has added two areas: Limiting the Federal Government's Fiscal Exposure by Better Managing Climate Change Risks, and Mitigating Gaps in Weather Satellite Data. 
In 2003, GAO designated implementing and transforming the Department of Homeland Security (DHS) as high risk because DHS had to transform 22 agencies--several with major management challenges--into one department, and failure to address associated risks could have serious consequences. While challenges remain across its missions, DHS has made considerable progress in transforming its original component agencies into a single department. As a result, GAO narrowed the scope of the high-risk area and changed the name from Implementing and Transforming the Department of Homeland Security to Strengthening the Department of Homeland Security Management Functions. To more fully address this high-risk area, DHS needs to further strengthen its acquisition, information technology, and financial and human capital management functions. Of the 31 actions and outcomes GAO identified as important to addressing this area, DHS has fully or mostly addressed 8, partially addressed 16, and initiated 7. Moving forward, DHS needs to, for example, do the following:

Acquisition management. Validate required acquisition documents in a timely manner, and demonstrate measurable progress in meeting cost, schedule, and performance metrics for its major programs. GAO reported in September 2012, for example, that 42 major programs experienced cost growth, schedule slips, or both, and most programs lacked foundational documents needed to manage risk and measure performance.

Information technology management. Demonstrate for at least two consecutive investment increments that actual cost and schedule performance is within established baselines, and that associated mission benefits have been achieved. DHS has begun to implement a governance structure to improve program management consistent with best practices, but the structure covers less than 20 percent of DHS's major information technology investments.

Financial management. 
Achieve clean opinions for at least two consecutive years on departmentwide financial statements, and implement new or upgrade existing components' financial systems. DHS received a qualified opinion on its fiscal year 2012 financial statements and is in the early planning stages of its financial systems modernization efforts. The high risk report contains GAO's views on progress made and what remains to be done to bring about lasting solutions for each high-risk area. Perseverance by the executive branch in implementing GAO's recommended solutions and continued oversight and action by Congress are essential to achieving progress. GAO is committed to continuing to work with Congress and the executive branch to help ensure additional progress is made.
The FCS concept is designed to be part of the Army’s Future Force, which is intended to transform the Army into a more rapidly deployable and responsive force that differs substantially from the large division-centric structure of the past. The Future Force is to be offensively oriented and will employ revolutionary concepts of operations, enabled by new technology. The Army envisions a new way of fighting that depends on networking the force, which involves linking people, platforms, weapons, and sensors seamlessly together in a system-of-systems. If successful, the FCS system-of-systems concept would integrate individual capabilities of weapons and platforms, thus facilitating interoperability and open system designs. This concept would represent a significant improvement over the traditional approach of building superior individual weapons that must be retrofitted and netted together after the fact. The Army is reorganizing its current forces into modular brigade combat teams, each of which is expected to be highly survivable and the most lethal brigade-sized unit the Army has ever fielded. The Army expects FCS- equipped brigade combat teams to provide significant warfighting capabilities to DOD’s overall joint military operations. The Army is implementing its transformation plans at a time when current U.S. ground forces continue to play a critical role in ongoing conflicts in Iraq and Afghanistan. The Army has instituted plans to spin out selected FCS technologies and systems to current Army forces throughout the program’s system development and demonstration phase. FCS is to be composed of advanced, networked air and ground-based combat and maneuver sustainment systems, unmanned ground and air vehicles, and unattended sensors and munitions. (See fig. 1.) The soldier is the centerpiece of the system-of-systems architecture and is networked with 14 FCS core systems and numerous other enabling systems referred to as complementary programs. 
FCS is expected to be networked via a command, control, communications, computers, intelligence, surveillance, and reconnaissance architecture, including networked communications, network operations, sensors, battle command systems, training, and both manned and unmanned reconnaissance and surveillance capabilities that will enable improved situational understanding and operations at a level of synchronization heretofore unachievable. With that, FCS brigade combat teams are expected to be able to execute a new tactical paradigm based on the quality of firsts—the capability to see first, understand first, act first, and finish decisively. Fundamentally, the FCS concept is to replace mass with superior information—allowing the soldier to see and hit the enemy first rather than to rely on heavy armor to withstand a hit. The Army is using a management approach for FCS that centers on an LSI to provide significant management services to help the Army define and develop FCS and reach across traditional Army mission areas. Because of its partner-like relationship with the Army, the LSI’s responsibilities include requirements development, design, and selection of major system and subsystem contractors. The team of Boeing and its subcontractor, Science Applications International Corporation, is the LSI for the FCS system development and demonstration phase of acquisition, which is expected to extend until 2017. The FCS LSI is expected to act on behalf of the Army to optimize the FCS capability, maximize competition, ensure interoperability, maintain commonality in order to reduce life-cycle costs, and oversee overall integration of the information network. Boeing also acts as an FCS supplier in that it is responsible for developing two important software subsystems. 
Army officials have stated they did not believe the Army had the resources or flexibility to use its traditional acquisition process to field a program as complex as FCS under the aggressive timeline established by the then-Army Chief of Staff. The Army will maintain oversight and final approval of the LSI’s subcontracting and competition plans. The John Warner National Defense Authorization Act for Fiscal Year 2007 mandated that the Secretary of Defense carry out a Defense Acquisition Board milestone review of FCS not later than 120 days after the system-of- systems preliminary design review, which is now tentatively scheduled for May 2009. The legislation is consistent with our 2006 report on FCS wherein we recommended that the Secretary of Defense establish a Defense Acquisition Board milestone review following the Army’s design review. Moreover, we recommended that this should be a go/no-go review of the FCS program based on its ability to meet knowledge markers consistent with DOD acquisition policy and best practices and demonstrate the availability of funds necessary to meet program costs. According to the law, DOD’s 2009 milestone review of FCS should include an assessment for each of the following: (1) whether the warfighter’s needs are valid and can be best met with the concept of the program; (2) whether the concept of the program can be developed and produced within existing resources; and (3) whether the program should (a) continue as currently structured; (b) continue in restructured form; or (c) be terminated. Furthermore, the Congress stipulated that the Secretary make specific determinations when making the assessment concerning the future course of the FCS program. The original language contained six criteria the Secretary was to use when answering the three assessment questions. 
In our 2008 report on the FCS program, we recommended that the Secretary establish objective and quantitative criteria that the FCS program will have to meet in order to justify its continuation and gain approval for the remainder of the acquisition strategy. Subsequently, the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 amended and expanded the existing requirements and added four new criteria. These changes expand the scope of supporting information the Congress mandated to be included with the DOD milestone review report. For example, the 2009 Act requires the Secretary, when making his assessment of the program, to determine whether actual demonstrations, rather than simulations, have shown that the software for the program is on a path to achieve threshold requirements on cost and schedule. Appendix III contains the legislative requirements for the 2009 milestone review. For the purposes of our analysis, we aggregated the congressional criteria into four key areas: technology maturity, requirements/design, demonstrations (FCS concept and network), and cost. In 2008, we found that the progress made by FCS, in terms of knowledge gained, was commensurate with a program in early development but was well short of a program halfway through its development schedule and its budget. In view of these findings, we recommended, in part, that the Secretary of Defense establish criteria that the FCS must meet in the 2009 milestone review in order to justify continuation along with identifying viable alternatives to FCS. In response to this recommendation, and to facilitate the Secretary’s assessment of the status of FCS and to decide the program’s future, the Under Secretary of Defense for Acquisition, Technology, and Logistics issued an acquisition decision memorandum in August 2008 to the Secretary of the Army outlining the information the Army must provide. 
The Under Secretary established criteria for supporting information in five program areas: program execution, unmanned systems, manned ground vehicles, network, and test/experimentation/demonstration. The Under Secretary has established specific criteria within each of the five areas, as shown in Appendix IV. For example, in the area of program execution, the Army must demonstrate that the FCS, Joint Tactical Radio System (JTRS), and Warfighter Information Network Tactical (WIN-T) programs’ development, build, and test schedules are aligned and executable. The Under Secretary’s memorandum also instructed the Army to mature its acquisition approach to deliver initial increments of FCS capability to infantry brigade combat teams rather than the originally planned heavy brigades. For the FCS core program, the Under Secretary stated that the Army shall pursue an incremental or block approach to acquiring FCS capability. Along with the mandated 2009 milestone review of FCS, the Congress has required DOD and the Army to perform analyses and report separately on two core systems of the FCS system-of-systems. Specifically, the Assistant Secretary of Defense for Networks and Information Integration is to report on an analysis of the FCS communications network and software. This report, due not later than September 30, 2009, will include assessments of issues such as network vulnerability to enemy attack, electronic warfare, jamming, and adverse weather. (See app. V.) Compared with the criteria to be used for the milestone review, the FCS program has significant knowledge gaps. Specifically, the program has yet to show that critical technologies are mature, design issues have been resolved, requirements and resources are matched, performance has been demonstrated rather than merely simulated, and costs are affordable. The Army will be challenged to convincingly demonstrate the knowledge necessary to warrant an unqualified commitment to FCS at the 2009 milestone review. 
Four of the critical technologies have not yet achieved minimally acceptable maturity levels despite being in development for at least 6 years. The schedule to complete the remaining preliminary design reviews is aggressive, and it seems clear from the results of the initial system-level preliminary design reviews that numerous performance trade-offs will be needed to close gaps between FCS requirements and designs. Actual demonstrations (versus modeling and simulation) of the FCS concept, including its critical survivability aspects, have been limited to date; demonstrated network performance is particularly limited with many key questions yet to be answered. Finally, FCS costs appear likely to increase again at a time when available funds may decline. In making the assessment of whether the FCS program should continue, DOD is required by congressional direction to make a determination of whether each critical technology for the program is at least TRL 6. The Army has struggled to attain this level of maturity, even though TRL 6 is a lower standard than DOD policy prefers and falls short of best practices. At TRL 6, a representative model or prototype exists and is tested in a relevant environment—a maturity level well beyond TRL 5, where the technology demonstrates functionality in a laboratory environment but does not have the physical form or fit of the finished product. Appendix VI contains a complete listing and description of TRLs. Army technology officials stated that the purpose of TRL 6 demonstrations is to build confidence that the concept is technically feasible, and that TRL 6 actually means extensive testing remains before TRL 7 can be achieved. Maturing technologies to TRL 7 (a prototype possessing the form, fit, and function of the finished product that is demonstrated in a realistic environment) prior to starting product development is a best practice and a DOD policy preference. 
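As a back-of-the-envelope illustration of these TRL gates, the January 2009 ratings cited in this report (3 of 44 critical technologies at TRL 7, 37 at TRL 6, and 4 below TRL 6) can be tallied against both the statutory TRL 6 gate and the best-practice TRL 7 standard. The report does not give the exact levels of the four lagging technologies, so TRL 5 is assumed here purely for illustration:

```python
# Tally of the Army's January 2009 critical-technology ratings:
# 3 of 44 at TRL 7, 37 at TRL 6, and 4 not yet at TRL 6
# (their exact levels are not reported; TRL 5 is assumed for illustration).
ratings = [7] * 3 + [6] * 37 + [5] * 4

def short_of_gate(ratings, gate):
    """Count technologies still below a given TRL gate."""
    return sum(1 for trl in ratings if trl < gate)

print(short_of_gate(ratings, 6))  # statutory TRL 6 gate: 4 technologies short
print(short_of_gate(ratings, 7))  # best-practice TRL 7 standard: 41 short
```

Even under the lower statutory gate, 4 technologies remain short; held to the best-practice TRL 7 standard, 41 of the 44 would fall short.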
Against these standards, all FCS technologies should have achieved TRL 7 as the program proceeded into the system development and demonstration phase in May 2003. Even if the Army does demonstrate TRL 6 in 2009, extending technology development this late into the acquisition process puts FCS at risk for experiencing problems that may require large amounts of time and money to fix. The Army anticipates that all the critical technologies will reach TRL 6 by the milestone review, but this projection deserves closer examination and perspective. The Army may be unable to demonstrate technology maturity as quickly as it plans. Based on Army assessments from January 2009, three of the 44 FCS critical technologies were rated TRL 7 and 37 were rated TRL 6. The remaining technologies are expected to complete TRL 6 demonstrations prior to the system-of-systems preliminary design review, but some of those scheduled demonstrations are slipping. Appendix VII contains a list of all FCS critical technologies with their 2007 and 2008 TRL ratings and Army projections for attaining TRL 6. Thirteen of the technologies that the Army rated at TRL 6 are awaiting validation from technology review authorities—independent teams convened by the FCS program manager and by the Director, Defense Research and Engineering. These reviews could actually downgrade maturity levels if demonstration results do not support the Army’s TRL designation. This occurred in 2007 with the mid-range munition’s terminal guidance. In 2008, independent reviewers cautioned the Army about the maturity levels of three technologies: (1) JTRS ground mobile radio, (2) Mobile Ad-hoc Networking Protocols, and (3) Wideband Networking Waveforms. According to Army officials, the Army had claimed these technologies had demonstrated TRL 6; however, the independent reviewers suggested the Army consider providing additional justification to strengthen the case for a TRL 6. 
Consequently, it is not clear whether independent reviewers will concur with the Army’s assertion that these technologies have demonstrated TRL 6 maturity. Table 1 illustrates both the actual progress the Army has made maturing FCS critical technologies and projected progress through the production decision. As we have shown in the past, accepting lower technology levels in development frequently increases program schedule and cost. In the case of FCS, the downgrade in TRLs is particularly troublesome because TRL 6 represents a significant development step over TRL 5. Army engineers maintain that anything beyond TRL 6 is a system integration matter and not necessarily technology development. Leading commercial firms treat adapting the technologies to the space, weight, and power demands of their intended environment—in essence, TRL 7—as part of technology development. Even if one accepts the lower standard of TRL 6 at program start, the integration of these technologies into systems and subsystems should have taken place in the first half of development, which DOD refers to as “system integration.” As a complex, networked system-of-systems, FCS will have unprecedented integration issues. Yet, FCS system integration will have to occur in the second half of development, where it will compete for resources that are intended to be for demonstration of the system. As we have previously reported, advancing technologies to TRL 6 has been especially challenging. The Army’s history of maturing FCS technologies does not inspire confidence that it will be able to execute the optimistic and challenging integration plans involved with advancing technologies to a TRL 7 before the production decision in 2013. Technologies critical to FCS survivability are illustrative of the program’s technology maturity issues. 
FCS survivability involves a layered, network-centric approach that consists of detecting the enemy first to avoid being fired upon; if fired upon, neutralizing the incoming munition before it hits an FCS vehicle; and finally, having sufficient armor to defeat those munitions that make it through the preceding layers. Each of these layers depends on currently immature technologies to provide the aggregate survivability needed for FCS vehicles. Many of the technologies intended for survivability have experienced developmental delays. As a key component of FCS survivability, the short-range active protection system is intended to neutralize incoming munitions and help protect vehicles from threats such as rocket-propelled grenades. Initially, Army requirements for the system included the ability to defeat long-range anti-armor threats, such as antitank missiles. However, Army officials have decided to delay demonstration of this capability until 2011 or 2012. The Army held a short-range active protection system demonstration in the latter part of 2008 and declared that the system had reached TRL 6. The results of these demonstrations are pending validation from technology review authorities. It is important to note that the Army plans to continue active protection system technology development and demonstration for some time to ensure that it is an operationally effective and safe capability. This is challenging because the active protection system is to provide 360-degree protection for the relatively lightly-armored FCS manned ground vehicles by using, among other things, sensors, processors, rocket motors, and a counter-munition warhead to counter multiple threats. Lightweight hull and vehicle armor technology for FCS vehicles is also problematic because it will not be sufficiently advanced to provide military usefulness for several years. The Army is developing armor-related critical technologies in a phased approach. 
The initial phase of armor development only recently demonstrated TRL 6. The results of these demonstrations are also pending validation from technology review authorities. The Army intends for that initial version to satisfy threshold (or minimally acceptable) survivability requirements and plans to use it only in prototypes of manned ground vehicles. The second phase of armor is expected to meet objective (or desired) survivability requirements but is not scheduled to reach TRL 5 until fiscal year 2011. Even then, Army engineers do not believe that armor design will meet weight requirements. The third phase will be used for low-rate production vehicles and is scheduled to demonstrate TRL 6 in 2012. This armor is expected to satisfy objective threat requirements and be 25 percent lighter than the second armor iteration. The Army plans to mature the fourth and final phase of armor to a TRL 6 in fiscal year 2014. The Army also plans to make manufacturing technology investments in the armor area in order to reduce its production costs. For the 2009 milestone review, Congress has directed DOD, for each system and network component of the program, to assess key design knowledge and risks, based on system functional reviews, preliminary design reviews, and technical readiness levels. Now tentatively scheduled for May 2009, the system-of-systems preliminary design review is a major technical review to assess whether the full suite of FCS systems and information network are ready for detailed design and that the FCS detailed performance requirements can be obtained within cost, schedule, risk, and other system constraints. The Army has continued to gain knowledge about FCS development, but design knowledge expected to be available at the time of the 2009 milestone review may not provide confidence that FCS design risks are at acceptable levels. 
Key design risks include the Army’s ability to accomplish all system-level design work in the time remaining before the 2009 system-of-systems preliminary design review, demonstrate that emerging system designs match detailed requirements, and mitigate recognized technical risks to acceptable levels. This challenge has its roots in the fact that the Army started FCS development in 2003 without establishing firm requirements and preliminary designs to meet those requirements; that is, without demonstrating a match between customer needs and available resources. Consequently, the Army is still seeking to stabilize FCS designs at a time when the program is already past the mid-point of the development phase—the point when a program following best practices and DOD policy would normally conduct a critical design review demonstrating a stable, producible design capable of meeting performance requirements. Having passed that mid-point, FCS is now far out of alignment with current DOD policy, which requires a program to show a match between requirements and resources at or shortly after development start. Over the past year, the Army has continued the process of setting and refining requirements in order to establish system designs. At the system-of-systems level, requirements are relatively stable. At the individual system level, requirements continue to evolve. The Army scheduled a series of 15 system-level preliminary design reviews, with the first held in 2007 and the last expected to occur in March 2009, in order to assess whether individual systems are ready to proceed into detailed design. Although the Army plans to conduct all system design reviews by the end of March 2009, the schedule to close out all the reviews may take some time, and requirements and design trade-offs will be necessary. 
Several examples are illustrative: The preliminary design review for the Multi-Function Utility/Logistics and Equipment Vehicle occurred in December 2007 and noted critical design problems regarding vehicle weight reduction. The Army did not close the weight issue until some 10 months later, in October 2008. The Small Unmanned Ground Vehicle had its preliminary design review in October 2008 and has now entered into detailed design. Operational requirements call for the vehicle to operate for 6 hours between battery changes within a temperature range of minus 25 to 120 degrees. However, the vehicle does not meet those requirements at any temperature. Even at the optimum operating temperature, mission length is no longer than 5.4 hours. Additionally, the vehicle cannot satisfy operational requirements for storage at temperatures of 60 degrees below zero because its motor lubricant decomposes and its battery becomes useless. Consequently, the Army now plans to remove the batteries and provide for special storage. During the first part of the network preliminary design review held in November 2008, the Army recognized that there are significant gaps between the FCS requirements and the emerging network design. These include the JTRS handheld radio; ground mobile radio; and airborne, maritime, and fixed-station radios; the WIN-T increment 3; and the Wideband Networking Waveform and Soldier Radio Waveforms. The Army has not yet been able to obtain validation of its TRL 6 rating for JTRS ground mobile radio; the mobile, ad-hoc networking protocols; and Wideband Networking Waveforms. According to Army officials, even if additional funding is provided and developments are fully successful, they will not fully meet FCS requirements until about 2017 or 2018. The Army conducted the second part of its network preliminary design review in January 2009. The results were not available for inclusion in this report. 
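The Small Unmanned Ground Vehicle endurance gap described above is easy to quantify; the 6-hour requirement and the 5.4-hour best demonstrated figure come from the report, and the percentage shortfall is simple arithmetic:

```python
required_hours = 6.0      # operational requirement between battery changes
demonstrated_hours = 5.4  # best case, at the optimum operating temperature

# Shortfall as a fraction of the requirement.
shortfall = (required_hours - demonstrated_hours) / required_hours
print(f"{shortfall:.0%} short of the endurance requirement")
```

Even under the most favorable conditions, then, the vehicle falls about 10 percent short of its endurance requirement, before accounting for temperature extremes at which it performs worse still.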
For several months, the Army has been conducting a series of technical reviews of various aspects of the FCS manned ground vehicle requirements and designs. Those efforts culminated at the manned ground vehicle preliminary design review in January 2009. The results of that review were not available in time for inclusion in this report. According to Army assessments, key risks remain within several areas: software development and integration, network and transport, manned and unmanned platforms, and average unit production cost. Many risks involve the likelihood that requirements may be unachievable when or as expected. The assessment of these risks will be a key determinant in the overall feasibility of the FCS concept and the ability to execute the FCS acquisition strategy going forward. FCS is also working to address significant areas of high risk such as network performance and scalability, immature network architecture, and synchronization of FCS with the JTRS and WIN-T programs. JTRS and WIN-T are also having difficulty with technology maturation and are at risk of being delayed or delivering incomplete capabilities to FCS. In a 2007 acquisition memorandum, DOD stated that its acquisition policy was to adjust requirements and technical content to deliver as much as possible of planned capability within budgeted cost. At the same time, it directed the services to establish Configuration Steering Boards in order to review all requirements changes and any significant technical configuration changes that have the potential to result in cost and schedule impacts. Despite this direction, the Army has not established a steering board for FCS. DOD officials told us that such a board would be useful for providing input to FCS requirements and design trade-offs. 
In making the assessment of whether the FCS program should continue, Congress required DOD to make a determination on whether actual demonstrations, rather than simulations, have shown that the concept of the program will work. FCS brigade combat teams are expected to be able to execute a new tactical paradigm based on what the Army refers to as “the quality of firsts”—the capability to see first, understand first, act first, and finish decisively. Because this paradigm depends on the aggregate performance of interdependent FCS systems versus the performance of any single system, it is essential that this concept be proven through demonstrations. While modeling and simulation are essential to assessing the performance of FCS, they must be anchored in actual demonstrations. DOD will be challenged to meet the congressional direction to demonstrate (versus simulate) that the FCS warfighting concept will work by the time of the 2009 milestone review. At this point in the program, the FCS concept has been simulated but has not been convincingly demonstrated in any sort of field event. This stems from the fact that technologies have not finished development and prototype systems with the essential network components are not ready to be built yet. In preliminary field demonstrations, some people, sensors, and platforms have been connected and information was transferred from one to the other. Basic capabilities of the unmanned aerial and ground vehicles, as well as some of the unattended sensors and munitions, have been demonstrated. The manned ground vehicles have demonstrated some of their mobility and lethality capabilities. There have been some technology demonstrations of early versions of the lightweight armor and an active protection system, but the feasibility of the FCS survivability concept remains uncertain. Nothing approaching a demonstration of the “quality of firsts” paradigm has yet been attempted nor will it be before the 2009 milestone review. 
The Defense Acquisition Board has established criteria for the 2009 milestone review, including several in a category entitled “Test/Experimentation/Demonstration.” (See app. IV.) However, none of the criteria address the issue of demonstrating that the FCS concept will work. Instead, the criteria call for the demonstration of some early FCS prototypes and the completion of some events, such as a 2008 joint service experiment. The Defense Acquisition Board criteria also include several that call for delivery of certain early prototypes and others that call for demonstration of selected capabilities. Without questioning the value of these individual criteria, it is not clear what they will tell decision makers about the value or demonstration of the FCS concept as a whole. In making the assessment of whether the FCS program should continue, Congress required DOD to make several determinations, including (1) whether actual demonstrations, rather than simulations, have shown that the software for the program is on a path to achieve threshold requirements on cost and schedule; (2) whether the program’s planned major communications network demonstrations are sufficiently complex and realistic to inform major decision points; (3) the extent to which manned ground vehicle survivability is likely to be reduced in a degraded communications network environment; (4) the level of network degradation at which FCS manned ground vehicle survivability is significantly reduced; and (5) the extent to which the FCS communications network is capable of withstanding network attack, jamming, or other interference. In addition, the Assistant Secretary of Defense for Networks and Information Integration is required to submit a report to Congress on the FCS communications network and software. 
That report is to be submitted by September 30, 2009, and is to include an assessment of the communications network that will specifically address areas such as vulnerability to network attack, electronic warfare, adverse weather, and terrain; dependence on satellite communications support; and operational availability and performance under degraded conditions. The report is also to include assessments of the communications network’s test schedule and Army efforts to synchronize funding, schedule, and technology maturity of critical networking programs with FCS. Appendix V contains the comprehensive criteria from the legislation directing this review. These assessments of the capabilities and vulnerabilities of the FCS network will be important in determining if the FCS concept is feasible. However, as we reported last year, while the Army had an understanding of network requirements and how to build the network, many challenges and much work remained before the network would reach maturity. Network development and demonstration are thus at a very early stage, and the network assessments will most likely be based on analysis and simulations rather than demonstrated results. Even if software development proceeds on schedule and technical risks of key network elements, such as JTRS and WIN-T, are successfully retired, the uncharted nature of the FCS network makes predicting its eventual performance difficult. Army test officials are assessing network scalability, the effect of increasing the number of radios, or nodes, on the network, through limited testing. However, testing to date has used only 30 nodes, while a brigade combat team may require as many as 5,000 nodes. Considering that mobile, ad-hoc networks have limited scalability, and that performance decreases as more nodes are added, the ultimate FCS network performance is difficult to predict. 
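The scalability concern can be illustrated with the Gupta-Kumar capacity result for wireless ad-hoc networks, under which per-node throughput falls roughly as 1/sqrt(n log n) as the node count n grows. The sketch below applies that idealized scaling law (a modeling assumption for illustration, not FCS test data) to compare the 30-node tests with a notional 5,000-node brigade combat team:

```python
import math

def relative_per_node_throughput(n: int, baseline: int = 30) -> float:
    """Per-node throughput at n nodes relative to a baseline node count,
    using the idealized Gupta-Kumar ~1/sqrt(n*log(n)) scaling law for
    ad-hoc wireless networks (an assumption, not FCS data)."""
    scale = lambda m: 1.0 / math.sqrt(m * math.log(m))
    return scale(n) / scale(baseline)

# Compare the 30-node tests with a notional 5,000-node brigade combat team.
ratio = relative_per_node_throughput(5000)
print(f"Per-node throughput at 5,000 nodes: {ratio:.1%} of the 30-node level")
```

Under this model, per-node throughput at 5,000 nodes would be only about 5 percent of its 30-node level, which is why results from small-scale tests say little about brigade-scale network performance.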
To date, actual demonstrations of FCS software have been limited to the early spin out tests and experiments, and it is not yet known whether the information network is technically capable of delivering the quality of service needed to make the FCS warfighting concept possible. At the time of the FCS milestone review in 2009, the extent of network demonstration is expected to be very limited. For example, in 2008, the Army demonstrated, among other basic capabilities, sensor control, terrain analysis, and unmanned platform planning and operations. Other limited demonstrations are scheduled on a regular basis. For example, in the 2008 joint service experiment, several portions of the FCS network—including an early version of the system-of-systems common operating environment, the unattended sensors, and the Non-Line-of-Sight Launch System—were evaluated in terms of their basic operation and interoperability with other systems. The first major demonstration of the FCS network is the limited user test scheduled for fiscal year 2012, which will be at least a year after the critical design review and only about a year before the start of low-rate initial production for the core FCS program. This event comes after the designs for the manned ground platforms have been established. One of the key objectives of that test will be to identify the contributions and limitations of the network regarding the ability of the FCS brigade combat team to conduct missions across the full spectrum of operations. However, the fully automated battle command system is not expected to be available until 2013, when the Army expects 100 percent of the network capabilities, including software, to be available. Software is a key part of the overall FCS communications network, and it is uncertain whether FCS software requirements can be achieved within cost and schedule estimates. 
The first of four software builds has been delivered and qualified, and build 2 is still in development, with a planned delivery in 2010. As we have reported earlier, FCS software estimates continue to grow, and the total estimate for the network and platforms is projected to exceed 100 million lines of computer code, more than triple the size the program estimated in 2003. Army officials have identified 16 risks in the software arena, that is, specific areas where goals may not be achieved within cost and schedule estimates, including the system-of-systems common operating environment, network management/quality of service, network security/information assurance, distributed fusion management, and estimated effective source lines of code. According to Army officials, software development costs are capped at approximately $2.6 billion. As a result, Army officials stated that they have had to defer some planned FCS capabilities to later software builds. Yet development experience to date, coupled with the risks yet to be resolved, raises questions as to whether the necessary software can be developed within cost and schedule estimates. Alternatively, the Army may have to reduce or eliminate FCS requirements. In making the assessment of whether the FCS program should continue, Congress required DOD to make a determination on (1) what the cost estimate for the program is, including spin outs, and an assessment of confidence levels for that estimate; and (2) what the affordability assessment for the program is, given projected Army budgets, based on that cost estimate. For the 2009 milestone review, DOD and the Army are expected to provide the updated program cost estimate and an affordability assessment for the FCS program. The Army has indicated that the most recent cost estimate for the program is no longer valid, but it has not yet completed an official updated estimate. 
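To put the code growth and the development cap in perspective, a back-of-the-envelope calculation on the figures above is sketched below. This is illustrative arithmetic only: reading “more than triple” as a simple ratio, and dividing the cap by the total line count, are assumptions, since the code total presumably includes reused and auto-generated code.

```python
# Figures from the report: >100M total lines of code, "more than triple"
# the 2003 estimate, and a ~$2.6B software development cost cap.
total_sloc = 100_000_000
estimate_2003 = total_sloc / 3      # implied upper bound on the 2003 estimate
cap_dollars = 2.6e9

print(f"Implied 2003 estimate: under ~{estimate_2003 / 1e6:.0f}M lines")
print(f"Cap implies at most ${cap_dollars / total_sloc:.0f} per line of code")
```

Even this rough figure, on the order of tens of dollars per line, is low by common defense-software costing benchmarks, consistent with the report’s concern that the cap may force further deferrals of capability.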
While full details are not yet available, the Army is considering plans to request additional funds for FCS beyond the current cost estimate of $159 billion. Those plans would involve additional development costs of about $2 billion and procurement costs of about $17 billion over the current cost estimate. Whereas the Army has offset some cost increases in the past with reductions in program content, we are not yet aware of any similar actions to offset the expected cost increases. According to DOD officials, DOD’s Cost Analysis Improvement Group is expected to prepare an updated independent cost estimate for the milestone review. Previous estimates from the group have been significantly higher than the Army’s, particularly regarding the cost to develop software. DOD officials also stated that DOD’s Program Analysis and Evaluation group may be tasked to provide input for an FCS affordability assessment. These assessments are intended to cover all of the costs, including those for the spin outs, which will be necessary to fully field the FCS program. This would be the first complete cost estimate to include spin outs and other costs. The Army now projects that the costs of its revised FCS spin out initiative will be about $21 billion beyond the core FCS program costs of $159 billion. In addition to FCS-specific costs, complementary program costs are separate from FCS and represent significant additional commitments from the Army and other services. Several of these complementary programs have funding issues of their own. For example, the JTRS and the WIN-T programs are not yet fully funded to develop the full capabilities currently required by the FCS program. Ultimately, FCS’s affordability will hinge on two factors: the actual cost of the program and the availability of funds. Heretofore, there has not been a sound basis for preparing a firm cost estimate. The preliminary design review process should provide a better foundation for one. 
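Taken together, the figures above imply a substantially larger total commitment than the current $159 billion estimate. A simple aggregation (illustrative only; the categories may overlap, and no official updated estimate exists) looks like this:

```python
# Cost figures cited in the report, in dollars.
core_estimate     = 159e9   # current core FCS cost estimate
added_development = 2e9     # additional development costs under consideration
added_procurement = 17e9    # additional procurement costs under consideration
spin_out_costs    = 21e9    # revised spin out initiative, beyond the core program

potential_core = core_estimate + added_development + added_procurement
print(f"Potential core FCS cost:       ${potential_core / 1e9:.0f} billion")
print(f"Including spin out initiative: ${(potential_core + spin_out_costs) / 1e9:.0f} billion")
```

This sum also excludes the separately funded complementary programs such as JTRS and WIN-T, so the full commitment would be larger still.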
Yet such an estimate would carry only the confidence appropriate to a program in early development, with many risks and unprecedented challenges still ahead. As it stands, FCS commands the largest portion of the Army’s acquisition budget and, as currently planned, will continue to do so for many years. The Army continues to indicate its willingness to accept the high risks of the program and make trade-offs in both requirements and other programs to accommodate its growing costs. Since the program began, costs have increased from $92 billion to $159 billion, which covers the cost to equip only one-third of the Army’s active forces. Indicative of the tension between program costs and available funds, the Army recently proposed deferring upgrades to current systems such as the Abrams Tank and Bradley Fighting Vehicle to free up funds for FCS. This tension seems only likely to worsen, as indications are that FCS costs are about to increase again at the same time competition for funds—both between near-term and far-term needs within DOD and between defense and other needs within the federal government—is intensifying. The Army’s position has been that it will reduce FCS capabilities to stay within available development funds, but at some point, reductions in FCS capability—whether driven by money or technical feasibility—will fall below an acceptable level. That level appears as yet indefinable. The 2009 milestone review will not only require DOD to decide if FCS is technically feasible and militarily worthwhile, it will provide the opportunity to structure the emerging program so that it complies with current acquisition policy and is knowledge-based—thus more conducive to oversight. On several scores, the current FCS program falls short. Its acquisition strategy is more schedule-driven than knowledge-based and is unlikely to be executable, with a significant amount of development and demonstration yet to be completed. 
The timing of upcoming commitments to production funding puts decision makers in the difficult position of making production commitments without knowing if FCS will work as intended. For example, the Army plans for FCS core production to directly follow the early NLOS-C production, which may be premature based on design maturity and the demonstrations expected to be done up to that point. Likewise, the Army’s schedule for providing early FCS capabilities to current forces is hurried, as spin out systems may not be fully demonstrated before the Army commits to their production. Finally, the Army’s potential adoption of an incremental approach to FCS acquisition could represent another major restructure of the program. While an incremental approach is generally preferable, it would represent the fourth different strategy for the FCS program that DOD and the Congress will be asked to evaluate and oversee. We have previously reported that, to date, the FCS program has advanced through acquisition milestones without having achieved the level of knowledge preferred by best practices and DOD’s own policies, or a commensurate level of information needed for oversight, given the scope of the program and the risks it entails. The issuance of DOD’s 2008 acquisition instruction underscores the wide variance between policy and the FCS acquisition strategy. Ideally, requirements trades would already have been made and a high-confidence design established. This would position the program to move toward maturity as evidenced by such measures as successful completion of subsystem critical design reviews, maturity of critical manufacturing processes, planned corrective actions to hardware and software deficiencies, and adequate developmental testing. At this point, however, FCS has yet to establish a firm system-of-systems design and is several years from any large-scale testing at the system-of-systems level. 
The milestone review represents an opportunity to judge FCS on critical knowledge markers and set it on a more reasonable course with opportunities for effective and meaningful oversight from the Army, DOD, and the Congress. Under its current acquisition strategy, the FCS program is not knowledge-based, nor does it lend itself to meaningful oversight. Figure 2 compares a knowledge-based approach to developing a weapon system (consistent with DOD policy) with the approach taken for FCS. Best practices for successful product development include three knowledge points (KP). Knowledge Point 1 should occur at development start and is attained when technologies and resources match requirements; KP 2 should occur at the midpoint between development and production and is attained when the product design performs as expected; and KP 3 should occur at production start and is attained when production can meet cost, schedule, and quality targets. Ideally, the preliminary design review occurs at or near the start of development and the critical design review occurs midway through development. As shown in figure 2 above, the FCS technology development and system development and demonstration phases will overlap by several years. The Army has scheduled only 2 years between the critical design review in 2011 and the production decision in 2013. This leaves little time to gain knowledge between the two events, which is particularly problematic because the critical design review is the point at which a program begins building fully integrated, production-representative prototypes whose testing will prove the design’s maturity and form the basis for the low-rate production decision. Instead, FCS will rely on less mature prototypes, and the decision to proceed into production will be made without a mature design. 
As a result of the current acquisition approach, the FCS program may not be executable given the amount of development budget remaining and the development work that remains to be done, as illustrated in figure 3 below. At the preliminary design review, the program expects to have all critical technologies mature to Technology Readiness Level (TRL) 6, system-level requirements nearing completion, and a preliminary design available to reconcile technologies with requirements. Using DOD policy as a reference, this is about the point at which the FCS program should be ready to begin. Should the program be approved to continue on its present course at the 2009 milestone review, the Army would have to complete development—in essence, the entire system development phase—with 40 percent of its financial and schedule resources remaining. This is not to judge either the value of the work done to date or the rate of progress, but rather to underscore where the program really is in terms of the development process. Accordingly, ahead of FCS remains what is typically the most expensive part of system development: completing the detailed system and network designs, building prototypes, and using them to demonstrate that the system will work. In the case of FCS, there are the added challenges of integrating multiple technologies and showing that the system of systems as a whole will work, including the unprecedented network. The late completion of the system development activities that will demonstrate whether FCS can deliver the promised capability is at odds with the early requests for production funds. Additional maturation of critical technologies, followed by the challenging prospect of integrating FCS subsystems and systems, lies ahead. Design work is ongoing, and many designs remain to be matured and verified. 
A key indicator of the Army’s progress in this area will be the percentage of design drawings released to manufacturing at the critical design review, currently scheduled for fiscal year 2011. The Army is currently fabricating key FCS prototypes, many of which are scheduled for delivery in the 2010 time frame. After they are delivered, much additional engineering work will remain to be conducted as part of a disciplined test, fix, and retest approach. For example, several prototypes will be built based on preliminary versus final designs and will not have all key technologies integrated. In this sense, they will not be representative of production items. Many of the results of these demonstrations, and other key test and evaluation results, will not be available until late in the program, making it difficult to apply knowledge gained from previous tests to subsequent tests. For example, a key system-of-systems test scheduled before the low-rate production decision is limited user test 3 in 2012, which will assess brigade combat team network capabilities. This test will be the first large-scale FCS test to include a majority of the developmental prototypes and a large operational unit, and it occurs only one year before the low-rate initial production decision for the core FCS program. This test is important because the Congress has required a broad network demonstration to be conducted before starting low-rate production of the core FCS program. This demonstration is also expected to occur in fiscal year 2012 as part of the limited user test. Finally, the Army will have to develop and mature production processes for a wide range of FCS systems. Our work has shown that development costs for programs with mature technologies at the start of system development increased by a modest average of 4.8 percent over the original estimate, whereas development costs for programs with immature technologies increased by 34.9 percent. 
Our work has also shown that most development cost growth occurs after the critical design review. Specifically, of the 28.3 percent cost growth that weapon systems average in development, 19.7 percent occurs after the critical design review. In the case of FCS, the Army’s strategy is schedule-driven and calls for beginning low-rate production in 2013 and initial operational capability in 2015, which leaves little time to overcome the remaining technological and engineering challenges the program faces prior to committing to production. Thus, it is likely that under the current schedule, additional cost growth would be incurred as the Army works through these remaining challenges. According to DOD officials, the Systems and Software Engineering group, within DOD’s Acquisition, Technology, and Logistics organization, has been tasked to conduct a systems engineering review of FCS that will include an evaluation of risks associated with the FCS acquisition strategy, test plans, software, and key complementary programs. According to the Systems and Software Engineering group, the assessment will also cover the FCS system engineering plan for reasonable exit criteria associated with critical design review and production readiness. The reporting objectives for this effort include, among other things, clearly illustrating the risks and challenges of proceeding to critical design review as planned. The Systems and Software Engineering group’s review is expected to provide input to address three of the required congressional assessments—FCS requirements/design, concept demonstration, and software demonstration—and should provide critical information on the amount of FCS development and demonstration work yet to be completed and its expected cost and schedule. Funding commitments for production begin before FCS capabilities are demonstrated and even before the critical design review is held. 
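As a rough illustration of what these historical averages imply, the calculation below applies them to a hypothetical $10 billion development estimate. The baseline figure is an assumption for illustration; the percentages are the averages cited above.

```python
# Historical cost-growth averages cited in the report, applied to a
# hypothetical $10B development estimate (the baseline is an assumption).
baseline = 10e9

mature_growth   = 0.048   # programs starting with mature technologies
immature_growth = 0.349   # programs starting with immature technologies
avg_growth      = 0.283   # average development cost growth
after_cdr       = 0.197   # portion occurring after critical design review

print(f"Mature-technology overrun:   ${baseline * mature_growth / 1e9:.2f} billion")
print(f"Immature-technology overrun: ${baseline * immature_growth / 1e9:.2f} billion")
print(f"Share of average growth occurring after CDR: {after_cdr / avg_growth:.0%}")
```

Under these averages, an immature-technology program overruns by roughly seven times as much as a mature one, and about 70 percent of average development cost growth comes after the critical design review, precisely the phase FCS has yet to reach.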
This puts decision makers in a difficult position, particularly when considering that FCS is to deliver more than a better set of equipment—it embodies a new concept of combat. Procurement funding for core FCS production facilities will be requested for fiscal year 2011, the budget for which will be presented to Congress in February 2010—several months after the milestone review and before the stability of the FCS design is assessed at the critical design review. In fact, based on results of system-level preliminary design reviews conducted to date, the Army could still be working to close action items resulting from the system-of-systems preliminary design review when it requests funding for FCS core production facilities. Further, when Congress is asked to approve funding for low-rate initial production of core FCS systems, the Army will not yet have proven that the FCS network and the program concept will work. A key demonstration of the FCS network, limited user test 3, is currently scheduled for later in 2012, after the Congress will have received the fiscal year 2013 budget submission. This is illustrated further in figure 4 below. Since fiscal year 2003, the Army has been required by Congress to develop and field the NLOS-C early in order to provide a self-propelled indirect fire capability. The Department of Defense Appropriations Act for 2008 required the Army to deliver eight NLOS-C prototypes by the end of calendar year 2008 and to field early production versions of the system by fiscal year 2010. These systems are to be in addition to those needed for developmental and operational testing. The Army determined that a set of 18, a full battalion’s worth, would be needed to meet the intent of the act’s language in terms of the early production units. Although the NLOS-C is one of eight FCS manned ground vehicles, it is proceeding about 5 years ahead of the other vehicles. 
The Army began procuring long-lead production items for the NLOS-C vehicle in 2008 to meet the requirement for the early production versions. According to program officials, an urgent need to build Mine-Resistant Ambush Protected vehicles diverted subcontractor resources away from the NLOS-C efforts. Officials further indicated that technological challenges associated with a lack of completed production facilities and specialized tooling also contributed to delays. The Army accepted delivery of the first two NLOS-C prototypes in fiscal year 2008 and the remaining six vehicles in the following two years. A Defense Acquisition Board decision to begin low-rate production for the additional set of 18 NLOS-C vehicles was expected in December 2008. Details of that decision were not available for inclusion in this report. If approved, the Army expects delivery of six early production units per year in fiscal years 2010 through 2012. None of these early NLOS-C vehicles will meet FCS threshold requirements, nor will they be operationally deployable. Rather, they will be used as training assets for the Army Evaluation Task Force. In order to meet the early fielding dates, the Army will begin production of the NLOS-C vehicles with immature technologies and designs. Several key technologies, such as lightweight armor, the active protection system, and the JTRS radios, will not be fully mature for several years. Much requirements definition work remains for all the manned ground vehicles, including the NLOS-C. Software development is in its early stages. Design work on the manned ground vehicles also remains to be done, including work on the chassis and mission modules. Significant challenges involving integrating the technologies, software, and design will follow. To the extent that these aspects of the manned ground vehicles depart from the early production cannons, costly rework of the cannons may be necessary if they are ever to be used for anything other than training purposes. 
The Army’s efforts and financial investments made on the NLOS-C vehicles could create additional pressure to proceed with FCS core production prior to achieving a solid basis of knowledge on which to move forward. Production on the cannon is beginning 5 years in advance of the production decision on the FCS core systems. By the time of that decision, in fiscal year 2013, the Army plans to have invested about $12 billion in FCS procurement funds and more than $50 billion for FCS overall. In addition, the Army also plans to invest millions in production facilities in which to build the vehicles. These activities all contribute to starting up the manned ground vehicle industrial base. If the FCS strategy goes according to plan, FCS core production would directly follow NLOS-C production, with long lead items for the FCS core program providing a transition. That may be premature given the design maturity and demonstrations expected by that point. DOD has attempted to make a distinction between NLOS-C and the core FCS program, but the linkages continue to exist in the FCS acquisition strategy. If decision makers were to consider delaying FCS core production because it was not ready, a gap could develop when early NLOS-C production ends. Sustaining the industrial base could then become an argument against an otherwise justified delay. The Army initiated spin out development in 2004, when it embarked on an effort to bring selected FCS capabilities to current force heavy brigade combat teams while development of the core FCS program remained under way. In 2006, the Army established the Army Evaluation Task Force to use, evaluate, and train with the spin out capabilities, and the Task Force began its testing under that brigade construct in early 2008. In mid-2008, the Army shifted its spin out fielding focus from heavy brigade combat teams to infantry brigade combat teams, with fielding to begin in fiscal year 2011. 
Army officials stated that this change occurred because infantry brigades are the optimal forces to fight in an urban environment, are being used in combat more than other types of forces, and are the most vulnerable forces. Accordingly, the Army now proposes to have 43 infantry brigade combat teams fully equipped with spin out equipment by 2025 at a total cost of $21 billion, with over $5 billion to be provided in fiscal years 2010 to 2015. DOD officials have reviewed the Army’s revised FCS spin out plans, but they have not yet made a decision to approve those plans. The switch to infantry brigades led the Army to abandon its previous plan for a series of three spin outs and instead pursue a two-phased effort termed “early” and “threshold,” with respective planned production commitment dates of fiscal years 2010 and 2013. The early spin out items are not expected to meet all FCS threshold requirements, nor will the threshold spin out items have the same network and battle command capabilities as in the core FCS program. The early spin out will include the Non-Line-of-Sight Launch System, Urban and Tactical Unattended Ground Sensors, two types of Joint Tactical Radios, the integrated computer system, early versions of the system-of-systems common operating environment and battle command software, the Small Unmanned Ground Vehicle, the Class I Unmanned Aerial Vehicle, and the Ground Soldier System. The second phase of spin outs will include improved versions of the above systems and will add the Multifunction Utility/Logistics and Equipment vehicle, the Class IV Unmanned Aerial Vehicle, the Armed Robotic Vehicle—Assault (Light), and the Centralized Controller. With the advent of the new structure, the Army moved its initial spin out production decision from January 2009 to December 2009. However, testing to date has not made a convincing case for this production commitment for several reasons. First, the Army has conducted only one test focused on the infantry brigade combat team structure. 
The two initial spin out tests—a technical field test in early 2008 to verify technical aspects of the capabilities and a force development test and evaluation in May 2008 to validate requirements and training associated with those capabilities—occurred prior to the restructure and therefore employed heavy brigade combat team constructs. While Army officials have indicated that the force development test results have applicability to the infantry brigades, the test’s major objective in terms of construct was to confirm the organizational structure and equipment distribution for a spin out-equipped heavy brigade combat team. The third test, a preliminary limited user test in July 2008 to assess the maturity, interoperability, and contribution of spin out systems, did utilize the infantry brigade structure. However, because of the restructure, that test was a shortened 2-day version of an event originally planned as a much longer effort focused on the heavy brigade combat team. Additionally, testing completed to date employed spin out systems that are not in the form that will be fielded. In fact, four of the systems planned for the early spin out have only been tested in surrogate or non-production representative forms (not in a mature or final configuration). The Ground Soldier System has not yet been included in any testing. Table 2 shows the versions of the prototypes used in each of the three tests to date. Using surrogate and non-production representative systems is problematic because it does not conclusively show how well the spin out systems can address current force capability gaps in situational awareness, force protection, and lethality. Moreover, such systems limit the ability to translate spin out tactical operations from heavy brigade to infantry combat teams and from spin outs to the core FCS. 
In fact, DOD’s current acquisition policy requires that systems meet approved requirements and be demonstrated in their intended environments using the selected production-representative articles before the engineering and manufacturing development phase—which precedes the production phase—can end. Army test officials and equipment users told us, and test reports for the 2008 spin out tests confirm, that the surrogates and non-production representative systems limited the ability to gauge system performance, forced adjustments in testing, and made it difficult to know whether beneficial lessons were learned in testing. Officials from the Army’s independent testing organization, the Army Test and Evaluation Command, stated that prototype JTRS radios impair the ability to evaluate overall system effectiveness regarding such factors as range and reliability. They also noted that radio performance can affect the tactics used by the testing unit. Army officials who actually participated in the testing expressed similar views and noted that the surrogates limited tactical operations. As a result, they said, the Army is immature tactically in terms of what it knows about spin out operations. The three tests scheduled for 2009 will continue to include surrogate and non-production representative systems. As in past tests, surrogates will take the place of JTRS handheld radios in all three tests. As noted by Army testers, this surrogate radio has limited basic functionality and will affect the evaluation of performance for systems used in conjunction with it, including the Non-Line-of-Sight Launch System and the unattended ground sensors. According to Army officials, they will not have production representative versions of this radio to test until initial operational test and evaluation in fiscal year 2011. 
In addition, JTRS ground mobile radios used in 2009 testing are to consist of a mix of non-production and production representative models, but the composition will be heavily weighted toward the non-production representative models. Of the 16 total radios planned for use in the limited user test, only 4 are expected to be the production representative version. Additionally, Army officials told us that if these radios are delayed, they will not be able to properly operate and evaluate the needed networking capabilities. The schedule for completing 2009 testing is tight, and the issues identified in the 2008 testing may not be resolved prior to the spin out production decision. According to Army and DOD officials, the Army Evaluation Task Force has proven extremely useful in identifying system issues and suggesting design changes. While the Army is working to improve spin out systems in accordance with the Task Force’s testing observations and recommendations, it does not plan to prove out all final designs prior to the production decision. For example, the Army is redesigning the Tactical Unattended Ground Sensor because 2008 testing showed that it had issues with range, battery life, and hardware reliability. However, the Army does not expect to have the final version of the redesigned sensor available until February 2010, after the initial spin out production decision has been made. The Army is also redesigning the Urban Unattended Ground Sensor in accordance with testing feedback because that sensor had issues with battery life, user set-up time, and display of data. A final version of that sensor will not be available until February 2010. Additionally, the JTRS ground mobile radio may not be able to achieve its schedule for a production decision, which would impact the FCS spin out initiative. 
The Army may be unable to thoroughly assess spin outs’ military utility for current forces because testing planned for 2009 is very compressed and leaves little time for analysis before the production decision. Under the revised spin out structure, the Army expects to conduct technical field, force development, and limited user tests in a back-to-back period from July through September 2009. This schedule allows the Army only 12 weeks to conduct all the tests, assess test results, and incorporate lessons learned from one test to the next. Additionally, the limited user test, the last test in the series before the production decision and arguably the most important in terms of demonstrating system interoperability and overall spin out military utility, is planned to conclude at the end of September. That means the Army only has 8 to 12 weeks to assess those test results before DOD will make the expected December 2009 production decision. By comparison, the Army needed 8 months to produce its test report on the 2008 technical field test. A DOD testing official told us that, because of the testing schedule, the Army would be unable to analyze test results adequately before making decisions. Army officials acknowledged that the schedule is extremely compressed and noted that any delay in maturity or receipt of hardware and/or software would impact the test schedule. They also indicated that, because of the aggressive schedule, it might be necessary to change the order of the tests and hold the force development test after the limited user test. Army officials informed the Under Secretary of Defense for Acquisition, Technology, and Logistics that they are considering an incremental or block acquisition approach to FCS. 
Citing the need to set a path to a stable, executable baseline for FCS—one with appropriately scoped requirements—FCS program officials believe that by adopting an incremental or block approach, they may be better able to mitigate risks in four major areas:

• immaturity of requirements for system survivability, network capability, and information assurance;
• limited availability of performance trade space to maintain program cost and schedule given current program risks (schedule risks, weight/survivability, and cost growth);
• a program not funded to Cost Analysis Improvement Group estimates and the impact of congressional budget cuts; and
• continuing challenges in aligning schedules and expectations for multiple concurrent acquisitions (such as JTRS and WIN-T).

Subsequent to the mid-2008 Defense Acquisition Board meeting, where the Army presented its case for an incremental or block approach to FCS acquisition, the Under Secretary of Defense for Acquisition, Technology, and Logistics issued a memorandum directing the Army to, among other things, pursue this initiative. Moreover, the memorandum stipulated that the incremental approach to acquiring FCS must be prioritized to meet the warfighter’s most critical operational needs and present a stable, executable program. The Army has been conducting an analysis to define an incremental approach, which is expected to address organizational structure, platforms, warfighter needs, and unified battle command. This analysis will be coupled with DOD assessments of FCS design maturity (including technology readiness levels, network and platform readiness, and associated risks and costs) and program maturity (including program execution feasibility, program scope, resource availability, and program alternatives). The Army was expected to present the analysis results and incremental FCS program plan to DOD in late 2008 or early 2009, but that had not occurred at the time of this report. 
According to a DOD official, the adoption of an incremental approach may affect both the FCS core program and the spin out initiative. For the core FCS program, adoption of an incremental approach may involve a phased development and demonstration of individual FCS performance requirements and/or a phased fielding of individual components of the FCS family of systems. For the spin out initiative, the Army is considering if and when it should spin out FCS capabilities to the Heavy and Stryker Brigade Combat Teams. Restructuring the FCS program around an incremental approach has the potential to alleviate the risks inherent in the current strategy. It also represents an opportunity to apply the policy and thus provide decision makers more information before key program commitments, like production funding, are made. Taking an incremental approach to new acquisitions, versus attempting to acquire full capability in one step, has been preferred by DOD policy and best practices since before FCS began in 2003. The December 2008 policy adds several key features that would benefit a restructured FCS program. 
These include:

• establishment of configuration steering boards that are tasked to review all requirements changes and any significant technical configuration changes that have the potential to result in cost and schedule impacts to the program;
• a post-preliminary design review assessment, in which the results of the PDR and the program manager’s assessment are considered to determine whether remedial action is necessary to achieve the program’s objectives;
• a critical design review, which is an opportunity to assess design maturity by measures such as completion of subsystem critical design reviews; the percentage of software and hardware product specifications and drawings completed; planned corrective actions to hardware and software deficiencies; adequate developmental testing; the maturity of critical manufacturing processes; and an estimate of system reliability based on demonstrated reliability rates;
• a post-critical design review assessment, which considers the program manager’s report on the critical design review to determine whether the program can meet its approved objectives or if adjustments should be made; and
• before production, a demonstration that the system meets requirements in its intended environment using a production-representative article, that manufacturing processes have been effectively demonstrated in a pilot line environment, and that industrial capabilities are reasonably available.

On the other hand, the newness of the incremental approach could complicate oversight at this important juncture. For example, its approval will lag behind the congressional schedule for authorizing and appropriating fiscal year 2010 funds. Also, a new approach to FCS could affect the scope of the milestone review. 
Evaluation of the new approach will involve a number of factors, including whether:

• the incremental approach adequately addresses program risks and unresolved questions on the feasibility of the FCS concept and its information network;
• the initial increment of FCS capability is justifiable on its own, without being dependent on future increments;
• each increment, including the first, will comply with current DOD policy as it applies to a new program starting at the preliminary design review stage; and
• the Army’s overall investment plan and resources for FCS increments, spin outs, and its current forces are sound and affordable.

Should an incremental approach to FCS be pursued, one consideration will be the future role of the Army’s contracting relationship with the LSI. We have previously reported on the uniquely close relationship that exists between the Army and the LSI. While this relationship has advantages, it also has disadvantages. In the past two years, the role of the LSI, originally limited to development, has grown with respect to production. It is expected to be the prime contractor for production of spin outs, the NLOS-C, and at least the low-rate production of the FCS core systems. The specific role the LSI will play in production of spin outs, NLOS-C, and FCS core production remains somewhat unclear, as statements of work for the production contracts have not yet been negotiated. According to program officials, the LSI will contract with the first-tier subcontractors, which will in turn contract with their own subcontractors. Thus, the production role of the LSI is likely to be largely oversight of the first-tier subcontractors rather than fabrication of systems or subsystems. The LSI is also responsible for defining and maintaining a growth strategy for integrating new technologies into the FCS brigade combat teams. Combined with a likely role in sustainment, the LSI will remain involved in the FCS program indefinitely. 
Recently, the Under Secretary of Defense for Acquisition, Technology, and Logistics issued a directive to pursue alternate arrangements for any future FCS contracts. The Under Secretary found that the fixed fee was too high and that the fee structure allows industry to receive most of the incentive fee dollars prior to demonstrating integrated FCS system-of-systems capability. The Under Secretary also directed that the Army conduct a risk-based assessment to examine contracting alternatives for FCS capability. This assessment is to evaluate opportunities for procurement breakout of the individual platforms/systems that comprise FCS and how the government’s interests are served by contracting with the LSI as compared to contracting directly with the manufacturers of the items. The 2009 milestone review is the most important decision point on the Future Combat System since the program began in 2003. If the preliminary design reviews are successfully completed and critical technologies mature as planned in 2009, the FCS program will essentially be at a stage that statute and DOD policy would consider as being ready to start development. In this sense, the 2009 review will complete the evaluative process that began with the original 2003 milestone decision. Further, considering that the current estimate for FCS ranges from $159 billion to $200 billion when the potential increases to core program costs and estimated costs of spin outs are included, 90 percent or more of the investment in the program lies ahead. Even if a new, incremental approach to FCS is approved, a full milestone review that carries the responsibility of a go/no-go decision is still in order, along with the attendant reports and analyses that are required inputs. In the meantime, establishing a configuration steering board, as suggested in DOD policy, may help bridge the gaps between requirements and system designs and help in the timely completion of the FCS preliminary design reviews. 
At this point, there are at least three programmatic directions, or some combination thereof, that DOD could take at the milestone review to shape investments in combat systems for the Army, each of which presents challenges. First, the FCS program as currently structured has significant risks and may not be executable within remaining resources. Second, although an incremental approach may improve the Army’s prospects for fielding some capability, each increment must stand on its own and not be dependent on future increments. Third, spin outs to current forces rely on a rushed schedule that calls for making production decisions before production-representative prototypes have clearly demonstrated a useful military capability. The role of the LSI in the FCS production phase will also have to be considered for any program that emerges from the milestone review. There is no question that the Army needs to ensure its forces are well-equipped. The Army has vigorously pursued FCS as the solution, a concept and an approach that are unconventional yet have many good features. The difficulties and redirections experienced by the program should be seen as revealing its immaturity, rather than as the basis for criticism. However, at this point, enough time and money have been expended that the program should be evaluated at the 2009 milestone review based on what it has shown, not on what it could show. The Army should not pursue FCS at any cost, nor should it settle for whatever the FCS program produces under fixed resources. Rather, the program direction taken after the milestone review must strike a balance among near-term and long-term needs, realistic funding expectations, and a sound plan for execution. 
Regarding execution, the review represents an opportunity to put the emerging investment program on the soundest possible footing by applying the best standards available, like those contained in DOD’s 2008 acquisition policy, and by requiring clear demonstrations of the FCS concept and network before any commitment to production of core FCS systems. Any decision the Army makes to change the FCS program is likely to lag behind the congressional schedule for authorizing and appropriating fiscal year 2010 funds. Because of this, Congress needs to preserve its options for ensuring it has adequate knowledge on which to base funding decisions. Specifically, it does not seem reasonable to expect Congress to provide full fiscal year 2010 funding for the program before the milestone review is held, nor production funding before system designs are stable and validated in testing. The Congress should consider taking the following two actions:

• restricting the budget authority to be provided for FCS in fiscal year 2010 until DOD fully complies with the statutory FCS milestone review requirements and provides a complete budget justification package for any program that emerges; and
• not approving any production or long lead item funds for the core FCS program until the critical design review is satisfactorily completed and demonstrations using prototypes provide confidence that the FCS system-of-systems, operating with the communications network, will be able to meet its requirements. 
We recommend that the Secretary of Defense:

• ensure that the investment program that emerges from the 2009 milestone review conforms to current DOD acquisition policy, particularly regarding technology maturity, critical design reviews, and demonstrating production-representative prototypes before making production commitments;
• direct the Secretary of the Army to convene, following the preliminary design reviews and in time to inform the 2009 FCS milestone review, an FCS Configuration Steering Board to provide assistance in formulating acceptable trade-offs to bridge the gaps between the FCS requirements and the system designs;
• ensure that if an incremental approach is selected for FCS, the first increments are justifiable on their own as worthwhile capabilities that are not dependent on future increments for their value, particularly regarding the order in which the information network and individual manned ground vehicles will be developed;
• ensure that FCS systems to be spun out to current forces have been successfully tested in production-representative form before they are approved for initial production; and
• reassess the appropriate role of the LSI in the FCS program, particularly regarding its involvement in production.

DOD concurred with all our recommendations and provided comments on two. Regarding our recommendation on testing spin out systems, DOD commented that any production decision for FCS systems going to the current force will be informed by an operational assessment or user test of the systems. Although the Army plans to conduct such testing prior to the spin out low-rate initial production decision in late 2009, that testing will employ surrogate and non-production representative systems. We maintain that any systems planned for production should be production-representative and thoroughly tested in a realistic environment. 
DOD noted that such testing was more in line with what is required for the full-rate production decision versus the initial low-rate decision. The testing standards we apply reflect the best practice and DOD policy of having production-representative prototypes tested prior to a low-rate production decision. This approach demonstrates the prototypes’ performance and reliability as well as manufacturing processes—in short, that the product is ready to be manufactured within cost, schedule, and quality goals. In fact, current DOD policy states that development “shall end when the system meets approved requirements and is demonstrated in its intended environment, using the selected production-representative article; manufacturing processes have been effectively demonstrated in a pilot line environment; industrial capabilities are reasonably available; and the system meets or exceeds exit criteria and entrance requirements.” Regarding our recommendation about reassessing the role of the LSI, DOD stated that the FCS contractual arrangement is not an LSI contract as defined by law. However, according to the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009, the FCS prime contractor “shall be considered to be a lead systems integrator until 45 days after the Secretary of the Army certifies in writing to the congressional defense committees that such contractor is no longer serving as the lead systems integrator.” Army officials have stated that they are unaware of the Army preparing any such certification for the defense committees. Regardless of how the prime contractor is characterized, it was originally envisioned by the Army as an LSI, and its unusually close relationship with the Army on the FCS program still warrants additional oversight. Regarding our matters for congressional consideration, DOD expressed concern over the impact on FCS acquisition execution of the fiscal year 2010 budget authority limitations that we suggested Congress consider. 
We believe a restriction is necessary because congressional committees will be asked to provide funds for fiscal year 2010 before the FCS milestone review, currently scheduled for July 30, 2009, is held. The review will lead to a decision on whether the program should continue as currently structured, continue in restructured form, or be terminated. The scope and significance of those decisions create the possibility that the Army’s fiscal year 2010 budget plans for FCS could differ significantly from the request that Congress will consider. A restriction need not amount to a denial or reduction of funds, but rather creates an opportunity for Congress to review any change in Army plans before releasing funds for FCS for the entire fiscal year. We received other technical comments from DOD, which have been addressed in the report, as appropriate. We are sending copies of this report to the Secretary of Defense; the Secretary of the Army; and the Director, Office of Management and Budget. Copies will also be made available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The major contributors are listed in appendix VIII. 
To develop information on the extent to which knowledge will likely be available to DOD and the Congress in the key areas of technology, design, demonstrations, network performance, and cost and affordability to support the 2009 milestone review, and on the execution challenges that a post-milestone review FCS program presents to DOD and the Congress, we interviewed officials of the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics); the Secretary of Defense’s Cost Analysis Improvement Group; the Secretary of Defense’s Program Analysis and Evaluation office; the Director of Defense Research and Engineering; the Joint Staff; the Assistant Secretary of Defense (Networks and Information Integration); the Army’s Training and Doctrine Command; the Director of Operational Test and Evaluation; the Future Force Integration Directorate; the Army Evaluation Task Force; the Army Test and Evaluation Command; the Director of the Combined Test Organization; the Program Manager, Future Combat System (Brigade Combat Team); and the Project Manager, Future Combat System Spin Out. We reviewed relevant Army and DOD documents, including the Future Combat System’s Operational Requirements Document, the Acquisition Strategy Report, the Selected Acquisition Report, critical technology assessments and technology risk mitigation plans, and spin out test results. We attended system-level preliminary design reviews, board of directors reviews, and system demonstrations. In our assessment of the FCS, we used the knowledge-based acquisition practices drawn from our large body of past work as well as DOD’s acquisition policy and the experiences of other programs. We certify that officials from DOD and the Army have provided us access to sufficient information to make informed judgments on the matters in this report. We discussed the issues presented in this report with officials from the Army and the Office of the Secretary of Defense and made several changes as a result. 
We conducted this performance audit from March 2008 to March 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Section 214 of Public Law 109-364 mandated that the Secretary of Defense perform a milestone (go/no-go) review of the Future Combat Systems acquisition program. The following depicts that legislation in its entirety as amended by section 211 of Public Law 110-417.

(a) MILESTONE REVIEW REQUIRED.—Not later than 120 days after the preliminary design review of the Future Combat Systems program is completed, the Secretary of Defense shall carry out a Defense Acquisition Board milestone review of the Future Combat Systems program. The milestone review shall include an assessment as to each of the following:
(1) Whether the warfighter’s needs are valid and can be best met with the concept of the program.
(2) Whether the concept of the program can be developed and produced within existing resources.
(3) Whether the program should—
(A) continue as currently structured;
(B) continue in restructured form; or
(C) be terminated.

(b) DETERMINATIONS TO BE MADE IN ASSESSING WHETHER PROGRAM SHOULD CONTINUE.—In making the assessment required by subsection (a)(3), the Secretary shall make a determination with respect to each of the following:
(1) Whether each critical technology for the program is at least Technology Readiness Level 6.
(2) For each system and network component of the program, what the key design and technology risks are, based on System Functional Reviews, Preliminary Design Reviews, and Technology Readiness Levels.
(3) Whether actual demonstrations, rather than simulations, have shown that the concept of the program will work. 
(4) Whether actual demonstrations, rather than simulations, have shown that the software for the program is on a path to achieve threshold requirements on cost and schedule.
(5) Whether the program’s planned major communications network demonstrations are sufficiently complex and realistic to inform major program decision points.
(6) The extent to which Future Combat Systems manned ground vehicle survivability is likely to be reduced in a degraded Future Combat Systems communications network environment.
(7) The level of network degradation at which Future Combat Systems manned ground vehicle crew survivability is significantly reduced.
(8) The extent to which the Future Combat Systems communications network is capable of withstanding network attack, jamming, or other interference.
(9) What the cost estimate for the program is, including all spin outs, and an assessment of the confidence level for that estimate.
(10) What the affordability assessment for the program is, given projected Army budgets, based on the cost estimate referred to in paragraph (9).

(c) REPORT.—The Secretary shall submit to the congressional defense committees a report on the findings and conclusions of the milestone review required by subsection (a). The report shall include, and display, each of the assessments required by subsection (a) and each of the determinations required by subsection (b).

(d) RESTRICTION ON PROCUREMENT FUNDS EFFECTIVE FISCAL 2009.—
(1) IN GENERAL.—For fiscal years beginning with 2009, the Secretary may not obligate any funds for procurement for the Future Combat Systems program.
(2) EXCEPTIONS.—Paragraph (1) does not apply with respect to—
(A) the obligation of funds for costs attributable to an insertion of new technology (to include spin out systems) into the current force, if the insertion is approved by the Under Secretary of Defense for Acquisition, Technology, and Logistics; or
(B) the obligation of funds for the non-line-of-sight cannon system. 
(3) TERMINATION.—The requirement of paragraph (1) terminates after the report required by subsection (c) is submitted.

Section 212 of Public Law 110-417 requires the Assistant Secretary of Defense (Networks and Information Integration) to report by September 30, 2009, on its analysis of the FCS communications network and software. The specific issues to be addressed are listed below.

• An assessment of the vulnerability of the FCS communications network and software to enemy network attack, in particular the effect of the use of significant amounts of commercial software in FCS software.
• An assessment of the vulnerability of the FCS communications network to electronic warfare, jamming, and other potential enemy interference.
• An assessment of the vulnerability of the FCS communications network to adverse weather and complex terrain.
• An assessment of the FCS communications network’s dependence on satellite communications support, and an assessment of the network’s performance in the absence of assumed levels of satellite communications support.
• An assessment of the performance of the FCS communications network when operating in a degraded condition …and how such a degraded network environment would affect the performance of FCS brigades and the survivability of FCS Manned Ground Vehicles.
• An assessment, developed in coordination with the Director of Operational Test and Evaluation, of the adequacy of the FCS communications network testing schedule.
• An assessment, developed in coordination with the Director of Operational Test and Evaluation, of the synchronization of the funding, schedule, and technology maturity of the WIN-T and JTRS programs in relation to the FCS program, including any planned FCS spin outs.

Technology Readiness Levels (TRL) are measures pioneered by the National Aeronautics and Space Administration and adopted by DOD to determine whether technologies are sufficiently mature to be incorporated into a weapon system. 
Our prior work has found TRLs to be a valuable decision-making tool because they can presage the likely consequences of incorporating a technology at a given level of maturity into a product development. The maturity level of a technology can range from paper studies (TRL 1), to prototypes that can be tested in a realistic environment (TRL 7), to an actual system that has proven itself in mission operations (TRL 9). According to DOD acquisition policy, a technology should have been demonstrated in a relevant environment (TRL 6) or, preferably, in an operational environment (TRL 7) to be considered mature enough to use for product development. Best practices of leading commercial firms and successful DOD programs have shown that critical technologies should be mature to at least a TRL 7 before the start of product development.

[Table: FCS critical technologies, including the JTRS handheld, manpack, and small form fit radios; Army, joint, and multinational interfaces; intrusion detection for the IP network; mobile ad hoc networking protocols; multi-spectral sensors and seekers; and air (rotary wing/UAV)-to-ground, air (fixed wing)-to-ground (interim/robust solutions), ground-to-ground (mounted), and ground-to-air (mounted) links.]

In addition to the individual named above, major contributors to this report were Assistant Director William R. Graveline, Marcus C. Ferguson, William C. Allbritton, Noah B. Bleicher, Dr. Ronald N. Dains, Tana M. Davis, John Krump, Carrie W. Rogers, and Robert S. Swierczek.

Defense Acquisitions: 2009 Review of Future Combat Systems Is Critical to Program’s Direction. GAO-08-638T. Washington, D.C.: April 10, 2008.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008.
Defense Acquisitions: 2009 Is a Critical Juncture for the Army’s Future Combat System. GAO-08-408. Washington, D.C.: March 7, 2008.
Defense Acquisitions: Future Combat System Risks Underscore the Importance of Oversight. GAO-07-672T. Washington, D.C.: March 27, 2007. 
Defense Acquisitions: Key Decisions to Be Made on Future Combat System. GAO-07-376. Washington, D.C.: March 15, 2007. Defense Acquisitions: Improved Business Case Key for Future Combat System’s Success. GAO-06-564T. Washington, D.C.: April 4, 2006. Defense Acquisitions: Improved Business Case is Needed for Future Combat System’s Successful Outcome. GAO-06-367. Washington, D.C.: March 14, 2006. Defense Acquisitions: Business Case and Business Arrangements Key for Future Combat System’s Success. GAO-06-478T. Washington, D.C.: March 1, 2006. Defense Acquisitions: Future Combat Systems Challenges and Prospects for Success. GAO-05-428T. Washington, D.C.: March 16, 2005. Defense Acquisitions: The Army’s Future Combat Systems’ Features, Risks, and Alternatives. GAO-04-635T. Washington, D.C.: April 1, 2004. Issues Facing the Army’s Future Combat Systems Program. GAO-03-1010R. Washington, D.C.: August 13, 2003. Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001.
The Future Combat System (FCS) program is the centerpiece of the Army's effort to transition to a lighter, more agile, and more capable combat force. By law, GAO is to report annually on the FCS program. The law also requires the Department of Defense (DOD) to hold a milestone review of the FCS program, now planned for 2009. This report addresses (1) what knowledge will likely be available in key areas for the review, and (2) the challenges that lie ahead following the review. To meet these objectives, GAO reviewed key documents, performed analysis, attended demonstrations and design reviews, and interviewed DOD officials. The Army will be challenged to demonstrate the knowledge needed to warrant an unqualified commitment to the FCS program at the 2009 milestone review. While the Army has made progress, knowledge deficiencies remain in key areas. Specifically, all critical technologies are not currently at a minimum acceptable level of maturity. Neither has it been demonstrated that emerging FCS system designs can meet specific requirements or mitigate associated technical risks. Actual demonstrations of FCS hardware and software--versus modeling and simulation results--have been limited, with only small-scale warfighting concepts and limited prototypes demonstrated. Network performance is also largely unproven. These deficiencies do not necessarily represent problems that could have been avoided; rather, they reflect the actual immaturity of the program. Finally, there is an existing tension between program costs and available funds that seems likely to worsen, as FCS costs are likely to increase at the same time as competition for funds intensifies between near- and far-term needs in DOD and between DOD and other federal agencies. DOD could have at least three programmatic directions to consider for shaping investments in future capabilities, each of which presents challenges. 
First, the current FCS acquisition strategy is unlikely to be executed within the current $159 billion cost estimate, and it calls for significant production commitments before designs are demonstrated. To date, FCS has spent about 60 percent of its development funds, even though the most expensive activities remain to be done before the production decision. In February 2010, Congress will be asked to begin advancing procurement funds for FCS core systems before most prototype deliveries, the critical design review, and key system tests have taken place. By the 2013 production decision, Congress will have been asked for over $50 billion in funding for FCS. Second, the program to spin out early FCS capabilities to current forces operates on an aggressive schedule centered on a 2009 demonstration that will employ some surrogate systems and preliminary designs instead of fully developed items, with little time for evaluation of results. Third, the Army is currently considering an incremental FCS strategy--that is, developing and fielding capabilities in stages rather than in a single step. Such an approach is generally preferable, but it would present decision makers with a third major change in FCS strategy to consider anew. While details are not yet available, it is important that each increment be justified on its own merits and not be dependent on future increments.
Federal agencies are required to have an occupant emergency program that establishes procedures for safeguarding lives and property during emergencies in their respective facilities. According to ISC, an occupant emergency plan (OEP) is a critical component of an effective occupant emergency program. Further, these plans are intended to minimize the risk to personnel, property, and other assets within a facility by providing facility-specific response procedures for occupants to follow. Several federal entities—ISC, GSA, and FPS—play a role in protection policy and programs for GSA-owned and -leased facilities. Established by Executive Order 12977, ISC is an interagency organization chaired by DHS to enhance the quality and effectiveness of security in, and protection of, nonmilitary buildings occupied by federal employees for nonmilitary activities in the United States, among other things. ISC includes members from 53 federal departments and agencies, including FPS and GSA. Under the executive order, ISC was directed to develop policies and standards that govern federal facilities’ physical security efforts. As part of its government-wide effort to develop physical security standards and improve the protection of federal facilities, ISC also provides guidance on OEPs. In its 2010 standard, ISC lists 10 elements that should be addressed at a minimum in an OEP and states that the plan must be reviewed annually. As the federal government’s landlord, GSA designs, builds, manages, and maintains federal facilities. Presidential Policy Directive 21 designates DHS and GSA as co-sector-specific agencies for the government facilities sector, 1 of 16 critical infrastructure sectors. In 2002, GSA issued its Occupant Emergency Program Guide to provide step-by-step instructions for agencies to use to meet federal regulatory requirements for OEPs. GSA also served as chair and sponsor of ISC’s working group that developed additional guidance for preparing OEPs. 
The Homeland Security Act of 2002 transferred FPS from GSA to the newly established DHS in March 2003 and required DHS to protect the buildings, grounds, and property that are under the control and custody of GSA, as well as the persons on the property. As part of an agreement between GSA and DHS, FPS provides law enforcement and related security services for GSA’s approximately 9,600 facilities, which include—but are not limited to—responding to incidents and conducting facility security assessments. Facility security assessments are conducted by FPS inspectors to help FPS identify and evaluate potential risks so that countermeasures can be recommended to help prevent or mitigate risks. FPS inspectors are law enforcement officers and trained security experts who perform facility security assessments and inspections and respond to incidents. FPS also assigns a facility security level (FSL) in accordance with the ISC standard and in coordination with the facility security committee (FSC) and GSA representative, based on a facility’s cumulative rating on five factors established by ISC (plus an adjustment for intangible factors), as shown in figure 1. According to ISC, a facility’s FSL is a key factor in establishing appropriate physical security measures. Further, while the minimum OEP elements in the ISC 2010 standard apply to facilities at all FSLs, what is appropriate may vary based on facility characteristics. The federal agencies that occupy federal facilities are responsible for preparing and maintaining OEPs; ISC, GSA, and FPS provide guidance or assistance to the agencies in developing OEPs, and FPS can periodically review OEPs. All 20 facilities we visited had written emergency plans in place, the majority of which reflected ISC’s minimum elements for a facility OEP. The OEPs we reviewed varied in length and content based on a number of factors, such as facility security level. 
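The FSL assignment process described above—per-factor ratings summed into a cumulative score, an adjustment for intangible factors, and a mapping to a level—can be sketched roughly as follows. The five factor names follow the ISC standard, but the point scale and level thresholds here are illustrative assumptions, not the official ISC values shown in figure 1.

```python
# Hypothetical sketch of FSL assignment: score each ISC factor, sum the
# points, apply an adjustment for intangibles, and map the total to a level.
# The 1-4 point scale and the threshold bands below are illustrative
# assumptions, not the official ISC scoring.

FACTORS = (
    "mission criticality",
    "symbolism",
    "facility population",
    "facility size",
    "threat to tenant agencies",
)

def facility_security_level(ratings: dict, adjustment: int = 0) -> str:
    """Return an FSL of 'I'..'IV' from per-factor scores of 1 (low) to 4 (very high)."""
    missing = set(FACTORS) - set(ratings)
    if missing:
        raise ValueError(f"unrated factors: {sorted(missing)}")
    total = sum(ratings[f] for f in FACTORS) + adjustment
    # Five factors at 1-4 points each give totals of 5-20 before adjustment.
    if total <= 7:
        return "I"
    if total <= 11:
        return "II"
    if total <= 15:
        return "III"
    return "IV"

# A small, low-risk leased office lands at a low level:
print(facility_security_level({f: 1 for f in FACTORS}))  # prints I
# A large, high-profile facility scores higher:
print(facility_security_level({f: 4 for f in FACTORS}))  # prints IV
```

One design point worth noting from the text: the adjustment term exists because intangible considerations can justify a level other than the one the raw score alone would produce.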
Ensuring that each of the approximately 9,600 GSA-owned and -leased facilities protected by FPS has emergency plans to safely evacuate occupants is a complex undertaking. Each agency occupying a facility is responsible for ensuring the safety of its occupants in that facility. Although no one agency accounts for OEPs across the federal government, ISC, GSA, and FPS each provide guidance on what should be included in a plan. FPS also provides a check that plans are in place as part of its periodic facility security assessments. Federal agencies have designated officials to create and oversee emergency plans and duties for the facilities they occupy. According to federal regulations, designated officials are responsible for developing, implementing, and maintaining the OEP for the facility. In the event of an emergency, the designated official is expected to initiate appropriate action according to the OEP, including the evacuation and relocation of facility occupants. The designated official is also to establish, staff, and train an Occupant Emergency Organization, which is to be composed of employees from within agencies designated to perform the requirements established by the plan. We found that all 20 facilities we visited had assigned designated officials to perform these duties. ISC is responsible for issuing policies and standards on facility protection, such as OEPs, but does not review the extent to which federal facilities have OEPs. As previously mentioned, ISC listed 10 minimum elements in its ISC 2010 standard that an OEP should address. In March 2013, ISC issued Occupant Emergency Programs: An Interagency Security Committee Guide to further assist department and agency officials as they develop and review their occupant emergency programs, including how to develop OEPs that best fit their individual facility and agency needs. 
According to ISC officials, the guidance was disseminated via e-mail to the full ISC membership, which includes 53 federal agencies and departments. ISC officials said they rely on agencies located in federal facilities to ensure OEPs are in place and shared several reasons why it would not be feasible for ISC to comprehensively review OEPs. First, according to these officials, ISC decided to use broad guidelines that would allow agencies to develop plans that are suited to the unique characteristics of their facilities. As a result, the guidance does not provide specific standards or metrics against which to compare a facility’s plan. Second, although OEPs are an important part of an overall occupant emergency program, ISC officials said that OEPs are a relatively small part of an agency’s overall emergency and security planning, which may not warrant implementing additional monitoring and data-gathering efforts. Last, ISC officials cited staffing constraints and noted that, per Executive Order 12977, they rely on volunteers from member organizations to carry out the committee’s efforts. GSA also plays a role in coordinating directly with facilities to provide guidance on OEPs and participates in emergency planning efforts. According to GSA officials, its tenant agencies, through their designated officials, are responsible for tracking and reviewing OEPs. Further, designated officials are to represent the government’s interests in public safety and emergency response in conjunction with GSA and other key stakeholders. However, GSA officials said that they will assist agencies with OEPs as requested. GSA officials also told us that they participate on facility security committees and in planning drills and exercises, and can provide GSA and other OEP guidance to their tenants. 
GSA officials also said that they work with tenants, as well as building owners at leased facilities, to ensure that facilities comply with building safety codes, such as having appropriate exits and fire alarms. Presidential Policy Directive 21 jointly assigns FPS and GSA responsibility for critical infrastructure protection of the government facilities sector. According to a GSA Associate Administrator, there is a need for greater visibility of OEPs. Consequently, GSA and FPS officials told us they have initiated discussions on future collaboration to ensure OEPs are in place and updated at GSA facilities. According to GSA officials, as part of a Joint Strategy for Facility Resilience, GSA and FPS will work collaboratively over the next 2 to 4 years to develop a platform that could serve as a repository for OEPs, facility security assessments, and other data. FPS is responsible for assisting federal agencies with guidance, training, exercises, and drills, and also conducts periodic facility security assessments that include checking OEPs. FPS officials in the three cities we visited said that, when requested, they provide agencies with OEP guidance, which includes an OEP template, and advise the designated official and other agency officials regarding an emergency plan that is appropriate for their location and circumstances. According to FPS officials, its OEP template (a Microsoft Word file) can be requested from the DHS and GSA websites and can also be made available to agency officials on a DVD. Of the 20 facilities we visited, officials at 14 reported using FPS guidance or feedback on their OEPs, for example, using the FPS template as a base for their OEPs; officials at 5 facilities reported using their own agency guidance for OEP development. FPS officials in one city we visited reiterated that some agencies have their own emergency coordinators and choose not to use FPS materials. 
Officials at 1 facility reported not using FPS or other agency guidance for OEP development. FPS also provides evacuation training, including awareness training on active shooter and workplace violence incidents, as well as safety and security. Officials from 5 of the 20 facilities we visited mentioned specific training FPS had provided them, primarily active shooter awareness training, and officials at 1 facility stated that they were planning an active shooter exercise with FPS. Additionally, FPS inspectors in the three locations we visited said they make themselves available to participate in facility exercises and emergency drills, and officials at 11 of the 20 facilities we visited told us that FPS had participated, for example, by providing traffic control services or ensuring all occupants have evacuated. Officials at 5 facilities we visited mentioned that FPS had not consistently participated in drills at their facilities, in one case because FPS had not been invited and in another case because FPS arrived after the drill had been completed. According to FPS officials, FPS participation in exercises and drills can be limited if FPS personnel are not nearby, are on duty responding to actual incidents, or were not given advance notice. FPS inspectors also are to check and answer a series of questions about the facility’s OEP during periodic facility security assessments, including whether or not the facility has a written OEP, and to consider whether it addresses the 10 minimum elements for an OEP identified by ISC. FPS’s facility security assessments are to occur periodically, every 3 to 5 years, depending on the security level of the facility. In July 2011, we reported that FPS could not complete security assessments as intended because of limitations in its assessment tool, among other reasons. We recommended that the agency evaluate whether other alternatives for completing security assessments would be more appropriate. 
DHS agreed with the recommendation and has developed a new facility security assessment tool, the Modified Infrastructure Survey Tool (MIST), which DHS officials said was deployed in April 2012. FPS headquarters officials told us that the agency currently has no national data on which agencies have an OEP, and we previously reported that MIST was not designed to compare risk across federal facilities. FPS headquarters officials said that as the agency moves forward with enhancing MIST’s capabilities, it would consider whether it was feasible to add a feature that would allow it to aggregate data across facilities, such as the status of OEPs. According to FPS officials, recommendations about OEPs and evacuation processes, such as suggestions to change assembly points in the event of an evacuation, may be made during facility security assessments. For example, one FPS inspector recommended that 1 facility change its assembly point because he determined that it was too close to the evacuated facility. Although officials at this facility expressed some reluctance about changing the assembly location, the inspector told us that facilities generally implement FPS suggestions. FPS inspectors also said that there have been few examples where agencies did not want to comply. Although agencies do not have to comply with FPS recommendations on OEPs, FPS inspectors stated that they do have enforcement authority related to life safety issues during an actual emergency event, such as moving occupants to different evacuation locations. Further, FPS headquarters officials said recommendations about OEPs may be made at any time, not just during facility security assessments. All 20 facilities we visited had written OEPs, as required by regulation, which included evacuation procedures. 
Consistent with the ISC 2010 standard that plans should be reviewed annually, officials at 19 of the 20 facilities we visited reported that they review, and update as needed, their emergency plans on at least an annual basis, and some reported reviewing their plans more frequently. For example, officials at 1 FSL-II facility reported that the OEP program manager reviews the plan on a monthly basis, and officials at an FSL-IV facility said their plan was reviewed quarterly. The OEPs we examined had been reviewed by officials in the past year, except for one. Officials at this FSL-III facility reported that they have an emergency plan in place; however, their OEP had not been annually reviewed and was last updated in 2004. Officials at that facility said that a revision was under way. Officials at all 20 facilities told us they conduct at least one annual evacuation drill, as directed in the ISC 2010 standard, with several officials reporting their facility conducts multiple drills each year. We analyzed the extent to which the selected facilities’ OEPs incorporated elements that should be in an OEP according to the ISC 2010 standard, which outlines 10 minimum elements:
1. purpose and circumstances for activation,
2. command officials and supporting personnel contact information,
3. occupant life safety options (e.g., evacuation, shelter-in-place),
4. local law enforcement and first responder response,
5. special needs individuals (e.g., those with disabilities, or who are …),
7. special facilities (e.g., child care centers),
8. assembly and accountability,
9. security during and after incident, and
10. training and exercises.
We found that 13 of the 20 facilities addressed all of the minimum elements that were applicable; in some of these cases, OEP elements were addressed in other emergency documents, such as supplemental child care OEPs. 
Seven of the facilities did not address at least one OEP element in the ISC 2010 standard in their OEPs or other documents. That an element was not in the plan or in related documents for 7 facilities does not necessarily indicate potential vulnerabilities for these facilities because other procedures or facility services may address the intent of the OEP element. For example, 6 of the 7 OEPs did not specifically describe security during or after an emergency event. Officials in all six cases identified existing security, such as building security guards, as having responsibility. Officials at 2 facilities reported that they were updating their OEPs after our site visit and would identify existing security in the plans. As another example, at 2 facilities where training or exercises were not included in the OEPs, officials at both facilities (which were housed in leased GSA space) said that building management conducts drills and that they participate. The 2010 standard and 2013 ISC guidance both allow for necessary adjustments to be made to a facility’s emergency plan based on specific requirements or needs. Plans at the 20 facilities we reviewed were unique to each facility, and there were differences in how each element was addressed, as the ISC 2010 standard and 2013 guidance allow. Specific details on how OEP elements are expected to be addressed are not included in ISC’s 2010 standard, which we used to review facility OEPs, or in ISC’s 2013 guidance. ISC officials said that there is so much variability among facilities that it is difficult to identify what would be appropriate for all facilities. For example, in one plan, command official information might include multiple contacts and a detailed list of responsibilities for each official, while another plan refers occupants to security services, which would be responsible for contacting command officials. 
Appendix II provides other examples of variation in how facilities addressed the 10 minimum elements in the plans we reviewed. We did observe some commonality in the 20 facility OEPs we reviewed, based on facility characteristics such as security level, whether the facility was GSA owned or leased, and occupant characteristics, as shown in table 1. Officials at 14 of 20 facilities in our review identified challenges, and all but one reported responding to challenges they encountered in developing and implementing emergency evacuation procedures. Officials at 6 facilities said that they did not identify any challenges. Half of the officials reporting challenges told us that actual emergency events and exercises helped to identify issues and mitigation steps that allowed their facilities to generally carry out effective emergency evacuations. For example, the majority of officials at facilities we visited in Washington, D.C., who experienced the 2011 earthquake said that because of the lack of earthquake procedures or training, emergency teams could not control employees’ evacuation process. They said that many employees essentially self-evacuated, exposing themselves to hazards such as falling debris and, in one case, evacuated to an unsafe assembly area under an overpass. These officials said that they have since researched proper earthquake procedures and have revised or are in the process of revising their OEPs accordingly. As shown in figure 2, officials at facilities we visited identified several challenges they addressed. The top three challenges cited by officials at the 14 selected facilities that identified challenges were (1) participation apathy (10 facilities), (2) knowing which employees are present (9 facilities), and (3) keeping plan information current (7 facilities). The remaining challenges were cited by 6 or fewer of the selected facilities. 
Officials at all but 1 facility provided additional detail regarding actions they are taking to mitigate facility evacuation challenges. Officials at that facility reported that the OEP was to be updated, but did not describe how they specifically plan to mitigate the OEP challenges they identified. For each of the top three challenges, officials at facilities that cited challenges described some of the actions taken to address those challenges. Employee participation apathy. Officials at 10 of the 20 selected facilities cited apathy as a challenge they encountered, such as employees not participating in or responding quickly to drills; not wanting to stop working or leave the building; not reporting to the assembly area (e.g., going for a coffee break during an evacuation drill); and not volunteering for emergency team responsibilities, such as becoming a floor warden. Officials at 9 of the 10 facilities described a variety of actions to address this challenge. Officials at 5 facilities said that leadership plays a role, such as leading by example or drawing management or supervisory attention to nonparticipants. For example, at 1 facility, officials said supervisors were notified of the lack of participation in emergency drills and training and asked to emphasize the importance of participation. Officials at another facility indicated that senior leaders lead by example, responding quickly and taking emergency drills and participation seriously to encourage employees to take emergency responsibilities seriously. Officials at 3 facilities said they address apathy by using drills, an awareness campaign, or other efforts to promote participation. Officials at the third of these facilities said that they made efforts to make emergency and evacuation training more interesting and interactive to maintain employee interest and attention, such as implementing a game meant to teach about various emergency situations and proper procedures. At the other 2 facilities where this challenge was identified, officials at 1 facility said they were reviewing challenges and action options, and the other did not provide information on any mitigating activities. Knowing which employees are present (accounting for employees). Officials at 9 of the 20 selected facilities reported encountering this challenge, with employees teleworking or working offsite as a contributing factor. Officials at 8 facilities provided various examples of addressing this challenge. At 6 facilities, officials said they relied on supervisors, managers, and sign-in sheets to keep track of employees. Officials at 2 facilities mentioned using or planning to use technology to account for employees in an emergency situation. One facility is developing an emergency notification system that sends emergency information to as many as 10 different electronic devices to contact an individual and determine the individual’s location. Another facility is planning to use an entry scan system that records who is in the building and can provide a list for taking roll at the evacuation rally point to account for employees. At 1 facility, where officials reported they are updating their OEP, efforts to mitigate this challenge were not described. Keeping emergency contact information updated. Officials at 7 of the 20 facilities said that it was an ongoing challenge to keep emergency contacts in the OEP current because of changes in an employee’s contact information or status, such as a transfer or retirement. To address this challenge, officials at 6 facilities said they review and update contact information at various points, such as when staff leave; before drills; or on a daily, weekly, monthly, or quarterly basis at different facilities. At one facility, officials said that they rely on tenants to provide notice of personnel changes. 
At another, an official said that the facility’s technology department was able to align its employee finder database with the agency’s separation database to automatically flag when employees have a change in location or status. Information was not available for 1 facility on any efforts to mitigate this challenge. Officials at facilities we visited reported experiencing and addressing other challenges less frequently, such as keeping employees trained, evacuating the public and persons with physical handicaps, communicating about an evacuation, and coordinating with other building tenants. Officials who reported encountering these challenges told us that they had mechanisms in place to mitigate them, such as the use of hand-held radios for communications, so the challenges were not considered an issue that prevented them from carrying out effective emergency evacuations. Other incidents or situations have also prompted facilities to revise their OEPs or prompted FPS to evaluate emerging threats and revise its training, as discussed in the examples below. Practice drills. During a practice evacuation drill at 1 facility, it was discovered that the path to the evacuation assembly area was up a steep slope and that some of the employees could not make the climb. The assembly area was subsequently changed and the OEP revised. Emerging threats. FPS headquarters officials stated that recent media coverage of active shooter situations has increased the public’s perception of this threat to facility safety and security. A fatal active shooter incident at 1 facility in Los Angeles prompted the revision of safety and evacuation procedures. FPS headquarters officials said that FPS has developed awareness training courses on how to handle an active shooter situation and has proactively offered this training to facilities. 
To identify and help agencies address evacuation or OEP challenges, officials at ISC, GSA, and FPS said that they provide initial guidance regarding the OEP and may provide additional assistance if requested by facilities or agencies. For example, ISC officials stated that they issued their March 2013 OEP program guidance in response to concerns raised by ISC’s members about the consistency of OEP guidance. Officials said agencies experiencing a challenge regarding their OEPs (or other issues) can ask ISC for specific help, such as one-on-one assistance or referral to other agency officials that have addressed a similar challenge. Also, ISC officials said a working group can be created to identify solutions to an issue, as was the case in developing the 2013 guidance. As discussed earlier, GSA and FPS have published OEP information and may provide additional information or training assistance in meeting specific challenges on a case-by-case basis. We provided a draft of this report to DHS and GSA for review and comment. GSA had no comments on the report. DHS provided technical comments, which were incorporated as appropriate. DHS also provided written comments, which are summarized below and reprinted in appendix III. In its written comments, DHS reiterated that OEPs are critical in safely evacuating federal facility occupants in an emergency. DHS noted that GAO recognized the complex roles performed by ISC, GSA, FPS, and agency officials in ensuring that the approximately 9,600 GSA-owned and -leased facilities have an OEP. For instance, DHS cited that ISC establishes standards and guidance for developing OEPs that are responsive to individual facility needs, whereas FPS is responsible for coordinating with and assisting department and agency officials in developing facility OEPs and providing agencies with evacuation training, among other things. 
DHS also stated that it is committed to working collaboratively with ISC and GSA to identify and mitigate security-related vulnerabilities at federal facilities. We are sending copies of this report to the Department of Homeland Security, the Administrator of the General Services Administration, selected congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact Joseph Kirschbaum at (202) 512-9971 or by e-mail at [email protected] or Mark Goldstein at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. This report addresses (1) who is responsible for ensuring that federal facilities have occupant emergency plans (OEP) in place and the extent to which selected facilities’ OEPs reflect federal guidance and (2) the evacuation challenges, if any, that selected facilities experienced and what actions, if any, they reported taking to address these issues. To describe who is responsible for ensuring that federal facilities have OEPs in place, we reviewed federal laws, regulations, executive orders, and guidance related to the oversight of federal facilities. This included relevant sections of the Homeland Security Act of 2002; the regulations regarding federal property facility management and federal agency requirements for OEPs; Executive Order 12977, establishing the Interagency Security Committee (ISC); and Executive Order 13286, amending it. We reviewed OEP guidance issued by ISC, the Federal Protective Service (FPS), and the General Services Administration (GSA). We also reviewed our previous work on the roles of FPS, GSA, and ISC in protecting federal facilities. 
We interviewed relevant senior agency officials regarding their agencies’ role in ensuring federal facilities have OEPs in place, including ISC officials in Washington, D.C.; officials from FPS and GSA at their headquarters; and FPS and GSA officials in the three field locations where we conducted site visits to selected federal facilities, as described below. To describe the extent to which the selected facilities’ OEPs reflect federal guidance, we conducted site visits at 20 of the GSA facilities protected by FPS. We selected a nonprobability sample of facilities as follows: We selected three geographically diverse areas with a concentration of GSA facilities from GSA’s top 15 major real estate markets. Specifically, we selected two areas from the top 5 markets in terms of GSA assets (Los Angeles, California, and Washington, D.C.) and one area from a smaller GSA market, defined as having fewer than 100 facilities (Kansas City, Missouri). To ensure that a subset of facilities would be able to discuss evacuation experiences they have had, we selected 9 facilities total from the three areas that had reported an evacuation incident to an FPS MegaCenter during 2011 or 2012. Each of the four FPS MegaCenters records incidents such as fire alarms, suspicious packages, and evacuation drills that are reported to that center as part of the center’s operations log, with an activity code that can be queried for incidents. Only incidents reported to a MegaCenter are captured, so, for example, if local police respond to a call at a facility and do not call FPS, the incident would not be included in the MegaCenter data. Based on discussions with MegaCenter data officials and a review of the data content, we determined that the incident data were reliable for our purposes, as our sample was not intended to be representative of all incidents. 
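The activity-code query described above can be sketched as follows. The field names, activity codes, and sample records are hypothetical; the report does not describe the actual MegaCenter operations-log schema.

```python
# Hypothetical sketch of filtering an operations log by activity code and
# year, as described above. All field names, codes, and records are
# illustrative assumptions, not the real MegaCenter data.
from datetime import date

operations_log = [
    {"facility": "Facility A", "activity_code": "EVACUATION", "logged": date(2011, 6, 2)},
    {"facility": "Facility B", "activity_code": "FIRE_ALARM", "logged": date(2012, 3, 9)},
    {"facility": "Facility C", "activity_code": "EVACUATION", "logged": date(2012, 8, 21)},
    {"facility": "Facility D", "activity_code": "EVACUATION", "logged": date(2009, 1, 15)},
]

def facilities_with_evacuations(records, years):
    """Return facilities that reported an evacuation incident in the given years."""
    return sorted({r["facility"] for r in records
                   if r["activity_code"] == "EVACUATION" and r["logged"].year in years})

print(facilities_with_evacuations(operations_log, {2011, 2012}))
# prints ['Facility A', 'Facility C']
```

Note that this kind of query inherits the limitation the text describes: an incident never reported to a MegaCenter simply has no record to match.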
We used a list provided by GSA from its Real Estate Across the United States (REXUS) database to select the remaining 11 facilities to provide a mix of owned and leased properties, and a mix of facility security levels. We determined that the REXUS database was reliable for our purposes based on a review of database documents and discussion with relevant GSA officials. See table 2 for a summary of characteristics of the 20 facilities we selected. For all selected facilities, we reviewed the extent to which the OEPs included the 10 minimum elements that should be included based on ISC’s Physical Security Standard (ISC 2010 standard) for federal facilities. For example, 1 element that an OEP should include is information on “Special Needs Individuals (disabled, deaf, etc.).” For each facility in our sample, two team members reviewed the OEP and assessed whether or not each of the elements was addressed. The ISC 2010 standard indicates that the 10 elements should be present; however, it notes that the scope and complexity of the OEP are dependent on the facility’s size, population, and mission, and the standard does not provide a description of, or detail on, what should be included for each element. Further, not all elements may be applicable for a facility, for example, if the facility does not have a child care or other special facility. Because of the general nature of the elements, we assessed whether a particular element was present in a facility’s OEP, not its quality or comprehensiveness. We reviewed additional documents provided by agency officials, such as child care center emergency plans, emergency cards for quick use, and FPS’s facility security assessment protocol, used by FPS inspectors when periodically checking OEPs. We also interviewed GSA property managers and officials from agencies who occupy each facility about the facility’s plan. 
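As a rough illustration of the presence/applicability assessment described above, the tally might look like the following sketch. The element names are abbreviated (a subset of the 10 minimum elements), and the sample assessments are invented; this is not GAO's actual review instrument.

```python
# Illustrative sketch of the element-presence review: each ISC minimum
# element is marked addressed ("yes"), not addressed ("no"), or not
# applicable ("n/a", e.g., no child care center on site). Element names are
# abbreviated and the sample facilities are hypothetical.

ISC_ELEMENTS = [
    "activation", "command officials", "life safety options",
    "first responder response", "special needs individuals",
    "special facilities", "assembly and accountability",
    "security during/after incident", "training and exercises",
]

def addresses_all_applicable(assessment: dict) -> bool:
    """True if every element is either addressed or not applicable."""
    return all(assessment.get(e, "no") in ("yes", "n/a") for e in ISC_ELEMENTS)

# Two hypothetical facility reviews:
leased_office = {e: "yes" for e in ISC_ELEMENTS}
leased_office["special facilities"] = "n/a"   # no child care center on site

courthouse = {e: "yes" for e in ISC_ELEMENTS}
courthouse["security during/after incident"] = "no"

print(addresses_all_applicable(leased_office))  # prints True
print(addresses_all_applicable(courthouse))     # prints False
```

Treating "not applicable" as satisfying the check mirrors the report's approach, in which a facility counted as addressing all minimum elements "that were applicable."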
Further, the officials we interviewed were those identified by GSA and the tenant agency as most knowledgeable about the OEP; in some cases this was the designated official, and in other cases it was, for example, a manager involved with facility security. While the findings from our 20 case studies are not generalizable to all GSA-owned and -leased facilities, they provide specific examples of how selected facilities have addressed emergency plan requirements and provide insights from a range of federal facilities. To describe the challenges and evacuation experiences of the 20 selected facilities, we discussed with facility and GSA officials specific evacuation instances, the challenges officials face in planning and executing evacuation plans, and any steps taken to mitigate the challenges. We asked about evacuation challenges in general, as well as about specific challenges identified through a review of the literature and discussion with FPS; officials at the facilities we visited determined whether they perceived an issue to be a challenge. Where available, we reviewed after-action reports documenting facility evacuation experiences. We also discussed evacuation experiences and challenges with ISC, GSA, and FPS officials. Our findings regarding what issues presented challenges and how such challenges could be resolved cannot be generalized to all GSA-owned and -leased facilities; however, they provide specific examples of issues encountered and how varying facilities addressed them. We conducted this performance audit from August 2012 to October 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Facility plans we reviewed addressed the 10 ISC minimum elements in a variety of ways, consistent with agency guidance and facility characteristics. Guidance in the ISC 2010 Physical Security Criteria for Federal Facilities notes that an OEP's scope and complexity will depend on a facility's size, population, and mission. Table 3 presents excerpts from OEPs from the 20 selected facilities we visited in Washington, D.C.; Kansas City, Missouri; and Los Angeles, California; the excerpts were selected to show the variation in how plan elements were addressed. Joseph Kirschbaum, 202-512-9971, or [email protected]; Mark Goldstein, 202-512-2834, or [email protected]. In addition to the contacts named above, Leyla Kazaz (Assistant Director), Tammy Conquest (Assistant Director), Dorian Dunbar, Eric Hauswirth, Mary Catherine Hult, Monica Kelly, Tracey King, Erica Miles, Linda Miller, and Kelly Rubin made key contributions to this report.
Recent emergencies, such as earthquakes in the nation's capital, have raised concerns about how prepared federal agencies occupying the 9,600 facilities owned or leased by GSA and protected by the Department of Homeland Security's FPS are to safely evacuate occupants of federal buildings. All federal agencies are required to prepare OEPs for their facilities, which describe actions agencies should take to plan for a safe evacuation during an emergency. GAO was asked to provide information on how prepared GSA-owned and -leased facilities are to evacuate occupants during an emergency. This report describes (1) who is responsible for ensuring that federal facilities have OEPs in place and the extent to which selected facilities' OEPs reflect federal guidance, and (2) the evacuation challenges, if any, selected facilities experienced and what actions, if any, they reported taking to address these issues. GAO reviewed federal regulations and guidance on OEPs, including documents from ISC, GSA, and FPS, which develop governmentwide physical security standards and policies, such as minimum elements for OEPs. GAO also reviewed OEPs and interviewed facility officials at 20 GSA-owned and -leased facilities, selected based on geographic dispersion, recent evacuations, and facility security level. While not generalizable to all GSA-owned and -leased facilities, the results provided perspectives from varying facilities. DHS provided written and technical comments, which were incorporated as appropriate. GSA did not have any comments. Federal agencies occupying facilities owned or leased by the General Services Administration (GSA) are responsible for preparing and maintaining occupant emergency plans (OEP), with assistance or guidance from the Federal Protective Service (FPS) and others, and the majority of selected federal facilities' OEPs GAO reviewed reflect federal guidance.
As required by federal regulations, all 20 selected facilities had OEPs and had designated officials, who are responsible for maintaining OEPs and initiating action according to the OEP in the event of an emergency, including the evacuation of facility occupants. Consistent with federal guidance, officials at 19 of the 20 selected facilities reported that they review and update OEPs at least annually, and officials at 1 facility said they were in the process of updating their OEP. When requested, FPS provides OEP guidance, such as templates, to facility officials. Officials at 14 facilities reported using FPS guidance or feedback for their OEPs, officials at 1 facility reported not using FPS guidance, and officials at 5 facilities said they used their own agency's guidance. FPS also checks OEPs during periodic facility security assessments--conducted at least every 3 to 5 years--to assess overall facility risk. GSA officials said they have a role in coordinating directly with facilities to provide guidance and feedback on OEPs, and to help facility officials plan drills and exercises. To assist agency officials as they develop OEPs that best fit individual facilities and agency needs, the Interagency Security Committee (ISC), a Department of Homeland Security-chaired policy development organization, in April 2010 identified 10 minimum elements, such as exercises or evacuating occupants with special needs, that should be addressed in an OEP. Thirteen of the 20 selected facilities addressed all 10 minimum elements in OEPs or related documents. Seven facilities' OEPs did not address at least 1 of the 10 elements; however, lack of an element does not necessarily indicate potential vulnerabilities for that facility because the intent of the element may be addressed by other procedures or modified based on facility characteristics.
For example, evacuation exercises were not included in OEPs for 2 facilities located in leased GSA space; however, officials said they participate in drills conducted by building management. The OEPs at the 20 selected facilities were unique, both to each facility and in how they addressed particular elements. Officials at 14 of 20 facilities identified evacuation challenges. The most frequently cited challenges included employee apathy toward participating in drills, accounting for employees, and keeping contact information updated. Officials at all but one facility, which was updating its OEP, reported various ways they addressed evacuation challenges, including using technology such as entry scan systems and radios to track and communicate with employees and making evacuation training more interesting to employees. Other incidents and emerging threats also prompted officials to change OEPs or evacuation training. For example, during the 2011 Washington, D.C., earthquake, officials at selected facilities in the D.C. area said that the lack of employee training on earthquake procedures may have exposed employees to potential hazards when they self-evacuated. Officials reported revising their OEPs to include procedures for earthquakes. Recent shootings also prompted facility officials to revise their OEPs and participate in FPS awareness training on active shooter incidents. Officials at 6 facilities did not report challenges.
The government relies on contractors to provide a range of mission-critical support, from operating information technology systems to battlefield logistics support. Federal regulations state that prime contractors are responsible for managing contract performance, including planning, placing, and administering subcontracts as necessary to ensure the lowest overall cost and technical risk to the government. Successful offerors for contracts exceeding $650,000 that have subcontracting possibilities are required to submit a subcontracting plan that includes, among other things, a statement of the total dollars planned to be subcontracted, the principal types of supplies and services to be subcontracted, and assurances that the offeror will submit periodic reports so that the government can determine the extent to which it has complied with the plan. Under cost-reimbursement contracts, the government pays the contractor for allowable incurred costs, to the extent prescribed in the contract. These costs may include the costs and fees charged to the prime by its subcontractors. Figure 1 depicts how lower tier subcontractor costs become part of the higher tier's and prime contractor's overall costs. USAID, State, and DOD varied in their implementation of Section 802. USAID issued a policy directive that restated Section 802 requirements in 2013 and is in the process of updating various tools to assist its contracting officers. More recently, State issued a procurement bulletin that restated Section 802 requirements, but has not taken further steps to update tools to assist its contracting officers. In contrast, DOD has not taken actions, noting that the department intends to wait until a final FAR rule is published in 2015. In general, agencies do not know the extent to which offerors proposed to subcontract 70 percent or more of the total cost of their contracts because agency-specific and federal data systems do not provide such information.
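The roll-up of lower-tier subcontractor costs into the prime contractor's overall cost, and the 70 percent threshold that triggers Section 802's requirements, can be illustrated with a minimal sketch. The tier structure and dollar figures below are hypothetical, chosen only to show the arithmetic:

```python
# Sketch: roll up subcontractor costs through contract tiers and test the
# 70 percent pass-through threshold (hypothetical figures).

def total_cost(contract: dict) -> float:
    """A contractor's total cost: its own work plus all lower-tier costs."""
    return contract["own_cost"] + sum(total_cost(s) for s in contract.get("subs", []))

def subcontracted_share(prime: dict) -> float:
    """Fraction of the prime's total cost performed by subcontractors."""
    sub_total = sum(total_cost(s) for s in prime.get("subs", []))
    return sub_total / total_cost(prime)

# Hypothetical prime with two first-tier subs, one of which has its own sub.
prime = {
    "own_cost": 2_000_000,
    "subs": [
        {"own_cost": 5_000_000, "subs": [{"own_cost": 1_000_000}]},
        {"own_cost": 2_000_000},
    ],
}

share = subcontracted_share(prime)        # $8M of $10M total -> 0.80
print(f"{share:.0%}")
print("Section 802 analysis triggered:", share >= 0.70)
```

In this example the prime performs only $2 million of a $10 million total, so 80 percent of the cost flows through to subcontractors, exceeding the threshold at which contracting officers must consider alternative arrangements.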
In lieu of creating new reporting mechanisms, each agency indicated its intent to assess contracting officers' implementation of Section 802 requirements using its existing procurement management reviews. USAID has initiated revisions of associated review guidance to incorporate these requirements, State indicated that its current review guidance is adequate, and DOD has not determined if changes to its guidance are necessary. USAID and State have taken some initial steps to implement Section 802, but neither has provided contracting officers with guidance on how to perform the required analysis of alternative acquisition approaches or determine the feasibility of contracting directly with proposed subcontractors, specified what documentation is necessary, or identified where the required determination should be recorded. For example, USAID issued a policy directive and State issued a procurement bulletin in June 2013 and July 2014, respectively, that each restate the requirements of Section 802. Specifically, both agencies require that if offerors propose to award subcontracts for more than 70 percent of the total cost of work to be performed, agency contracting officers must consider alternative contracting vehicles, make a written determination of why the contracting approach is in the best interest of the government, and document the basis for the determination. USAID's Office of Acquisition and Assistance and State's Office of Acquisition Management communicated the new requirements to their acquisition communities electronically and posted the policies to their websites for future reference. USAID and State have not issued guidance or agency-specific regulations, however, to assist contracting officers in carrying out the tasks required by Section 802.
USAID officials noted that the agency is currently updating checklists used by its contracting officers when performing cost and price analysis or for drafting memoranda of negotiation to include Section 802 requirements. They expect that these checklists will be revised by the end of the year. USAID officials do not anticipate the need to revise agency regulations once the FAR is updated, noting that its contracting personnel assess proposed contracting arrangements in accordance with the FAR as part of its standard acquisition processes. Officials noted that the updated checklists will remind its personnel of the Section 802 requirements to assess the feasibility of contracting directly with proposed subcontractors, and to document their determination that the accepted approach is in the government’s best interests as well as the basis for that determination. However, it is unclear if checklist revisions will provide contracting officers with the information necessary to perform the analysis of alternative acquisition approaches required by Section 802. State officials do not plan to update guidance to assist contracting officers with information on how to perform the steps required by Section 802. Similar to USAID, State officials do not anticipate revising departmental regulations once the FAR rule is finalized. DOD has not issued any policies nor has it issued guidance or regulations to implement Section 802. DOD officials explained that they are waiting for the FAR rule expected to be issued by March 2015 and did not want to issue policy, guidance, or regulations that may contradict the final FAR rule. DOD officials noted that once the FAR is revised, they will consider whether changes to DOD policy, guidance, and instruction would be necessary. Federal government internal control standards state that control activities, such as policies and procedures, help to ensure that management directives are carried out and actions are taken to address risks. 
Further, federal internal control standards note that information should be recorded and communicated to enable completion of internal control and other responsibilities. The lack of additional guidance that identifies approaches for, or examples of, how USAID, State, and DOD contracting officers may assess alternative contracting approaches, including the feasibility of contracting directly with proposed subcontractors, or how to document their determination that the selected approach is in the best interests of the government may increase the departments' risk of not being in compliance with Section 802. USAID, State, and DOD officials stated that their contracting and financial management systems do not track the subcontracting that a contractor intends to use, or has used, in performing its contracts. Further, while there have been efforts to develop government-wide information on subcontracting awards, these systems do not include data to identify all levels of subcontracting. In that regard, in June 2014, we found subcontract data contained in USASpending.gov to be unverifiable because agencies frequently did not maintain the records necessary to verify the information reported by the awardees. Our analysis found that in fiscal year 2013 DOD reported obligating approximately $129.5 billion on contracts that were of the type and dollar value that would potentially be subject to Section 802, and that had a subcontracting plan, while USAID and State reported obligating approximately $817 million and $222 million, respectively, for contracts that would be subject to the criteria applicable to those agencies. USAID, State, and DOD officials believe that few of their prime contractors subcontract 70 percent or more of the total cost of work performed on the types of contracts subject to Section 802.
For example, State acquisition officials believe that such a high level of subcontracting is more likely to be found on State's contracts for construction, which often use fixed-price contracts and, therefore, would not be subject to the requirements of Section 802. In lieu of developing new data or reporting requirements to assess whether its acquisition personnel are properly implementing Section 802, USAID acquisition officials noted that they will rely on routine, on-site procurement system and contract reviews to evaluate contracting offices' compliance with Section 802. An official from USAID's Office of Acquisition and Assistance, Evaluation Division stated that they completed three procurement management reviews at international locations in October 2014 during which pass-through contracts were discussed, but were not found to be an issue. USAID officials noted that they intend to revise their guidance governing procurement system reviews by the end of 2014. Similarly, both State and DOD officials indicated that they intend to rely on procurement management reviews to determine if contracting officers are properly implementing Section 802, but noted that associated review guidance has not been updated. At the same time, our review of current USAID, State, and DOD procurement management review instructions and checklists found that they do not reflect all Section 802 requirements. State officials believe that pass-through contracting issues will be addressed because its officials use the department's domestic contract file table of contents as a checklist, which includes sections on solicitation and pre-award documentation, to conduct contract file reviews. State does not currently plan to revise the guidance to specifically reflect Section 802 requirements. DOD acquisition officials noted that they have not yet determined what changes to guidance, if any, would be required. Without specifically incorporating these requirements into review guidance, agencies may not effectively oversee compliance. Over the past 8 years, a number of legislative and regulatory changes have been enacted to enhance the government's insight into the billions of dollars that prime contractors award to subcontractors. Initially, the focus of these changes had been on requiring prime contractors to notify contracting officers when they intend to rely largely on subcontractors. Section 802 changed that paradigm by requiring contracting officers, when they receive such a notification, to consider alternative arrangements, such as directly contracting with subcontractors, to make a written determination that the accepted approach is in the best interest of the government, and to document the basis for such a determination. Neither USAID nor State has provided its contracting officers additional guidance to help them implement these new requirements. DOD, having the highest level of obligations for contracts that may be potentially subject to Section 802 requirements, has not taken any actions and is waiting for the issuance of a final FAR rule before deciding what, if any, revisions to its guidance are needed. USAID, State, and DOD intend to use procurement management reviews to monitor implementation of Section 802, but have not updated the related review processes and guidance. Without data on the extent to which agencies make use of contracts that rely largely on subcontractors, it is essential that agencies provide guidance to assist their contracting officers in implementing legislative requirements and develop oversight processes to assess their agencies' compliance. DOD's decision to wait until the FAR is revised next year rather than proactively initiating actions has increased the risk that its contracting personnel are not currently acting in compliance with the law and may also have missed opportunities to establish more beneficial contracting arrangements.
While State and USAID obligate less for contracts that may be subject to Section 802, their more limited exposure does not obviate the need to provide guidance or management oversight. Federal internal control standards highlight the need for agencies to establish processes and procedures to ensure compliance with statutory provisions. We recommend that the Secretary of Defense, Secretary of State, and Administrator of USAID take the following two actions to help ensure contracting officers carry out the requirements of Section 802: (1) issue guidance to assist contracting officers by identifying approaches for, or examples of, how to assess alternative contracting approaches, including the feasibility of contracting directly with proposed subcontractors, and how to document a determination that the approach selected is in the best interests of the government; and (2) revise the processes and guidance governing management reviews of procurements to ensure that such reviews assess whether contracting officers are complying with the provisions of Section 802. We provided a draft of this report to DOD, State, and USAID for comment. DOD and State concurred with our recommendations, but USAID did not agree that additional guidance was necessary to assist contracting officers when assessing alternative contracting approaches and noted current management reviews would be used to identify subcontracting issues. The agencies' comments are summarized below. Written comments from DOD, State, and USAID are reproduced in appendixes II, III, and IV, respectively. In response to the recommendation to identify approaches for, or examples of, how to assess alternative contracting approaches, and to document a determination that the approach selected is in the best interests of the government, DOD and State concurred.
DOD, by way of explaining its delay in issuing guidance, noted that it was required to adhere to the regulatory rulemaking process and that the final FAR rule to implement Section 802 is expected to be issued in March 2015. Section 802 provided flexibility for DOD to issue guidance and regulations as may be necessary to ensure contracting officers take specific actions for certain solicitations, and we continue to believe DOD should do so expeditiously. State noted that it will issue guidance to assist contracting officers in assessing alternative contracting approaches and documenting the determination that the approach selected is in the best interest of the government. USAID did not believe it was necessary to issue additional guidance, stating that the FAR has provided specific actions for a contracting officer to follow to ensure that they identify and adequately reconsider proposed contracts and that providing additional guidance may limit its contracting officers' discretion. As we noted in the report, USAID has restated these actions in a policy directive, but it has not provided contracting officers with guidance on how to perform these new requirements. Specifically, USAID has not provided examples of how to conduct an analysis of alternative acquisition approaches or determine the feasibility of contracting directly with proposed subcontractors, described what documentation is necessary, or identified where the required contracting officer's determination should be recorded. Rather than limit a contracting officer's discretion, we believe that more specific guidance will help contracting officers comply with these new requirements, improve oversight, and align more closely with federal internal control standards. DOD and State concurred with our recommendation to revise their processes and guidance governing management reviews of procurements to ensure that such reviews assess whether contracting officers are complying with the provisions of Section 802.
DOD indicated it will provide or revise and update guidance governing management reviews to ensure that such reviews assess whether contracting officers are complying with the provisions of Section 802. In addition, once the final FAR rule is issued, DOD noted it will assess whether it needs to include supplemental information in its Procedures, Guidance, and Information to ensure compliance. State noted that it will also revise its management review guidance to include compliance with Section 802. USAID did not specifically state whether it concurred with the recommendation, noting that it will be reviewing random files during procurement system reviews to see if contracting officers are taking appropriate steps in instances where an offeror proposes to award subcontracts for more than 70 percent of the total award. We agree that such an approach could be an appropriate step when conducting a procurement management review, but, as noted in the report, USAID has not revised the processes and guidance governing management reviews of procurements to do so. Federal internal control standards state that guidance helps to ensure that management directives are carried out and actions are taken to address risks and, accordingly, we continue to believe that USAID should revise its procurement management guidance. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of State, and the Administrator, USAID, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. 
Section 802 of the National Defense Authorization Act (NDAA) for Fiscal Year 2013 directed the Secretary of Defense, Secretary of State, and the Administrator of the United States Agency for International Development (USAID) to issue guidance and regulations as may be necessary to ensure that when an offeror notifies the contracting officer of its intent to subcontract more than 70 percent of the total cost of the work to be performed on certain contracts or orders, contracting officers consider alternative contracting arrangements and make a written determination that the contracting approach selected is in the best interest of the government and document the basis for the determination. The conference report that accompanied the act mandated GAO to report on the implementation of this provision. We assessed the extent to which the Department of Defense (DOD), Department of State (State), and USAID revised their guidance and regulations to address pass-through contracting issues consistent with Section 802. To conduct our work, we compared the provisions of Section 802 to current DOD, State and USAID acquisition policies, guidance, and regulations. Specifically, we assessed whether these agencies issued policies, guidance, and regulations to ensure for certain solicitations when an offeror informs the agency of its intention to award subcontracts for more than 70 percent of the total cost of work to be performed that the contracting officer (1) considers the availability of alternative contracting arrangements and the feasibility of contracting directly with a subcontractor that will perform the bulk of the work, (2) makes a written determination that the contracting approach selected is in the best interest of the government, and (3) documents the basis for such determination. 
We interviewed acquisition officials at DOD’s Office of Defense Procurement and Acquisition Policy, the military departments, Defense Contract Management Agency (DCMA), and Defense Contract Audit Agency (DCAA); State’s Offices of Acquisition Management and Office of the Procurement Executive; and USAID’s Office of Acquisitions and Assistance in Washington, D.C. and Kabul, Afghanistan, to understand the steps taken and future plans for completing the required guidance and regulation. We also interviewed Federal Acquisition Regulation Council representatives to understand content and timeframes for a government-wide regulation being developed in response to the Section 802 statutory requirements. To determine how agencies identify or monitor pass-through contracts, we interviewed acquisition officials at DOD, State, and USAID and reviewed guidance available to contracting officers and supporting agencies, such as DCAA and DCMA. We used the government’s procurement database—Federal Procurement Data System-Next Generation (FPDS-NG)—to identify DOD, State, and USAID total obligations for fiscal year 2013 on contracts that by type and value could be subject to Section 802 requirements and reported having a plan to use subcontractors. For DOD, these are contracts where obligations exceed $700,000 and exclude certain fixed-price contracts. For State and USAID, contracts include those where obligations exceeded $150,000 and are cost-type contracts. To assess the reliability of FPDS-NG’s data, we (1) performed electronic testing for obvious errors in accuracy and completeness; and (2) reviewed related documentation. We found the prime contract obligation data sufficiently reliable for the purposes of this report. We were unable to use subaward information in USASpending.gov to reliably identify whether these obligations were made to contractors whose subcontracting costs comprised over 70 percent of the total costs of the contract. 
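The contract-screening criteria described above (for DOD, obligations over $700,000 excluding certain fixed-price contracts; for State and USAID, obligations over $150,000 on cost-type contracts, all with a subcontracting plan) can be sketched as a filter over procurement records. The records below are hypothetical stand-ins for FPDS-NG data, and the fixed-price exclusion is simplified for illustration:

```python
# Sketch: screen contract records for potential Section 802 applicability.
# Records are hypothetical stand-ins for FPDS-NG data; the DOD fixed-price
# exclusion is simplified ("certain fixed-price contracts" in the report).

def potentially_subject(rec: dict) -> bool:
    """Apply the agency-specific screening criteria described in the text."""
    if not rec["has_subcontract_plan"]:
        return False
    if rec["agency"] == "DOD":
        return rec["obligations"] > 700_000 and rec["contract_type"] != "fixed-price"
    if rec["agency"] in ("State", "USAID"):
        return rec["obligations"] > 150_000 and rec["contract_type"] == "cost-type"
    return False

records = [
    {"agency": "DOD",   "obligations": 900_000, "contract_type": "cost-type",   "has_subcontract_plan": True},
    {"agency": "DOD",   "obligations": 900_000, "contract_type": "fixed-price", "has_subcontract_plan": True},
    {"agency": "State", "obligations": 200_000, "contract_type": "cost-type",   "has_subcontract_plan": True},
    {"agency": "USAID", "obligations": 100_000, "contract_type": "cost-type",   "has_subcontract_plan": True},
]

flagged = [r for r in records if potentially_subject(r)]
print(len(flagged))   # only the first DOD record and the State record qualify
```

Note that, as the report explains, this screen identifies only contracts that could be subject to Section 802; whether a given contract actually involved subcontracting of 70 percent or more of total cost cannot be determined from these systems.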
As noted in our June 2014 report, federal reporting systems used to populate USASpending.gov do not identify all contract awards, and the information is largely inconsistent with agency records or unverifiable. We conducted this performance audit from June 2014 to December 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Penny Berrier (Assistant Director), Marycella Cortes, Thomas Twambly, Julia Kennon, Alyssa Weir, Danielle Greene, and Jena Sinkfield made key contributions to this report.
DOD, State, and USAID collectively spent approximately $322 billion on goods and services in fiscal year 2013. Nearly two-thirds of this dollar amount was awarded to prime contractors reportedly having a plan for using subcontractors. Concerns remain that the government could overpay contractors that provide no, or little, added value for work performed by lower-tier subcontractors. Section 802 of the NDAA for Fiscal Year 2013 mandated DOD, State, and USAID to issue guidance and regulations as necessary to ensure that contracting officers take additional steps prior to awarding pass-through contracts. The accompanying conference report mandated that GAO evaluate the implementation of these requirements. This report assesses the extent to which DOD, State, and USAID issued guidance and regulations consistent with Section 802. GAO reviewed policies, guidance, and regulations at the three agencies and interviewed acquisition officials. Congress required the Department of Defense (DOD), the Department of State (State), and the United States Agency for International Development (USAID) to issue guidance and regulations as necessary to ensure that contracting officers complete additional analyses prior to awarding pass-through contracts—contracts meeting certain criteria and in which prime contractors plan to subcontract 70 percent or more of the total cost of work to be performed—by July 2013. (See figure.) DOD, State, and USAID varied in their implementation of Section 802. Specifically, GAO's analysis of the agencies' policies and regulations found the following: USAID issued a policy directive in June 2013 restating Section 802 requirements and is updating checklists used by contracting officers. State issued a procurement bulletin in July 2014 that restated Section 802 requirements but has not taken further steps.
Neither USAID nor State has provided its contracting officers additional information to help them implement these new requirements, such as by identifying how to assess alternative contracting arrangements or how to document their decisions. DOD has not taken any actions and is waiting for revisions to the Federal Acquisition Regulation—expected to be completed by March 2015—before deciding what, if any, changes to its guidance are needed. As of November 2014, none of the agencies have updated their management review processes to reflect Section 802 requirements. Federal government internal control standards state that control activities, such as policies and procedures, help to ensure that management directives are carried out and actions are taken to address risk. The lack of guidance and updated management review processes limits the agencies' ability to minimize the potential risk of paying excessive pass-through costs. To help ensure contracting officers carry out Section 802 requirements, GAO recommends that DOD, State, and USAID take two actions: issue guidance to help contracting officers perform the additional steps required, and revise management review processes and guidance to verify implementation. DOD and State agreed with GAO's recommendations but USAID did not, stating that additional guidance might limit its contracting officers' discretion. GAO maintains that both recommended actions are still warranted for USAID.
DHS’s primary strategic planning effort in recent years has been the QHSR. DHS approached the 9/11 Commission Act requirement for a quadrennial homeland security review in three phases. In the first phase, DHS defined the nation’s homeland security interests, identified the critical homeland security missions, and developed a strategic approach to those missions by laying out the principal goals, objectives, and strategic outcomes for the mission areas. DHS reported on the results of this effort in the February 2010 QHSR report in which the department identified 5 homeland security missions, 14 associated goals, and 43 objectives. The QHSR report also identified threats and challenges confronting U.S. homeland security, strategic objectives for strengthening the homeland security enterprise, and federal agencies’ roles and responsibilities for homeland security. The QHSR identified five homeland security missions— (1) Preventing Terrorism and Enhancing Security, (2) Securing and Managing Our Borders, (3) Enforcing and Administering Our Immigration Laws, (4) Safeguarding and Securing Cyberspace, and (5) Ensuring Resilience to Disasters—and goals and objectives to be achieved within each mission. A sixth category of DHS activities— Providing Essential Support to National and Economic Security—was added in the fiscal year 2012 budget request but was not included in the 2010 QHSR report. In the second phase—the BUR—DHS identified its component agencies’ activities, aligned those activities with the QHSR missions and goals, and made recommendations for improving the department’s organizational alignment and business processes. DHS reported on the results of this second phase in the July 2010 BUR report. In the third phase DHS developed its budget plan necessary to execute the QHSR missions. 
DHS presented this budget plan in the President’s fiscal year 2012 budget request, issued February 14, 2011, and the accompanying Fiscal Year 2012-2016 Future Years Homeland Security Program (FYHSP), issued in May 2011. In December 2010, we issued a report on the extent to which the QHSR addressed the 9/11 Commission Act’s required reporting elements. We reported that of the nine 9/11 Commission Act reporting elements for the QHSR, DHS addressed three and partially addressed six. Elements DHS addressed included a description of homeland security threats and an explanation of underlying assumptions for the QHSR report. Elements addressed in part included a prioritized list of homeland security missions, an assessment of the alignment of DHS with the QHSR missions, and discussions of cooperation between the federal government and state, local, and tribal governments. In September 2011, we reported on the extent to which DHS consulted with stakeholders in developing the QHSR. DHS solicited input from various stakeholder groups in conducting the first QHSR, but DHS officials, stakeholders GAO contacted, and other reviewers of the QHSR noted concerns with time frames provided for stakeholder consultations and outreach to nonfederal stakeholders. DHS consulted with stakeholders—federal agencies; department and component officials; state, local, and tribal governments; the private sector; academics; and policy experts— through various mechanisms, such as the solicitation of papers to help frame the QHSR and a web-based discussion forum. DHS and these stakeholders identified benefits from these consultations, such as DHS receiving varied perspectives. However, stakeholders also identified challenges in the consultation process. For example: Sixteen of 63 stakeholders who provided comments to GAO noted concerns about the limited time frames for providing input into the QHSR or BUR. 
Nine other stakeholders commented that DHS consultations with nonfederal stakeholders, such as state, local, and private-sector entities, could be enhanced by including more of these stakeholders in QHSR consultations. Reports on the QHSR by the National Academy of Public Administration, which administered DHS’s web-based discussion forum, and a DHS advisory committee comprised of nonfederal representatives noted that DHS could provide more time and strengthen nonfederal outreach during stakeholder consultations. By providing more time for obtaining feedback and examining mechanisms to obtain nonfederal stakeholders’ input, DHS could strengthen its management of stakeholder consultations and be better positioned to review and incorporate, as appropriate, stakeholders’ input during future reviews. We recommended that DHS provide more time for consulting with stakeholders during the QHSR process and examine additional mechanisms for obtaining input from nonfederal stakeholders during the QHSR process, such as whether panels of state, local, and tribal government officials or components’ existing advisory or other groups could be useful. DHS concurred and reported that it will endeavor to incorporate increased opportunities for time and meaningful stakeholder engagement and will examine the use of panels of nonfederal stakeholders for the next QHSR. The 9/11 Commission Act called for DHS to prioritize homeland security missions in the QHSR. As we reported in December 2010, DHS identified five homeland security missions in the QHSR, but did not fully address the 9/11 Commission Act reporting element because the department did not prioritize the missions. According to DHS officials, the five missions listed in the QHSR report have equal priority—no one mission is given greater priority than another. 
Moreover, they stated that in selecting these five missions from among the many potential homeland security mission areas upon which DHS could focus its efforts, the department chose the mission areas that are its highest-priority homeland security concerns. Risk management has been widely supported by Congress and DHS as a management approach for homeland security, enhancing the department’s ability to make informed decisions and prioritize resource investments. In September 2011, we also reported that in the 2010 QHSR report, DHS identified threats confronting homeland security, such as high-consequence weapons of mass destruction and illicit trafficking, but did not conduct a national risk assessment for the QHSR. DHS officials stated that at the time DHS conducted the QHSR, DHS did not have a well-developed methodology or the analytical resources to complete a national risk assessment that would include likelihood and consequence assessments—key elements of a national risk assessment. The QHSR terms of reference, which established the QHSR process, also stated that at the time the QHSR was launched, DHS lacked a process and a methodology for consistently and defensibly assessing risk at a national level and using the results of such an assessment to drive strategic prioritization and resource decisions. In recognition of a need to develop a national risk assessment, DHS created a study group as part of the QHSR process that developed a national risk assessment methodology. DHS officials plan to implement a national risk assessment in advance of the next QHSR, which DHS anticipates conducting in fiscal year 2013. Consistent with DHS’s plans, we reported that a national risk assessment conducted in advance of the next QHSR could assist DHS in developing QHSR missions that target homeland security risks and could allow DHS to demonstrate how it is reducing risk across multiple hazards.
DHS considered various factors in identifying high-priority BUR initiatives for implementation in fiscal year 2012 but did not include risk information as one of these factors as called for in our prior work and DHS’s risk management guidance. Through the BUR, DHS identified 43 initiatives aligned with the QHSR mission areas to help strengthen DHS’s activities and serve as mechanisms for implementing those mission areas (see app. I for a complete list). According to DHS officials, the department could not implement all of these initiatives in fiscal year 2012 because of, among other things, resource constraints and organizational or legislative changes that would need to be made to implement some of the initiatives. In identifying which BUR initiatives to prioritize for implementation in fiscal year 2012, DHS leadership considered (1) “importance,” that is, how soon the initiative needed to be implemented; (2) “maturity,” that is, how soon the initiative could be implemented; and (3) “priority,” that is, whether the initiative enhanced secretarial or presidential priorities. Risk information was not included as an element in any of these three criteria, according to DHS officials, because of differences among the initiatives that made it difficult to compare risks across them, among other things. However, DHS officials stated that there are benefits to considering risk information in resource allocation decisions. Consideration of risk information during future implementation efforts could help strengthen DHS’s prioritization of mechanisms for implementing the QHSR, including assisting in determinations of which initiatives should be implemented in the short or longer term. In our September 2011 report, we recommended that DHS examine how risk information could be used in prioritizing future QHSR initiatives. 
DHS concurred and reported that DHS intends to conduct risk analysis specific to the QHSR in advance of the next review and will use the analysis as an input into decision making related to implementing the QHSR. Further, in September 2011, we reported on progress made by DHS in implementing its homeland security missions since 9/11. As part of this work, we identified various themes that affected DHS’s implementation efforts. One of these themes was DHS’s efforts to strategically manage risk across the department. We reported that DHS made important progress in assessing and analyzing risk across sectors. For example, in January 2009 DHS published its Integrated Risk Management Framework, which, among other things, calls for DHS to use risk assessments to inform decision making. In May 2010, the Secretary issued a Policy Statement on Integrated Risk Management, calling for DHS and its partners to manage risks to the nation. We also reported that DHS had more work to do in using this information to inform planning and resource-allocation decisions. Our work shows that DHS has conducted risk assessments across a number of areas, but should strengthen the assessments and risk management process. For example: In June 2011, we reported that DHS and Health and Human Services could further strengthen coordination for chemical, biological, radiological, and nuclear (CBRN) risk assessments. Among other things, we recommended that DHS establish time frames and milestones to better ensure timely development and interagency agreement on written procedures for development of DHS’s CBRN risk assessments. DHS concurred and stated that the department had begun efforts to develop milestones and time frames for its strategic and implementation plans for interagency risk assessment development. In November 2011, we reported that the U.S. 
Coast Guard used its Maritime Security Risk Assessment Model at the national level to focus resources on the highest-priority targets, leading to Coast Guard operating efficiencies, but use at the local level for operational and tactical risk-management efforts has been limited by a lack of staff time, the complexity of the risk tool, and competing mission demands. Among other things, we recommended that the Coast Guard provide additional training for sector command staff and others involved in sector management and operations on how the model can be used as a risk-management tool to inform sector-level decision making. The Coast Guard concurred and stated that it will explore other opportunities to provide risk training to sector command staff, including online and webinar training opportunities. In November 2011, we reported that the Federal Emergency Management Agency (FEMA) used risk assessments to inform funding-allocation decisions for its port security grant program. However, we found that FEMA could further enhance its risk-analysis model and recommended incorporating the results of past security investments and refining other data inputs into the model. DHS concurred with the recommendation, but did not provide details on how it plans to implement it. In October 2009, we reported that TSA’s strategic plan to guide research, development, and deployment of passenger checkpoint screening technologies was not risk-based. Among other things, we recommended that DHS conduct a complete risk assessment related to TSA’s passenger screening program and incorporate the results into the program’s strategy. DHS concurred, and in July 2011 reported actions underway to address it, such as beginning to use a risk- management analysis process to analyze the effectiveness and efficiency of potential countermeasures and effect on the commercial aviation system. 
In September 2011, we reported that DHS established performance measures for most of the QHSR objectives and had plans to develop additional measures. Specifically, DHS established new performance measures, or linked existing measures, to 13 of 14 QHSR goals, and to 3 of 4 goals for the sixth category of DHS activities—Providing Essential Support to National and Economic Security. DHS reported these measures in its fiscal years 2010-2012 Annual Performance Report. For goals without measures, DHS officials told us that the department was developing performance measures and planned to publish them in future budget justifications to Congress. In September 2011, we also reported that DHS had not yet fully developed outcome-based measures for assessing progress and performance for many of its mission functions. We recognized that DHS faced inherent difficulties in developing performance goals and measures to address its unique mission and programs, such as in developing measures for the effectiveness of its efforts to prevent and deter terrorist attacks. While DHS had made progress in strengthening performance measurement, our work across the department has shown that a number of programs lacked outcome goals and measures, which may have hindered the department’s ability to effectively assess results or fully assess whether the department was using resources effectively and efficiently. For example, our work has shown that DHS did not have performance measures for assessing the effectiveness of key border security and immigration programs, to include: In September 2009, we reported that U.S. Customs and Border Protection (CBP) had invested $2.4 billion in tactical infrastructure (fencing, roads, and lighting) along the southwest border under the Secure Border Initiative—a multiyear, multibillion dollar program aimed at securing U.S. borders and reducing illegal immigration. 
However, DHS could not measure the effect of this investment in tactical infrastructure on border security. We recommended that DHS conduct an evaluation of the effect of tactical infrastructure on effective control of the border. DHS concurred with the recommendation and subsequently reported that the ongoing analysis is expected to be completed in February 2012. In August 2009, we reported that CBP had established three performance measures to report the results of checkpoint operations, which provided some insight into checkpoint activity. However, the measures did not indicate if checkpoints were operating efficiently and effectively, and data reporting and collection challenges hindered the use of results to inform Congress and the public on checkpoint performance. We recommended that CBP improve the measurement and reporting of checkpoint effectiveness. CBP agreed and, as of September 2011, reported plans to develop and better use data on checkpoint effectiveness. Further, we reported that U.S. Immigration and Customs Enforcement (ICE) and CBP did not have measures for assessing the performance of key immigration enforcement programs. For example, in April 2011, we reported that ICE did not have measures for its overstay enforcement efforts, and in May 2010 that CBP did not have measures for its alien smuggling investigative efforts, making it difficult for these agencies to determine progress made in these areas and evaluate possible improvements. We recommended that ICE and CBP develop performance measures for these two areas. They generally agreed and reported actions underway to develop these measures. In 2003, GAO designated the transformation of DHS as high risk because DHS had to transform 22 agencies—several with major management challenges—into one department, and failure to effectively address DHS’s management and mission risks could have serious consequences for U.S. national and economic security. 
This high-risk area includes challenges in strengthening DHS’s management functions—financial management, human capital, information technology, and acquisition management—the impact of those challenges on DHS’s mission implementation, and challenges in integrating management functions within and across the department and its components. Addressing these challenges would better position DHS to align resources to its strategic priorities, assess progress in meeting mission goals, enhance linkages within and across components, and improve the overall effectiveness and efficiency of the department. On the basis of our prior work, in September 2010, we identified and provided to DHS 31 key actions and outcomes that are critical to addressing the challenges within the department’s management functions and in integrating those functions across the department. These key actions and outcomes include, among others, validating required acquisition documents at major milestones in the acquisition review process; obtaining and then sustaining unqualified audit opinions for at least 2 consecutive years on the departmentwide financial statements while demonstrating measurable progress in reducing material weaknesses and significant deficiencies; and implementing its workforce strategy and linking workforce planning efforts to strategic and program- specific planning efforts to identify current and future human capital needs. In our February 2011 high-risk update, we reported that DHS had taken action to implement, transform, and strengthen its management functions, and had begun to demonstrate progress in addressing some of the actions and outcomes we identified within each management area. For example, we reported that the Secretary and Deputy Secretary of Homeland Security, and other senior officials, have demonstrated commitment and top leadership support to address the department’s management challenges. 
DHS also put in place common policies, procedures, and systems within individual management functions, such as human capital, that help to integrate its component agencies. For example, DHS revised its acquisition management oversight policies to include more detailed guidance to inform departmental acquisition decision making; strengthened its enterprise architecture, or blueprint to guide information technology acquisitions, and improved its policies and procedures for investment management; developed corrective action plans for its financial management weaknesses and, for the first time since its inception, earned a qualified audit opinion on its fiscal year 2011 balance sheet; and issued its Workforce Strategy for Fiscal Years 2011-2016, which contains the department’s workforce goals, objectives, and performance measures for human capital management. Further, in January 2011, DHS provided us with its Integrated Strategy for High Risk Management, which summarized the department’s preliminary plans for addressing the high-risk area. Specifically, the strategy contained details on the implementation and transformation of DHS, such as corrective actions to address challenges within each management area, and officials responsible for implementing those corrective actions. DHS provided us with updates to this strategy in June and December 2011. We provided DHS with written feedback on the January 2011 strategy and the June update, and have worked with the department to monitor implementation efforts. We noted that both versions of the strategy were generally responsive to actions and outcomes we identified for the department to address the high-risk area. For example, DHS included a management integration plan containing information on initiatives to integrate its management functions across the department.
Specifically, DHS plans to establish a framework for managing investments across its components and management functions to strengthen integration within and across those functions, as well as to ensure that mission needs drive investment decisions. This framework seeks to enhance DHS resource decision making and oversight by creating new department-level councils to identify priorities and capability gaps, revising how DHS components and lines of business manage acquisition programs, and developing a common framework for monitoring and assessing implementation of investment decisions. These actions, if implemented effectively, should help to further and more effectively integrate the department and enhance DHS’s ability to implement its strategies. However, we noted in response to the June update that specific resources to implement planned corrective actions were not consistently identified, making it difficult to assess the extent to which DHS has the capacity to implement these actions. Additionally, for both versions, we noted that the department did not provide information on the underlying metrics or factors DHS used to rate its progress, making it difficult for us to assess DHS’s overall characterizations of progress. We are currently assessing the December 2011 update and plan to provide DHS with feedback shortly. Although DHS has made progress in strengthening and integrating its management functions, the department continues to face significant challenges affecting the department’s transformation efforts and its ability to meet its missions. In particular, challenges within acquisition, information technology, financial, and human capital management have resulted in performance problems and mission delays. For example, DHS does not yet have enough skilled personnel to carry out activities in some key programmatic and management areas, such as for acquisition management. 
DHS also has not yet implemented an integrated financial management system, impeding its ability to have ready access to information to inform decision making, and has been unable to obtain a clean audit opinion on the audit of its consolidated financial statements since its establishment. Going forward, DHS needs to implement its Integrated Strategy for High Risk Management, and continue its efforts to (1) identify and acquire resources needed to achieve key actions and outcomes; (2) implement a program to independently monitor and validate corrective measures; and (3) show measurable, sustainable progress in implementing corrective actions and achieving key outcomes. Demonstrated, sustained progress in all of these areas will help DHS strengthen and integrate management functions within and across the department and its components. Chairman McCaul, Ranking Member Keating, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For questions about this statement, please contact David C. Maurer at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Rebecca Gambler, Acting Director; Ben Atwater; Scott Behen; Janay Sam; Jean Orland; and Justin Dunleavy. Key contributors for the previous work that this testimony is based on are listed within each individual product. Coast Guard: Security Risk Model Meets DHS Criteria, but More Training Could Enhance Its Use for Managing Programs and Operations. GAO-12-14. Washington, D.C.: November 17, 2011. Port Security Grant Program: Risk Model, Grant Management, and Effectiveness Measures Could Be Strengthened. GAO-12-47. Washington, D.C.: November 17, 2011.
Quadrennial Homeland Security Review: Enhanced Stakeholder Consultation and Use of Risk Information Could Strengthen Future Reviews. GAO-11-873. Washington, D.C.: September 15, 2011. Department of Homeland Security: Progress Made and Work Remaining in Implementing Homeland Security Missions 10 Years after 9/11. GAO-11-881. Washington, D.C.: September 7, 2011. National Preparedness: DHS and HHS Can Further Strengthen Coordination for Chemical, Biological, Radiological, and Nuclear Risk Assessments. GAO-11-606. Washington, D.C.: June 21, 2011. Overstay Enforcement: Additional Mechanisms for Collecting, Assessing, and Sharing Data Could Strengthen DHS’s Efforts but Would Have Costs. GAO-11-411. Washington, D.C.: April 15, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Quadrennial Homeland Security Review: 2010 Reports Addressed Many Required Elements, but Budget Planning Not Yet Completed. GAO-11-153R. Washington, D.C.: December 16, 2010. Alien Smuggling: DHS Needs to Better Leverage Investigative Resources to Measure Program Performance along the Southwest Border. GAO-10-328. Washington, D.C.: May 24, 2010. Aviation Security: DHS and TSA Have Researched, Developed, and Begun Deploying Passenger Checkpoint Screening Technologies, but Continue to Face Challenges. GAO-10-128. Washington, D.C.: October 7, 2009. Secure Border Initiative: Technology Deployment Delays Persist and the Impact of Border Fencing Has Not Been Assessed. GAO-09-1013T. Washington, D.C.: September 17, 2009. Border Patrol: Checkpoints Contribute to Border Patrol’s Mission, but More Consistent Data Collection and Performance Measurement Could Improve Effectiveness. GAO-09-824. Washington, D.C.: August 31, 2009. Transportation Security: Comprehensive Risk Assessments and Stronger Internal Controls Needed to Help Inform TSA Resource Allocation. GAO-09-492. Washington, D.C.: March 27, 2009.
Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Commission Act) requires that beginning in fiscal year 2009 and every 4 years thereafter the Department of Homeland Security (DHS) conduct a review that provides a comprehensive examination of the homeland security strategy of the United States. In February 2010, DHS issued its first Quadrennial Homeland Security Review (QHSR) report, outlining a strategic framework for homeland security. In July 2010 DHS issued a report on the results of its Bottom-Up Review (BUR), a departmentwide assessment to implement the QHSR strategy by aligning DHS’s programmatic activities, such as inspecting cargo at ports of entry, and its organizational structure with the missions and goals identified in the QHSR. This testimony addresses DHS’s efforts to (1) strategically plan its homeland security missions through the QHSR, (2) set strategic priorities and measure performance, and (3) build a unified department. This testimony is based on GAO reports issued in December 2010, February 2011, and September 2011. DHS’s primary strategic planning effort in recent years has been the QHSR. In September 2011, GAO reported on the extent to which DHS consulted with stakeholders in developing the QHSR. DHS solicited input from various stakeholder groups in conducting the first QHSR, but DHS officials, several stakeholders GAO contacted, and other reviewers of the QHSR noted concerns with time frames provided for stakeholder consultations and outreach to nonfederal stakeholders. Specifically, DHS consulted with stakeholders—federal agencies; department and component officials; state, local, and tribal governments; the private sector; academics; and policy experts—through various mechanisms, such as the solicitation of papers to help frame the QHSR. DHS and these stakeholders identified benefits from these consultations, such as DHS receiving varied perspectives. 
However, stakeholders also identified challenges in the consultation process, such as concerns about the limited time frames for providing input into the QHSR or BUR and the need to examine additional mechanisms for including more nonfederal stakeholders in consultations. By providing more time for obtaining feedback and examining mechanisms to obtain nonfederal stakeholders’ input, DHS could strengthen its management of stakeholder consultations and be better positioned to review and incorporate, as appropriate, stakeholders’ input during future reviews. DHS considered various factors in identifying high-priority BUR initiatives for implementation in fiscal year 2012 but did not include risk information as one of these factors, as called for in GAO’s prior work and DHS’s risk-management guidance. Through the BUR, DHS identified 43 initiatives aligned with the QHSR mission areas to serve as mechanisms for implementing those mission areas. According to DHS officials, DHS did not consider risk information in prioritizing initiatives because of differences among the initiatives that made it difficult to compare risks across them, among other things. In September 2011, GAO reported that consideration of risk information during future implementation efforts could help strengthen DHS’s prioritization of mechanisms for implementing the QHSR. Further, GAO reported that DHS established performance measures for most of the QHSR objectives and had plans to develop additional measures. However, with regard to specific programs, GAO’s work has shown that a number of programs and efforts lack outcome goals and measures, hindering the department’s ability to effectively assess results. In 2003, GAO designated the transformation of DHS as high risk because DHS had to transform 22 agencies—several with major management challenges—into one department, and failure to effectively address DHS’s management and mission risks could have serious consequences for U.S. 
national and economic security. DHS has taken action to implement, transform, and strengthen its management functions, such as developing a strategy for addressing this high-risk area and putting in place common policies, procedures, and systems within individual management functions, such as human capital, that help to integrate its component agencies. However, DHS needs to demonstrate measurable, sustainable progress in implementing its strategy and corrective actions to address its management challenges. GAO made recommendations in prior reports for DHS to, among other things, provide more time for consulting with stakeholders during the QHSR process, examine additional mechanisms for obtaining input from nonfederal stakeholders, and examine how risk information could be used in prioritizing future QHSR initiatives. DHS concurred and has actions planned or underway to address them.
Since people began traveling in pressurized, climate-controlled aircraft more than 40 years ago, questions have arisen about the quality of air inside aircraft cabins and its effect on the health of passengers and cabin crews. In addition, the number of people traveling by commercial aircraft has increased dramatically over the years, with more than 600 million passengers flown by U.S. carriers in 2002 alone. Despite a downturn in air travel following the events of September 11, 2001, FAA expects demand to recover and then continue a long-term trend of 3.6 percent annual growth. As air travel has become more accessible, the flying public mirrors the general population more closely than in years past. Therefore, it includes more young and elderly passengers who can be more susceptible to potential health risks associated with air travel. This diverse group of passengers, as well as the cabin crew, experiences an environment in the aircraft cabin that in some ways is similar to that of homes and buildings but in other ways is distinctly different. The National Research Council (the Council)—the principal operating agency of the National Academy of Sciences—has issued two reports at the request of Congress on the air quality in aircraft cabins, one in 1986 and another in 2001. The 2001 Council report notes that the aircraft cabin is a unique environment in which the occupants are densely confined in a pressurized space. The report goes on to note that airline passengers encounter environmental factors that include low humidity, reduced air pressure, and potential exposure to air contaminants, including ozone, carbon monoxide, pesticides, various organic chemicals, and biological agents that can have serious health effects. 
The report concluded that there are still many unanswered questions about how these factors affect cabin occupants’ health and comfort and about the frequency and severity of incidents in which heated oils or hydraulic fluids release contaminants into the cabin ventilation system. Figure 1 shows the passenger cabin of a commercial aircraft. As depicted in figure 2, supplying air to modern jet airliner cabins is a complex process that varies somewhat among airplane models but has essential characteristics that are shared by most airliners. Basically, some of the outside air that enters the aircraft engines is diverted and processed for use in the cabin in order to achieve an air pressure and temperature closer to those experienced on the earth’s surface. FAA requires that aircraft be designed to maintain a cabin pressure equivalent to that at an elevation of no more than 8,000 feet, which is similar to the elevation of Mexico City (7,500 ft.). Nevertheless, the air pressures inside aircraft cabins are much higher than the extremely low outside air pressures at normal cruising altitudes of 25,000 to 40,000 feet. After flowing through the engines, the air enters an intricate system of cooling devices and ducts and is distributed throughout the cabin and cockpit. Airlines that fly in areas where ozone levels are high are required to take steps to ensure that ozone levels do not exceed prescribed standards (e.g., by having a device that converts the ozone pollutant into oxygen before the air enters the cabin and cockpit). The Council reported that unacceptably high ozone levels can occur in passenger cabins of commercial aircraft in the absence of effective controls. On most modern aircraft, an average of about 56 percent of the outside air supplied to the cabin is vented out of the aircraft through valves that help regulate cabin pressure. 
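The pressure gap described above can be approximated with the standard barometric formula. The sketch below is an illustration only, using the International Standard Atmosphere model (an assumption of this example, not a formula from the report or any FAA standard), to compare the pressure at the 8,000-foot cabin-altitude limit with the ambient pressure at a typical 35,000-foot cruise altitude.

```python
# Illustrative only: International Standard Atmosphere (troposphere) model.
# Sea-level pressure 101.325 kPa; formula valid below roughly 36,000 ft.
def isa_pressure_kpa(altitude_ft: float) -> float:
    """Approximate static air pressure (kPa) at a given altitude."""
    meters = altitude_ft * 0.3048
    return 101.325 * (1 - 2.25577e-5 * meters) ** 5.25588

cabin = isa_pressure_kpa(8_000)    # FAA cabin-altitude ceiling
ambient = isa_pressure_kpa(35_000) # typical cruise altitude

print(f"cabin-equivalent pressure: {cabin:.1f} kPa")
print(f"ambient pressure at 35,000 ft: {ambient:.1f} kPa")
```

Under this model the cabin-equivalent pressure comes out near 75 kPa, roughly three times the ambient pressure at cruise altitude, which illustrates why the cabin must be actively pressurized from engine bleed air.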
The remaining air is then recirculated through the cabin; this recirculation allows the engines to use less fuel for air supply and pressurization. Besides reducing fuel use, recirculation also provides the benefits of higher cabin humidity, improved airflow patterns, and minimized temperature gradients. On most large aircraft, the recirculated air typically passes through filters that are designed to remove harmful particulates, such as viruses and bacteria. FAA requires that aircraft ventilation systems for aircraft designs certified after June 1996 be designed to supply at least 10 cubic feet per minute of outside air per person under standard operating conditions. This compares with the standard minimum rate of 15 cubic feet per minute per person for buildings recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). However, according to FAA officials, there is currently no operational standard for cabin ventilation rate, and it has yet to be determined if it is appropriate to compare building and aircraft ventilation rates because outside air at altitude is very clean, while air sources for buildings are often contaminated by pollution. Furthermore, in rare instances, oil leaks or other engine malfunctions can cause contaminants such as carbon monoxide to be released into the cabin ventilation system. The 2001 Council report noted that questions about the frequency and significance of such incidents remain unanswered. In February 2002, FAA published a report that discussed many of the issues in the Council report, including an estimate of 416 air contaminant events (or 2.2 events per 1,000,000 aircraft hours) that may have taken place in commercial transports within the United States between January 1978 and December 1999. FAA is responsible for setting design standards for aircraft ventilation systems. 
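The FAA event rate cited above is a simple ratio, and the two published numbers can be cross-checked against each other. The back-of-the-envelope sketch below derives the fleet exposure implied by FAA's figures; the implied-hours number is computed here for illustration, not taken from the report.

```python
# Cross-check of FAA's figures: 416 suspected air contaminant events,
# reported as a rate of 2.2 events per 1,000,000 aircraft hours.
events = 416
rate_per_million_hours = 2.2

# Implied total fleet exposure over Jan 1978 - Dec 1999 (22 years).
implied_hours = events / rate_per_million_hours * 1_000_000
years = 22

print(f"implied fleet exposure: {implied_hours / 1e6:.0f} million aircraft hours")
print(f"about {implied_hours / years / 1e6:.1f} million aircraft hours per year")
```

The two published numbers together imply roughly 189 million aircraft hours of fleet exposure over the 22-year period, which is the kind of consistency check that can catch a transcription error in either figure.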
To fulfill its responsibilities, FAA requires that manufacturers design and build their large commercial airplanes to meet specific engineering standards, which limit the amounts of certain air quality contaminants (e.g., carbon monoxide, carbon dioxide, and ozone) that can be present in an airliner cabin. Manufacturers comply with these engineering standards in order to have FAA certify their airplanes as airworthy. However, while FAA monitors overall aircraft system operations, it does not require airlines to monitor cabin air quality to determine whether air quality during routine flight operations meets the agency’s engineering standards. According to FAA, the certification requirements combined with the monitoring of overall aircraft system operations are sufficient. However, the 2001 Council report stated that because of a lack of data it was not able to answer questions about the extent to which aircraft ventilation systems are operated properly. Passengers and flight attendants have had long-standing concerns about negative health effects from the quality of air in airliner cabins; however, research to date, including two reports by the Council, has not been able to definitively link the broad, nonspecific health complaints of passengers and flight attendants to possible causes, including cabin air quality. In its most recent report, the Council concluded that critical questions about the potential effect of cabin air quality on the health of cabin occupants remain unanswered because existing data are inadequate, and it recommended further research to narrow this knowledge gap. Passengers and flight attendants (cabin occupants) have long complained of acute and chronic health effects during and after flying. Many complaints made by cabin occupants are relatively minor, such as dry eyes and nose, or the onset of colds soon after flying, but others are much more serious. 
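As a concrete illustration of what screening in-flight measurements against such engineering limits might look like, the hypothetical sketch below flags readings that exceed limit values. The specific thresholds (50 ppm carbon monoxide, 5,000 ppm carbon dioxide, 0.25 ppm ozone) are assumptions drawn from commonly cited design limits, not values taken from this report, and the sample readings are invented.

```python
# Hypothetical screening of cabin air readings against design limits.
# Limit values below are illustrative assumptions (commonly cited
# figures), not taken from this report or any certification document.
LIMITS_PPM = {
    "carbon_monoxide": 50.0,   # assumed limit: 1 part in 20,000
    "carbon_dioxide": 5000.0,  # assumed limit: 0.5 percent
    "ozone": 0.25,             # assumed peak limit
}

def exceedances(readings_ppm: dict) -> list:
    """Return the contaminants whose readings exceed the assumed limits."""
    return [name for name, value in readings_ppm.items()
            if name in LIMITS_PPM and value > LIMITS_PPM[name]]

# Invented in-flight sample: CO2 slightly high, others within limits.
sample = {"carbon_monoxide": 2.0, "carbon_dioxide": 5600.0, "ozone": 0.05}
print(exceedances(sample))  # only carbon_dioxide exceeds its assumed limit
```

This is the kind of routine comparison that, per the Council, cannot currently be made fleet-wide because airlines are not required to collect the in-flight readings.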
According to the Association of Flight Attendants, its members have reported such health problems as respiratory diseases, nausea, dizziness, muscle tremors, nervous system damage, and memory loss. The association notes that these illnesses are consistent with exposure to carbon monoxide, pesticides, reduced oxygen levels, neurotoxins, and ozone gas, all of which can be present in the cabin itself or in cabin air supplies, depending on the flight. In addition, passengers with certain medical conditions can be at higher risk from the quality of cabin air than the general population due to air contaminants, lowered oxygen levels in the body (hypoxia), and changes in cabin pressure. Such medical conditions include limited lung capacity (e.g., asthma) and cardiovascular and circulatory disorders. Those who fly soon after surgery are particularly vulnerable to changes in cabin pressure. However, according to the Council report, many of the complaints made by cabin occupants are so broad and nonspecific that they could have many causes, and it is difficult to attribute them to a specific illness or syndrome. Although numerous studies have been conducted on cabin air quality issues, there are insufficient data to determine the nature and extent of cabin air quality’s effect on cabin occupants. Council reports published in 1986 and 2001 reviewed the literature on cabin air quality issues and concluded that the studies had not collected data in a systematic manner that would conclusively address many of the questions about potential exposures in aircraft cabins and their health effects. Both reports recommended actions for improving what is known about cabin air quality, including the need to collect better data on the potential effect of cabin air quality on passenger and cabin crew health. 
The 2001 report concluded that available data on air quality and its possible negative effects on cabin occupant health have left three critical outstanding questions unaddressed and that additional research is needed:

- Do current aircraft as operated comply with FAA design and operational limits for ventilation rate and for chemical contaminants, including ozone, carbon monoxide, and carbon dioxide, and are the existing air quality regulations adequate to protect health and ensure the comfort of passengers and cabin crew?
- What is the association, if any, between exposure to cabin air contaminants and reports or observations of adverse health effects in cabin crew and passengers?
- What are the frequency and severity of incidents when air contaminants enter the cabin due to nonroutine conditions such as oil leaks or other engine malfunctions?

Following the 1986 report, the Department of Transportation sponsored a study to evaluate the health risks posed by exposures to contaminants on randomly selected flights. In addition, various researchers conducted a number of studies of cabin air quality issues, including eight investigations of biological agents, such as viruses and bacteria, on commercial aircraft. However, these and other studies were not able to link the broad, nonspecific health complaints that passengers and cabin crew continued to make to possible causes, including cabin air quality. Recognizing the need for more data on the issue, Congress directed FAA, in AIR-21, to request that the Council perform another independent examination of cabin air quality. The Council’s report, issued in 2001, concluded that when operated properly, the environmental control system should provide an ample supply of air to pressurize the cabin, meet general comfort conditions, and dilute or reduce normally occurring odors, heat, and contaminants. 
However, the Council also found that the design standard for ventilation rates in aircraft required by FAA was from less than one-half to two-thirds of the rate recommended by ASHRAE for buildings. The Council noted that whether the building ventilation standard is appropriate for the aircraft cabin environment has not been established. Studies have shown that low ventilation rates in buildings have contributed to “sick building syndrome,” which causes fatigue, headache, and throat irritation. However, FAA officials told us that a sick building syndrome comparison is not applicable, in part because HEPA (high-efficiency particulate air) filtration results in much cleaner recirculated air than in a building environment. The 2001 Council report also found that although the environmental control system in aircraft is designed to provide adequate air pressure and minimize the concentration of contaminants in the cabin, passengers and cabin crew are potentially exposed to air quality-related health risks. The Council was particularly concerned about two cabin air characteristics and suggested that they be given high priority for further investigation. The first is reduced oxygen partial pressure, which results from the lower air pressures present in aircraft cabins at cruise altitudes. Most healthy individuals are unaffected by reduced oxygen partial pressure, but people with health problems such as cardiopulmonary disease, as well as infants, can experience serious health effects from a lack of oxygen (e.g., respiratory stress). The other concern of the Council was elevated concentrations of ozone, which can occur at high cruise altitudes over certain areas of the earth, such as the Arctic. The Council reported that unacceptably high ozone levels could occur in passenger cabins of commercial aircraft in the absence of effective controls. 
FAA allows aircraft operators to maintain cabin ozone concentrations at or below prescribed limits through flight planning that avoids areas with ozone concentrations exceeding those limits or the installation of devices that convert ozone to oxygen. However, FAA does not have a process in place to ensure that ozone converters are installed in all aircraft that fly routes where ozone may pose a risk or that converters in service are operating properly. The Council also had what it termed moderate concern about several other potential air quality-related exposures on aircraft, but it noted that there were little data available on the frequency at which they occur. For example, according to the Council, infectious agents, such as viruses and bacteria, were likely present on aircraft, and high occupant densities could increase the risk of transmittal. The Council observed, however, that air recirculation did not increase the risk of transmittal, especially in systems using HEPA filters. Likewise, the Council noted that airborne allergens, such as cat dander, could pose problems for passengers with sensitivities. In addition, when aircraft are on the ground, according to the Council, passengers can be exposed to contaminants from engine exhaust, such as carbon monoxide and other outdoor air pollutants, including ozone and particulate matter, when they are pulled into the aircraft through the ventilation system. Also of some concern to the Council were incidents when lubricating and hydraulic fluids seep into the aircraft ventilation system during engine and other system malfunctions. Although such occurrences are rare, and the actual exposure to contaminants resulting from them is unknown, lubricating and hydraulic fluids contain substances that can pose neurological health risks to passengers and cabin crew if they are present in sufficient concentrations and for a sufficient length of time. 
Finally, the Council was somewhat concerned about exposures to the pesticide spraying that takes place on some international flights, which can cause skin rashes and other health effects. Table 1 summarizes information presented by the Council on the potential air quality-related exposures on aircraft. Since the issuance of the 2001 Council report, some limited studies have examined specific air quality issues, such as infectious disease transmission, but they have raised as many questions as they have answered. For example, according to a revised 2003 World Health Organization (WHO) report on tuberculosis (TB) and air travel, as of August 2003, no case of active TB has been identified as resulting from exposure while on a commercial aircraft. The report did note, however, that there is some evidence that transmission of TB may occur during long flights (i.e., more than 8 hours) from an infectious source (passenger or crew) to other passengers or crewmembers. A 2002 study published in the Journal of the American Medical Association found no evidence that aircraft cabin air recirculation increases the risk for upper respiratory tract infection (URI) symptoms in passengers traveling aboard commercial jets. However, passengers reported higher rates of URI symptoms than the general public within a week after completing their trips. One of the study’s authors noted that the research indicated that while flying increases the risk of getting colds or other infections, an aircraft’s ventilation system may not be a key factor. A 2003 study appearing in the New England Journal of Medicine found that SARS transmission may occur on flights carrying people in the symptomatic stages of the disease. (See app. II for more details on this study.) The December 2001 Council report on airliner cabin air quality made 10 recommendations about air quality standards for the cabins of commercial airliners and the need for more information concerning the health effects of cabin air. 
Nine of these recommendations were directed to FAA, and it has implemented them to varying degrees. The Council report’s 10 recommendations focused on five aspects of cabin air quality and its environment: (1) the establishment of cabin air quality surveillance and research programs, (2) FAA’s oversight of the operation of aircraft ventilation systems, (3) exposures on aircraft due to the transport of small animals in aircraft cabins, (4) the distribution of health-related information, and (5) recommended procedures in the event of a ventilation system shutdown. Although one recommendation asked Congress to designate a lead federal agency for conducting airliner cabin air quality research, most of the recommendations were directed at or involved FAA. Table 2 describes each of the Council report recommendations and FAA’s response. FAA formed the Airliner Cabin Environment Report Response Team to review the findings of the Council report on airliner cabin air quality and published a planned response in February 2002. However, many of the actions included in this plan were contingent on the formation of an aviation rulemaking advisory committee, on which the agency has deferred action. FAA subsequently updated its plans, as reflected above. We reviewed FAA’s approach for addressing the recommendations and found that the agency has made progress on implementing some of them, including those relating to making information available on potential health issues related to cabin air quality and the risks posed to sensitive people by allergens from small animals transported in aircraft cabins; however, action on others is pending. For example, recommendations to improve FAA oversight of aircraft ventilation systems are pending until completion of the ASHRAE study in late 2006 or early 2007. 
In implementing the Council report recommendations, FAA is attempting to balance the need to conduct additional research on the healthfulness of cabin air with other research priorities, such as improving passenger safety. Our prior work on airliner cabin safety and health has underscored the importance of setting risk-based research priorities, in part by establishing cost and effectiveness estimates to allow direct comparisons among competing research priorities. In commenting on this prior work, FAA cautioned that if too much emphasis is placed on cost/benefit analyses, potentially valuable research may not be undertaken. We concur in that caution. We also found that many members of the Council committee on airliner cabin air quality question FAA’s approach to implementing some of the recommendations it made, particularly those related to the committee’s principal finding that more comprehensive research on the health effects of cabin air quality is needed. Specifically, some in the aviation community have raised concerns that FAA’s planned actions for implementing the Council recommendations on cabin air quality, including its research and surveillance efforts, will not be adequate to answer long-standing questions about the nature and extent of potential health effects posed by cabin air. To address the need for more information on the health effects of cabin air quality, the 2001 Council report made three recommendations regarding the establishment of cabin air quality surveillance and research programs. FAA, in coordination with ASHRAE, has begun to develop a program to monitor air quality on some flights and correlate this information with health data collected from passengers and cabin crews. 
Although this effort can provide a foundation for future research, members of the committee that produced the report are concerned that its scope is too limited to adequately answer long-standing questions concerning the association between cabin air quality and health effects. According to a committee member, the Council report’s most important recommendations are those pertaining to the establishment of cabin air quality surveillance and research programs. The report concluded that available air quality data are not adequate to address three critical questions on aircraft cabin air quality and its possible effects on cabin occupant health:

- Do current aircraft, as operated, comply with FAA design and operational limits for ventilation rate and for chemical contaminants, including ozone, carbon monoxide, and carbon dioxide, and are the existing air quality regulations adequate to protect the health and ensure the comfort of passengers and the cabin crew?
- What is the association, if any, between exposure to cabin air contaminants and reports or observations of adverse health effects in cabin crew and passengers?
- What are the frequency and severity of incidents when air contaminants enter the cabin due to nonroutine conditions such as oil leaks or other engine malfunctions?

To answer these questions, the Council report recommended a dual approach that includes a routine surveillance program and a more focused research program. The report said that the surveillance program should continuously monitor and record chemical contaminants, cabin pressure, temperature, and relative humidity in a representative number of flights over a period of 1 to 2 years. Thereafter, the program should continue to monitor flights to ensure accurate characterization of air quality as existing aircraft equipment ages or is upgraded. 
In addition to air quality monitoring, the report said the surveillance program should also include the systematic collection, analysis, and reporting of health data, with the cabin crew as the primary study group. The report said a detailed research program to investigate specific questions about the possible association between air contaminants and reported health effects should supplement the surveillance program. Among the subjects suggested for research are the factors that affect ozone concentration in cabin air and the adequacy of outside air ventilation flow rates. In order to implement the surveillance and research programs, the report recommended that Congress designate a lead federal agency and provide sufficient funding to conduct or direct the research program to fill the major knowledge gaps. It also called for an independent advisory committee with appropriate scientific, medical, and engineering expertise to oversee the programs to ensure that the research program’s objectives are met. In response, as part of FAA’s reauthorization, Congress designated FAA as the lead federal agency. Prior to this, FAA had acted in this capacity and allocated limited funding for the effort. According to FAA officials, however, Congress provided no additional funding for air quality surveillance and research through fiscal year 2003; pending legislation for fiscal year 2004 would provide $2.5 million for this effort. In addition, on March 4, 2003, FAA announced the creation of a voluntary program for air carriers, called the Aviation Safety and Health Partnership Program. Through this program, the agency intends to enter into partnership agreements with participating air carriers, which will, at a minimum, make data on their employees’ injuries and illnesses available to FAA for collection and analysis. According to FAA officials, this program has a reporting system and database available to capture air quality incidents. 
In taking the lead for implementing the recommendations for surveillance and research programs, FAA has undertaken a joint effort with ASHRAE. According to FAA, this joint effort will build on a previous study conducted for FAA by the National Institute for Occupational Safety and Health (NIOSH), which identified and characterized potential health issues, including respiratory effects, related to the aircraft cabin environment, but did not link the health issues to cabin conditions. The joint effort includes a surveillance and research initiative whose principal aim is to relate perceptions of discomfort or health-related symptoms that flight attendants and passengers have had to possible causal factors, including cabin and outside air quality and other factors, such as reduced air pressure, jet lag, inactivity, humidity, flight attendant duty schedule and fatigue, disruptions to circadian rhythm, stress, and noise. While FAA’s fiscal year 2004 appropriation in the research and development budget includes $2.5 million for cabin air research—including identifying bacterial and pesticide contamination and monitoring air quality incidents—it is unclear which of the cabin air quality projects outlined in the FAA reauthorization bill will be funded. Additionally, ASHRAE officials stated that the surveillance and research initiative would support ASHRAE’s ongoing efforts to develop air quality standards for commercial aircraft. According to FAA, the surveillance and research program is to be carried out in two parts; the first started in December 2003 and the second will start in December 2004 and end in late 2006 or early 2007. In part I, air quality data will be collected on four to six flights on a minimum of two different types of aircraft, and the data will then be compared with health information gathered from surveys of passengers and crew on the flights. 
According to FAA and ASHRAE, the protocol and procedures developed in part I of the study will be the basis for conducting on-ground and in-flight monitoring in part II of the initiative. In part II, air quality monitoring will be conducted on different models of commercial jet airplanes representing a large section of the world fleet and will include a minimum number of flights that has not yet been determined. However, according to FAA officials, the level of funding that will be available for part II is uncertain. FAA and ASHRAE have assembled a committee that is responsible for selecting a contractor to conduct the monitoring and health surveillance in part I and for overseeing the contractor’s performance. The committee consists of aircraft, health, and air quality experts, including five members of the Council committee, as well as representatives from FAA, the Association of Professional Flight Attendants, and the Boeing Commercial Airplane Group. In September 2003, the committee chose a contractor for part I, and work began in December 2003. FAA and ASHRAE have not yet selected a contractor for part II, although the estimated completion date for the entire program is late 2006 or early 2007. ASHRAE officials stated that to date FAA, Boeing, and two major U.S. airlines are supporting this effort. FAA has provided $50,000 of the estimated $250,000 it will cost to conduct air quality surveillance on two aircraft. Boeing is the major source for the balance of the funding for the surveillance program. FAA had previously reported that it was seeking a $500,000 contract with the Johns Hopkins University Applied Physics Laboratory (APL) to develop devices to monitor the aircraft cabin environment as part of the research and surveillance program. However, the contract was not finalized because APL determined that the project would cost significantly more than $500,000, and FAA reprogrammed the funds. 
FAA said that it has not yet funded part II, while ASHRAE officials noted that they are planning to solicit the part I contributors again for part II once part I is under way. Despite FAA’s efforts to date, we found that the agency has not developed a detailed plan for the research and surveillance program, including key milestones and funding estimates, in keeping with generally accepted practices for oversight and independence. In addition, the agency has not created an independent panel of experts in the areas of aircraft ventilation, air quality, and public health to help plan and oversee this effort. Furthermore, FAA’s plans do not explicitly include leveraging the findings of international research on cabin air quality. Members of the committee that produced the 2001 Council report are concerned that the FAA/ASHRAE surveillance and research program, as designed, will fall short of answering the long-standing questions about the effect of cabin air quality on passenger and cabin crew health and comfort. We contacted the 13 members of the committee, and 8 agreed to comment on FAA’s response to their recommendations on cabin air quality surveillance and research. We refer to these 8 individuals from here forward as commenting committee members. Although 5 of 8 commenting committee members said that the initiative should shed some light on cabin air quality’s effects on health, all said that it was much more limited than the committee had envisioned. Two of the 8 commenting committee members thought that the air quality and health surveillance initiative should be a continuous undertaking in which air quality and health information is taken from a representative sample of commercial aircraft and flight routes. They also said that it appears the FAA and ASHRAE program will not include a broad enough cross-section of aircraft and flights to determine the full range of air quality problems and relate them to health effects. 
Two commenting committee members said that part I of the FAA and ASHRAE program will extensively monitor cabin air quality on two aircraft types; however, part I will not provide information that is generalizable to the U.S. commercial airliner fleet. According to Boeing officials involved in this study, part I research is designed to validate test equipment and study protocols and is not designed to be generalized to the airliner fleet. One committee member said that although more aircraft are to be included in part II, it is doubtful that enough information will be collected to adequately answer the key questions the agency’s research and surveillance program was designed to address. According to Boeing officials, part II includes plans for information collection to address the key question of the agency surveillance and research program, provided sufficient funds are available. Another commenting committee member said that the FAA and ASHRAE program would also yield little or no information on air quality incidents that occur when cabin air is contaminated by oil or hydraulic fluid leaks. According to the member, these incidents are rare and can be monitored only if simple, inexpensive equipment (e.g., devices that can “grab” samples) is available to cabin crew on a large number of flights to use in the event that an incident occurs. FAA officials said that issues of sampling adequacy and specimen handling could complicate the grab sample approach. These officials also noted that a voluntary injury and illness reporting system that the agency has in place could capture air quality incidents if it were made mandatory. Seven of the eight commenting committee members also noted that FAA has not adequately addressed the Council report’s recommendations regarding cabin air surveillance and research programs. FAA has indicated that its program responds to the report’s recommendations calling for surveillance and research efforts. 
However, these committee members believe that the program focuses only on surveillance and does not include in-depth research of air quality issues as outlined in the committee’s recommendation calling for a separate comprehensive research program. One of the commenting committee members said that a cabin air quality study currently under way in Europe contains many of the elements that the committee had hoped to see in the U.S. surveillance and research efforts. The European cabin air study is coordinated by the Building Research Establishment, Ltd. (BRE). The study focuses on three major goals: (1) advancing the industry’s understanding of what is known about air quality issues by assessing the current level of air quality found in aircraft cabins; (2) identifying the technology (i.e., environmental control systems including filtration and air distribution) that is available to improve cabin air quality; and (3) assessing and determining potential improvements to existing standards and performance specifications for the cabin environment. (The scope and methodology for Europe’s cabin air study are described in appendix IV.) The cabin air study partnered (to various degrees) with 16 organizations, including Boeing, Airbus Deutschland, Honeywell (manufacturer of environmental control systems), Pall Aerospace (filter manufacturer), British Airways, the United Kingdom’s Civil Aviation Authority (CAA), the European Joint Aviation Authorities (JAA), and other organizations representing Austria, France, Germany, Greece, Norway, Poland, and Sweden. The European cabin air study began in January 2001 with an estimated cost of $8 million and is expected to disclose its findings in 2004. Of the eight commenting committee members, three addressed the funding of the FAA and ASHRAE surveillance and research programs. These members said that the amount of funding available for U.S. 
efforts might be insufficient to conduct surveillance and research programs of the scope they envisioned in their recommendations. For example, one of the committee members stated that to conduct a surveillance and research program of the scope the Council had in mind, Congress would have to provide funding levels comparable to that of the European cabin air study. One commenting committee member, National Institute for Occupational Safety and Health (NIOSH) officials, and airline flight attendant representatives we interviewed expressed concern that the extensive involvement of aircraft manufacturers and airlines in the design and implementation of the FAA and ASHRAE program could threaten the independence of the effort. However, with the exception of the flight attendant representatives, they agreed that any surveillance and research programs require participation by these groups. Nonetheless, they point to the fact that much of the available funding for the initiative ($200,000 of the $250,000) is coming from the aviation industry, which has a stake in the outcome, and that this might give the impression that the study lacks the necessary objectivity. The commenting committee member suggested that the research money provided by the aviation industry be placed in a special fund that would be managed by FAA or an independent research group. According to ATA officials, due to a lack of public funding on a scale comparable to what has been provided for Europe’s cabin air study, the financial support and cooperation of aircraft manufacturers and airlines is essential if FAA is to conduct this research. In addition, Boeing officials stressed that the project funding is currently controlled by ASHRAE and the project oversight committee is led by the chairman of the Council study. 
Five of the commenting committee members also discussed the status of their recommendation concerning the need for Congress to designate a lead federal agency and advisory committee for the air quality research effort. Although Congress designated FAA as the lead agency in November 2003, FAA had already assumed responsibility for implementing the research and surveillance-related recommendations. In commenting on the Council recommendation to designate a lead federal agency, several members said they thought that the lead agency should be one that is experienced in conducting scientific research on air quality and environmental health issues. Some noted that the Environmental Protection Agency (EPA) has supported a large body of research into air quality issues, and another pointed out that NIOSH has performed studies of air quality in buildings and the workplace. Several commenting members indicated that although it is FAA’s mission to promote aviation safety, they had reservations about whether the agency was well suited to oversee a large air quality research program on its own. Several members thought that, as an alternative, FAA might be part of a cooperative federal effort to perform airliner cabin air quality research. In addition, another committee member believes that although FAA has a committee to oversee the selection of the contractor for the program, it has not assembled an advisory committee to review the research design and monitor the implementation of the program. Four of the Council recommendations pertain to FAA’s oversight of the operation of aircraft ventilation systems. 
These recommendations call for FAA to (1) demonstrate in public reports the adequacy of its regulations related to cabin air quality and establish operational standards for ventilation systems, (2) ensure that standards for ozone levels are met on all flights, (3) investigate the need for and feasibility of installing equipment to clean the air supplied to aircraft ventilation systems, and (4) require carbon monoxide monitors in air supply ducts to passenger cabins and establish procedures for dealing with elevated carbon monoxide concentrations. According to FAA officials, the agency originally planned to have an aviation rulemaking advisory committee assess whether current standards were appropriate for ensuring that aircraft ventilation systems adequately prevent contamination of cabin air. However, FAA decided to defer this action until data are available from the surveillance and research study, as well as the European cabin air study. Additionally, FAA believes that data from this study will aid in the reconsideration of air quality standards for commercial aircraft. However, most of the commenting committee members questioned the need for delay in addressing some of the recommendations. Four of the eight commenting committee members said that they recommended that FAA demonstrate, in public records, the rationale for the established design standards for carbon monoxide (CO), carbon dioxide (CO2), ozone (O3), ventilation, and cabin pressure because FAA was unable to explain the reasoning for these standards. For example, FAA has not documented the reasons for setting the ventilation rate standard for aircraft cabins of new aircraft types at 0.55 pounds of outside air per minute per occupant. 
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends that ventilation rates inside a building environment be at least 1.1 pounds of outside air per minute per occupant, which is twice the current FAA requirement for aircraft. In addition, FAA has not documented the reasons for requiring a design cabin air pressure altitude of not more than 8,000 feet, at which air pressure is about three-fourths of that found at sea level. Members of the research community, including the Aerospace Medical Association (AsMA) and CAA, state that the loss of air pressure and oxygen may pose serious health risks for infants whose lungs have not fully developed and for older adults who may have upper respiratory problems. In response to the committee members’ comments, FAA provided us the following explanations for the design standards in question. The ventilation rate standard was based on a regulatory value established decades ago, which has been shown to be acceptable, and ASHRAE has formed a subcommittee to develop a standard specifically for airplanes. The limit for carbon monoxide concentration of 1 part in 20,000 parts air (0.005 percent) was adopted from the Occupational Safety and Health Administration (OSHA) and ASHRAE standards. The limit of maximum allowable carbon dioxide concentration in occupied areas of transport category airplanes was reduced to 0.5 percent in part due to a recommendation from the National Academy of Sciences to review the carbon dioxide limit in airplane cabins; it provides a cabin carbon dioxide concentration level representative of that recommended by some authorities for buildings. The ozone limits were based on studies conducted by the FAA Civil Aerospace Medical Institute and are comparable to standards adopted by the Environmental Protection Agency and the Occupational Safety and Health Administration. 
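As a rough arithmetic check of the figures cited above, the cabin pressure and ventilation numbers can be reproduced with a short script. The barometric formula and its constants come from the International Standard Atmosphere model, which is an outside reference rather than part of this report, so this is an illustrative sketch and not FAA's method:

```python
# Illustrative check of two figures cited in the text. The standard
# atmosphere formula below is an outside assumption, not from the report.

def isa_pressure_ratio(altitude_ft: float) -> float:
    """Ratio of air pressure at the given altitude to sea-level pressure,
    using the International Standard Atmosphere troposphere model."""
    altitude_m = altitude_ft * 0.3048  # feet to meters
    return (1 - 2.25577e-5 * altitude_m) ** 5.25588

# FAA design standard: cabin pressure altitude of no more than 8,000 feet.
ratio = isa_pressure_ratio(8000)
print(f"Air pressure at 8,000 ft is {ratio:.0%} of sea level")  # about three-fourths

# Ventilation rates, in pounds of outside air per minute per occupant.
faa_aircraft_rate = 0.55    # FAA standard for new aircraft types
ashrae_building_rate = 1.1  # ASHRAE minimum recommended for buildings
print(f"Building rate is {ashrae_building_rate / faa_aircraft_rate:.0f}x the aircraft standard")
```

The pressure ratio works out to roughly 74 percent, consistent with the "about three-fourths" characterization in the text.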
The cabin pressure altitude standard was based on the accepted industry practice of maintaining the health and safety of occupants while considering the structural limitations of the aircraft. A commenting committee member also expressed concern that FAA certifies aircraft ventilation systems that are designed to meet certain standards, such as those for ventilation rates, but it does not require that systems operate in accordance with these standards. The practical effect is that aircraft are not monitored to determine whether they meet the design standards. According to another commenting committee member, FAA did not need data from the planned research project to provide a rationale for ventilation system standards, or to require that ventilation systems operate according to standards. Some committee members also said that FAA could begin to take steps to ensure that ozone standards are met on all flights regardless of altitude and require monitors for dangerous carbon monoxide vapors in air supply ducts to passenger cabins before the completion of the planned research study. FAA officials said that although the agency does not conduct recurrent system design compliance checks, it uses various reporting systems to monitor aircraft system performance and takes appropriate mandatory action when an unsafe condition is found. Because of the potential for serious health effects for people sensitive to allergens, the 2001 Council report also recommended that FAA investigate the need to prohibit the transport of small animals in aircraft cabins and provide training to cabin crews to deal with allergic reactions. However, FAA does not think that prohibiting animals in the cabin would be effective because it believes that most animal allergens are brought onboard aircraft on the clothes of passengers rather than by the animals themselves. 
Instead, the agency issued an advisory circular highlighting the effective procedures that passengers can use when carrying animals and guidance on how to train crewmembers to recognize and respond to in-flight medical events that result from exposure to allergens. Additionally, FAA will enhance its Internet site to provide general information related to FAA and air carrier policy concerning the transport of animals in aircraft cabins. Commenting committee members generally supported FAA’s approach to this recommendation. In response to the Council report recommendation calling for FAA to increase efforts to provide cabin crew, passengers, and health professionals with information on health issues related to air travel, FAA modified the general information section of its Web site; however, we found that the traveler health information is not easy to access. FAA created hyperlinks to other Web sites, such as those of the Aerospace Medical Association and Centers for Disease Control and Prevention, which include information on potential health risks of flying, particularly for health-challenged individuals. However, we found it difficult to locate the section of the FAA Web site that deals with traveler health information, and when we did, several steps were required to reach the hyperlinks. Some commenting committee members also noted how difficult it is to access health-related information on the FAA Web site. In addition to citing the need for FAA to increase the accessibility of health-related information on its Web site, six of the eight committee members also mentioned that FAA should take further steps to make health information available to the flying public. Suggestions included having airlines include health-related information on their Web sites and establishing a program to provide flying-related health risk information to physicians that they could then share with their patients (e.g., through brochures). 
In response to the 2001 Council report recommendation that it establish a regulation requiring removal of passengers from an aircraft within 30 minutes after a ventilation failure or shutdown on the ground, FAA issued an advisory circular to airlines. Some commenting committee members viewed this action as insufficient. This recommendation reiterated one made in the 1986 Council report, which FAA did not act on. The committees that produced both the 1986 and 2001 reports noted that environmental conditions in an aircraft cabin respond quickly to changes in ventilation system operation. The committees felt that the ventilation system should not be shut down for a long period when the aircraft is occupied, except in the case of an emergency, because excessive contaminant concentrations and uncomfortably high temperatures can occur quickly. Several commenting committee members told us that they felt strongly that FAA should require passenger removal in the event of ventilation system shutdown of more than 30 minutes and that advising airlines that this should be done was insufficient to accomplish the committee’s objective. FAA, on the other hand, said that airlines pay close attention to advisories. The agency decided against issuing a regulation because there are situations when an evacuation within 30 minutes is not possible due to operational necessity, such as when a ventilation system breakdown occurs on a taxiway far from a gate. Several technologies exist today that could improve cabin air quality, but opinions vary on whether requiring the use of improved technologies in commercial airliner cabins is warranted. We found that one of these technologies, HEPA filters, is strongly endorsed by cabin air quality and health experts as providing the best possible protection against one cabin air problem—the presence of particulates, bacteria, and viruses in recirculated air. 
While FAA does not currently require HEPA filters, some health experts believe these filters should be required, given their demonstrated effectiveness in cleansing cabin air. Figure 3 illustrates a typical HEPA filter for commercial passenger aircraft. According to many in the aviation community, several technologies are available today, and more are in the planning stages, that could improve the air quality in commercial airliner cabins. However, some in the aviation industry question whether requiring their use is warranted. Filtering particulates, bacteria, viruses, and gaseous pollutants and removing ozone can improve the healthfulness of cabin air, and increasing cabin humidity and absorbing more cabin odors and gases can increase the comfort of passengers and cabin crews. While aircraft manufacturers acknowledge that a few technologies are available today that could further improve air quality and comfort in airliner cabins and that more are possible in the future, they believe that unless future research proves otherwise, the ventilation systems in the aircraft they have produced provide ample amounts of relatively clean air. One technology with proven effectiveness is HEPA filtering of recycled cabin air. All new large commercial airliners in production with ventilation systems that recirculate cabin air come equipped with these filters, which, when properly fitted and maintained, are effective at capturing airborne contaminants such as viruses that enter the recirculation system. However, some regional jets, which have fewer than 100 seats, are not equipped with filters, and some older large aircraft still use less efficient filters. FAA does not require the filtration of recirculated air, but health experts and members of the committee that produced the 2001 report on cabin air quality believe that given their proven effectiveness, HEPA filters should be required for all aircraft that recirculate cabin air. 
In addition, airflow rates could be increased in some aircraft by adjusting settings on the ventilation system, thereby dissipating the effects of some contaminants. However, this would be done at the expense of higher fuel consumption, increased engine emissions, and lower cabin humidity. HEPA filters are a readily available and affordable technology for providing the best possible protection against one cabin air problem—the presence of particulates, bacteria, and viruses in recirculated air. However, HEPA filters will not filter gaseous contaminants. These filters have become widely available for aircraft since the late 1990s. According to EPA, HEPA filters can remove nearly all particulate contaminants, such as airborne particles and infectious agents including bacteria and viruses, from the recirculated air that passes through them. A manufacturer of HEPA filters, as well as health authorities such as CDC, NIOSH, and WHO, believe that HEPA filters are highly effective in preventing the transmission of bacteria and viruses through aircraft ventilation systems. However, they emphasize that HEPA filters clean only the air that is recirculated through aircraft ventilation systems, so transmissions from an infected person to others nearby are still possible. HEPA filters are available for most large commercial airliners in the U.S. fleet, but some aircraft with recirculation systems are equipped with less effective filters. However, not all commercial aircraft recirculate air through their ventilation systems. For example, some smaller jets, such as the Boeing 717 and Bombardier CRJ-200s, which typically fly shorter routes, as well as older models of some longer-range aircraft, such as the Boeing 737-200 and the DC-10, provide 100 percent outside air to the passenger cabins instead of recirculating air and, therefore, would not need HEPA filters. 
Nevertheless, most commercial airliners in use today recirculate between 30 and 55 percent of the air provided to the passenger cabin. Officials from Boeing and Airbus, the world’s two largest manufacturers of commercial aircraft, told us that all their aircraft with recirculation systems currently in production are equipped with HEPA filters. The ventilation systems in many older commercial aircraft were designed to use the less effective filters available at the time, and some of these aircraft still use these types of filters. However, according to Boeing and Airbus officials, HEPA filters can be used on these older aircraft with little or no retrofitting required. According to a filter manufacturer, a HEPA filter costs about twice as much (e.g., $400 to $600 for the smaller narrow-body aircraft) as the non-HEPA models that are less effective in trapping particulates. Some regional jets, such as the Embraer ERJ-145, recirculate air but are not equipped with filters. In fact, FAA does not require the filtration of recirculated air on aircraft. However, when manufacturers voluntarily equip their aircraft models that recirculate cabin air with HEPA or other filters when they are certified for flight by FAA, as most do, the aircraft are required to continue operating with the filters. The schedule for changing the filters is also included in the FAA certification process. Airlines typically change HEPA filters after 4,000 to 12,000 hours of service to maintain good airflow and in accordance with manufacturers’ recommendations. Little information has previously been available on the extent of HEPA filter usage in commercial aircraft ventilation systems, though the Council report and many in the health community have pointed to the importance of HEPA filters in preventing the spread of bacteria, viruses, and other contaminants in aircraft cabins. 
As noted earlier in this report, the 2001 Council report recommended that FAA investigate and publicly report on the need for installing equipment to clean the air supplied to aircraft cabin ventilation systems. In the report, the committee did not determine how many larger aircraft were equipped with HEPA filters, and regional jets were not within the scope of its study. However, the report concluded that HEPA filters are highly effective in removing all airborne pathogens and other particulate matter that pass through them. The report further stated that the use of recirculated air in aircraft cabins, when combined with effective HEPA filtration, does not contribute to the spread of infectious agents. Members of the research community, including those from NIOSH, as well as the Association of Flight Attendants, have noted that given the proven effectiveness of HEPA filters in capturing contaminants such as infectious viruses and bacteria, FAA should require their use on all aircraft with recirculation systems. To determine the extent of HEPA filter usage in the United States, we surveyed the 14 largest airlines in the United States that had Airbus, Boeing, or McDonnell Douglas aircraft that recirculate cabin air, and we received responses from 12 airlines. Of the 3,038 aircraft for which we were able to obtain survey results, 15 percent (454 aircraft) did not use HEPA filters. All of the aircraft that did not use HEPA filters were older out-of-production models that used less effective filters. One airline has plans to retrofit a small number of these aircraft with HEPA filters. We were also able to obtain some information on HEPA filter usage in the U.S. regional aircraft fleet by contacting the manufacturers of these aircraft. We found that 69 percent of these regional aircraft recycle cabin air (1,087 of 1,584), and only a handful of these aircraft are equipped with HEPA filters. The manufacturer of a new regional jet model offers HEPA filters as an option. 
Information we obtained from two airlines that had 29 of these aircraft indicated that about half (14 of 29) were equipped with HEPA filters. We also found that 90 percent of the regional aircraft (973 of 1,087 aircraft) that recycled cabin air would require modifications to be retrofitted with HEPA filters. Most of these aircraft (73 percent) had no provision for installing filters in their air ducts. Consideration has also been given to filtering outside air entering an aircraft’s ventilation system. Outside air at cruise altitudes is mainly free of pollutants, except for ozone. However, in the event of an engine or hydraulic system malfunction, outside air can become contaminated before it enters the ventilation system. In addition, when an aircraft is at the gate or taxiing, the available outside air contains pollutants normally present around the airport, including exhaust from other aircraft on the runway. For these reasons, the 2001 Council report recommended that FAA investigate the need for and feasibility of installing air-cleaning equipment for removing particles and vapors from the air supplied to the ventilation system. As previously noted, FAA has put off consideration of this recommendation until the completion of FAA’s and ASHRAE’s air quality research and surveillance program in 2006 or 2007. One manufacturer did begin installing outside air filtering equipment on one of its models in 1992. British Aerospace began equipping its BAe 146 aircraft (now out of production) with outside air filters as part of an effort to reduce cabin odors. Other manufacturers, including Boeing and Airbus, contend that outside air filtration is not necessary unless U.S. and European research indicates a problem with the quality of air entering aircraft ventilation systems. Technologies are currently available for removing ozone from outside air. 
Ozone is present in the air at high altitudes on some routes, particularly those over the polar regions, and FAA requires that the airlines that fly these routes take measures to maintain cabin ozone levels at or below prescribed limits (e.g., using devices that convert ozone to oxygen). According to ATA officials, nearly all commercial aircraft that fly on these routes are so equipped. However, the Council report said that although FAA requires that ozone concentrations in aircraft cabins be maintained within specified limits, surveillance programs with accurate and reliable equipment are needed to ensure compliance and that the ozone converter equipment works properly. One study attributed elevated ozone levels that exceeded FAA limits to temporary ozone plumes that can appear unexpectedly. In November 2000, the British House of Lords, in a study of health issues in aircraft cabins, made a recommendation that airlines fit their aircraft that fly on routes where these plumes occur with ozone converters to minimize potential health problems. The Council report also identified the need for FAA to take effective measures to ensure that ozone does not exceed levels specified in FAA regulations, regardless of altitude. As noted earlier, FAA plans to monitor ozone levels in selected aircraft as part of its surveillance and research program. However, some committee members told us that the effort will be too limited to enable FAA to determine if ozone is present on aircraft not fitted with converters or whether ozone converters are working properly. Increasing ventilation rates on aircraft to levels approximating those currently required in buildings would pose technological challenges, and aircraft manufacturers believe such increases are not necessary. Raising ventilation rates would reduce the effects of some airborne contaminants by diluting their concentration. 
According to Boeing and Airbus officials, airflow rates on their aircraft could be slightly increased by adjusting settings on the ventilation systems, but such adjustments would increase fuel consumption and result in higher operating costs. According to Boeing officials, to achieve the same airflow rates recommended for buildings, aircraft ventilation systems, and possibly the aircraft themselves, would have to undergo expensive modifications. Boeing and Airbus believe that unless the U.S. and European research and surveillance initiatives prove otherwise, ventilation rates in commercial aircraft are sufficient to sustain passenger and cabin crew comfort and health. Boeing and Airbus officials told us that they are always seeking to improve the aircraft they build, but they believe that the ventilation systems in the aircraft they produce provide a healthy and relatively comfortable environment for passengers and cabin crew. Nevertheless, Boeing is considering increasing the air pressure and humidity levels on the 7E7, its proposed long-range, high-altitude aircraft. Airbus will also offer an improved air ventilation system on its new large aircraft, the A380. Because of the competitive nature of the aircraft manufacturing industry, few details are available on the 7E7 and A380 ventilation systems. Boeing and Airbus officials noted that if current research and surveillance efforts indicate problems with any aspects of the ventilation systems in their aircraft, they would work toward developing the necessary technologies to deal with these problems. The combined research efforts of FAA and ASHRAE on cabin air quality will provide a foundation of knowledge, according to some members of the committee that produced the 2001 Council report on cabin air quality. However, as currently designed and funded, these efforts may not answer many long-standing questions about the effect of air quality on cabin occupants’ health and comfort. 
FAA is attempting to balance the need to conduct additional research on the healthfulness of cabin air quality with other research priorities, such as improving passenger safety. Our prior work on airliner cabin safety and health has underscored the importance of setting risk-based research priorities, in part by establishing cost and effectiveness estimates to allow direct comparisons among competing research priorities. In commenting on this prior work, FAA cautioned that if too much emphasis is placed on cost/benefit analyses, potentially valuable research may not be undertaken. We concur with that caution. However, information on the nature and extent of health effects from cabin air is needed in order to identify potential health threats so that it can be determined whether action is warranted to improve cabin air quality and to target research and development accordingly. Moreover, committee members recommended more study of these issues, and others in the industry have concerns about FAA’s surveillance and research program as currently conceived. Committee members were particularly concerned about FAA’s decision to delay action on ensuring that air quality regulations are adequate or being met on all flights. In addition, the agency’s current plan to monitor cabin air quality on only two aircraft types during part I of its program will not provide FAA with information that is generalizable to the U.S. commercial airliner fleet. Thus, key questions that the agency’s research and surveillance program was designed to address will remain unanswered if part II of FAA’s program is not properly designed and adequately funded. Such information is also needed to guide the development of new technologies. 
Given the importance of this research and surveillance effort, the program needs to be well designed, properly funded, coordinated with international cabin air quality research efforts such as those ongoing in Europe and Australia, and conducted in accordance with accepted standards for independence and oversight. The Council in its 2001 report recommended that Congress designate a federal agency to conduct or direct the cabin air quality research program and recent legislation assigned FAA as the lead federal agency for this effort. FAA has begun a surveillance and research program on its own. Furthermore, FAA has not taken steps to ensure that HEPA filters, which are a proven technology for eliminating some contaminants such as viruses and bacteria from recirculated cabin air, are used as widely as possible on commercial aircraft. FAA does not currently require the use of filters on recirculated air. Nevertheless, we found that a number of aircraft manufacturers and airlines voluntarily install them and that the vast majority of larger commercial aircraft are equipped with HEPA filters. However, we also found that only a few smaller regional jets that recirculate cabin air have HEPA or any other type of filters. FAA has decided to delay addressing the 2001 Council report recommendation calling for the agency to investigate the need for air cleaning equipment on aircraft ventilation systems until it completes its cabin air quality surveillance and research program in 2006 or 2007. FAA needs to determine the costs and benefits of requiring HEPA filters on commercial aircraft that recirculate air. Finally, although FAA has made some progress in implementing the Council’s recommendation regarding the need to increase the availability of information on health issues related to air travel, more needs to be done. 
Creating links on the FAA Web site to pertinent information on the CDC and WHO Web sites is a good start, but navigating the FAA’s Web site to reach these links is difficult. In addition to improving the user friendliness of the FAA Web site links, some commenting committee members suggested that FAA should consider other methods for disseminating information on the health risks of flying, such as providing brochures for physicians to use when discussing these issues with patients. To help ensure that FAA’s research and surveillance efforts on airliner cabin air quality answer critical outstanding questions about the nature and extent of potential health effects of cabin air quality on passengers and flight attendants, GAO recommends that the Secretary of Transportation direct the FAA Administrator to develop a detailed plan for the research and surveillance efforts, including key milestones and funding estimates, in accordance with generally accepted practices for oversight and independence; appoint a committee of acknowledged experts in the fields of aircraft ventilation and public health, including representatives of EPA and NIOSH, to assist in planning and overseeing the research and surveillance efforts recommended by the National Research Council in 2001; leverage the findings of international research on airliner cabin air quality to inform FAA’s surveillance and research efforts; and report to Congress annually on the progress and findings of the research and surveillance efforts and funding needs. In order to help improve the healthfulness of cabin air for commercial aircraft passengers and cabin crews, the FAA Administrator should assess the costs and benefits of requiring the use of HEPA filters on commercial aircraft with ventilation systems that recirculate cabin air. If FAA chooses to require the use of HEPA filters, it should also ensure that the regulation covers the maintenance requirements for these filters. 
In addition, to increase access to information on the health risks related to air travel, the FAA Administrator should direct the staff responsible for the FAA Web site to improve the links to other Web sites containing this information. The Administrator should also consult with medical associations and health organizations, such as CDC, on other ways to increase the dissemination of this information. We provided copies of a draft of this report to the Department of Transportation for review and comment. FAA generally agreed with the report’s contents and its recommendations. The agency provided us with oral comments, primarily technical clarifications, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Transportation, and the Administrator, FAA. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please call me at (202) 512-2834 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix VI. The Ranking Democratic Member of the Subcommittee on Aviation, House Committee on Transportation and Infrastructure, asked us to provide information on steps that the aviation community is taking to address concerns about cabin air quality. Specifically, our research focused on the following questions: (1) What is known about the major potential health effects of air quality in commercial airliner cabins on passengers and flight attendants? (2) What actions has the National Research Council recommended to improve cabin air quality, and what is the status of those actions? 
(3) What technologies are available today to improve the air quality in commercial airliner cabins, and which, if any, should be required? To answer the first question, we reviewed the December 2001 National Research Council report on aircraft cabin air quality, which was the most current and comprehensive examination of the existing literature on this issue and made recommendations for potential approaches for improving cabin air quality. We also independently reviewed many of the studies on issues related to cabin air quality, paying particular attention to those issued after the publication of the 2001 Council report. We also gathered information from the governments of Australia, Canada, and the United Kingdom, as well as from airlines. In addition, we interviewed officials representing the Federal Aviation Administration (FAA), the World Health Organization (WHO), the Centers for Disease Control and Prevention (CDC), the National Institute for Occupational Safety and Health (NIOSH), the Aerospace Medical Association (AsMA), the Air Transport Association (ATA), the Association of Flight Attendants (AFA), the International Airline Passengers Association (IAPA), and aircraft and air filter manufacturers, as well as experts on cabin air quality issues, including members of the committee that produced the 2001 Council report on cabin air quality. To address the second question, we interviewed Council committee members about their views on how FAA was addressing the recommendations they made in their report. Before conducting the interviews, we provided the committee members with information from FAA on its plans for addressing the Council’s recommendations. We then asked them for their views on FAA’s approach for addressing each of the recommendations. We conducted interviews with 11 of the 13 committee members; we were unable to contact 2 members. Of the 11 members we interviewed, 8 agreed to provide their views on at least some of the recommendations. 
Three members declined to address any of the recommendations, saying that they were outside their fields of expertise and that they had not followed the progress of FAA’s implementation of the recommendations. To address the third question, we interviewed representatives of aircraft manufacturers, filter manufacturers, FAA officials, and experts on aircraft ventilation systems, including members of the committee. To determine HEPA filter usage, we first identified the 28 airlines that account for 99.94 percent of the revenue passenger miles (RPM) flown by U.S. airlines as reported in Aviation Daily for May 2003. A revenue passenger mile is a standard unit of passenger demand for air transport, defined as one fare-paying passenger transported one mile. Our primary focus with the larger aircraft was to determine the HEPA filter usage for the 3,422 larger aircraft that recycled cabin air. To obtain this information, we surveyed the 14 airlines that had aircraft in this category and obtained responses from 12 (covering 3,038 of the 3,422 aircraft in this category). Our survey form, which we administered by e-mail, asked the airlines to provide the following information: the number of active aircraft by model type as of June 30, 2003; the number of active aircraft with HEPA filters; the number of active aircraft without HEPA filters; the reasons why HEPA filters are not used; and, if applicable, the types of filters used if other than HEPA filters. Our primary focus with the regional aircraft was to determine what percentage of these aircraft recycled air, and, for those aircraft that did recycle air, what percentage would require major modifications to be retrofitted with a HEPA filter. We were able to make this determination on the basis of information provided by the manufacturers. Because only a small portion of the regional aircraft that recycle air are capable of being fitted with HEPA filters, we did not survey the 13 airlines that had only regional aircraft. 
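The fleet-coverage tally that this survey supports can be sketched as follows. The airline names and per-fleet counts below are invented for illustration; they are not the survey's actual responses.

```python
# Hypothetical sketch of aggregating airline survey responses into an
# overall HEPA filter coverage percentage. All fleet numbers are invented.
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    airline: str
    aircraft_recirculating: int   # active aircraft that recirculate cabin air
    aircraft_with_hepa: int       # of those, how many use HEPA filters

def hepa_coverage(responses):
    """Return (total recirculating aircraft, aircraft with HEPA, percent coverage)."""
    total = sum(r.aircraft_recirculating for r in responses)
    with_hepa = sum(r.aircraft_with_hepa for r in responses)
    pct = 100.0 * with_hepa / total if total else 0.0
    return total, with_hepa, pct

# Invented example fleets:
responses = [
    SurveyResponse("Airline A", 400, 380),
    SurveyResponse("Airline B", 250, 190),
]
total, with_hepa, pct = hepa_coverage(responses)
print(f"{with_hepa} of {total} recirculating aircraft use HEPA filters ({pct:.1f}%)")
```

The same per-airline counts requested on the survey form (active aircraft, aircraft with and without HEPA filters) feed directly into this kind of tally.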
In the cases where returned surveys also included information on regional aircraft that could use HEPA filters with little or no retrofitting, we found that only a small portion were doing so. Professor of environmental medicine and director of the Center for Particulate Matter Health Effects Research and of the Human Exposure and Health Effects Research Program at New York University School of Medicine. Associate professor of environmental microbiology at the Harvard School of Public Health. Dr. Burge’s current area of research is on the role of environmental exposures in the development of asthma and evaluating exposure to fungi, dust mite, cockroach, and cat allergens in three separate epidemiology studies assessing risk factors for the development of asthma. Associate dean for Research and Graduate Programs and director of the Engineering Experiment Station at the College of Engineering, Kansas State University. Dr. Jones’s research interests are in heat and mass transfer, human thermal systems simulation, and thermal measurements and instrumentation. Air pollution research specialist with the Division of Environmental and Occupational Disease Control of the California Department of Health Services. Her research has focused on the evaluation of methods to collect and identify airborne biological material and on engineering measures to control airborne infectious and hypersensitivity diseases. Professor in the Department of Environmental Health, Industrial Hygiene and Safety Program of the University of Washington and director of the Northwest Center for Occupational Health and Safety. His research is focused on human response to inhalation of air contaminants, including the products of combustion and volatile solvents, and has encompassed both ambient air contaminants and occupational environmental health hazards. Professor of environmental engineering in the Department of Civil and Environmental Engineering of the University of California, Berkeley. 
His main research interest is indoor air quality, with emphasis on pollutant-surface interactions, transport/mixing phenomena, aerosols, environmental tobacco smoke, source characterization, exposure assessment, and control techniques. Executive director of the Aerospace Medical Association in Alexandria, Virginia, who retired from the U.S. Air Force in 1989 with the rank of colonel after a military medical career. The Akira Yamaguchi Professor of Environmental Health and Human Habitation and director of the Environmental Science and Engineering Program at the Harvard School of Public Health. Dr. Spengler’s research is focused on assessment of population exposures to environmental contaminants that occur in homes, offices, schools, and during transit, as well as in the outdoor environment. Professor of epidemiology in the Division of Public Health, Biology, and Epidemiology at the University of California, Berkeley, and codirector and principal investigator for the Center for Family and Community Health. Dr. Tager’s research includes the development of exposure assessment instruments for studies of health effects of chronic ambient ozone exposure in childhood and adolescence, the effects of ozone exposure on pulmonary function, and the effects of oxidant and particulate air pollution on cardio-respiratory morbidity and mortality and morbidity from asthma in children. Associate professor in the Department of Health Care and Epidemiology at the University of British Columbia and head of the Division of Occupational and Environmental Health. Dr. Van Netten’s research interests include environmental toxicology and the use of electrodiagnostics to monitor worker exposure to agents that affect the peripheral nervous system. Professor of environmental medicine and pediatrics at the University of Rochester School of Medicine and Dentistry. 
His special interest and publications lie primarily in areas that involve chemical influences on behavior, including the neurobehavioral toxicology of metals such as lead, mercury, and manganese. Adjunct professor in the Department of Environmental and Community Medicine at the University of Medicine and Dentistry of New Jersey, Robert Wood Johnson Medical School/Rutgers. His research interests, among others, include chemical interactions among indoor pollutants and the chemistry of the outdoor environment as it impacts the indoor environment. Professor of toxicology and associate director of the Institute for Toxicology and Environmental Health at the University of California, Davis. Dr. Witschi’s research interests include experimental toxicology, biochemical pathology, and the interaction of drugs and toxic agents with organ function at the cellular level. Aboard aircraft, cabin occupants are confined in close quarters for extended periods and can be exposed to infectious diseases carried by other occupants. Because air travel is rapid, people can complete their journeys before the symptoms of a disease begin. Consequently, there has been much concern regarding the in-flight transmission of contagious diseases, particularly tuberculosis and, more recently, severe acute respiratory syndrome (SARS). As part of our review of airliner cabin air quality, we tracked the status of SARS and air travel. SARS is a serious respiratory illness that has affected persons in Asia, North America, and Europe. According to the World Health Organization (WHO), as of September 26, 2003, there were an estimated 8,098 probable cases reported in 27 countries, including 29 cases in the United States. There have been 774 deaths worldwide, none of which have occurred in the United States. The Centers for Disease Control and Prevention (CDC) believes SARS is caused by a previously unrecognized coronavirus. 
The symptoms of SARS can include a fever, chills, headache, other body aches, and a dry cough. SARS appears to be transmitted by close personal contact, which includes touching the eyes, nose, or mouth after touching the skin of infected individuals or objects that have been contaminated with infectious droplets released by an infected individual while coughing or sneezing. People with SARS pose the highest risk of transmission to household members and health care personnel in close contact. Most cases of SARS involved people who cared for or lived with someone with SARS or had direct contact with objects contaminated with infectious droplets. Information to date suggests that people are most likely to be infectious when they have symptoms such as fever or cough. However, it is not known how long before or after their symptoms begin that people with SARS might be able to transmit the disease to others. Most of the U.S. cases of SARS have occurred among travelers returning to the United States from other parts of the world affected by SARS, such as China. According to WHO, as of September 26, 2003, the latest probable case of SARS reported in the United States was on July 13, 2003. However, there is no evidence that SARS is spreading in the United States. WHO has reported that although the global outbreak of SARS has been contained, considerable uncertainty surrounds the question of whether SARS might recur, perhaps according to a seasonal pattern. Several respiratory illnesses occur much less frequently when temperature and humidity are high and then return when the weather turns cooler. WHO has also requested all countries to remain vigilant for the recurrence of SARS and to maintain their capacity to detect and respond to the reemergence of SARS, should it occur. The CDC has conducted broadcasts over the Internet for healthcare providers on preparing for the return of SARS. 
WHO has reported that as of May 23, 2003, there have been 29 probable cases of in-flight SARS transmissions on four flights worldwide. Of the 29 cases, 24 were on one flight, and 4 were flight attendants. WHO has stated that since then there have been no reported cases of in-flight SARS transmissions. The WHO Director of Communicable Diseases stated that there is a very low risk of catching SARS on an airplane through the airplane’s ventilation system. He noted that nearly all of the in-flight transmissions occurred between passengers who were sitting near each other. This official also stated that airport screening procedures have been effective in keeping individuals displaying SARS symptoms from boarding aircraft. In October 2003, WHO issued a report in which it did not find evidence that SARS is an airborne disease. This report further stated that at all outbreak sites the main route of transmission was direct contact, via the eyes, nose, and mouth, with infectious respiratory droplets. A 2003 study published in The New England Journal of Medicine examined the flight on which most of the in-flight transmissions occurred. According to the study, laboratory-confirmed SARS developed in 16 persons, 2 others were given a diagnosis of probable SARS, and 4 were reported to have SARS but could not be interviewed by the study team. WHO reported that as of May 23, 2003, 24 probable SARS transmissions occurred on this flight; the study does not indicate the reason for the discrepancy. Among the 22 people with illness, the mean time from the flight to the onset of symptoms was 4 days, and there were no recognized exposures to persons with SARS before or after the flight. The study found that illness in passengers was related to their physical proximity to the person with SARS on the flight. Illness was reported in 8 of the 23 passengers seated in the three rows in front of the person with SARS, as compared to 10 of the 88 passengers seated elsewhere on the aircraft. 
The study noted, however, that 90 percent of the passengers who became ill on the flight were seated more than 36 inches from the person with SARS, which had been the cutoff used to define the spread of SARS droplets in other investigations. The study authors speculated that “airborne, small particle, or other remote transmission may be more straightforward explanations for the observed distribution of cases.” The study concluded that SARS transmissions may occur on flights carrying people in the symptomatic stages of the disease and that measures to reduce the risk of transmission are warranted. In November 2003, more than 50 leading SARS researchers from 15 countries concluded that a safe and effective vaccine would be an important complement to existing SARS control strategies. Most of the experts agreed, however, that a SARS vaccine will not be available in time should an epidemic recur in the near future. A WHO official stated that the licensing and commercialization of a SARS vaccine could probably not be realized in 2004. According to the International Air Transport Association (IATA), passengers are not at risk of being infected with the SARS virus by the cabin crew, who must be medically fit, without SARS symptoms, and physically capable of flying and fulfilling their duties. CDC has stated that there is currently no evidence that a person can be infected with SARS from handling baggage or goods, because the primary means of infection is close personal contact. CDC has also stated that the transmission of SARS has been associated with close contact with people exhibiting SARS symptoms, including contact between passengers on an aircraft. The CDC has issued travel alerts and advisories for travel to areas affected by SARS. A travel advisory recommends that nonessential travel be deferred; in contrast, a travel alert informs travelers of the health concern and provides advice about specific precautions. 
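The seating-related figures reported in the flight study discussed above (8 of 23 passengers ill in the three rows in front of the index case, versus 10 of 88 elsewhere) can be turned into attack rates with a standard epidemiological calculation. This sketch is a generic illustration of that arithmetic, not a method taken from the study itself.

```python
# Back-of-the-envelope attack-rate comparison using the counts cited in
# the text. attack_rate() is the usual epidemiological ratio: cases
# divided by the number of exposed people.
def attack_rate(ill, exposed):
    return ill / exposed

near = attack_rate(8, 23)        # passengers within three rows in front of the index case
elsewhere = attack_rate(10, 88)  # passengers seated elsewhere on the aircraft
relative_risk = near / elsewhere

print(f"attack rate near index case: {near:.1%}")       # about 35%
print(f"attack rate elsewhere:       {elsewhere:.1%}")  # about 11%
print(f"relative risk:               {relative_risk:.1f}")
```

A relative risk of roughly 3 is consistent with the study's finding that illness was related to physical proximity to the person with SARS, while the 90-percent figure in the text shows why proximity alone did not fully explain the distribution of cases.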
The CDC recommends that if SARS is suspected in an outpatient setting, healthcare providers should provide a surgical mask and place it over the person’s nose and mouth. The CDC further recommends that if this is not feasible, the person with SARS should be asked to cover his or her mouth with a disposable tissue when coughing, sneezing, or talking. WHO has urged airport officials in countries affected by SARS outbreaks to take precautionary screening measures, such as asking passengers if they have had contact with anyone who has had the disease. U.S. airlines that fly to Asia report that they are following CDC and WHO guidelines. FAA has links to the CDC and WHO guidelines on its Web site. U.S. airlines that do not fly internationally are not modifying their procedures because they see no SARS risk to cabin occupants. According to ATA officials, U.S. airlines that do not fly internationally were not advised by CDC to modify procedures because there was no evidence of community transmission of SARS in the United States. However, all ATA-member airlines cooperated fully with CDC in instances where a possible person with SARS might have transferred from an international to a domestic flight. In 2001, Building Research Establishment, Ltd. (BRE) initiated a study on cabin air quality that was estimated to cost $8 million. The following link provides the official description of the effort as posted on BRE’s Internet site: http://projects.bre.co.uk/envdiv/cabinair/work_programme.html To further the industry’s understanding of what is known about air quality issues by assessing the current level of air quality found in aircraft cabins, BRE will monitor four generic aircraft types in flight and assess cabin air quality and ventilation system performance, including the effects of passenger density and flight duration. A total of 50 such flights are planned. 
The findings will identify current best practice and will be used to improve understanding of (1) what constitutes good cabin air; (2) the impact on the safety, health, and comfort of passengers and cabin crew; and (3) the effects on operating costs, fuel energy use, and the external environment. To identify the technology (i.e., environmental control systems including filtration and air distribution) that is available to improve cabin air quality, BRE will develop new designs to address various air quality issues, including the control of carbon dioxide, humidification, outside air supply, and the recirculation and filtration of air. Operating costs and energy consumption will be analyzed in relation to environmental impacts. New designs must be suitable for retrofitting to existing aircraft, either as complete environmental control systems or as subsystems within existing units. The overall intention is to make environmental control systems flexible and easy to operate. For example, improved systems might enable the crew to match the system to the passenger load factor, reduce bleed air, or provide additional comfort in different areas of the cabin. BRE will seek to improve the performance of filtration systems and then develop new technologies and systems. It will assess existing filtration systems and consider how the installation process and activities such as maintenance, lifting, and cleaning affect performance. A technology demonstrator rig will be developed to test new filtration systems. New and enhanced features will be developed to mitigate such problems as the recirculation of pollutants, bacteria, and viruses. Other major factors include the compatibility of the filtration systems with the overall environmental control system, operational costs, and energy consumption. The effectiveness of current air distribution systems will be gauged through in-flight monitoring. 
New design strategies and technologies, such as personal controls, will be developed with the goal of maximizing the effectiveness of cabin ventilation. The study will also look at ways of making the distribution system more easily integrated with aircraft design. To assess and determine potential improvements to existing standards and performance specifications for the cabin environment, BRE will assess existing standards and determine potential improvements to them. Checks will be carried out to ensure the feasibility of the performance specifications and costs and to identify any environmental implications. New performance indexes and comfort criteria will also be defined, and BRE will develop a model to be tested. Key recommendations of the Council report were to establish surveillance and research programs to determine the effects of cabin air quality on aircraft occupants’ health and comfort. The following is a detailed description of these programs, as stated in the Council report, including long-standing questions regarding air quality, research objectives, and the program approach. How is the ozone concentration in the cabin environment affected by various factors (e.g., ambient concentrations, reaction with surfaces, the presence and effectiveness of catalytic converters), and what is the relationship between cabin ozone concentrations and health effects on cabin occupants? What is the effect of cabin pressure altitude on susceptible cabin occupants, including infants, pregnant women, and people with cardiovascular disease? Does the environmental control system (ECS) provide sufficient quantity and distribution of outside air to meet the FAA regulatory requirements, and to what extent is cabin ventilation associated with complaints from passengers and cabin crew? 
Can it be verified that infectious disease agents are transmitted primarily between people who are in close contact? Does recirculating cabin air increase cabin occupants’ risk of exposure? What is the toxicity of the constituents or degradation products of engine lubricating oils, hydraulic fluids, and de-icing fluids, and is there a relationship between exposures to them and reported health effects on cabin crew? How are these oils, fluids, and degradation products distributed from the engines into the ECS and throughout the cabin environment? What are the magnitudes of exposures to pesticides in aircraft cabins, and what is the relationship between the exposures and reported symptoms? What is the contribution of low relative humidity to the perception of dryness, and do other factors cause or contribute to the irritation associated with the dry cabin environment during flight? The objectives of the research program are to investigate possible associations between specific air quality characteristics and health effects or complaints; to evaluate the physical and chemical factors affecting specific air quality characteristics in aircraft cabins; to determine whether the Federal Aviation Regulations (FARs) for air quality are adequate to protect health and ensure the comfort of passengers and crew; and to determine exposure to selected contaminants (e.g., constituents of engine oils and hydraulic fluids, their degradation products, and pesticides) and establish their potential toxicity more fully. The research program approach is to use continuous monitoring data from the surveillance program when possible; monitor additional air quality characteristics on selected flights as necessary (e.g., integrated particulate-matter sampling to assess exposure to selected contaminants); identify and monitor “problem” aircraft and review maintenance and repair records to evaluate issues associated with air quality incidents; and collect selected health data (e.g., pulse-oximetry data to assess arterial oxygen saturation of passengers and crew). 
Conduct laboratory and other ground-based studies to characterize air distribution and circulation and contaminant generation, transport, and degradation in the cabin and the ECS. In addition to the individuals named above, Kevin Bailey, Jim Geibel, David Ireland, Bert Japikse, Stanley Kostyla, Edward Laughlin, Donna Leiss, and Maria Romero made key contributions to this report. American Society of Heating, Refrigerating and Air-Conditioning Engineers. Standard 62-2001, Ventilation for Acceptable Indoor Air Quality. Atlanta, GA: 2001. Barnas, Gary P. Altitude Sickness: Preventing Acute Mountain Sickness. Milwaukee, WI: Medical College of Wisconsin, June 4, 1997. http://healthlink.mcw.edu/article/907195877.html (accessed June 19, 2003). California Department of Health Services, Occupational Illness Among Flight Attendants Due to Aircraft Disinsection, California Department of Health Services, http://www.dhs.ca.gov/ohb/OHSEP/disinsection.pdf (accessed Nov. 10, 2003). Centers for Disease Control. Updated Interim Domestic Infection Control Guidance in the Health-Care and Community Setting for Patients with Suspected SARS, Centers for Disease Control, http://www.cdc.gov/ncidod/sars/infectioncontrol.htm (accessed May 13, 2003). Environmental Protection Agency. Air Pollution Technology Fact Sheet on High Efficiency Particulate and Ultra low Penetration Air Filters. Research Triangle Park, NC: July 15, 2003. Environmental Protection Agency. Indoor Air Facts Number 4 (revised): Sick Building Syndrome. Washington, D.C.: February 10, 2003. Federal Aviation Administration. Report to the Administrator on the National Research Council Report “The Airliner Cabin Environment and the Health of Passengers and Crew.” Washington, D.C.: February 6, 2002. Gratz, Norman G., Robert Steffen, William Cocksedge. “Why Aircraft Disinsection?” Bulletin of the World Health Organization 78 (8) (2000): 995-1004. Hocking, Martin B. 
“Indoor Air Quality: Recommendations Relevant to Aircraft Passenger Cabins.” American Industrial Hygiene Association Journal 59 (1998): 446-454. Hocking, Martin B. “Trends in Cabin Air Quality on Commercial Aircraft: Industry and Passenger Perspectives.” Reviews on Environmental Health 17, 1 (2002): 1-49. Maresh, Carl M., Lawrence E. Armstrong, Stavros A. Kavouras, George J. Allen, Douglas J. Casa, Michael Whittlesey, and Kent E. LaGrasse. “Physiological and Psychological Effects Associated with High Carbon Dioxide Levels in Healthy Men.” Aviation, Space, and Environmental Medicine 68, 1 (1997): 41-45. MedicineNet.com. Definitions of Bacteria, Virus, and Coronavirus, MedicineNet.com, http://www.medterms.com/script/main/art.asp?ArticleKey=13954 and http://www.medicinenet.com/script/main/art.asp?li=MNI&ArticleKey=5997&pf=3 (accessed May 12, 2003) and http://www.medterms.com/script/main/art.asp?ArticleKey=22789 (accessed October 14, 2003). Military Specification. MIL-E-5007D, General Specifications for Aircraft Turbojet and Turbofan Engines. 1973. Nagda, Niren L., Harry E. Rector, Zhidong Li, David R. Space. “Aircraft Cabin Air Quality: A Critical Review of Past Monitoring Studies,” Air Quality and Comfort in Airliner Cabins, ASTM STP 1393, N. L. Nagda, Ed., American Society for Testing and Materials (2000): 215-239. National Research Council. The Airliner Cabin Environment: Air Quality and Safety. National Academy Press. Washington, D.C.: 1986. National Research Council. The Airliner Cabin Environment and the Health of Passengers and Crew. National Academy Press. Washington, D.C.: Distributed electronically December 2001; bound report copyrighted 2002. Olsen, Sonja J. et al. “Transmission of Severe Acute Respiratory Syndrome on Aircraft.” The New England Journal of Medicine 349, 25 (2003): 2416-2422. Parliament of the Commonwealth of Australia. Rural and Regional Affairs and Transport References Committee. 
Australian Senate Air Safety and Cabin Air Quality in the BAe 146 Aircraft. Canberra: 2000. Rayman, Russell B. “Cabin Air Quality.” Aviation, Space and Environmental Medicine 73 (2002): 211-215. Society of Automotive Engineers. ARP 4418, Procedure for Sampling and Measurement of Engine Generated Contaminants in Bleed Air Supplies from Aircraft Engines Under Normal Operating Conditions. Warrendale, PA: 1995. Society of Automotive Engineers. ARP 1270, Aircraft Cabin Pressurization Control Criteria. Warrendale, PA: 2000. The House of Lords, Select Committee on Science and Technology. Air Travel and Health, 5th Report, HL Paper 121-I. Session 1999-2000. London: 2000. U.S. Department of Transportation, Office of the Inspector General. Further Delays in Implementing Occupational Safety and Health Standards for Flight Attendants Are Likely. AV-2001-102. Washington, D.C.: September 26, 2001. U.S. General Accounting Office. SARS Outbreak: Improvements to Public Health Capacity Are Needed for Responding to Bioterrorism and Emerging Infectious Diseases. GAO-03-769T. Washington, D.C.: May 7, 2003. U.S. General Accounting Office. Severe Acute Respiratory Syndrome: Established Infectious Disease Control Measures Helped Contain Spread, But a Large Scale Resurgence May Pose Challenges. GAO-03-1058T. Washington, D.C.: July 30, 2003. U.S. General Accounting Office. Aviation Safety: Advancements Being Pursued to Improve Airliner Cabin Safety and Health. GAO-04-33. Washington, D.C.: October 3, 2003. World Health Organization. International Travel and Health, World Health Organization, http://www.who.int/ith/chapter02_01.html (accessed July 24, 2003). World Health Organization. Tuberculosis and Air Travel: Guidelines for Prevention and Control, World Health Organization, http://www.who.int/gtb/publications/aircraft/ (accessed Oct. 1, 2003). Zitter, Jessica, Peter Mazonson, Dave Miller, Stephen Hulley, and John Balmes. 
“Aircraft Cabin Air Recirculation and Symptoms of the Common Cold.” Journal of the American Medical Association 288 (2002): 483-486.
Over the years, the traveling public, flight attendants, and the medical community have raised questions about how airliner cabin air quality contributes to health effects, such as upper respiratory infections. Interest in cabin air quality grew in 2003 when a small number of severe acute respiratory syndrome (SARS) infections may have occurred on board aircraft serving areas that were experiencing outbreaks of the disease. In 2001, a National Research Council report on airliner cabin air quality and associated health effects recommended that additional research be done on the potential health effects of cabin air. GAO reviewed what is known about the health effects of cabin air, the status of actions recommended in the 2001 National Research Council report, and whether available technologies should be required to improve cabin air quality. Despite a number of studies of the air contaminants that airline passengers and flight attendants are potentially exposed to, little is known about their associated health effects. Reports on airliner cabin air quality published by the National Research Council in 1986 and 2001 concluded that more research was needed to determine the nature and extent of health effects on passengers and cabin crew. Although significant improvements have been made to aircraft ventilation systems, cabin occupants are still exposed to allergens and infectious agents, airflow rates that are lower than those in buildings, and air pressures and humidity levels that are lower than those normally present at or near sea level. The 2001 National Research Council report on airliner cabin air quality made 10 recommendations, 9 of which directed the Federal Aviation Administration (FAA) to collect more data on the potential health effects of cabin air and to review the adequacy of its standards for cabin air quality. 
FAA has addressed these 9 recommendations to varying degrees as it attempts to balance the need for more research on cabin air with other research priorities (e.g., passenger safety). However, some in the aviation community, including some of the committee members who produced the report on cabin air, do not believe that FAA's planned actions will address these recommendations adequately. For example, most members were concerned that FAA's plan for implementing the report's key recommendations on the need for more comprehensive research on the health effects of cabin air was too limited. FAA plans to address these recommendations in two parts--the first part started in December 2003, and the second will start in December 2004 and end in late 2006 or early 2007. However, FAA lacks a comprehensive plan that includes key milestones and funding needs. In addition, most committee members thought that FAA's response to a recommendation for it to improve public access to information on the health risks of flying was inadequate. We also had difficulty accessing this information on FAA's Web site. Several technologies are available today that could improve cabin air quality (e.g., by increasing cabin humidity and pressure or absorbing more cabin odors and gases); however, opinions vary on whether FAA should require aircraft manufacturers and airlines to use these technologies. GAO found that one available technology, high-efficiency particulate air (HEPA) filtering, was strongly endorsed by cabin air quality and health experts as the best way to protect cabin occupants' health from viruses and bacteria in recirculated cabin air. While FAA does not require the use of these filters, GAO's survey of major U.S. air carriers found that 85 percent of large commercial airliners in their fleets that recirculate cabin air and carry more than 100 passengers already use these filters. 
However, the use of HEPA filters in smaller commercial aircraft that carry fewer than 100 passengers is much lower. Retrofitting these smaller aircraft to accept HEPA filters, if their use were made mandatory, could be expensive.
The number of persons in the United States over age 55 will grow substantially over the next two decades. Due to the aging of the baby boom generation, older persons are becoming an increasingly significant proportion of all persons and workers. The U.S. Census Bureau estimated that there were 61 million people over age 55 in 2002 and projects their numbers to grow to 103 million by 2025. This growth will increase the percentage of the population that is over 55 from 22 percent to 30 percent. This shift in population age will affect the composition of the labor force. The number of older workers in the United States is projected to grow substantially over the next two decades, and they will become an increasingly significant proportion of all workers. In June 2002, there were 19.2 million workers over age 55, and their numbers are projected to increase to 31.8 million by 2015. This growth is projected to increase the percentage of the workforce that is over 55 from 14 percent in 2002 to nearly 20 percent in 2015. The projected growth in the percentage of the labor force over 55 will occur among both men and women. This would reverse an earlier trend among older men, whose labor force participation declined from the 1950s until the mid-1990s. Since the mid-1990s, labor force participation among older men has been relatively constant at 67 percent for men age 55 to 64 and 17 percent for men age 65 and older. The Bureau of Labor Statistics (BLS) now projects these levels to rise to 69 percent and nearly 20 percent by 2015. The expected growth in labor force participation rates among older women would continue the current long-term trends of increases in their participation. For example, the percentage of women age 55 to 64 in the labor force has steadily increased since the mid-1980s, from 42 percent to 52 percent in 2000, while rates among women 65 and older have grown from 7 percent to 9 percent over the same period. BLS projects these numbers to increase to 61 percent and 10 percent by 2015. 
There are many factors that influence a person’s decision to work at older ages. One key factor is the financial incentives created by the rules regarding eligibility for benefits from the national pension system—Social Security in the United States. The decision to continue working is primarily related to the trade-off between earnings and leisure time. The availability of Social Security benefits allows workers to substitute non-labor income for their earnings and to enjoy more leisure. Depending on the eligibility rules and schedule of benefits, it can be more or less advantageous for workers to retire at an earlier age rather than to continue employment. The eligibility age for full Social Security benefits is currently 65 years and 8 months and rising, with reduced benefits available at age 62. Whether a person elects to start receiving benefits at age 62, 63, or 64, the total lifetime benefits received will be roughly equivalent. Even though delaying receipt of benefits for 1 year is on average “actuarially equivalent or neutral,” data from the mid-1990s show that most people (60 percent) elect to start benefits at age 62. These benefits can be reduced if beneficiaries have earnings above the income threshold while they are age 62-64. There are no earnings limitations on Social Security benefits after age 65. Another important retirement incentive is eligibility for employer-provided pension benefits. In the United States, about half of the labor force has some type of employer-provided pension coverage. Employer-provided pensions are customarily classified into two major categories: defined benefit and defined contribution plans. A defined benefit plan promises a retirement benefit amount that is usually expressed as an annual payment, derived from a formula based on a worker’s years of employment, earnings, or both. In the United States, benefits in defined benefit plans are insured by the Pension Benefit Guaranty Corporation (PBGC). 
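The early-claiming tradeoff described above can be illustrated with a simple break-even calculation. In this sketch, the annual benefit of 100 and the 20 percent reduction for claiming at age 62 are hypothetical assumptions for illustration, not figures from this report:

```python
# Hypothetical illustration of the early-versus-full claiming tradeoff.
# Assumptions (not from this report): a full annual benefit of 100 at
# age 65 and a 20 percent reduction (to 80) for claiming at age 62.
def cumulative_benefits(annual_benefit, claim_age, current_age):
    """Total benefits received from claim_age through current_age (no discounting)."""
    return annual_benefit * max(0, current_age - claim_age)

# Age at which waiting for the full benefit catches up with early claiming.
breakeven = next(age for age in range(66, 100)
                 if cumulative_benefits(100, 65, age) >= cumulative_benefits(80, 62, age))
print(breakeven)  # 77: beyond this age, waiting yields higher lifetime totals
```

Under these assumed numbers, a worker who expects to live well past the break-even age gains from delaying, which is the sense in which the choice is roughly actuarially neutral on average.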
Under a defined contribution plan, the retirement benefit is expressed as an account balance for the individual employee. This balance results from contributions that the employer, the worker, or both make, as well as from subsequent investment returns on the assets in the account. Under a defined contribution plan, retirement benefits are not guaranteed by the PBGC, and employees bear the risks of investment. Defined benefit and defined contribution plans provide workers with different incentives for either retiring or continuing to work. Defined benefit plans often provide incentives for early retirement because they often do not increase retirement benefits in line with additional years of work with the firm after the early retirement age. Under defined contribution plans, benefits can continue to increase with continued contributions and positive rates of return on assets. Since workers’ accounts increase in size in proportion to the amounts contributed by them or by their employers, these plans do not create incentives to retire based on the benefit formula. In the past, a greater percentage of pension-plan participants were covered by defined benefit plans. In 1998, according to the Employee Benefits Research Institute, 20 percent of households had defined benefit coverage only; 57 percent had defined contribution coverage only; and 23 percent had both types of coverage. Health status and occupation are other important factors that influence the decision to work at older ages. As people age, they tend to encounter more health problems that make it more difficult to continue working. Thus, jobs that are physically demanding, usually found in the blue-collar and service sectors of the economy, can be difficult for many people to perform at older ages. Moreover, health status and occupation are often interrelated since health can be affected by work environment. 
Blue-collar and service workers, such as construction workers and janitors, often face physically demanding work environments that can lead to health impairments, which in turn limit their ability to work to older ages. Although this group continues to face problems, there is evidence that the health of older persons generally is improving. This suggests that, compared with previous generations, today’s older age population has an increased capacity to work to older ages. Although the Age Discrimination in Employment Act (ADEA) protects workers in the United States age 40 and older from employment discrimination, labor force participation is not solely an older worker’s decision; there must also be a demand for older workers’ labor. Employers’ perceptions of older people may form barriers to older workers’ retaining their current jobs, finding new jobs if they are laid off, or re-entering the labor force after retiring if their retirement income is inadequate. For example, some employers believe that older workers have lower productivity than younger workers, generate higher costs for employee benefits such as health care and pensions, and represent higher costs for recruitment and training since they have less potential time to recoup these up-front costs compared with younger workers. Encountering these obstacles could discourage older workers and influence their decision to retire. The labor force decisions of older persons are also influenced by the availability of alternative employment arrangements. In the United States, there has been interest among older workers who wish to work longer in seeking employment arrangements that result in “phased retirement” or “bridge employment.” Phased retirement usually refers to staying with a career job on a part-time or part-year schedule while phasing out employment over a number of years to complete retirement. 
Bridge employment usually refers to leaving a career job and moving to part-time work with another firm in the same or a different industry, prior to complete retirement. In the United States, nearly half of all workers age 55 to 65 take a bridge job before completely retiring. Older Americans receive income through a variety of sources, with the Social Security program constituting the largest share for most persons. In 2000, 90 percent of households with a person age 65 or older received Social Security benefits. These benefits constitute more than 50 percent of total income for 64 percent of these households. Social Security benefits, on average, replace about 40 percent of a program-covered individual’s pre-retirement income, if benefits are taken at age 62. Other major sources of income for older Americans are asset income (received by 59 percent of households), retirement benefits other than Social Security (41 percent), and earnings (22 percent). Social Security represents 41 percent of aggregate income, earnings represent 23 percent, retirement benefits other than Social Security represent 18 percent, and asset income represents 17.5 percent. In the United States, the Disability Insurance (DI) program compensates individuals for reduced earnings when they have worked long enough and recently enough to become insured and have lost their ability to work because of a severe, long-term disability. DI provides benefits to persons who are not able to perform substantial gainful activity due to a physical or mental impairment. DI is not a major source of income for most older persons in the United States. In 2000, 7 percent of the population age 50-59 received DI benefits. Recent and projected trends in older workers’ labor force participation and population aging will be less pronounced in the United States than in most other high-income nations, but the aging of the population will nevertheless pose a challenge to U.S. retirement income programs. 
ILO data for 2000 show that the labor force participation rates for older U.S. workers, though not as high as in previous decades, will be higher than in most other high-income nations. It is expected that, because of higher fertility and immigration rates, the U.S. population will also age more slowly than the populations of other high-income nations. However, even though the population of the United States is not aging as rapidly as those of other countries, the old-age dependency ratio—the number of people over the age of 60 for every 100 working-age people (ages 15-59)—is projected to rise from 19 in 2000 to 35 in 2050. This near doubling of the old-age dependency ratio will strain the resources of programs that pay for retirement. Even though the labor force participation of workers age 50 to 64 is expected to decline in most high-income nations, including the United States, between 2000 and 2010 (see fig. 1), the United States has and will continue to have higher rates of labor force participation for older workers than most other high-income nations. In some high-income nations, such as France, Germany, and Italy, only about 2 to 4 percent of persons age 65 and older participated in the labor force in 2000. In contrast, the labor force participation rate in 2000 among U.S. workers age 65 and over was 10 percent (see fig. 2), the second highest labor force participation rate among key high-income nations and 1.4 percentage points higher than the aggregate for all 23 nations the World Bank has designated as high-income. Labor force participation among U.S. workers age 50-64 was 66 percent (see fig. 1). This trails only Sweden’s (79 percent) and Japan’s (73 percent) rates for this age group. The relatively high rate of labor force participation by older U.S. workers is being sustained by an increasing percentage of older women working. 
In the United States, as in other high-income nations, labor force participation among older men has declined since 1950 and, for the most part, is projected to continue declining through 2010 (see fig. 3). During that same period, however, labor force participation among older women is projected generally to rise (see fig. 4). In the United States, labor force participation among women age 50-64 will nearly double from 1950 to 2010, increasing from 31 percent to 58 percent. The size of the baby boom generation, rising life expectancy, and declining fertility are expected to contribute to a rising median age in high-income nations. Because the baby boom generation is large in number, a growing proportion of the populations in high-income nations will be over 60. In the United States, for example, this will be the case for about a quarter of the population. Moreover, as this generation has grown older, life expectancy has increased in all high-income nations. From 1955 to 2000, life expectancy in the United States increased from 70 to 77 years and is projected to increase to 80 by 2040. As a result of these trends, the median age of the U.S. population, like that of other high-income nations, is projected to steadily increase in the coming decades, but it will still be lower than that of most high-income nations. Specifically, the median age of the U.S. population in 2030 is expected to be comparable to the current median ages in some high-income nations. For example, the median age of the U.S. population rose from 30 to 36 years from 1980 to 2000 and is projected to increase to 40 in 2030 (see fig. 5). In contrast, the median age of the populations of high-income countries was 38 years in 2000 and is projected to rise to 45 in 2030. Germany, Italy, Japan, and Sweden have the current and projected oldest populations, with median ages ranging from 40 to 41 years in 2000 and projected increases to 51 to 54 years in 2050. 
Two factors will slow the trend toward an older population in the United States compared with most other OECD nations: fertility and immigration rates. Although fertility rates in high-income nations have declined overall since 1980, during the same time they have increased from 1.8 to 2.0 in the United States. The United States also has an immigration rate more than four times as high as those of Sweden and Japan, almost three times as high as that of the United Kingdom, and higher than those of most high-income nations. The consequences of these demographic trends are most evident in the elderly dependency ratio. In most high-income nations, this ratio has been rising throughout the last 50 years and is projected to grow at a faster rate in the next half century (see fig. 6). The ratio in the United States is relatively low compared with other high-income nations. For every 100 people of working age (15 to 59) in the United States, approximately 19 people were in or nearing retirement age (60 or above) in 2000, compared with a ratio of 22 for the aggregate of 23 nations the World Bank has designated as high-income. This difference is projected to grow. By 2050, this ratio for other high-income countries is projected to be 47, in comparison with 35 for the United States. Even though the U.S. ratio will be smaller than that of other high-income nations in 2050, it represents an increase of over 75 percent from the 2000 ratio. The recently enacted retirement policy reforms in Japan, Sweden, and the United Kingdom are expected to lead to higher labor force participation of older workers. Reforms adjusting benefits in the national pension systems of each of these nations provide incentives for older workers to extend their working lives. National and employer-provided pension reforms that introduce defined contribution features that do not link benefits to a specific age are also expected to encourage greater labor force participation of older workers. 
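The dependency-ratio growth cited above can be checked with simple arithmetic; a minimal sketch using the figures from the text:

```python
# Old-age dependency ratio: people age 60 and over per 100 working-age
# people (ages 15-59). Figures below are those cited in the text.
us_2000, us_2050 = 19, 35
high_income_2000, high_income_2050 = 22, 47

def percent_increase(start, end):
    """Percentage growth from start to end."""
    return (end - start) / start * 100

print(round(percent_increase(us_2000, us_2050)))                    # 84 -- "over 75 percent"
print(round(percent_increase(high_income_2000, high_income_2050)))  # 114
```

The calculation confirms that the U.S. ratio nearly doubles even while remaining well below the aggregate for other high-income nations.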
Other reforms that seek to limit the use of disability benefits as a route to early retirement will also influence older workers’ labor force participation. Acknowledging that improving the employment opportunities of older workers is an important consideration, each of these nations is studying or has enacted reforms that address the issues of older workers’ employment more generally. Such reforms include loosening or eliminating mandatory retirement age standards, encouraging the elimination of age discrimination in employment, improving older worker training, providing employment earnings incentives, and exploring quality-of-work-life issues such as the flexibility of work arrangements. Reforms in the United Kingdom, Japan, and Sweden that increase the age at which workers are eligible for benefits or allow flexibility in when and how pension benefits can be taken are some of the policy changes that may encourage older workers to stay in the workforce. The United Kingdom will phase in an increase in the age at which women become eligible for national pension benefits, so that, beginning in 2020, men and women will no longer be able to draw benefits before age 65. Japan has also enacted reforms that will gradually increase the full eligibility age for its earnings-based national pension system. In Japan, by 2025 for men and 2030 for women, the earliest age at which this pension can be claimed will have risen from 60 to 65. Rather than increasing the age for benefit eligibility, pension reforms in Sweden allow older workers to take a full or partial national pension (i.e., one-fourth, one-half, or three-fourths of a full pension) at age 61 or later, with no upper age limit, and continue working. This flexibility may make it easier to retire gradually with a mix of pension benefits and earnings. 
Additional pension reforms that change benefit calculations so they reward continued work or discourage early retirement may also promote continued labor force participation by older workers. Sweden changed its benefit calculation to reward those who work longer. Under the new pension system in Sweden, pensions are based on lifetime earnings, instead of the highest 15 out of 30 years of earnings as they were under the old system. The United Kingdom adjusted its benefit calculation formula to increase the reward for those who defer drawing benefits from the national pension system. For example, by 2010, individuals who defer drawing their pension benefits will receive benefits that are 10.4 percent, rather than 7.5 percent, larger for each year deferred. In Japan, reforms have changed how pensions are calculated, reducing the level of benefits for future retirees through lower accrual rates. The expected effect of these changes is a 20-percent reduction in lifetime benefits by 2020, thereby making early retirement less affordable. Finally, reforms in Sweden and the United Kingdom, in changing how pension benefits are indexed, may discourage early retirement. The new pension system in Sweden indexes pension benefits to life expectancy. With increasing life expectancy, different generations of individuals with similar work and earnings histories will have to work longer to maintain a comparable standard of living in retirement. This benefit adjustment provides incentives for increased labor force participation by requiring individuals to bear the cost of increased life expectancy, either through additional work or lower benefits. The United Kingdom also revised the index it used to adjust benefits in the portion of its pension that provides flat-rate benefits. Prior to the reform, the United Kingdom adjusted benefits using either the higher of increases in average prices or average wages as an index. Now the United Kingdom uses only average price increases. 
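The U.K. deferral increments described above can be sketched numerically. In this sketch, the base benefit of 100 is hypothetical, and reading the increment as a simple additive percentage per deferred year is an assumption for illustration:

```python
# Sketch of the U.K. deferral reward described in the text: benefits grow
# 7.5 percent per deferred year pre-reform and 10.4 percent from 2010.
# The base benefit of 100 is hypothetical, for illustration only.
def deferred_benefit(base_benefit, years_deferred, rate_percent):
    """Benefit after deferral, assuming a simple additive per-year increment."""
    return base_benefit * (1 + rate_percent / 100 * years_deferred)

print(round(deferred_benefit(100.0, 3, 7.5), 1))   # 122.5 under the pre-reform rate
print(round(deferred_benefit(100.0, 3, 10.4), 1))  # 131.2 under the post-2010 rate
```

Under these assumptions, three years of deferral yields a benefit nearly 9 percentage points larger under the reformed rate, which is the added reward for continued work that the reform intends.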
Since prices tend to increase more slowly than wages, this reform has effectively reduced benefits relative to earnings. Each of the nations we studied implemented reforms that included defined contribution features in their national and employer-provided pension systems, although this shift was more pronounced in Sweden and the United Kingdom than in Japan. Defined contribution pensions are more retirement-age neutral than traditional defined benefit pension plans. As part of its recent national pension reform, Sweden instituted a pay-as-you-go pension system with defined contribution features, including, among other things, a fixed contribution rate and notional individual accounts (the “notional defined contribution pension”). The new Swedish pension system also includes a smaller, funded defined contribution plan with an account for each individual worker (the premium pension). Reforms introduced in the United Kingdom in 1988 and 2001 permitted individuals to opt out of part of the national pension plan by participating either in employer-sponsored defined contribution plans or in defined contribution individual pension plans called “personal pensions.” To participate in the individual plans, workers obtain an account from a financial institution and make contributions into their account or are provided access to a pension by their employer. Japan implemented legislation permitting employer-provided and personal defined contribution pension plans in 2002. In Sweden and the United Kingdom, the inclusion of defined contribution features in the national pension system has prompted complementary changes among employer-provided pensions. In Sweden, three of the four major employer-provided pension plans converted from defined benefit plans to pure defined contribution plans or plans with a mix of both features following the national pension reform. 
In the United Kingdom, many employers have closed their defined benefit plans to new workers and replaced them with defined contribution plans. For Japan, where defined contribution pensions were only recently introduced, there is currently little information on the number of individual or employer-provided plans being formed or on the degree to which employers are substituting defined contribution plans for existing defined benefit plans. The inclusion of defined contribution features in national and employer-provided pension systems is expected to encourage greater labor force participation of older workers. Because workers will have greater responsibility for funding their retirement through their contributions and the returns earned on them, it will be in their best interest to make contributions for as long as they can. In addition, because defined contribution plans often have greater portability than defined benefit plans, older workers may have a greater ability to shift to jobs that suit their leisure and health needs rather than retiring. Both Sweden and the United Kingdom, where disability insurance has traditionally been an avenue to early withdrawal from the labor force, have introduced reforms in recent years that will tighten eligibility for disability benefits. In efforts to reduce the amount of early retirement financed through disability pensions, throughout the last decade Sweden has implemented successive reforms to tighten the eligibility requirements for disability insurance. These have included eliminating the ability of older workers to take a disability pension solely on the basis of long-term unemployment or a combination of unemployment and medical reasons. Medical reasons now provide the only valid basis for granting a disability pension in Sweden. As part of its efforts, the United Kingdom has, since the mid-1990s, tightened eligibility requirements, reduced paid benefits, and provided more support for returning to the workforce after an absence. 
For example, the government now reviews claims of incapacity to work every 3 years, compared with the previous policy of not reviewing claims after the initial application; reduces or offsets disability benefits if the recipient also receives an employer-provided pension over a certain minimum level; provides services such as job search assistance to the disabled as a way to enable their return to work; and will test a policy allowing recipients to keep a portion of their wages if they return to work. Each nation we studied has enacted, or is considering, policies that address barriers to older workers’ continued employment, such as mandatory retirement and age discrimination. In conjunction with its national pension reform, Sweden has already passed legislation giving employees the right to remain in employment until the age of 67, prohibiting the widespread practice of collective bargaining agreements prescribing mandatory retirement at age 65. As members of the European Union, both Sweden and the United Kingdom must legislatively prohibit employment discrimination based on age by 2006. It is unknown how the European Union requirement will affect mandatory retirement ages in specific industries or occupations in the United Kingdom. In the absence of legislation, both the United Kingdom and Japan have encouraged employers to voluntarily end age discrimination. The United Kingdom, for example, has publicized the benefits of an age-diverse workforce and issued best practices for eliminating age discrimination. Like the United Kingdom, Japan has also encouraged firms to voluntarily modify employment practices and retirement policies. The government has programs that subsidize the wages of workers who take jobs at reduced pay after mandatory retirement and that subsidize companies that modify their employment practices to accommodate older workers. 
Each of the nations we studied has also made some efforts to provide older workers with access to training, job search assistance, and workplace flexibility. In the United Kingdom, for example, one government program provides job search assistance for people age 50 and older who have been out of work 6 months or longer and also offers training opportunities and a wage enhancement. As part of its efforts, Japan has employment assistance centers (called “Silver Human Resource Centers”) that provide older workers temporary jobs or volunteer opportunities. The Japanese government has also promoted a program to match older workers with suitable employers. In Sweden, efforts include the creation of a commission to explore policies to promote increased flexibility in working arrangements, such as granting older people a legal right to work part-time, and adjusting the public financing of education to promote skill development among older workers. The experiences of other nations suggest that the scope and comprehensiveness of reforms, the transparency and availability of information, and the strength of the economy play important roles in encouraging labor force participation by older workers. According to government officials in Japan, Sweden, and the United Kingdom, reforms have a better chance of succeeding if they are comprehensive and complementary. In addition, they said that education and transparent information are important for helping workers understand what the reforms will mean for their retirement income. Officials also agreed that a strong economy was important for success. Officials from each of the nations we studied said that the success of national pension reform, including those elements that influence older workers’ labor force participation, depends, in part, on the scope and purpose of the reforms. Officials from all three nations noted that reforms are most successful when they are comprehensive in scope. 
Both Sweden and the United Kingdom, for example, in reforming their pension systems also made changes to both their disability insurance programs and labor market policies. Some officials also stressed that reforms should be designed so that the intent of a particular reform is not thwarted by countervailing policies in other areas. For example, Swedish pension experts and other officials have acknowledged that the continued presence of mandatory retirement ages in collective bargaining agreements and labor regulations can work at cross-purposes with features in the new national pension system that now relate benefit levels to retiree life expectancy and that essentially have no upper retirement age. They noted that to increase the effectiveness of the work incentives in their national pension reforms, these impediments will have to be removed and complementary policies established that foster alternative work arrangements and address quality-of-work-life issues generally. Other nations also acknowledged the importance of complementary reforms. Japan has supplemented its national pension reforms with wage subsidies to encourage older employees to continue to work. Japan and the U.K. also support their national pension reforms by committing additional resources to organizations and services that provide job search assistance to older workers. Officials in each nation that we studied emphasized that access to information and public education about how the reforms will affect retirement income would also be needed if the reforms were to have their intended effect. There is concern in these nations that many workers are currently unaware of the implications of the reforms. For example, surveys conducted by the Swedish government and advocates for senior citizens indicate that many individuals do not yet have a detailed understanding of the new pension system. U.K. 
government officials expressed concern that their citizens could have similar difficulties understanding the implemented reforms. To help their citizens understand that they may need to work longer or save more in order to ensure an adequate retirement income, each of the nations we studied has taken steps to educate workers. In Sweden, the government has launched several large information campaigns since the new pension system’s implementation. In addition, participants receive annual statements of their account balances in both the notional defined contribution (NDC) and premium pensions. To help educate its workers, the U.K. government has created a pension forecast tool that will present workers with estimates of pension income from both government and nongovernment sources. In Japan, because defined contribution pensions are very new and offer both advantages and disadvantages to participants, employers are required to provide information to employees about defined contribution plan features and management. In addition to the importance of information and education, government officials and pension experts agreed that a strong national economy is necessary for the success of pension and labor market reforms that may contribute to higher labor force participation by older workers. A strong economy eases the implementation of pension reform by offering increased employment opportunities for older workers. High unemployment and low economic growth would limit older workers’ ability to remain employed, forcing them into complete retirement. Experts we spoke with believe that the low growth of the Japanese economy during the last decade has been a factor limiting the scope of pension and labor market reform, for example, in the area of mandatory retirement ages. Fiscal constraints also preclude more fundamental reform of pension system financing and structure. In contrast, the currently strong U.K. 
economy acts as an incentive for employers to retain their older workers, and there will likely be an increased need for older workers in the long term, particularly as the workforce ages between now and 2020. The current tight labor market also makes it easier for job search assistance programs to find jobs for clients. In many nations, despite their numerous differences, increasing the labor force participation of older workers is an element of the policies chosen to reform national pension systems. Officials from each of the three nations we studied emphasized that this issue should be considered in their own nation’s reform efforts. Encouraging workers to stay in the labor force longer can help alleviate the fiscal and budgetary stress induced by rising national pension expenditures and can potentially enhance economic growth. In those nations where national pension reform has included benefit reductions, working longer can also enable older persons to avoid serious reductions in their standard of living in retirement. An important result from the experience of other nations is that to effectively foster greater labor force participation among older workers, reform components should be comprehensive in design so that employment incentives operate in a mutually reinforcing manner. Thus, in the nations we studied, changes in the national pension system were often matched by complementary initiatives affecting employer-provided pensions and the operation of the national labor market. Officials from each of the three nations we studied also identified other critical labor market policies that needed to be harmonized with these changes, particularly regarding age discrimination in employment and mandatory retirement ages. Such comprehensive reforms can face formidable challenges to their design and implementation. 
In Sweden, for example, where prospects may be more favorable because the national pension system accounts for a large proportion of retirement income, comprehensive reform remains a work in progress with continuing discussion and debate. Nevertheless, the returns from a comprehensive approach could far outweigh the risks of failure. The reform policies chosen by other nations should be evaluated within the context of their societies and institutions, however. For example, benefit payments from the national pension system in Sweden currently replace a much larger percentage of pre-retirement income than in the United States. Therefore, benefit reductions in these nations will have significantly different effects on retirement income than similar actions taken in the United States. In addition, reforms that more closely link benefits to life expectancy, such as those implemented in Sweden, could have significantly different distributional effects in the United States. For example, American subpopulations with lower average life expectancies, such as African Americans, would be more adversely affected by this policy change since they would collect benefits for shorter time periods relative to other racial groups. African American men have shorter life expectancies at birth and at age 65 compared to males of other ethnicities. Finally, the focus on extending the labor force participation of older workers has also led to a reconsideration of the traditional definition of retirement, in which a person is considered either working or retired, in favor of one that is more flexible or continuous in nature. The long-term trend of improved health and longevity of older persons throughout the high-income nations now permits a range of options beyond the traditional career employment/out-of-the-workforce retirement tradeoff. 
Acknowledging this development, experts and officials from the nations we studied noted the importance of quality of work-life issues, including the wider use and availability of part-time employment and other alternative employment arrangements, and fostering “lifelong learning,” as other key components in a long-term strategy to extend the labor force participation of older workers. In some ways, the United States has already forged ahead in this area, through its prohibition of age discrimination in employment, its broad elimination of mandatory retirement ages, and its public discussion of bridge employment, phased retirement, and other alternative employment arrangements. However, opportunities exist to do more. In recent work, we found that few U.S. employers have focused on making such options available to their older employees on any widespread basis, and numerous economic and regulatory obstacles remain that can discourage the employment of older workers. This led us to recommend to the Secretary of Labor that an interagency task force be established to develop legislative and regulatory proposals addressing the issues raised by the aging of the labor force. The challenge of how to extend the work-lives of older employees, given the new demographic realities of the 21st century, presents real opportunities, not only to bolster economic growth but to help secure retirement income adequacy for millions of working Americans and their families. We provided copies of this report to the Secretary of Labor and the Commissioner of Social Security. They provided technical comments, which have been incorporated where appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to appropriate congressional committees and other interested parties. Copies will also be made available to others upon request. 
In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215, Charles Jeszeck at (202) 512-7036, or Jeff Petersen at (415) 904-2175, if you have any questions about this report. Other major contributors to this report are listed in appendix V. To determine how the United States fares internationally regarding current and projected trends in key demographic and labor force characteristics, we compiled and analyzed data from the United States and seven other high-income Organisation for Economic Cooperation and Development (OECD) nations. In conducting this comparison, we examined the fertility rate, median age of the population, life expectancy at birth, old-age dependency ratio, population by sex and age group, labor force participation rate by sex and age group, and the unemployment rate by sex and age group for each of the eight nations. We also compared the United States to an aggregate for these characteristics that we constructed for a broader group of 23 OECD nations. In order to identify the dynamics of labor force participation rates of older workers, particularly with regard to the key incentives that influence work/retirement decisions, and to identify those nations that might be most appropriate to illustrate the role of extending the labor force participation in national pension system reform, we reviewed literature and interviewed experts. We conducted an extensive review of the international retirement literature, literature on the impact of aging societies, and previous analyses of conditions in other countries. 
Our review included research from organizations such as the OECD, the International Labour Organization (ILO), the Center for Strategic and International Studies (CSIS), and the World Bank, as well as government agencies such as Japan’s Ministry of Health, Labor, and Welfare; Sweden’s Ministry of Industry, Employment and Communications; and the United Kingdom’s Department for Work and Pensions. We also consulted with experts from many of these organizations, including the CSIS, the OECD, and the World Bank, as well as the U.S. Departments of Labor and the Treasury and the U.S. Social Security Administration. In addition, we conferred with individual experts affiliated with major retirement research centers at universities or other research institutes such as the National Bureau of Economic Research, the Urban Institute, and the Brookings Institution, both for background information on national pension policy and for their recommendations on which countries to use as case studies. On the basis of these interviews and our research review, we selected three nations for intensive study that had high rates of labor force participation for older workers and had enacted pension reforms within the last decade: Japan, Sweden, and the United Kingdom. We chose Japan for in-depth study for its extremely high labor force participation rates of older workers, because it already has a large aged population that will continue to grow significantly in the near future, and because it has taken steps over the last decade to address the consequences of an aged society through the reform of its pension system and labor market policies. We selected Sweden for its high rates of labor force participation of older workers, its substantive reform of the national pension system, and its tradition of extensive labor market and social welfare policies. 
We selected the United Kingdom for its national and employer-provided pension reforms that reversed a previous trend of rising national pension expenditures (as a percent of gross domestic product) and for its active labor market policies regarding older workers. We performed a focused examination of these three nations’ pension systems and labor market policies through site visits to each country. During our on-site study of each nation, we met with key government officials concerning pension and labor market policy, representatives from employer organizations and labor unions, advocacy groups, and well-known scholars whose research has direct relevance to understanding pension systems and older worker behaviors. In addition to speaking with us, many interviewees provided relevant written materials and statistics for our use. Retirement income for the majority of Japanese is mainly derived from the national pension system. Income from work constitutes the next largest share of retirees’ income. Employer-provided pensions, or the earnings derived from lump-sum retirement benefits, are a small portion of this income. Disability pensions are used by a very small percentage of the working-age population. The Japanese national old-age pension system consists of two tiers, both of which are financed on a pay-as-you-go basis. The first is the Old-Age Basic Pension, which covers all workers and their dependents. The premium for the basic pension is paid by the self-employed directly and by employees through their employers. The second tier is the Old-Age Employees Pension, which covers salaried workers and their dependents, about 70 percent of the workforce. The premium for the employees’ pension is paid equally by the employee and employer. The Basic Pension is a relatively small pension financed primarily by a fixed premium paid either directly, if self-employed, or through one’s employer; one-third of its financing comes from general revenues. 
Current premiums are about $111 per month. Premiums are projected to almost double by 2020. Benefit amounts are determined based on the number of months of contributions, with 25 years of contributions required for full eligibility. The average monthly benefit in 2001 was about $417. Full pensions begin at age 65 and are not offset by other income. Pensions may be received as early as age 60 at a reduced rate. The Employees Pension is an earnings-based pension financed by premiums paid equally by the employer and employee. The total premium rate is currently 17.35 percent of payroll and is projected to rise to 27.35 percent by 2020. Benefits are calculated based on the year of birth, the number of months in the system, and average monthly earnings. There are two major components of the employees pension. The first is the flat-rate portion, in which a unit amount determined by year of birth is multiplied by the number of months of contributions. The second component is calculated using earnings and months of contributions. The growth rate of benefits for each additional year of work for this portion is determined by year of birth. The average monthly employees pension was about $1,467 in 2001. Full pensions can currently be received from age 61; eligibility ages are scheduled to gradually rise to age 65 by 2030. Benefits are reduced until age 70 if earnings while receiving this pension exceed a certain level. Employer-provided pensions are primarily defined benefit in nature and may be received as (a) lump sums, (b) annuities, or (c) a combination of the two. The smaller the company, the more likely it is to use the lump-sum option. Benefits are available at mandatory retirement, usually age 60. About 90 percent of companies offer some type of retirement benefit, with about one-third of full-time employees receiving benefits through employee pension funds (EPF) and nearly the same level through what are known as tax-qualified pension plans (TQPP). 
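The two-component structure of the Employees Pension benefit described above (a flat-rate portion tied to months of contributions plus an earnings-related portion) can be illustrated with a simple sketch. The structure follows the description in this appendix, but the unit amount and accrual multiplier below are hypothetical placeholders, not actual Japanese pension parameters.

```python
# Illustrative sketch of the two-component Employees Pension benefit.
# The flat_rate_unit and earnings_multiplier values are hypothetical;
# in the actual system both vary by the worker's year of birth.

def employees_pension_monthly(months_contributed, avg_monthly_earnings,
                              flat_rate_unit, earnings_multiplier):
    """Return an illustrative monthly benefit.

    flat_rate_unit: amount credited per month of contributions
        (determined by year of birth).
    earnings_multiplier: accrual rate applied to average monthly
        earnings per month of contributions (also determined by
        year of birth).
    """
    flat_portion = flat_rate_unit * months_contributed
    earnings_portion = (earnings_multiplier * avg_monthly_earnings
                        * months_contributed)
    return flat_portion + earnings_portion

# Example with hypothetical parameters: 35 years (420 months) of
# contributions and average monthly earnings of 400,000 yen.
benefit = employees_pension_monthly(420, 400_000, 300, 0.0005)
print(round(benefit))  # flat 126,000 + earnings 84,000 = 210,000
```

The sketch makes the policy point in the text concrete: the flat-rate portion rewards only months of participation, while the earnings-related portion scales with lifetime wages, so two workers with identical careers but different earnings receive different totals.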
In 2001, two laws were enacted that affect employer-provided pensions. The first, the Defined Benefits Pension Law, affects EPF and TQPP. The second, the Defined Contribution Pension Plan Law, allows the creation of new pension plans for employees and individuals. EPF are responsible not only for the occupational pension, but also for a carve-out or “substitution” portion of the national old-age employees pension. EPF offer the remuneration portion of the old age employees’ pension and provide added benefits. According to the Pension Fund Association, these benefits should be 30 percent or more of the substitution portion of the employees’ pension. About 43 percent of employees in EPF take their benefits in the form of a lump-sum payment. TQPP participants receive pension benefits that are contracted between companies and financial institutions like banks or life insurance companies. This arrangement is more likely to be used by smaller companies, and benefits are likely to take the form of lump-sum payments or fixed-term annuities. The Defined Benefits Pension Law requires that no new TQPP be created and that all existing TQPP be transferred into new contract- or fund-type pension arrangements. New defined contribution pensions were recently introduced in Japan for corporations and individuals. Contributions are managed by the beneficiaries themselves, and pension benefits are paid as old-age benefits, disability benefits, or lump-sum death benefits. Disability insurance pensions are relatively rare in Japan. Like the old-age pension system, the disability pension has two tiers. Slightly more than 1 percent of the Japanese population receives one of these pensions. The Disability Basic Pension has benefits similar to those of the Old-Age Basic Pension, with additional benefits for those with dependent children. Approximately one million disabled individuals aged 20-64 receive this pension. 
The Disability Employees Pension benefits, like the Old-Age Employees’ Pension benefits, are based on earnings and months of contributions, with additional benefits when a spouse is present. Fewer than 200,000 disabled employees aged 20-64 received this pension in 2001. Concern about the balance between benefits, retirement ages, and contribution levels in the old-age pension system has been growing over the last two decades. Proposals to raise the normal retirement age first surfaced in 1980 and were finally agreed to for the flat-rate portion of the employees’ old age pension in 1994 and the earnings-based portion in 1999. These eligibility age increases will be phased in over several decades. The pension system undergoes actuarial reevaluation at least every 5 years to balance premiums and benefits with existing socioeconomic conditions. The Japanese have used Pension Councils coordinated by the Ministry of Health and Welfare to facilitate pension reforms. Pension Councils are composed of representatives of employers’ groups and labor unions, as well as academic researchers and government officials. While the Ministry of Health and Welfare undertook the role of gathering and supplying information, employer organizations and labor unions participated in the debate as official members of the Pension Council and by publicizing their views concerning pension reform through editorials. The normal retirement age for the flat-rate portion of the employee pension will rise to 65 in 2013 for men and in 2018 for women. Eligibility ages for the earnings-based portion of the employee pension will rise to 65 in 2025 for men and in 2030 for women. The premiums for the basic (fixed) and employees’ (earnings-based) systems are revised at least every 5 years based on the projected health of the systems. Current premiums on the earnings-related pension total 17.35 percent of payroll. Both premium rates are expected to grow in the future. 
The base of earnings covered by premiums to the employee pension was expanded to include income from bonuses, but the total percentage of earnings that will be needed to support the system has been reduced. Employee pension benefit levels for those born prior to April 1, 1941, are higher than for those born after that date. This is true for both the flat-rate and the earnings-related portions of the pension. Pension benefit levels were reduced by about 5 percent in 2000. It has been estimated that lifetime employee pension benefits will be reduced by about 20 percent by 2020. Outside of the Pension Council process, changes were made to employer-provided pensions through the passage of the Defined Benefits Pension Law and the Defined Contribution Pension Plan Law. Enactment of the new Defined Benefits Corporate Pension Law requires that existing TQPP be abolished by 2012 and that their funds be transferred either to EPF or to new contract- or fund-type corporate pensions. The primary purpose is to improve protection of beneficiaries’ pensions. The new Defined Contribution Pension Plan Law was designed to help those unable to participate in a defined benefit pension system through the creation of corporate and private defined contribution pensions. The target group for this pension is the self-employed and employees of small companies. Portability of pension benefits was also a factor in the creation of the Defined Contribution Pension Plan Law. The Japanese labor market has a number of features and practices that affect the labor force participation of older workers. These practices may enhance labor force participation up to a certain age but decrease it after that point. During the period of high economic growth in the 1960s, companies instituted long-term or lifetime employment policies that effectively guaranteed employment until a worker reached mandatory retirement age. 
These policies offered companies a way to ensure that their investment in training achieved a positive return and provided workers job security. This policy is limited in that it applies only to full-time workers, who are predominantly men. Promotions and wages are highly related to seniority in Japanese companies. Under such a system, wages rise as tenure with a firm increases. This is in part to reflect the greater knowledge and experience gained with longer tenure, but also to reflect the rise in cost of living as employees age. Typically, the wage of a male employee rises until about age 50-55, after which it falls sharply. For many workers at large companies, wages are 30-50 percent lower at age 65 than at age 55. This reduction makes older workers more attractive to employers by making them cost-competitive with younger workers. Training in Japanese companies usually takes the form of on-the-job training. The training is usually highly company-specific and is closely related to the seniority system. Training opportunities decrease once an employee reaches a high level of seniority. There is no law in Japan that prohibits discrimination on the basis of age. However, recent legislation encourages employers not to discriminate against older workers in the hiring process. Japanese law permits mandatory retirement, but the mandatory retirement age can be set no lower than age 60. The Japanese government has enacted some initiatives to encourage the employment of older workers. Rather than legislate an increase in the mandatory retirement age in the current weak economy, the government has urged companies to extend the employment of older workers voluntarily and has offered some employment services and grants that are expressly targeted toward older workers. The policies for accomplishing this are employment extension programs, whereby workers who have retired are rehired, or increases in a company’s mandatory retirement age. 
To date, the preferred course of employers has been to offer reemployment or extended employment where such action is beneficial to the company. Companies are provided subsidies, called Promotion Grants to Secure Continued Employment, for this purpose. Japanese officials advised us that there are no job training programs geared specifically toward older workers. However, Silver Human Resource Centers offer skills training and job-matching services. These centers also provide seniors with temporary or short-term community-related jobs. Other programs provide grants to employers to develop the skills of middle-aged and older workers to improve their employability. There are also employment programs that focus on providing information to employers on the benefits older workers offer and how they can be accommodated. Through the unemployment insurance system, subsidies are available directly to employees 60 to 64 years old who are working full-time and earning less than 85 percent of their former wage. This subsidy pays up to 25 percent of the employee’s wage after age 60 until age 65. Utilization of this subsidy is low, however, due to the requirements placed on participation. For example, recipients must be working full-time and must be nominated by their employer. In addition, beneficiaries must have been paying into the unemployment insurance system for at least 5 years, and the benefit period is reduced if they received unemployment payments following mandatory retirement. This program will become less generous and participation requirements will become more stringent in the future. We were advised that participation in these programs, while small, is expected to grow as the retirement eligibility ages of the national pension system increase. Retirement income for the majority of Swedes is mainly derived from the public national pension system. Employer-provided pensions, on average, account for roughly 20 percent of total pension entitlements. 
These pensions, however, are relatively more important to higher-income individuals (with incomes above the ceiling in the national old-age pension). Private (individual) pensions account for an even smaller amount, but have increased in importance over the last decade. Disability pensions have also been an important source of income for many who leave the labor force prior to becoming eligible for an old-age pension. The major components of the Swedish old-age and disability pension systems are undergoing important structural changes. With legislation passed in 1998, Sweden began implementing a fundamental reform of its national old-age pension system. In addition, there have been changes to the structure of employer-provided pensions complementing the national pension reform. Changes in the eligibility criteria and administrative structure of disability pensions have also accompanied old-age pension reform. The old national pension system was a pay-as-you-go defined benefit plan, combining a flat-rate universal benefit (the “basic pension”) with an earnings-related supplement (the “ATP”). The basic pension, introduced in 1913, was the first compulsory old-age pension in the world to cover all citizens regardless of occupation. This flat-rate pension was paid in full to everyone with at least 40 years of residence in Sweden between the ages of 16 and 65, or with 30 years of work. ATP was introduced in 1960. Under ATP, a full earnings-related benefit could be obtained with 30 years of covered earnings at age 65, based on the average of the best 15 years. The normal (or statutory) eligibility age was 65 years for both the basic pension and the ATP, but both pensions could be drawn from the age of 61 (60 prior to 1998) with a life-long reduction or postponed to the age of 70 with a life-long increase. 
The national pension reform was the result of discussions that began in the 1980s in response to concerns about demographic trends and accelerated in the early 1990s in response to a serious economic crisis and a change in the government. The old national pension system was seen to be unfair in that it favored those who had short working careers or variable earnings. In addition, the old system faced severe problems with financial sustainability and was expected to require large increases in contribution rates in the future. The new national old-age pension system consists of an earnings-related pension and a minimum guaranteed pension. The new earnings-related pension is a defined contribution scheme with two components: a pay-as-you-go “notional defined contribution” plan and a fully funded financial defined contribution plan (the “premium pension”). The total contribution rate for both plans is 18.5 percent of earnings, paid by both employers and employees. Unlike the old system, the final pension in both plans is based on lifetime earnings. In addition, there is no normal eligibility age in either plan. Benefits may be drawn, in full or in part (i.e., one-fourth, one-half, or three-fourths of a full pension) at age 61 or later with no upper age limit. Earnings may be combined with a full or partial pension and will continue to generate pension contributions. In both plans, individuals receive pension rights from earnings as well as income replacement transfers (i.e., disability and sickness benefits) and special credits (i.e., for time spent in child rearing, university studies, or compulsory military service). Pension credits from income replacement transfers and special credits are funded by general revenues. The notional defined contribution (NDC) plan is the larger of the two earnings-related pensions, accounting for 86 percent of total contributions. 
The NDC plan is financed according to pay-as-you-go principles, but the system also includes pension reserve or “buffer” funds. The contribution rate for the NDC plan is 16 percent, with contributions credited to individual “notional” accounts. These accounts are notional in that pension rights (i.e., claims to future pension income) and not financial assets, are credited to them. During the accumulation period, the pension rights credited to the notional individual accounts are indexed by average wage growth. At retirement, the accumulated “notional capital” is converted to an annuity related to estimated life expectancy at the age of retirement and an assumed “norm” real rate of return (a 1.6-percent increase in real average wages). This means that with increasing life expectancy over time, other things being equal, individuals will have to work longer or accept lower pensions. After retirement, benefits are indexed by average wage growth minus the assumed growth norm of 1.6 percent. Thus, if real wage growth falls below the norm, the real value of pensions will fall (and vice versa). An automatic balancing mechanism places a further “brake” on the upward indexation of pensions and pension rights if the balance in the pension reserve or buffer fund falls below a certain level. The new Swedish national pension also includes an earnings-related, funded defined contribution plan, the “premium pension,” in which contributions are made to individual financial accounts. Participation in the premium pension is mandatory, but the contribution rate (2.5 percent) is low relative to that of the pay-as-you-go NDC plan. Individuals have a great deal of investment choice (currently about 600 funds), and a default fund is provided for those who decline to make investment fund choices for their accounts. At retirement, these individual accounts may be converted into either fixed or variable annuities. 
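The NDC mechanics described above lend themselves to a simple numerical sketch: notional capital is converted to an annuity using remaining life expectancy and the assumed 1.6-percent real growth norm, and pensions in payment are then indexed by wage growth minus that norm. The divisor formula below is a simplification of Sweden's actual calculation (which uses cohort life tables), and all amounts are illustrative, not actual Swedish parameters.

```python
# Simplified sketch of Swedish NDC annuity conversion and indexation.
# The divisor here is an approximation; all numbers are illustrative.

NORM = 0.016  # assumed real wage growth "norm" in the reformed system

def annuity_divisor(life_expectancy_years):
    # Present value of a 1-unit annual payment over the expected
    # retirement span, front-loaded by discounting at the norm rate.
    return sum(1 / (1 + NORM) ** t for t in range(int(life_expectancy_years)))

def initial_annual_pension(notional_capital, life_expectancy_years):
    # Accumulated notional capital divided by the annuity divisor.
    return notional_capital / annuity_divisor(life_expectancy_years)

def index_pension(pension, real_wage_growth):
    # Post-retirement adjustment: wage growth minus the 1.6% norm.
    return pension * (1 + real_wage_growth - NORM)

# Retiring later (fewer expected benefit years) raises the pension,
# which is how the system rewards continued work:
p_at_65 = initial_annual_pension(3_000_000, 20)  # 20 years expected
p_at_67 = initial_annual_pension(3_000_000, 18)  # 18 years expected
assert p_at_67 > p_at_65

# If real wage growth matches the norm, the real pension is unchanged;
# below the norm, the real value of the pension falls.
assert abs(index_pension(100.0, 0.016) - 100.0) < 1e-9
assert index_pension(100.0, 0.010) < 100.0
```

The sketch shows why rising life expectancy, other things being equal, forces individuals either to work longer or to accept lower annual pensions: a larger divisor spreads the same notional capital over more expected benefit years.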
In addition to the two earnings-related pensions, the new national pension system also includes a minimum “guaranteed” pension for those with no or low earnings. Unlike the universal basic pension of the old national pension system, the new guaranteed pension provides a means-tested benefit. The guaranteed pension is funded separately from the earnings-related pension; it comes completely out of general revenues. Unlike the earnings-related pensions, the guaranteed pension may not be claimed before age 65. Benefits under this plan are indexed to the Consumer Price Index (CPI), not to average wage growth as in the earnings-related NDC plan. Roughly 90 percent of all workers in Sweden are covered by an employer-provided pension plan based on collective agreements between central employer and union organizations. There are four different collectively bargained, employer-provided pension plans: two separate plans for workers in the private sector (one for white-collar workers and one for blue-collar workers) and two separate plans for public sector employees (one for central government workers and one for municipal and county workers). Traditionally, these pensions have been defined benefit plans, closely linked to and supplementing the national pension. Typically, benefits in these plans have been based on final salaries. During the 1990s, however, the plans for private sector blue-collar workers and municipal workers converted to defined contribution plans with contributions paid by the employers. In 2003, central government workers will also have a new pension plan containing defined contribution features. White-collar workers in the private sector, however, continue to have a largely defined benefit plan. Although private pensions remain a relatively small source of retirement income in Sweden, there has been an increase in individual retirement saving in recent years. 
Some analysts attribute this to the increased attention to retirement income associated with the pension reform discussion (and perhaps an increased awareness of financial markets associated with the introduction of the premium pension), and/or concerns about the impact of pension reform on future pensions (especially among women). The most common way for Swedish workers to leave the labor force before the age of 65 has been with a disability pension. Prior to 1991, it was possible to be awarded a disability pension for three reasons: first, on medical grounds, for those aged 16-65; second, on medical and labor market grounds (i.e., due to long-term unemployment), for those aged 60-65; or third, on labor market grounds only, for those aged 60-65 who had exhausted their unemployment benefits. Disability pensions granted exclusively for labor market reasons were called “58.3 pensions”: If a worker aged 58 years and 3 months were laid off, he or she could claim unemployment benefits up to age 60, and then claim a disability pension for labor market reasons (until claiming an old-age pension at age 65). Eligibility requirements for disability benefits have been tightened in successive reforms throughout the 1990s. The granting of disability pensions exclusively for labor market reasons, for example, was discontinued in 1991, and since 1997, medical reasons have been the only valid criterion for granting a disability pension. In addition, as part of the national pension reform, disability insurance was separated administratively from the old-age pension system. Active labor market programs for the unemployed play an important role in Sweden’s labor market policy. These programs have, for the most part, always been open to workers of all ages. The Activity Guarantee program, for example, ensures that the long-term unemployed are placed in job training and re-employment programs. One-third of the participants are between the ages of 55 and 64. 
Swedish experts and officials argue, however, that existing labor laws, workplace practices, and attitudes may create barriers to continued employment among older people. They also argue that reducing these barriers is increasingly important in Sweden in light of pension reforms that encourage increased labor force participation. Some policy changes have already been implemented in Sweden to reduce these barriers, such as increasing the mandatory retirement age from 65 to 67 years. In other areas of concern, such as seniority rules, age discrimination, employment and skills training, quality of work life, and attitudes toward older workers, possible policy changes are currently under discussion. Prior to the national pension reform, collective agreements in Sweden established a mandatory retirement age of 65. That is, seniority rules (see discussion below) were not applied to workers aged 65 and older, and employers were permitted to terminate employment at that age. In May 2001, the Swedish Parliament added a new compulsory rule to the Employment Protection Act, giving all employees the right, but not the obligation, to remain in employment until the age of 67. Collective agreements prescribing a mandatory retirement age of 65 are now prohibited, although agreements covering the age at which employees have the right to leave employment and receive a pension are still allowed. Existing agreements prescribing mandatory retirement at age 65 were allowed to remain in place until they expired, but no later than December 31, 2002. According to Swedish seniority rules, firms downsizing their work force must follow the “first-in-last-out” rule, giving priority for continued employment to those who have been employed the longest. Some analysts argue that this protection for older workers helps to explain the relatively high rate of employment among the 50-64 age group.
It is also argued, however, that this rule may promote early pensioning in times of downsizing and possibly limit the mobility of older workers. Ways to make seniority rules more flexible, while still offering employment protection, are currently under discussion in the context of policies to encourage older workers. Advocates for older persons in Sweden view age discrimination as a serious impediment facing workers who wish to remain employed later in life. Sweden currently has no legislation against age discrimination in employment. As a member of the European Union, however, Sweden will be required to pass such legislation. In 2000, the European Union established a general framework for equal treatment in employment that requires all member countries to introduce legislation prohibiting discrimination at work on the grounds of age, sexual orientation, religion and belief, and disability. The directive gives member states until 2006 to implement the provisions on age and permits considerable latitude in how the directive is to be implemented in practice. Overall, the incidence of employment and skills training is relatively high in Sweden, and older workers receive nearly the same amount of training as younger workers. Some researchers and officials, however, believe that the skills of older workers need to be enhanced, particularly those skills necessary for adapting to changing work environments and pursuing second careers. They have concerns that older workers may be disadvantaged in acquiring such skills. It is difficult, for example, for people over 40 years of age to acquire public loans for university studies. There are various proposals to address these problems, including the establishment of special higher education savings accounts to enhance the financing of higher education for people of all age groups.
Swedish researchers report that surveys of older employees in Sweden find many would like to work longer but would prefer different types of jobs and/or fewer hours of work. Thus, government officials and advocates for older workers see “working life” issues—such as the need for more flexible work time arrangements, the ability to switch to more appropriate types of work, and management practices that create a positive environment for older workers—to be critically important. Recommendations to provide older persons with a right to work part-time, to revise labor laws to allow for short-term contracts for older workers, and to devise systems to make it easier to change job duties while employed are under discussion. Negative attitudes toward older workers are also a concern and are seen to create barriers to the employment of older people. Surveys find that the majority of employers do not want to hire older workers. This attitude is attributed to prejudice and misinformation and/or work rules and practices that make older workers more expensive. Government officials and advocates argue that addressing these negative attitudes must be part of any comprehensive policy change to promote the labor force participation of older people and must be undertaken in the context of broader policies to promote the overall well-being of older people in Sweden. Retirement income in the United Kingdom is made up of both government and private sources. In the late 1990s, national pension benefits made up 38 percent of the national average wage. The government allows workers to substitute employer-provided pensions or individual pension accounts for the earnings-related portion of national pension benefits, and in the late 1990s, about 75 percent of workers did so. About 60 percent of current pensioners receive benefits from an employer-provided pension, typically a defined benefit plan providing two-thirds of final salary after 40 years of service.
With recent reforms, future pensioners are likely to derive more of their retirement income from defined contribution employer-provided pensions or individual pension accounts. For low-income pensioners with little or no private pension income, the government provides a larger earnings-related benefit, as well as means-tested benefits. Disability pensions have also been an important source of income for many who leave the labor force prior to becoming eligible for an old-age pension. The United Kingdom’s national pension system consists of three tiers—the state basic pension, an earnings-related pension, and discretionary savings vehicles. The Basic State Pension provides a flat-rate benefit of £75.50 per week for a single pensioner (about $120 per week or about $6,200 per year). It is adjusted annually for price inflation. To qualify for the full benefit, male pensioners and female pensioners turning 65 after 2020 must have made 44 years of National Insurance Contributions. Female pensioners turning 60 before 2010 must have made 39 years of contributions. Workers who are not paying contributions because they are unemployed, disabled, caring at home for a child or relative, on state maternity benefits, or taking training courses may receive credits toward the Basic State Pension. Workers with less than the required number of years of contributions have their Basic State Pension reduced proportionately, but workers must usually have made at least 10 years of contributions to receive any benefits. In practice, the Basic State Pension is near universal for men. It has been becoming universal for women since the enactment of the Home Responsibilities Protection Act in 1978, which allows those caring for children or for people who are sick or disabled to earn credits toward the Basic State Pension so long as they have made 20 years of contributions.
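The proration rule described above lends itself to a brief worked illustration. The sketch below uses only the figures in the text (the £75.50 weekly rate, the 44-year requirement for a full male pension, and the usual 10-year minimum); the function name and the handling of partial years are illustrative assumptions, not the statutory computation.

```python
# Hedged sketch of Basic State Pension proration, using figures from the text.
FULL_RATE = 75.50        # GBP per week, single pensioner
REQUIRED_YEARS = 44      # qualifying years for a full pension (men)
MINIMUM_YEARS = 10       # usual minimum to receive any benefit

def basic_state_pension(qualifying_years: float) -> float:
    """Weekly pension, reduced proportionately for workers with
    fewer than the required years of contributions."""
    if qualifying_years < MINIMUM_YEARS:
        return 0.0
    fraction = min(qualifying_years, REQUIRED_YEARS) / REQUIRED_YEARS
    return round(FULL_RATE * fraction, 2)

print(basic_state_pension(44))  # full pension: 75.5
print(basic_state_pension(22))  # half the full rate: 37.75
print(basic_state_pension(5))   # below the 10-year minimum: 0.0
```

A worker with 22 qualifying years, for example, would receive half the full weekly rate under this proportional reduction.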
Low-income older individuals are eligible for means-tested benefits, one of which is the Minimum Income Guarantee (MIG). The MIG tops up the income of those age 60 and over to £98.15 per week for a single person (about $158 per week or about $8,200 per year). MIG benefits are increased annually with average growth in wages. The MIG does have an earnings test: $1 of benefit is withdrawn for each $1 in earnings above the MIG level. In October 2003, the government will replace the MIG with the Pension Credit, in part to reduce the MIG’s earnings test. With the Pension Credit, each $1 of income will lead to only a $0.40 decrease in benefits, increasing the worker’s overall income by $0.60. The Pension Credit consists of two elements: (1) a guarantee credit, which tops up the income of a single person to £102 per week (about $163 per week or about $8,500 per year), and (2) a savings credit, which provides a benefit to those with modest savings, pension income, or earnings. The savings credit is designed to taper away as an individual’s income rises and phases out for individuals with £134.80 of income per week or more (about $216 per week or about $11,200 per year). The eligibility age for the savings credit is currently 65. The eligibility age for the guarantee credit is currently 60 but will rise in line with the increase in women’s eligibility age for national pension benefits, so that it will be 65 by 2020. Aside from the MIG, other means-tested benefits include assistance with housing costs and local taxes. About 51 percent of U.K. households over age 60 are eligible for means-tested benefits, although in 2000-2001 only 64 percent to 78 percent of eligible households claimed these benefits. The earnings-related pension, now called the State Second Pension, supplements the Basic State Pension but may be substituted by several types of private pensions.
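The difference between the MIG's dollar-for-dollar withdrawal and the Pension Credit's 40 percent taper can be sketched in a few lines. This is a simplified illustration of the marginal rules stated above only; it omits the savings credit's detailed phase-out, and the function names are assumptions for illustration.

```python
# Hedged sketch of the two withdrawal rules described above, in GBP/week.
# MIG: each GBP 1 of own income withdraws GBP 1 of benefit (100 percent taper).
# Pension Credit guarantee: each GBP 1 of own income withdraws GBP 0.40 of
# benefit, so total income rises by GBP 0.60 (a simplification of the text;
# the savings credit phase-out is omitted).
MIG_LEVEL = 98.15          # MIG level for a single person
GUARANTEE_LEVEL = 102.00   # Pension Credit guarantee level

def mig_total_income(income: float) -> float:
    """Own income plus MIG top-up under the dollar-for-dollar taper."""
    benefit = max(0.0, MIG_LEVEL - income)
    return round(income + benefit, 2)

def pension_credit_total_income(income: float) -> float:
    """Own income plus benefit under the 40 percent taper."""
    benefit = max(0.0, GUARANTEE_LEVEL - 0.40 * income)
    return round(income + benefit, 2)

# GBP 50/week of own income: no gain over the MIG level under the MIG taper,
# but a 60p-per-pound gain under the Pension Credit taper.
print(mig_total_income(50.0))            # 98.15
print(pension_credit_total_income(50.0)) # 132.0
```

Under the MIG rule, extra earnings below the MIG level leave total income unchanged at £98.15; under the Pension Credit rule, the same earnings raise total income, which is the work incentive the reform is intended to create.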
From its creation in 1978 until 1988, the earnings-related pension (then called the State Earnings Related Pension System) provided benefits of about 25 percent of workers’ average annual earnings for their best 20 years. From 1988, benefits have been reduced to a target of 20 percent of average lifetime earnings. In 2000, a technical adjustment in computing benefits led to a further reduction. In April 2002, another reform was implemented that resulted in low- and moderate-income workers receiving higher benefits as a proportion of wages, while benefits for those with annual income above £24,600 (about $40,000) remain about the same. In the late 1990s, about 75 percent of covered workers substituted private pensions for the state earnings-related pension. These private pension plans are either employer-provided pensions or individual pension accounts called personal pensions. Personal pensions may be provided through financial institutions or offered by employers. Employers with five or more employees who do not offer an employer-sponsored pension must give workers access to a type of personal pension called a stakeholder pension. Stakeholder pensions have additional legal requirements, such as a maximum administrative charge of 1 percent of the pension fund’s value. In return for forgoing future State Second Pension benefits, individuals who opt out of the state earnings-related pension for private pensions pay lower National Insurance Contributions. Those who opt out for individual pension accounts pay full National Insurance Contributions but receive part of their contributions back as a rebate that is deposited into their individual account. Those who opt out of the State Second Pension for employer-provided pensions pay National Insurance Contributions at a reduced rate.
Similar to the Basic State Pension, the State Second Pension allows individuals who are looking after a child under age 6 or an ill or disabled person to qualify for State Second Pension benefits. The third tier consists of additional forms of voluntary savings. First, individuals can choose to make additional voluntary contributions into their employer-sponsored pension plan. Second, individuals can choose to make additional contributions into their personal pensions or stakeholder pensions. Individuals receive tax relief for these contributions up to a certain ceiling. And finally, individuals can choose to make contributions to a variety of other tax-relieved instruments, such as annuities and life insurance. Eligibility Age: For the basic and earnings-related pensions, benefits may be drawn at age 65 for men and 60 for women. In response to a European Union directive requiring gender equality in member countries’ pension policies, women’s eligibility age will also become 65, with the change gradually taking place between 2010 and 2020. The U.K.’s national pension system does not have an early eligibility age. Earnings Test: The U.K.’s national pension system does not have an earnings test, meaning that benefits are not offset by the wages earned by pensioners. Deferral of Pension Benefits: If pension benefits are drawn past eligibility age, benefits are increased by an increment of 7.5 percent per year of deferral. In 2010, the increment will increase to 10.4 percent. The government is considering moving implementation of the increase up to 2006. It is also considering allowing individuals a choice between taking the benefit increase as a lump-sum payment or as increases in each benefit payment. Funding: The Basic State Pension and State Second Pension are funded on a pay-as-you-go basis by National Insurance Contributions shared by employers and employees. General revenues may also be used to fund pension benefits.
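The deferral increment described above reduces to simple arithmetic. The sketch below assumes the increment accumulates linearly (rate times years of deferral), which the text does not specify, and uses a hypothetical £100 weekly benefit; the function name is illustrative only.

```python
# Hedged sketch of the deferral increment described above, assuming a
# linear accumulation (rate x years deferred); the report does not say
# whether the increment compounds.
def deferred_pension(weekly_benefit: float, years_deferred: float,
                     increment_rate: float = 0.075) -> float:
    """Weekly pension after deferral past eligibility age.
    increment_rate is 7.5 percent per year now, 10.4 percent from 2010."""
    return round(weekly_benefit * (1 + increment_rate * years_deferred), 2)

# A hypothetical GBP 100/week pension deferred for five years:
print(deferred_pension(100.0, 5))         # 137.5 at 7.5 percent per year
print(deferred_pension(100.0, 5, 0.104))  # 152.0 at the 10.4 percent rate
```

The comparison shows why the planned move from 7.5 percent to 10.4 percent strengthens the reward for continued work: five years of deferral raises the weekly benefit by roughly half rather than about a third.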
Employers pay 11.8 percent of earnings above a threshold for employees in the State Second Pension and between 8.3 percent and 10.8 percent for employees who have opted out. Employees in the State Second Pension pay 10 percent of earnings above a threshold up to an earnings limit, while employees who have opted out pay 8.4 percent. From April 2003, National Insurance Contribution rates will increase by 1 percent for employers and employees on earnings above a threshold. Employees will also pay 1 percent of earnings above the earnings limit, and the earnings limit will be raised in line with inflation. National Insurance Contributions also fund other benefits, including disability, unemployment, and survivors’ benefits. Sustainability: Changes to the U.K. national pension system, including raising women’s eligibility age, increasing Basic State Pension benefits in line with average price increases rather than the higher of increases in average prices or wages, and making survivors’ benefits less generous, are helping to maintain the long-term fiscal sustainability of the system. In 1984, the U.K. government projected that the National Insurance Contribution rate needed to pay for national pension benefits would be 23 percent by the 2020s. Currently, the contribution rate needed to pay for benefits is estimated to be 18.2 percent by 2020. Pension costs as a percentage of gross domestic product are projected to remain about 5 percent through 2050. Almost half of U.K. workers are members of either defined benefit or defined contribution pensions provided by their employers. Employers who offer pensions to their workers may pay lower National Insurance Contributions by having their employees forgo rights to benefits from the state earnings-related pension. Employees in these plans also pay lower National Insurance Contributions.
Until 1986, employers could only use defined benefit pension plans as a basis for their employees to opt out of the state earnings-related pension, and employers could require employees to join their defined benefit pension plan. The Social Security Act of 1986 allowed employees with either defined benefit or defined contribution employer-provided pensions to opt out and allowed workers to choose whether to join the pension plan provided by their employer. Since then, the U.K. has experienced a movement of employer-provided pensions from defined benefit to defined contribution. In 2000, about 81 percent of employees accruing benefits in employer-provided pension plans were in defined benefit plans provided primarily by large employers. However, of the plans open to new members, about 70 percent were defined contribution. On average, employers and employees make lower rates of contributions to defined contribution plans than defined benefit plans. In 2000, employers contributing to employees’ defined benefit plans contributed 11.1 percent of earnings on average, while employers contributing to defined contribution plans contributed 5.1 percent of earnings on average. Employees contributed 5.0 percent of earnings on average to defined benefit plans, while they contributed 3.4 percent of earnings on average to defined contribution plans. Employers who opt out of the State Second Pension by offering defined benefit pensions are required to provide benefits that are broadly equal to or better than the benefits employees would receive in the State Second Pension. There is no such requirement for defined contribution pensions. Typical defined benefit pensions offer benefits equal to one-eightieth or one-sixtieth of final salary per year of membership in the plan, or half or two-thirds of final salary for workers who have been in the plan for 40 years. Typical ages at which workers may draw full pension benefits are 60 and 65. 
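The typical defined benefit accrual formula described above reduces to a one-line calculation. The sketch below uses an assumed £30,000 final salary for illustration; it reproduces the half and two-thirds figures cited in the text for 40 years of membership at one-eightieth and one-sixtieth accrual.

```python
# Hedged sketch of the typical final-salary formula described above:
# benefits equal one-eightieth or one-sixtieth of final salary per year
# of plan membership.
def db_pension(final_salary: float, years: float, accrual: float) -> float:
    """Annual pension from a final-salary plan; accrual is the fraction
    of final salary earned per year of membership (e.g., 1/80 or 1/60)."""
    return round(final_salary * years * accrual, 2)

# After 40 years, a 1/80th plan pays half of final salary and a 1/60th
# plan pays two-thirds, matching the figures in the text.
print(db_pension(30000, 40, 1 / 80))   # 15000.0 (half of final salary)
print(db_pension(30000, 40, 1 / 60))   # 20000.0 (two-thirds)
```

The £30,000 salary is purely illustrative; only the accrual fractions and the 40-year benchmark come from the report.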
The earliest age at which benefits may be drawn is 50. All private sector defined benefit pensions are fully funded, while some public sector defined benefit pensions are financed on a pay-as-you-go basis. The government has recently announced several proposals for encouraging employers to establish pension plans that encourage longer labor force participation. For example, the government plans to increase the earliest age at which workers may begin drawing employer-provided pension benefits to 55 by 2010. The government has also specified some best practices for defined benefit employer-provided pension plans. These include allowing those who work past normal retirement age to continue earning pension rights and to receive a fair benefit increase. In addition, the government is encouraging employers to calculate benefits from the best year’s salary out of the last few years of employment so as not to penalize people who change positions to reduce responsibilities at the end of their careers. Disability insurance in the U.K. pays a biweekly benefit to adults who cannot work. Currently, recipients age 50 and older number approximately 1,175,000, compared with approximately 160,000 recipients of unemployment insurance. During the 1970s and 1980s, the U.K. expanded and improved the coverage of benefits for disabled adults, then tightened them in accordance with the current government’s overall “welfare to work” strategy. Prior to 1971, those unable to work due to disability received means-tested benefits. In 1971-72, the Invalidity Benefit (IVB) was introduced. For those who had a sufficient National Insurance contribution history, IVB provided an age-related income to those who were disabled and not working. A claimant’s age and qualifications could be taken into account in the criteria determining incapacity for work. Cash benefits were not taxed and were indexed to earnings. Benefits generally expanded and improved during the 1970s.
However, in 1980, indexation of IVB (and all other long-term benefits) was changed to prices rather than earnings, which ultimately provided recipients with smaller benefits. In 1995, IVB was replaced by the Incapacity Benefit (IB), at least in part because these benefits were perceived to be subsidizing unemployment and early retirement. The National Insurance contribution element of IB requires contributions from work within the last 2 years, rather than in any previous year. IB also has stricter eligibility criteria, testing whether there is any work the claimant could perform regardless of the likelihood of obtaining such a job or its suitability. The assessment of capacity to work can be ongoing rather than a once-only assessment. IB cash benefits, unlike IVB, are taxable. Finally, income over £85 per week from an occupational or personal pension, or from certain types of health insurance payments, reduces the amount of IB paid to the claimant by 50 percent. The U.K. government has begun a series of reforms designed to increase employment opportunities for older workers and to increase both their incentives and their ability to take on paid work. These reforms address mandatory retirement and age discrimination, lack of training or skills, and inflexible work and retirement options. Although the U.K. currently allows employers to set mandatory retirement ages for their employees, the government has made efforts to encourage employers to extend employment opportunities for older workers voluntarily. The government continues its campaign, begun in 1993, to promote the benefits of an age-diverse workforce to employers. In 1999, the government issued a voluntary Code of Practice on Age Diversity, which sets good practice standards for employers on eliminating age discrimination in their businesses, in the hope that employers would retain and hire older workers voluntarily.
In 2000, the Cabinet Office issued a report describing the economic and social reasons for promoting active aging, a concept of improving people’s opportunity to contribute to society and to the economy in their later working years, and laying out a plan of action to encourage it. In response, one public employer has raised, and some private employers have eliminated, their mandatory retirement ages. For example, civil servants in the U.K. were subject to a mandatory retirement age of 60, but now the majority of employees have the option of staying on until age 65. Finally, the U.K. is moving toward enacting legislation against age discrimination in employment, which a recent European Union Council directive requires by 2006. Although the government has proposed abolishing mandatory retirement ages, it is not yet known whether the legislation will address current legal provisions permitting mandatory retirement policies. The U.K. government offers several types of employment assistance for older workers, including assistance with job searching, training, and employment subsidies, through the Department for Work and Pensions (DWP). The DWP was created after the last election with the principal aim of implementing the government’s welfare-to-work strategy. In 2002, the government units responsible, separately, for disability insurance and unemployment insurance were combined into a new unit called Jobcentre Plus, effectively placing services for recipients of disability and unemployment benefits in a single office within the DWP. Jobcentre Plus provides services through local employment assistance centers where all unemployed clients go for assistance in seeking jobs. For example, personal advisors assist clients in searching for jobs. The New Deal 50 Plus program is one government program specific to older people who want to work.
Run through the Jobcentre Plus locations, it offers assistance with job searching and training, as well as employment subsidies. This program was piloted in 1999 and went national in 2000. Since 1999, over 4,000 people ages 50-64 have received the training grant and over 80,000 have received the employment subsidy. Until April 2003, New Deal 50 Plus participants receive the employment subsidy as a cash wage supplement of up to £60 per week if their wages are under £15,000. In April 2003, the cash employment subsidy will become a tax credit for working individuals. Flexible working options for older workers are acknowledged as an important issue by government officials, representatives of union and employer organizations, and advocates. Despite this interest, employer organizations and government officials contend that a tax rule prohibits the receipt of employer-provided pension benefits while working for the same employer. The government has recently proposed changes to this rule. Other contributors to this report include Anthony DeFrank, Anna Laitin, Katharine Leavitt, Janice Peterson, and Yunsian Tai.
In recent years, the challenges of aging populations have become a topic of increasing concern to the developed nations. These challenges range from the fiscal imbalance in national pension systems, caused by fewer workers having to provide benefits for greater numbers of retirees, to potential economic strains due to shortages of skilled workers. Part of the solution to these challenges could be greater labor force participation by older workers. GAO identified three nations (Japan, Sweden, and the United Kingdom) that had displayed high levels of older worker labor force participation in the past and were now implementing policy reforms that continued to emphasize the importance of older workers. The experiences of these nations suggest that the nature of the reforms, the public availability and transparency of information on the reforms, and the strength of the national economy play key roles in extending older worker labor force participation. The retirement policy reforms in Japan, Sweden, and the United Kingdom are expected to lead to higher labor force participation among older workers. Japan is facing the most severe aging trend of the nations GAO studied, as its median population age is projected to be 28 percent higher than that of the United States in the coming decades. In response, Japan has enacted substantial benefit cuts to its national pension system, raising the eligibility age and reducing benefit levels to maintain fund solvency. Due to these changes, some Japanese workers will have to work to later ages. Sweden undertook the most significant reform, changing the structure of its national pension system from a traditional pay-as-you-go defined benefit plan, like the U.S. Social Security program, to a system in which participants’ benefits are more closely in line with their contributions. These reforms are expected to extend workers’ careers by rewarding longer labor force participation with higher benefits.
The system also incorporates flexibility by automatically adjusting benefits to changes in the economy and life expectancy to preserve financial stability. The United Kingdom will phase in an increase in women’s national pension eligibility age so that it will equal the male eligibility age of 65. It also revised its benefit formula to raise the annual incremental increase for those who defer drawing their pension benefits. These changes either reward continued employment or discourage earlier retirement, and thus may promote continued labor force participation. However, although incentives to work to later ages have been created through reforms to national and employer-provided pension systems, officials from each nation stressed that these policy changes must be accompanied by labor market reforms and economic growth that provide job opportunities to older workers if they are to be effective.
In our body of work on federal user fees we have reported on principles for designing federal user fees and evaluated several individual fees. Our 2008 User Fee Design Guide described principles for setting, collecting, using, and reviewing federal user fees. That report examined fees using four criteria: efficiency, equity, revenue adequacy, and administrative burden. These criteria have often been used to assess user fees and other government collections such as taxes. We further reported on user fee design principles in Federal User Fees: Fee Design Options and Implications for Managing Revenue Instability. That report describes six key fee design decisions intended to inform congressional design of fees that strike Congress’s desired balance between agency flexibility and congressional control. We have also evaluated several individual user fees, including agricultural quarantine inspection user fees, patent fees, immigration fees, and air passenger inspection fees. As we reported in September 2013, user fee designs can vary widely and, in general, are governed by two authorities: an authority to charge fees and an authority to use fee collections. Agencies derive their authority to charge fees either from the Independent Offices Appropriations Act of 1952 (IOAA) or from a specific statutory authority. IOAA gives agencies the authority to charge fees for a service or thing of value provided by the agency. Separate authority is needed for an agency to retain and obligate collected fees. The terms of a specific statute permitting an agency to charge a fee would determine whether or not the agency can retain and obligate the collected fees. However, Congress has frequently provided agencies with statutory authority both to collect fees and to use the collections. In these specific fee authorities, Congress determines the degree of flexibility to make fee design and implementation decisions that will be retained or delegated to the agency. 
Our September 2013 report found that in designing individual user fees, Congress can decide among options to retain its control or increase agency flexibility for various elements of the fee, including how rates are set, how collections can be used, and what reporting and oversight is required. The legal and policy framework governing regulations is also relevant to the management of regulatory user fees. Specific statutory authority may grant an agency the authority to issue a regulation which creates rights and obligations, and addresses other substantive matters in ways that have the force and effect of law. These regulations are generally codified in the Code of Federal Regulations. Typically, regulations require a desired action or prohibit certain actions by regulated parties. The regulatory process is governed by statutes, executive orders, and agencies’ policies and procedures that, for example, require agencies to evaluate the need for regulations, assess the potential effects of new regulations, and obtain public input (with certain exceptions) during the development of regulations. OMB is responsible for establishing government-wide financial management policies, such as OMB Circular No. A-25, User Charges. OMB is also responsible for ensuring that federal regulations issued by agencies follow executive order requirements and guidance. Congress determines in statute the degree of flexibility to make fee design and implementation decisions that will be retained or delegated to the agency. This has implications for whether agencies issue regulations to set fees, who will determine the amount of regulatory activity, and how costs will be allocated among the various beneficiaries of regulatory programs, including small entities (see figure 1). As our prior work found, the degree to which Congress delegates or retains the authority to set user fees has implications for fee program management, including agencies’ use of the rulemaking process to set the fees. 
When an agency has greater flexibility, the agency typically sets the fee by regulation; when Congress retains a greater degree of control, fees are typically set in statute and agencies would not need to use notice-and-comment rulemaking. The 10 selected regulatory user fees included in this report exist at varying points along this spectrum.

Examples of How Congress Exercises Control over Regulatory User Fee Setting and Delegates Authority to Agencies:

Congress sets in legislation the amount of the Environmental Protection Agency (EPA) pesticide registration service fee paid by each user.

Congress set by statute the total amount of tobacco user fees the Food and Drug Administration (FDA) is authorized to collect each year, and established a formula in the Tobacco Control Act for allocating the total amount among the tobacco product classes that FDA regulates.

Congress directs the Securities and Exchange Commission (SEC) in legislation to collect fees that are designed to recover the costs of SEC’s annual appropriation, and prescribes a methodology that SEC must use to determine how much each user pays.

Congress directs the Nuclear Regulatory Commission (NRC) in legislation to recover approximately 90 percent of its annual appropriation through user fees, but the agency has the authority to allocate the charges among individual users.

Congress provided the National Credit Union Administration (NCUA) and the Office of the Comptroller of the Currency (OCC) with broad statutory authority to charge fees to fund their operations. These agencies set the total amount to be collected each year and establish the fee rates paid by individual users.

Regulatory user fees can be set by notice-and-comment rulemaking or other mechanisms. The six selected agencies typically use notice-and-comment rulemaking to set regulatory user fee rates or structures in cases where Congress has authorized them to determine who will pay what amount.
For example, the Clean Air Act, as amended, authorizes EPA to establish fees to recover the costs of its Motor Vehicle and Engine Compliance Program (MVECP). EPA uses notice-and-comment rulemaking to set MVECP fees and make changes to the fee structure. Similarly, NRC issues annual rulemakings to set the fee rates for its Part 170 fee for regulatory services—which is charged for direct services to applicants and licensees—and its Part 171 annual fee—which covers other regulatory costs. NCUA uses notice-and-comment rulemaking to establish its fee structure and uses memoranda to update the fee rates annually. In contrast, the six selected agencies typically do not use notice-and-comment rulemaking in cases where they do not have discretion to set or change the fee rate, or where a formula for calculating the fee is set in statute. In these cases, agencies communicate new fee rates by posting information to their websites, directly contacting fee payers, or using notices in the Federal Register, among other mechanisms. For example, within approximately 30 days of receiving its annual appropriation, SEC publishes a notice in the Federal Register to communicate the new Section 31 securities transaction fee rates. In setting regulatory user fees, decision makers face a policy decision about the scope of the agency’s regulatory activities and the amount of the fee to be charged. Depending on its decisions, Congress may choose to provide additional appropriations to the agency. When regulatory user fees provide funding for an agency or program, Congress’s or an agency’s fee-setting decisions can affect the frequency, amount, or timeliness of the agency’s regulatory activity because the total amount of regulatory user fees can determine the level of service or regulation that the agency provides.
In some cases, such as SEC’s Section 31 fee, this policy decision is made entirely by Congress because the total amount of fees to be collected is set by the annual appropriations process. In other cases, such as NCUA’s operating fee and OCC’s semiannual assessments, the agencies have broad discretion to set the amount of fees and use them to fund their regulatory activities. For example, OCC has statutory authority to collect an assessment, fee, or other charge from financial institutions as the Comptroller determines is necessary. The effects of fee-setting decisions depend on whether the program is fully or partially funded by user fees. Some programs’ only source of budgetary resources is the fees they charge. In these cases, regulatory user fees are often charged to an entire industry to cover the full cost incurred by the agency to regulate that industry. For example, FDA’s tobacco user fee is the sole source of funding for the agency’s Center for Tobacco Products, and it is charged to manufacturers and importers of tobacco products that are subject to FDA regulation. In other cases, regulatory user fees can be set to cover the costs of an additional level of service above and beyond what is funded by the agency’s annual appropriations. For example, both EPA’s pesticide registration service fee and FDA’s prescription drug fee may provide additional resources to supplement annual appropriations and enable faster review of new pesticide and prescription drug products. Some agency officials participating in our panel discussion said that their fee-funded regulatory programs benefit the general public—either directly or indirectly—such as by protecting public health or economic stability, but pointed out that fee payers often pass on the cost of regulation to their customers. Panel participants also said that fee payers can receive direct services in exchange for paying regulatory user fees, such as a license or the review of an application. 
For example, motor vehicle and engine manufacturers pay EPA’s MVECP user fee in exchange for EPA assessing whether engines meet emission standards and issuing certifications. The manufacturers receive an EPA certification that allows them to sell engines, while the public benefits from cleaner air. In some cases payers receive the right to engage in a regulated activity or business. For example, the tobacco manufacturers and importers who pay FDA’s tobacco user fee can engage in the regulated business. Most of these user fees, as our prior work found, are spent to promote public health, such as through public education, regulatory science, product review, and compliance and enforcement. The intended benefit of paying a fee can also be a stable financial environment and consumer confidence. For example, the financial institutions regulated by NCUA and OCC pay fees for the right to operate financial institutions, such as credit unions or banks, and they benefit from the economic stability and consumer confidence that is supported by regulation of our nation’s financial system. In setting fees, agencies typically give special consideration to small entities—which consist of small businesses and other small organizations, such as small banks or certain small educational institutions that handle nuclear material regulated by NRC. This can be an equity consideration that takes into account small entities’ ability to pay the regulatory user fee and compete with larger businesses and organizations. These considerations can include exemptions or lower fee amounts. Of the 10 fees we examined, only SEC’s filing fees and Section 31 fees do not have special considerations for small entities. The process for setting these fees is established by statutory provisions that do not take into account the size of the payer. 
Also, Section 31 fees are paid by national securities exchanges and the Financial Industry Regulatory Authority (FINRA), none of which is considered to be a small entity under SEC rules. In some cases, agencies are required by statute to provide special consideration for small entities. For example, the Pesticide Registration Improvement Act of 2003, as amended, provides for fee waivers for small businesses and for exemptions from fees in certain circumstances based on factors such as number of employees and volume of revenue. Further, in our prior work, we found that the Leahy-Smith America Invents Act required the U.S. Patent and Trademark Office to establish reduced fee rates for micro entities. In addition, when agencies set their fees by rulemaking they are required by the Regulatory Flexibility Act to consider the impact of these proposed regulations on small entities. For example, EPA regulations provide that an entity may be eligible for a reduced MVECP fee if the full fee for an application for certification for a model year exceeds 1 percent of the aggregate projected retail sales prices of all vehicles or engines covered by a certificate, which is intended to ease the burden on smaller manufacturers. Additionally, agencies can provide special considerations for small entities when they are not required to do so. For example, NCUA and OCC—which have broad authority to set their fees—take into account the amount of assets held by the financial institutions that they regulate when setting their fee amounts. As a result of this fee structure, smaller financial institutions with fewer assets pay a lower amount than larger ones. Regulatory user fees are not always collected at the time of a specific service or transaction. Rather, many are collected from an entire industry at regular intervals as prescribed by statute or regulation. 
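The size-based accommodations described above often come down to simple threshold tests. As a rough sketch (the function name and dollar amounts are illustrative, not actual agency values), EPA's reduced-fee criterion, under which the full fee must exceed 1 percent of aggregate projected retail sales, could be expressed as:

```python
def eligible_for_reduced_fee(full_fee: float, aggregate_projected_sales: float) -> bool:
    """Illustrative sketch of EPA's MVECP reduced-fee criterion: an entity
    may qualify if the full certification fee exceeds 1 percent of the
    aggregate projected retail sales prices of all vehicles or engines
    covered by the certificate. Dollar amounts used below are hypothetical."""
    return full_fee > 0.01 * aggregate_projected_sales

# A small manufacturer: a $40,000 fee against $2 million in projected sales
# exceeds the 1 percent threshold, so a reduced fee may be available.
print(eligible_for_reduced_fee(40_000, 2_000_000))   # True
# A larger manufacturer with $50 million in projected sales does not qualify.
print(eligible_for_reduced_fee(40_000, 50_000_000))  # False
```

Asset-based structures such as NCUA's and OCC's follow the same idea: the fee owed scales with a size measure, so smaller institutions pay less.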
There are management implications for the timing of fee collections, as well as whether a minimum amount of annual appropriations must be reached before the fee can be collected (see figure 2). Six of our 10 selected regulatory fees are collected at regular intervals such as quarterly or annually. Unlike transactional fees collected at the time of an individual transaction, these non-transactional fees include NCUA’s operating fee, which is collected annually, as well as OCC’s semiannual assessments and SEC’s Section 31 fee, which are collected semiannually. Similarly, FDA’s tobacco fee and NRC’s Part 170 fee are billed quarterly, while NRC’s Part 171 fee is billed quarterly or annually. FDA’s prescription drug user fee includes both transactional charges for prescription drug applications and non-transactional fees, namely annual charges for establishments that manufacture prescription drugs and existing prescription drug products. In contrast to these non-transactional fees, our prior user fee work, including the User Fee Design Guide, focused on fees that are collected at the time of transactions for government goods and services. Some non-transactional fees are charged to an entire industry at regular intervals to broadly cover the costs of regulating that industry. In these cases, the amount of the fee is not calculated based on services provided to the individual user or the number of transactions by the user. For example, financial institutions pay fees to NCUA and OCC to cover the cost of regulation. The amounts collected are in part based on the amount of assets held by the financial institution, rather than directly tied to the amount of time or resources the government spends regulating them. Non-transactional fees can also cover costs the agency incurs in serving the broader public interest through regulation as well as other responsibilities not directly associated with services provided to the regulated parties.
For example, FDA’s tobacco fee pays for public education, such as educating consumers on the danger of tobacco products, as well as the agency’s regulation of these products. Other non-transactional fees are collected at regular intervals but calculated based on specific regulatory services provided to the user. For example, NRC’s Part 170 fee is charged for specific services provided to identifiable users such as licensing and inspection. NRC charges an hourly rate for these services and bills the users quarterly. Fee collection procedures have implications for management. Internal control standards identify properly executing transactions—which can include fee collections—as a key control activity. Accordingly, agencies need to have internal controls in place for all user fees to ensure that all fees due are collected and that each user pays the correct amount, among other things. An important element for designing these controls is having accurate and complete data for identifying and billing users. For example, FDA uses excise tax data when calculating the fee amount paid by each tobacco manufacturer and importer. For fees that are collected at the time of specific transactions (rather than at regular intervals), agencies can ensure all fees are collected by withholding a service until they are paid. For example, EPA’s MVECP fee is collected on a transactional basis and the agency uses two checks—one at the time an application is submitted and another before a certificate is issued—to make sure its MVECP fee has been paid. EPA will not issue a certificate if the fee has not been paid. By contrast, internal control mechanisms may take a different form for non-transactional fees, such as periodic inspections. One advantage of regular collections is that they can create a predictable revenue stream.
According to FDA officials, the non-transactional elements of the prescription drug user fee—namely, annual fees for existing prescription drug products and establishments that manufacture prescription drugs—create a stable, predictable source of revenue, whereas revenue from the transactional application fees for new prescription drugs can be less predictable. We have previously concluded that the timing of fee collections can sometimes cause agencies to experience revenue instability. Specifically, collections that come in small increments on a rolling basis or late in the fiscal year may inhibit an agency’s ability to identify overall patterns and fluctuations, or may create cash flow challenges. An additional consideration for managing regulatory user fees is whether fee collection is manual or automated. We have previously concluded that moving to electronic collections can reduce costs and mitigate risks, such as theft. Moreover, two agencies—SEC and OCC—told us that automating fee collections reduces administrative burden. For example, SEC officials told us that improvements in automation would alleviate some of the administrative burdens related to filing fees. Specifically, many wire transfer payments lack adequate identifying information to post to the appropriate registrant’s account and are therefore temporarily posted to an “unassigned” account. SEC staff must then research these payments and post them to the correct registrant account. According to officials, if SEC used additional payment options that required registrants to provide their account information before submitting monies to the SEC, it could eliminate payments going into the unassigned account. OCC officials also said automation has helped reduce the administrative burden of the agency’s fee process, which was previously manual. The appropriations process can also have implications for the collection of regulatory user fees.
Two of our selected fees have minimum appropriations thresholds that are established by statute. In other words, the agency must receive a certain amount of appropriations for the fiscal year before it can collect fees. For example, under provisions of the Pesticide Registration Improvement Act of 2003, as amended, pesticide registration service fees may not be assessed for a fiscal year unless Congress provides at least a set amount of annual appropriations for certain functions of the Office of Pesticide Programs for that year. Similarly, prescription drug user fees shall be refunded unless annual appropriations for salaries and expenses of FDA (excluding the amount of fees appropriated for that year) are equal to or greater than a specific amount. Considering key questions about using regulatory user fees can enable Congress and agencies to identify and manage issues related to revenue instability. As we have previously found, largely or wholly fee-funded programs do not necessarily see a proportional decline in costs when they experience a drop in collections. The authority to create or implement a tool to manage revenue instability may be retained by Congress or it may be delegated to the agency. Consideration of whether a user fee will fund a regulatory program includes determining the risk that fee revenue instability will affect the program and the appropriate strategies for managing that risk (see figure 3). Some agencies have offsetting collection authority, which allows the agency to obligate against fee collections without additional congressional action. These agencies may decide to maintain an unobligated balance as a strategy to manage revenue instability. While the use of available unobligated balances to manage revenue instability is not unique to regulatory user fees, stability of fee revenue can be an important consideration when agencies rely on fees to carry out regulatory activities.
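The role of an unobligated balance in absorbing revenue instability can be illustrated with a minimal sketch (all yearly figures below are hypothetical):

```python
def project_balance(start_balance, collections, costs):
    """Track a hypothetical unobligated balance year by year as fee
    collections and program costs fluctuate; the balance absorbs
    shortfall years so regulatory activity need not be cut abruptly."""
    balance = start_balance
    history = []
    for collected, spent in zip(collections, costs):
        balance += collected - spent
        history.append(balance)
    return history

# Hypothetical amounts in millions: a downturn in year 2 collections is
# absorbed by the balance rather than forcing an immediate program cut.
print(project_balance(50, [200, 150, 210], [195, 200, 205]))  # [55, 5, 10]
```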
As we have previously concluded, it is important that an agency develop a risk-based strategy when considering approaches to managing fee revenue instability. This strategy would include identification and analysis of risks, as well as the effect on the agency’s ability to provide goods and services. Compliance with regulation is often a precursor to or requirement for engaging in certain businesses or activities. Sometimes fees are paid for faster and more predictable services and decisions from regulatory agencies. The agency must also execute its mission, such as FDA’s mission to ensure the safety and effectiveness of prescription drugs. For example, each reauthorization of the Prescription Drug User Fee Act is accompanied by performance goals for FDA’s prescription drug review program, such as goal time frames for FDA’s review of new drug applications. In our prior work, we found that available unobligated balances can help sustain operations for fee-funded programs in the event of a sharp downturn in collections or increase in costs. The following agencies have available unobligated balances that they use to mitigate revenue instability for fee-funded programs: OCC maintains an available unobligated balance which, according to officials, it uses for unexpected expenses and to manage revenue instability. For example, OCC used these funds when it assumed certain supervisory responsibilities from the Office of Thrift Supervision in 2011. The U.S. Patent and Trademark Office (USPTO) maintains an available unobligated balance to ensure its ability to maintain operations. The agency began maintaining this unobligated balance in 2010 to smooth the impact of economic downturns on operations and to help address funding uncertainty. The Animal and Plant Health Inspection Service’s (APHIS) largest fee comes from air passengers. Airlines collect the fee from passengers and remit it to the agency quarterly. 
APHIS maintains an unobligated balance to cover the time between the provision of services and fee remittance. Some collections are only available to the agencies if Congress appropriates them. Unavailable balances can accumulate in cases where the agency is authorized to collect more in fees than Congress appropriates. For example, at the end of fiscal year 2014, SEC had a $6.6 billion unavailable balance in its Salaries and Expenses account because, when SEC collects more in Section 31 fees than its annual appropriation, the excess collections are not available for obligation without additional congressional action. According to SEC officials, this large unavailable balance resulted from historical features of its Section 31 fee structure that are no longer in place. While in recent years Section 31 securities transaction fee rates have been adjusted annually to equal the agency’s appropriation, officials said in prior years the amount of fee collections was disconnected from the appropriations process. As a result, over the years, SEC collected more in Section 31 fees than Congress appropriated to the agency, which led to a growing unavailable balance, as shown in figure 4. Similarly, the Environmental Protection Agency’s (EPA) Motor Vehicle and Engine Compliance Program (MVECP) fee collections have not been made available to the agency. These fees are deposited into the Environmental Services Special Fund. However, according to officials, Congress has not appropriated money to EPA from this fund for MVECP purposes. EPA has instead received annual appropriations which may be used for MVECP purposes. As a result, the unavailable balance of this fund has steadily increased. The unavailable balance totaled $370 million at the end of fiscal year 2014. 
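The arithmetic behind a growing unavailable balance like SEC's is straightforward: when yearly collections exceed the amounts made available for obligation, the difference accumulates. A minimal sketch with hypothetical figures:

```python
from itertools import accumulate

def unavailable_balance(collections, amounts_available):
    """Cumulative unavailable balance that builds when an agency collects
    more in fees each year than is made available for obligation.
    All yearly amounts here are hypothetical, in millions of dollars."""
    return list(accumulate(c - a for c, a in zip(collections, amounts_available)))

print(unavailable_balance([1500, 1600, 1700], [1300, 1400, 1500]))  # [200, 400, 600]
```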
In contrast, consistent with our findings from prior work, Customs and Border Protection’s customs air passenger inspection fees are used to reimburse the agency’s annual appropriations for a specific set of reimbursable expenses, such as overtime compensation and certain premium pay costs. Similarly, NRC’s user fees are used to offset approximately 90 percent of the agency’s appropriation in a given fiscal year. Regulatory user fee reviews provide decision makers with information important for deliberations about fee financing, such as identifying effects of changes in a regulated industry, but the appropriate time frames and methods will vary by individual circumstances. Given the mix of public benefits and services to users inherent in regulatory programs, it is important for fee structures and costs to be transparent. Agencies can promote transparency by providing appropriate information to address the diverse needs of policymakers and stakeholders, including fee payers and the general public. Through appropriate stakeholder involvement and dissemination of information, decision makers can help avoid the appearance of regulatory capture (that fee payers have undue influence on regulatory outcomes) (see figure 5). Reviews provide decision makers with comprehensive information necessary to support robust deliberations about fee financing. The regulatory agencies included in this report reviewed their fees regularly. Fee reviews take a variety of forms and the process and practice for reviewing fees—including activities, time frames, and uses—varied among the 10 fees we examined. In contrast, some of our previous fee work found that agencies did not always conduct regular timely reviews of their user fees. Further, we have previously concluded that fees that are not regularly reviewed run the risk of becoming misaligned with costs and consequently overcharging or undercharging users. 
Fee review activities can range from comprehensive reviews to ensure that fees are aligned with costs—as APHIS has done for its Agricultural Quarantine Inspection fee—to simpler review activities such as checking to ensure that expected fee collections equal the appropriated amount. Agency officials described using a variety of fee review activities tailored to the specific fees. The scope of these activities can depend on whether the fee is cost-based and the amount of authority Congress has delegated to the agency. It is important for agencies to monitor internal controls to ensure they collect the correct amounts from all fee payers. Statutes or regulatory processes typically determine how often specific regulatory fees should be reviewed. Through statutes, Congress may specify the frequency of fee reviews or give an agency discretionary authority to decide when to review its fees. Four of the six selected agencies are subject to the Chief Financial Officers Act of 1990, which requires that they review their fees biennially and recommend fee adjustments as appropriate. In addition, some regulatory user fees are reauthorized by Congress at regular intervals and undergo a more in-depth review during the reauthorization process. Agency officials reported using Office of Management and Budget (OMB) Circular A-25, internal guidance, and guidance contained in statutes and regulations as criteria when reviewing their regulatory fees. OMB Circular A-25 is applied by agencies in their assessment of user charges under the Independent Offices Appropriation Act of 1952 and also provides guidance to agencies regarding their assessment of user charges under other statutes. Some regulatory fees are charged by agencies that are not required to follow standard review requirements and criteria. For example, as a matter of practice, the independent regulatory agencies we interviewed told us they do not follow Circular A-25.
However, National Credit Union Administration (NCUA) officials noted that NCUA will evaluate the concepts in OMB Circular A-25 when designing future operating fee assessments. Also, Congress includes specific criteria in some authorizing statutes, and three of the case-study agencies referred to our User Fee Design Guide for criteria. All six case-study agencies conducted some type of fee review activity annually. Often these fee reviews were connected to the annual appropriations process, but the substance of the different fee reviews varied. Agencies characterized these reviews as ranging from budget management reviews to more complicated reviews that identify fee amounts and allocations across different payers and parts of the regulated industry and are used to amend fee regulations. The following examples illustrate this range of review activities: Officials from FDA’s Center for Tobacco Products, Office of the Comptroller of the Currency (OCC), and NCUA all identified internal processes to check whether expected fee collections are aligned with agency priorities, plans, and commitments. For example, OCC officials stated that the agency’s projected fees are compared to projected expenses, as part of the internal budget process, to ensure that fee levels are adequate to fund the agency. NRC officials said the agency’s fee review process is integrated into its structured annual fee rule process and associated rulemaking. Most licensing fees are reviewed and recalculated annually using the current year budget and NRC’s hourly rate for services provided to regulated parties. In addition, certain fees have been established as flat fees. NRC reviews these flat fees and small entity fees biennially rather than as part of the annual fee rule process. However, NRC may adjust them annually if the hourly rate or time estimate for its services changes. SEC officials said they recalculate their fee rates annually as required by the statutory provisions authorizing the fees.
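The annual recalculation SEC officials describe, adjusting the fee rate so that projected collections match the collection target, can be sketched as follows (the rate form and all figures are simplified and hypothetical, not SEC's actual methodology):

```python
def annual_fee_rate(target_collections: float, projected_dollar_volume: float) -> float:
    """Simplified sketch: set the rate charged per million dollars of
    covered transactions so that projected collections equal the
    collection target. Figures used below are hypothetical."""
    return target_collections / (projected_dollar_volume / 1_000_000)

# Hypothetical: a $1.6 billion target against $70 trillion in projected
# covered transaction volume implies a rate of about $22.86 per million.
print(round(annual_fee_rate(1_600_000_000, 70_000_000_000_000), 2))  # 22.86
```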
Further, as required by law, SEC reviews its Section 31 fee rates midyear to determine whether an adjustment to the fee rates is necessary. In addition, some fees that are reviewed annually undergo more detailed reviews during the reauthorization process; for example, some EPA and FDA user fees are reauthorized by Congress at regular intervals and receive a more in-depth review at that time. FDA’s prescription drug user fee is reauthorized every 5 years. The reauthorization establishes the fee requirements and the process by which FDA sets the annual fee rates, and requires FDA to provide annual reports on its progress in meeting negotiated performance goals for the 5-year period. The reauthorization gives FDA, industry, and public stakeholders the opportunity to propose and discuss enhancements and adjustments for the next iteration of the program. Similarly, EPA has a 5-year cycle for reauthorizing its pesticide registration service fees, and fee reviews are conducted in conjunction with that reauthorization process. FDA and EPA officials noted that the information they provide as part of these processes supports deliberations by Congress and stakeholders about changes to the fee programs. Reviews of regulatory user fees can be used as an important tool to identify and respond to changes in the regulated industry. We have generally highlighted the importance of retrospective regulatory reviews to, among other things, respond to changes in technology, market conditions, and the behaviors of regulated entities that cannot be predicted by prospective analysis before implementation of regulations. Reviews of regulatory user fees can similarly capture such changes in the regulated industry that might affect the efficiency and equity of fees. For example, according to EPA officials, their technical assistance in the review of pesticide registration service fees is useful in determining whether additional categories are appropriate.
As a result of these fee reviews over the last three reauthorization cycles, the Pesticide Registration Improvement Act contains about twice as many pesticide user fee categories as when it was originally enacted in 2004. Decision makers identified different ways that they ensure appropriate transparency and opportunities for public participation when reviewing regulatory user fees. Transparency and public participation are especially important for regulatory user fees because these fees support a mix of benefits to the general public, not only to fee payers. In particular, these principles are important because regulatory user fees are often a condition of engaging in a particular business. Further, there are diverse stakeholders, some of whom are not fee payers, who have varying interests in the implementation of these regulatory user fees and the programs they support. For the purposes of this report, transparency entails disclosing information about the fee process to stakeholders so that regulated entities understand the amount they are paying and why, and other stakeholders understand how the fees contribute to a program’s mission and expected public benefits. Public participation entails providing opportunities for all interested parties and stakeholders to provide input, not just regulated fee payers. The forms of stakeholder engagement differed among agencies according to specific circumstances, such as the underlying statutory authorities and the amount of discretion the agency is given in setting and reviewing its fees. Stakeholder engagement requirements are clear when agencies issue regulations governing their fees. The federal rulemaking process provides standards for public notice, opportunities for comment, agencies’ obligation to respond to significant comments, and the need to maintain a public rulemaking record.
The public notices and rulemaking record aid public participation in the rulemaking process and provide access to the supporting facts and analyses for the agency’s rulemaking decisions. Examples of fees set through notice-and-comment rulemaking include OCC’s semiannual assessments, NRC’s Part 170 and Part 171 fees, and EPA’s Motor Vehicle and Engine Compliance Program fees. In some cases, Congress establishes specific procedures for periodic reauthorizations of the fee programs. For example, FDA’s prescription drug fee reauthorization process includes regular meetings between the agency and industry, in addition to ongoing consultations between the agency and public stakeholder groups. FDA works with these stakeholders to develop a negotiated fee proposal, which it submits to Congress. For the pesticide registration service fee, EPA provides technical input to a coalition of stakeholders that helps develop the reauthorization statute. When an agency does not go through the rulemaking process to update fees and allocations, agency officials consider which alternative mechanisms are appropriate to communicate information about their fees and fee reviews, and to obtain input from stakeholders and the public. For some fees, agencies publish notices in the Federal Register that are not subject to stakeholder comment. This process is typically used when statutes specify a formula that the agency must apply, and the agency has little discretion in updating the fee rates, such as with SEC’s fees. OCC and NCUA officials said their agencies promote transparency by posting information on their websites when fees are updated. NCUA officials also noted using public board meetings to disclose information. Our review of stakeholder comments from rulemakings and public meetings for these regulatory user fees showed both support for inclusive fee reviews and fee-setting processes, and concerns that agencies sometimes are not providing enough transparency.
For example, FDA prescription drug fee stakeholders indicated support for the reauthorization process, which requires regular meetings between the agency and industry and public stakeholder groups, while some NRC stakeholders said that the agency could be more transparent about its fee-setting process by showing how it arrived at its revised fee rates. In addition, OMB staff pointed out the importance of agencies having sufficient understanding of how the timing of the delivery of fee services affects stakeholders. In general, stakeholders provided substantive comments about the equity of fees, administrative burden of fees, level of service, fee setting methods, and fee reviews. Some of the agencies told us that transparency and participation of all stakeholders, including public interest groups, can mitigate the risk of regulatory capture or the appearance of regulatory capture. According to FDA officials, the perception that there could be regulatory capture is a concern with some of FDA’s regulatory user fees. To address this issue, FDA’s prescription drug fee, among other fees, is structured in a way that minimizes that risk. For example, the performance commitments that FDA negotiates with the regulated industry focus on the regulatory submission review process—what will happen and when—and not the outcomes of the review. In addition, EPA officials said that the agency created a docket for pesticide registration decisions, such as risk assessments and proposed registration decisions, and allows the public to comment on them. Case study agencies also described other management strategies to minimize the appearance of regulatory capture. For example, these strategies include separating fee payments from the outcome of regulatory reviews both in terms of the processes and the staff involved, and considering the diverse viewpoints of industry and public interest stakeholders in the fee-setting process. 
Further, OCC officials said that the agency’s contingency reserve allows it to maintain its independence from the banks by providing revenue stability if a bank were to move to a different regulator. We provided a draft of this report to the Secretaries of Agriculture, Commerce, Health and Human Services, and Homeland Security; the Administrator of EPA; the Managing Director of FCC; the Chairmen of FERC, NCUA, and NRC; the Director of Enterprise Governance of OCC; and the Chief Operating Officer of SEC for review and comment. We also provided a copy of the report to the Director, OMB for informational purposes. We received written responses from the Executive Director of the National Credit Union Administration and the Executive Director for Operations of the Nuclear Regulatory Commission, which are reprinted in appendixes III and IV. NCUA’s response agreed with our findings and stated that the questions identified in this report will be helpful when the agency determines future annual operating fee assessments. NRC’s letter stated that the agency had no comments. We also received technical comments from the Department of Health and Human Services, NCUA, OCC, and SEC, which we incorporated as appropriate. APHIS, CBP, EPA, FCC, FERC, and USPTO had no comments. We are sending copies of this report to interested congressional committees and the aforementioned agencies. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. These questions can help decision makers design, implement, and evaluate regulatory user fees. 
They are intended to supplement the fee design questions in Federal User Fees: A Design Guide and Federal User Fees: Fee Design Options and Implications for Managing Revenue Instability. While the key questions in those reports remain relevant for all types of federal user fees, here we provide additional questions that are specific to regulatory user fees (that is, fees charged by federal agencies to regulated entities in conjunction with regulatory activities). These products should be used together when designing, implementing, and evaluating regulatory user fees. We note that some of these questions may overlap. Further, we recognize that there is no one-size-fits-all approach to managing regulatory user fees. Because fee designs vary widely, some questions may be more applicable than others when considering an individual regulatory user fee.

Setting regulatory user fees:

1. To what extent does Congress retain control, and to what extent does it delegate authority to the agency?
   a. Is the total amount to be collected set in statute or determined by the annual appropriations process?
   b. Does the fee reimburse the agency’s annual appropriations?
   c. Does the statute specify the fee rate paid by each user, or a specific formula or methodology that the agency must use to determine the fee rate?
   d. Does the agency use the rulemaking process to set the fee rate?
   e. In cases where rulemaking is not used, how does the agency communicate new fee rates to users?
2. To what extent do the fees affect the scope of the agency’s regulatory activities?
   a. Do the fees provide funding for activities that support the agency’s regulatory mission?
   b. Are the fees the sole source of funding for the regulatory activities, or one of multiple sources of funding?
   c. Does the amount of the fee affect the scope of the agency’s activities to carry out its mission, such as the level, frequency, or amount of regulation?
   d. What role does Congress play in setting the total amount of the fee and the rates paid by individual users, and to what extent has this authority been delegated to the agency?
   e. What role does the appropriations process play, if any, in setting the total amount of the fee?
3. To what extent does the regulatory activity for which the fee is charged provide services to regulated entities and benefits to the general public?
   a. Is the fee paid as a condition of engaging in a particular activity or business subject to federal government regulation?
   b. Are fee payers provided with a regulatory service, such as a certificate or application review?
   c. Is the fee intended to fund a higher level of regulatory service than is funded by annual appropriations, such as expedited reviews?
   d. Do fee payers receive an indirect benefit from federal government regulation, such as increased consumer confidence or industry stability?
   e. What public benefits are derived from the regulatory activity for which the fee is charged?
4. What consideration has been given to whether small entities should be allowed exemptions or lower fees?
   a. Does the statute require exemptions or lower fee rates for small entities?
   b. For fees set by rulemaking, how will the agency consider the effect of the fee regulation on small entities, as required by the Regulatory Flexibility Act?
   c. Will the fee structure account for any other policy considerations for small entities?

Collecting regulatory user fees:

1. Will the fee be collected at the time of a transaction, collected at regular intervals, or both?
   a. What internal controls will be in place to ensure that all fees due are collected?
   b. For fees collected at regular intervals, what additional record keeping, data collection, or modeling, if any, is needed to ensure that fees are collected correctly?
   c. Can the agency withhold service when the fee is not paid?
   d. To what extent can fee collections be automated to reduce administrative burden and mitigate risks such as theft?
2. Is there a minimum amount of annual appropriations that must be received before fees can be collected?

Using regulatory user fees:

1. Does the agency rely on user fees to carry out its regulatory mission, or a portion of its regulatory mission?
   a. If so, what is the risk that the program could face an unexpected decline in collections or an increase in costs?
2. Are fee collections available to the agency without further congressional action?
   a. If so, has the agency considered whether it needs to maintain an unobligated balance?
   b. Does the statute establish a reserve fund, or specify whether the agency should establish a reserve fund?
   c. If the agency maintains an unobligated balance, does it have a target amount?

Reviewing regulatory user fees:

1. What methods and time frames are appropriate for reviewing the fee?
   a. What guidance should be used when reviewing the fee? Specifically, are the provisions of OMB Circular A-25 appropriate, or is there a need for program-specific guidance?
   b. For independent regulatory agencies that do not follow OMB Circular A-25, is there a need for policies or requirements to ensure regular and timely reviews of regulatory user fees?
   c. How, and by whom, will the fee review be used?
   d. Are the fee reviews designed to identify meaningful changes that might occur in the regulated industry?
2. What level of transparency, stakeholder outreach, and input is appropriate as part of the fee review process?
   a. Are fee review results communicated in a way that addresses the needs of the full range of stakeholders, appropriately addressing both fee payers and other beneficiaries such as the general public?
   b. If fee changes are not done through a rulemaking, what alternative methods or venues can be used to communicate results of fee reviews and solicit stakeholder input?
   c. How, if at all, will stakeholders provide feedback on the timing of fee-funded services?
   d. How will the agency avoid the possibility or appearance that fee payers could have undue influence over regulatory decisions or outcomes (i.e., regulatory capture)?

Our objectives were to identify the design and implementation characteristics of regulatory user fees in terms of how these fees are: (1) set, (2) collected, (3) used, and (4) reviewed. In carrying out these objectives, we also assessed the extent to which agencies found the User Fee Design Guide to be applicable to their regulatory user fees. To address all of these objectives, we examined the characteristics of regulatory fees using a literature review, case studies, and a multi-agency panel discussion. Because there is no one standard definition of a regulatory user fee, and no comprehensive list of federal user fees, we used a literature review to define and identify regulatory user fees. Our literature review included Office of Management and Budget (OMB) and agency budget documents; Inspectors General, Congressional Budget Office (CBO), and Organisation for Economic Co-operation and Development (OECD) reports; and our related reports. We identified relevant literature by searching web-based databases and resources, including CBO and Congressional Research Service databases, the Federal Register, and ProQuest. We also reviewed the President’s budget request for fiscal years 2015 and 2016 and searched for information on agency websites. We included in our review any literature that we identified on user fees charged by the federal government to regulated entities in conjunction with regulatory activity. To develop a definition of regulatory user fees for the purposes of this report, we analyzed the use of this term by other sources, including OMB, CBO, OECD, and academic literature. 
Based on these sources and feedback from the agencies we spoke with in the course of our case studies and panel discussion, for the purposes of this report we define regulatory user fees as a subset of federal user fees that are charged to nonfederal entities subject to federal government regulation, in conjunction with regulatory activities. To develop a list of regulatory user fees that meet our definition, we used the President’s fiscal year 2015 budget request, agencies’ congressional budget justifications, agency performance documents, and the results of our literature review. To ensure that we identified as many regulatory user fees as possible, we also searched for agencies with certain fiscal characteristics and high levels of rulemaking activity and corroborated the results of that search by reviewing agency documents. We determined that computer-processed data were not expected to materially affect our findings, conclusions, or key questions, thus rendering a data reliability assessment unnecessary. We selected case studies of 10 regulatory user fees within six agencies based on high dollar amounts of regulatory user fee collections, high amounts of rulemaking activity, and diverse fee characteristics, including:

- agencies that are subject to the Chief Financial Officers Act of 1990 (CFO Act) and agencies that are not;
- independent regulatory agencies and executive agencies;
- fees that are intended to recover the full cost of operating an agency or program, as well as fees that are intended to recover partial costs and fees that are not cost based; and
- fees that are collected at the time of a specific transaction between the user and the regulator, and fees that are collected at regular time intervals (such as annually or quarterly) and not at the time of a transaction.

The fiscal and regulatory characteristics of our selected case studies are shown in table 2. 
For each case study, we reviewed documents related to the fee, such as rulemakings and congressional budget justifications, and interviewed agency officials about their management of the fee, including how regulatory user fees influence the agency’s regulatory mission and how congressional control and oversight affect the fee. We examined stakeholder views on the selected fees by reviewing public comments on proposed rules in the Federal Register and minutes of public meetings. Our analysis included the documented views of stakeholders from the regulated industry, trade associations, public interest groups, and individual members of the public. We reviewed public comments on rulemakings to the extent they were available, which we identified by searching regulations.gov. Public comments on rulemakings were available for 7 of our 10 selected fees. For each of these 7 fees, we reviewed comments on one to four rulemakings, depending on the extent of comments the agencies received, using the most recent rulemakings available. These rulemakings were issued from 2000 through 2014. For 2 additional selected fees—EPA’s pesticide registration service fee and FDA’s prescription drug fee—we reviewed minutes of public meetings because no rulemakings were available. Specifically, we reviewed the minutes of three public meetings for the pesticide registration fee, which took place in 2012 and 2013, and the minutes of one public meeting for the prescription drug fee, which took place in 2011. We did not examine stakeholder views for SEC’s filing fees because documentation of stakeholder views was not available. SEC officials said the agency has not issued any rulemakings or held any public meetings for this fee. 
To analyze the stakeholder views for the other selected fees, one analyst reviewed each source to identify relevant themes—such as the effect of regulatory user fees on small entities and the transparency of fee-setting decisions—and categorize stakeholders’ comments related to those themes. A second analyst then reviewed the documentation to verify categorization decisions. Then, both analysts met to resolve any discrepancies. Finally, we evaluated the categorized information to identify common issues. The results of our review of stakeholder comments are not generalizable to all stakeholders, but provide insights into the views of fee payers, public interest groups, and other interested parties. We invited 12 agencies, including the 6 named above, to respond to a structured questionnaire and participate in a panel discussion. Eleven agencies completed the questionnaire and 10 attended the panel. In addition to the 6 case study agencies named above, the panel included participants from the Animal and Plant Health Inspection Service, Federal Communications Commission, Federal Energy Regulatory Commission, and U.S. Patent and Trademark Office. U.S. Customs and Border Protection officials also completed the questionnaire but were unable to attend the panel due to a schedule conflict. We also invited the Centers for Medicare and Medicaid Services, but the agency declined to participate. These agencies were selected based on our prior reviews of regulatory user fees, as well as large amounts of regulatory user fee collections and diversity of fiscal and regulatory characteristics. To help identify topics for discussion at our panel, we developed and distributed a questionnaire to obtain these agencies’ views on how regulatory user fees are set, collected, used, and reviewed, including the extent to which agencies found the User Fee Design Guide to be applicable to their regulatory user fees. 
To minimize errors that might occur from respondents interpreting our questions differently from our intended purpose, we pretested the questionnaire by phone with EPA and SEC officials. During these pretests, we asked officials to review the questionnaire as we interviewed them to determine whether (1) the questions were clear and unambiguous, (2) the terms used were precise, (3) the questionnaire did not place an undue burden on the officials completing it, and (4) the questionnaire was objective and unbiased. We modified the questions based on feedback from the pretests, as appropriate. We distributed the questionnaire on February 25, 2015, and asked respondents to complete the questionnaire within an electronic form and return it as an e-mail attachment. We sent follow-up emails to agencies that had not yet responded on March 6, 2015. We received 11 responses by March 19, 2015. To account for agencies with multiple regulatory user fees, we instructed each agency to answer the questions for the one regulatory user fee—or group of similar fees—with the highest dollar amount of collections in fiscal year 2014. Respondents from the 11 agencies that completed the questionnaire included officials from agency budget offices and program offices. In addition, we held a 3-hour panel discussion with the 10 agencies named above on March 27, 2015. We used the questionnaire, as well as the preliminary results of our audit work, to determine discussion topics for the panel discussion. To do this, we summarized and synthesized this information to identify common themes. Discussion topics included the differences between regulatory user fees and other user fees, promising practices and challenges for managing regulatory user fees, fee reviews, and managing stakeholder relationships. We used information from the panel discussion to validate the themes identified in our case studies and ensure that they are broadly applicable to a larger set of agencies. 
While the results of the case studies, questionnaire, and panel discussion are designed to reflect the broad diversity of regulatory user fee characteristics, they cannot be generalized to all regulatory user fees. Because the universe of regulatory user fees is not defined, as noted above, it was not possible for us to design a representative sample. Throughout this report, we use specific, selected examples to illustrate how regulatory user fees are set, collected, used, and reviewed. We also met with Office of Management and Budget staff to supplement information obtained from the case studies, questionnaire, and panel discussion. We conducted this performance audit from August 2014 through September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Timothy Bober, Assistant Director, Steven Campbell, Susan Irving, Sharon Miller, and Laurel Plume made key contributions to this report. Also contributing to this report were JoAnna Berry, Tom Beall, A. Nicole Clowers, Marcia Crosse, Lorraine Ettaro, Robert Gebhart, Alfredo Gomez, James R. Jones Jr., Andrea Levine, Felicia Lopez, Julie Matta, Donna Miller, Amanda Postiglione, Susan Offutt, Oliver Richard, Cynthia Saunders, Anne Stevens, Colleen Taylor, Kimberly Walton, and Orice Williams Brown.

Tobacco Product Regulation: Most FDA Spending Funded Public Education, Regulatory Science, and Compliance and Enforcement Activities. GAO-14-561. Washington, D.C.: June 20, 2014.

Reexamining Regulations: Agencies Often Made Regulatory Changes, but Could Strengthen Linkages to Performance Goals. GAO-14-268. Washington, D.C.: April 11, 2014.

Federal User Fees: Fee Design Options and Implications for Managing Revenue Instability. GAO-13-820. Washington, D.C.: September 30, 2013.

Agricultural Quarantine Inspection Fees: Major Changes Needed to Align Fee Revenues with Program Costs. GAO-13-268. Washington, D.C.: March 1, 2013.

Federal Communications Commission: Regulatory Fee Process Needs to Be Updated. GAO-12-686. Washington, D.C.: August 10, 2012.

2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012.

User Fees: Additional Guidance and Documentation Could Further Strengthen IRS’s Biennial Review of Fees. GAO-12-193. Washington, D.C.: November 22, 2011.

Budget Issues: Better Fee Design Would Improve Federal Protective Service’s and Federal Agencies’ Planning and Budgeting for Security. GAO-11-492. Washington, D.C.: May 20, 2011.

Federal User Fees: Fee Design Characteristics and Trade-Offs Illustrated by USCIS’s Immigration and Naturalization Fees. GAO-10-560T. Washington, D.C.: March 23, 2010.

Budget Issues: Electronic Processing of Non-IRS Collections Has Increased but Better Understanding of Cost Structure Is Needed. GAO-10-11. Washington, D.C.: November 20, 2009.

Federal User Fees: Additional Analyses and Timely Reviews Could Improve Immigration and Naturalization User Fee Design and USCIS Operations. GAO-09-180. Washington, D.C.: January 23, 2009.

Immigration Application Fees: Costing Methodology Improvements Would Provide More Reliable Basis for Setting Fees. GAO-09-70. Washington, D.C.: January 23, 2009.

Federal User Fees: Improvements Could Be Made to Performance Standards and Penalties in USCIS’s Service Center Contracts. GAO-08-1170R. Washington, D.C.: September 25, 2008.

Federal User Fees: A Design Guide. GAO-08-386SP. Washington, D.C.: May 29, 2008.

Federal User Fees: Substantive Reviews Needed to Align Port-Related Fees With the Programs They Support. GAO-08-321. Washington, D.C.: February 22, 2008.

Federal User Fees: Key Aspects of International Air Passenger Inspection Fees Should Be Addressed Regardless of Whether Fees Are Consolidated. GAO-07-1131. Washington, D.C.: September 24, 2007.

Federal User Fees: Some Agencies Do Not Comply With Review Requirements. GAO/GGD-98-161. Washington, D.C.: June 30, 1998.

Federal User Fees: Budgetary Treatment, Status, and Emerging Management Issues. GAO/AIMD-98-11. Washington, D.C.: December 19, 1997.
Regulatory user fees are assessed on certain nonfederal entities subject to regulation in conjunction with regulatory activities. They represent a significant source of federal government revenue—some individual regulatory user fees exceed $1 billion in annual collections—and often support agencies' regulatory missions. Well-designed regulatory user fees can help fund regulatory programs while reducing taxpayer burden. GAO built on its prior user fee work by assessing what additional design and implementation characteristics exist specifically for regulatory user fees in terms of how these fees are: (1) set, (2) collected, (3) used, and (4) reviewed. To do so, GAO reviewed relevant literature and analyzed 10 regulatory user fees within 6 agencies—Environmental Protection Agency, Food and Drug Administration, National Credit Union Administration (NCUA), Nuclear Regulatory Commission, Office of the Comptroller of the Currency, and Securities and Exchange Commission. GAO selected these agencies based on their high amounts of fee collections and rulemaking activity and diverse fee characteristics. GAO also examined stakeholder views on these selected fees and held a multi-agency panel discussion to ensure the broad applicability of the findings. GAO identified key elements of regulatory user fees for decision makers to consider as they design, implement, and evaluate these fees. Setting regulatory user fees: Congress determines in statute the extent to which authority to make fee design and implementation decisions is retained or delegated to the agency. This has implications for whether agencies issue regulations to set fees, who will determine the level of regulatory activity, and how costs will be allocated among beneficiaries. In setting fees, agencies typically give special consideration to small businesses' ability to pay. Collecting regulatory user fees: Regulatory user fees are not always collected at the time of a specific service or transaction. 
While some regulatory user fees are charged for specific services, many are collected from an entire industry at regular intervals as prescribed by statute or regulation. Collecting fees this way can create a stable revenue stream. Agencies may use different methods to ensure collection of fees because they cannot always withhold services until the fee is paid. Using regulatory user fees: It is important to consider the availability of fee collections and unobligated balances. In some cases, agencies have the authority to use balances to mitigate revenue instability. In other cases, collected fees are only available to the agencies if Congress appropriates them. Reviewing regulatory user fees: Regulatory user fee reviews provide important information for decision makers, such as identifying the effects of changes in a regulated industry. The appropriate time frames and methods for agency review will vary by individual circumstances. Regulatory programs produce both public benefits and services to fee payers, so it is important that fee review processes provide opportunities for input from stakeholders, including fee payers and the general public. Agencies can promote transparency by providing information on how fees are calculated and used to address the diverse needs of policymakers, stakeholders, and the general public. Decision makers can help mitigate the appearance that fee payers have undue influence on regulatory outcomes through appropriate stakeholder involvement and dissemination of information. GAO is not making any recommendations in this report. NCUA provided written comments agreeing with GAO's findings. NCUA and three other agencies also provided technical comments, which were incorporated as appropriate.
SAFETEA-LU authorized over $45 billion for federal transit programs, including $8 billion for the New Starts program, from fiscal year 2005 through fiscal year 2009. Under New Starts, FTA identifies and recommends fixed-guideway transit projects for funding—including heavy, light, and commuter rail; ferry; and certain bus projects (such as bus rapid transit). FTA generally funds New Starts projects through full funding grant agreements (FFGA), which establish the terms and conditions for federal participation in a New Starts project. FFGAs also define a project’s scope, including the length of the system and the number of stations; its schedule, including the date when the system is expected to open for service; and its cost. For a project to obtain an FFGA, it must progress through a local or regional review of alternatives and meet a number of federal requirements, including requirements for information used in the New Starts evaluation and rating process (see fig. 1). New Starts projects must emerge from a regional, multimodal transportation planning process. The first two phases of the New Starts process—systems planning and alternatives analysis—address this requirement. The systems planning phase identifies the transportation needs of a region, while the alternatives analysis phase provides information on the benefits, costs, and impacts of different options, such as rail lines or bus routes. The alternatives analysis phase results in the selection of a locally preferred alternative, which is intended to be the New Starts project that FTA evaluates for funding, as required by statute. After a locally preferred alternative is selected, the project sponsor submits an application to FTA for the project to enter the preliminary engineering phase. 
When this phase is completed and federal environmental requirements are satisfied, FTA may approve the project’s advancement into final design, after which FTA may approve the project for an FFGA and proceed to construction, as provided for in statute. FTA oversees grantees’ management of projects from the preliminary engineering through construction phases and evaluates the projects for advancement into each phase of the process, as well as annually for the New Starts report to Congress. To help inform administration and congressional decisions about which projects should receive federal funds, FTA assigns ratings on the basis of various statutorily defined evaluation criteria—including both financial commitment and project justification criteria—and then assigns an overall rating. These evaluation criteria reflect a broad range of benefits and effects of the proposed project, such as cost-effectiveness, as well as the ability of the project sponsor to fund the project and finance the continued operation of its transit system. FTA assigns the proposed project a rating for each criterion and then assigns a summary rating for local financial commitment and project justification. Finally, FTA develops an overall project rating. Projects are rated at several points during the New Starts process—as part of the evaluation for entry into the preliminary engineering and final design phases, and yearly for inclusion in the New Starts annual report. As required by statute, the administration uses the FTA evaluation and rating process, along with the phase of development of New Starts projects, to decide which projects to recommend to Congress for funding. Although many projects receive a summary rating that would make them eligible for FFGAs, only a few are proposed for FFGAs in a given fiscal year. 
FTA proposes projects for FFGAs when it believes that the projects will be able to meet the following conditions during the fiscal year for which funding is proposed:

- All non-federal project funding must be committed and available for the project.
- The project must be in the final design phase and have progressed to the point where uncertainties about costs, benefits, and impacts (i.e., environmental or financial) are minimized.
- The project must meet FTA’s tests for readiness and technical capacity, which confirm that there are no remaining cost, project scope, or local financial commitment issues.

SAFETEA-LU made a number of changes to the New Starts program and FTA has made progress in implementing some of those changes. However, FTA has more work to do to implement these changes. In particular, although the Small Starts program has fewer application and document submission requirements than the New Starts program, project sponsors have expressed concern that the Small Starts program could be further streamlined. In addition, SAFETEA-LU added economic development to the list of evaluation criteria, but FTA has not fully incorporated this criterion into the New Starts and Small Starts evaluation and rating processes. SAFETEA-LU introduced a number of changes to the New Starts program. For example, SAFETEA-LU added economic development to the list of evaluation criteria that FTA must use in evaluating and rating New Starts projects and required FTA to issue notice and guidance each time significant changes are made to the program. In addition, SAFETEA-LU established the Small Starts program, a new capital investment grant program to provide funding for lower-cost fixed- and non-fixed-guideway projects such as bus rapid transit, streetcars, and commuter rail projects. This program is intended to advance smaller-scale projects through an expedited and streamlined evaluation and rating process. 
Small Starts projects are defined as those that require less than $75 million in federal funding and have a total cost of less than $250 million. According to FTA’s guidance, Small Starts projects must (a) meet the definition of a fixed guideway for at least 50 percent of the project length in the peak period or (b) be a corridor-based bus project with the following minimum elements:

- traffic signal priority/pre-emption, to the extent, if any, that there are traffic signals on the corridor;
- low-floor vehicles or level boarding;
- branding of the proposed service; and
- 10-minute peak/12-minute off-peak running times (i.e., headways) or better while operating at least 14 hours per weekday.

FTA has made progress in implementing SAFETEA-LU changes. For example, it published the New Starts policy guidance in January 2006 and February 2007, and interim guidance on the Small Starts program in July 2006. The July 2006 interim guidance introduced a separate eligibility category within the Small Starts program for “Very Small Starts” projects. Small Starts projects that qualify as Very Small Starts are simple, low-cost projects that FTA has determined qualify for a simplified evaluation and rating process. These projects must meet the same eligibility requirements as Small Starts projects and be located in corridors with more than 3,000 existing riders per average weekday who will benefit from the proposed project. In addition, the projects must have a total capital cost less than $50 million (for all project elements) and a per-mile cost of less than $3 million, excluding rolling stock (e.g., train cars). Table 1 describes SAFETEA-LU provisions for the New Starts program and the status of the implementation of those provisions as of April 2007. Although FTA has made progress in implementing SAFETEA-LU changes, more work remains. 
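Purely as an illustration, the dollar and ridership thresholds described above can be expressed as simple eligibility checks. This sketch is not part of FTA's guidance: the function and parameter names are our own, dollar figures are in millions, and the fixed-guideway and corridor-bus service-element criteria are omitted.

```python
# Illustrative sketch of the numeric thresholds described above.
# Names are hypothetical, not FTA's; dollar figures are in millions.

def is_small_starts(federal_share_m: float, total_cost_m: float) -> bool:
    """Small Starts: less than $75M in federal funding and less than
    $250M in total project cost."""
    return federal_share_m < 75 and total_cost_m < 250

def is_very_small_starts(federal_share_m: float, total_cost_m: float,
                         weekday_riders: int,
                         cost_excl_rolling_stock_m: float,
                         route_miles: float) -> bool:
    """Very Small Starts: meets the Small Starts limits, serves a corridor
    with more than 3,000 existing weekday riders, has a total capital cost
    under $50M, and costs under $3M per mile excluding rolling stock."""
    return (is_small_starts(federal_share_m, total_cost_m)
            and weekday_riders > 3000
            and total_cost_m < 50
            and cost_excl_rolling_stock_m / route_miles < 3)
```

Under these checks, for example, a $40 million, 10-mile project with a $20 million federal share, 3,500 existing weekday riders, and $24 million in costs excluding rolling stock would pass the Very Small Starts thresholds.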
Project sponsors identified two key issues for FTA to consider as it moves forward in implementing SAFETEA-LU changes: further streamline the Small Starts program and fully incorporate economic development into the New Starts and Small Starts evaluation and rating processes. FTA officials agree that the Small Starts program can be further streamlined. Further, FTA officials said they understand the importance of economic development and are currently working to develop an appropriate economic development measure. In implementing the Small Starts program, FTA has taken steps to streamline the application and evaluation and rating process for smaller-scale transit projects, as envisioned by SAFETEA-LU. According to our analysis of the number and types of requirements for the New Starts and Small Starts application processes, the Small Starts process has fewer requirements. For example, the requirements were reduced in the categories of travel forecasting, project justification, and local financial commitment. In addition, FTA developed simplified methods for travel forecasts that predict transportation benefits and reduced the number of documents that need to be submitted as part of the Small Starts application process. For example, the Small Starts application requires one-quarter fewer documents than the New Starts application. Furthermore, FTA established the Very Small Starts program, which has even fewer application and document submission requirements than the Small Starts program. Despite these efforts, many of the project sponsors we interviewed find the Small Starts application process time-consuming and costly to complete and would like to see FTA further streamline the process. Project sponsors frequently said that the current Small Starts application process takes as long and costs as much to complete as the New Starts application process, even though the planned projects cost less. 
For example, a project sponsor who applied for the Small Starts program told us that FTA asks its applicants to submit templates used in the New Starts application process that call for information not relevant to a Small Starts project. For example, while project sponsors are only required to submit an opening-year travel forecast as part of their Small Starts application, the template FTA provides asks for information on additional forecasting years. The project sponsor suggested that FTA develop a separate set of templates for the Small Starts program that would ask only for Small Starts-related information. FTA officials told us that in these cases, they would not expect project sponsors to provide the additional information that is not required. Another project sponsor we interviewed told us that although FTA tried to streamline the process by requiring ridership projections only for the opening year of Small Starts projects, the environmental impact statement still mandates the development of multi-year ridership projections. Such extensive ridership projections take a considerable amount of work, staff time, and funding to produce. Several other project sponsors who applied to the Small Starts or Very Small Starts programs expressed additional concerns about having to provide duplicate information, such as project finance and capital cost data that can be found in other required worksheets. FTA officials do not believe that such duplicate information is burdensome for project sponsors to submit. However, because some of the project sponsors are smaller entities with no previous experience with the New Starts program, their concerns may also reflect inexperience and a lack of in-house expertise and resources. 
In reviewing the Small Starts application process requirements, we also found that the application is not, in some cases, tailored for Small Starts applicants and, in several instances, requests duplicate information. FTA officials acknowledged that the Small Starts application process could be further streamlined and are working to reduce the burden, for example by minimizing the duplicate information project sponsors are currently required to submit. However, FTA officials noted that some requirements are statutorily defined or reflect industry-established planning principles. For example, SAFETEA-LU requires that projects, even Small Starts projects, emerge from an alternatives analysis that considered various options to address the transportation problem at hand. Therefore, only certain aspects of the process can or should be streamlined. Project sponsors also noted that FTA has not fully incorporated economic development, a new project justification evaluation criterion identified by SAFETEA-LU, into the evaluation process. Specifically, FTA currently assigns a weight of 50 percent each to cost-effectiveness and land use to calculate a project's overall rating; the other four statutorily identified criteria (economic development, mobility improvements, operating efficiencies, and environmental benefits) are not weighted. To reflect SAFETEA-LU's increased emphasis on economic development, FTA has encouraged project sponsors to submit information that they believe demonstrates the impacts of their proposed transit investments on economic development. According to FTA, this information is considered as an "other factor" in the evaluation process, but not weighted. 
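The weighting scheme described above, in which only cost-effectiveness and land use carry weight in the overall rating, can be sketched as follows. This is a hypothetical illustration of the arithmetic, not FTA's actual rating software; the function name, the numeric scale, and the keyword-argument mechanism are all invented for the sketch.

```python
def overall_rating(cost_effectiveness, land_use, **unweighted_criteria):
    """Combine criterion scores into an overall rating under the scheme
    described in the text: 50 percent weight each on cost-effectiveness
    and land use.

    Scores for the other statutorily identified criteria (economic
    development, mobility improvements, operating efficiencies,
    environmental benefits) may be passed as keyword arguments but,
    per the scheme described, have no effect on the result.
    """
    return 0.5 * cost_effectiveness + 0.5 * land_use
```

The sketch makes the sponsors' complaint concrete: a project scoring highly on economic development receives exactly the same overall rating as one scoring poorly on it, so long as the two weighted criteria are equal.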
However, FTA officials told us that few project sponsors submit information on their projects' economic development benefits for consideration as an "other factor." We previously reported that FTA's reliance on two evaluation criteria to calculate a project's overall rating is drifting away from the multiple-measure evaluation and rating process outlined in statute and current New Starts regulations. Thus, we recommended that, in the upcoming rulemaking, FTA improve the measures used to evaluate New Starts projects so that all of the statutorily defined criteria can be used in determining a project's overall rating, or provide a crosswalk in the regulations showing clear linkages between the criteria outlined in statute and the criteria and measures used in the evaluation and rating process. Many of the project sponsors and all industry groups we interviewed also stated that certain types of projects are penalized in the evaluation and rating process because of the weights assigned to the different evaluation criteria. Specifically, by not weighting economic development, the project sponsors and industry groups said, the evaluation and rating process does not consider an important benefit of some transit projects. They also expressed concern that the measure FTA uses to determine cost-effectiveness does not adequately capture the benefits of certain types of fixed-guideway projects, such as streetcars, that have shorter systems and provide enhanced access to a dense urban core rather than transport commuters from longer distances (like light or heavy rail). Project sponsors and an industry group we interviewed further noted that FTA's cost-effectiveness measure has influenced some project sponsors to change their project designs from more traditional fixed-guideway systems like light rail or streetcars to bus rapid transit, expressly to receive a more favorable cost-effectiveness rating from FTA. 
According to FTA officials, they understand the importance of economic development to the transit community and the concerns raised by project sponsors, and they are currently working to develop an appropriate economic development measure. FTA is currently soliciting input from industry groups on how to measure economic development, studying possible options, and planning to describe how it will incorporate economic development into the evaluation criteria in its upcoming rulemaking. FTA officials also stated that incorporating economic development into the evaluation process prior to the issuance of a regulation could create significant evaluation and rating uncertainty for project sponsors. Furthermore, they agreed with our previous recommendation that this issue should be addressed as part of their upcoming rulemaking, which they expect to be completed in April 2008. FTA officials noted that they have had difficulty developing an economic development measure that both accurately measures benefits and distinguishes competing projects. For example, FTA officials said that separating economic development benefits from land use benefits (another New Starts evaluation criterion) is difficult. In addition, FTA noted that many economic development benefits result from direct benefits (e.g., travel time savings), and therefore including them in the evaluation could lead to double counting the benefits FTA already measures and uses to evaluate projects. Furthermore, FTA noted that some economic development impacts may represent transfers between regions rather than a net benefit for the nation, raising questions about the usefulness of these benefits for a national comparison of projects. We have also reported on many of the same challenges of measuring and forecasting indirect benefits, such as economic development and land use impacts. 
For example, we noted that certain benefits are often double counted when evaluating transportation projects. We also noted that indirect benefits, such as economic development, may be more correctly considered transfers of direct user benefits or economic activity from one area to another. Therefore, estimating and adding such indirect benefits to direct benefits could constitute double counting and lead to overestimating a project's benefits. Despite these challenges, we have previously reported that it is important to consider economic development and land use impacts, since they often drive local transportation investment choices. The number of projects in the New Starts pipeline has decreased since the fiscal year 2001 evaluation and rating cycle, and the types of projects in the pipeline have changed. FTA and project sponsors ascribed these changes to different factors, with FTA officials citing their increased scrutiny of applications and projects, and the project sponsors pointing to the complex, time-consuming, and costly nature of the New Starts process. FTA is considering different ideas on how to improve the New Starts process, some of which may address the concerns identified by project sponsors. Since the fiscal year 2001 evaluation cycle, the number of projects in the New Starts pipeline—which includes projects in the preliminary engineering or final design phases—has decreased by more than half, from 48 projects in fiscal year 2001 to 19 projects in fiscal year 2008. Similarly, the number of projects FTA has evaluated, rated, and recommended for New Starts FFGAs has decreased since the fiscal year 2001 evaluation and rating cycle. Specifically, as shown in table 2, the number of projects that FTA evaluated and rated decreased by about two-thirds, from 41 projects to 14 projects. 
The composition of the pipeline—that is, the types of projects in the pipeline—has also changed since the fiscal year 2001 evaluation cycle. During fiscal years 2001 through 2007, light rail and commuter rail were the more prevalent modes for projects in the pipeline. In fiscal year 2008, bus rapid transit became the most common transit mode for projects in the pipeline. Overall, heavy rail has become a less common mode for projects in the pipeline since fiscal year 2001 (see fig. 2). The increase in bus rapid transit projects is likely due to a number of factors, including SAFETEA-LU's expanded definition of fixed guideways and foreign countries' positive experiences with this type of transit system. In particular, SAFETEA-LU expanded the definition of fixed guideways for the Small Starts program to include corridor-based bus projects. To be eligible, a corridor-based bus project must (1) operate in a separate right-of-way dedicated for public transit use for a substantial portion of the project, or (2) represent a substantial investment in a defined corridor. FTA and project sponsors identified different reasons for the decrease in the New Starts pipeline. FTA officials cited their increased scrutiny of applications to help ensure that only the strongest projects enter the pipeline, and said they had taken steps to remove projects from the pipeline that were inactive, not advancing, or did not adequately address identified problems. FTA officials told us that they believe projects had been progressing too slowly through the pipeline in recent years and therefore needed encouragement to move forward or be removed from the pipeline. Along these lines, since fiscal year 2004, FTA has issued warnings to project sponsors that alert them to specific project deficiencies that must be corrected by a specified date in order for the project to advance through the pipeline. If the deficiency is not corrected, FTA removes the project from the pipeline. 
To date, FTA has issued warnings for 13 projects. Of these, 3 projects have only recently received a warning and their status is to be determined; 3 projects have adequately addressed the deficiency identified by FTA; 1 project was removed by FTA for failing to address the identified deficiency; and 6 projects were withdrawn from the pipeline by their project sponsors. FTA officials told us that project sponsors are generally aware of FTA's efforts to better manage the pipeline. Although FTA has taken steps to remove inactive or stalled projects from the pipeline, FTA officials noted that most projects have been withdrawn by their project sponsors, not FTA. According to FTA data, 23 projects were withdrawn from the New Starts pipeline between 2001 and 2007. Of these, 16 were withdrawn at the request of the project sponsors, 6 were removed in response to efforts initiated by FTA, and 1 was removed at congressional direction (see fig. 3). Of the projects that were withdrawn by project sponsors, the most common reasons were that the projects were either reconfigured (the project scope or design was significantly changed) or reconsidered, or that the local financial commitment was not demonstrated. Similarly, FTA initiated the removal of 4 of the 6 projects for lack of a local financial commitment, often demonstrated by a failed referendum at the local level. Of the 23 projects withdrawn from the New Starts pipeline, 3 were expected to reenter the pipeline at a later date. The project sponsors we interviewed provided other reasons for the decrease in the number of projects in the New Starts pipeline. 
The most common reasons cited by project sponsors are that the New Starts process is too complex, costly, and time-consuming: Complexity and cost of the New Starts process: The majority of project sponsors we interviewed told us that the complexity of the requirements, including those for financial commitment projections and travel forecasts (which require extensive analysis and economic modeling), creates disincentives to entering the New Starts pipeline. Sponsors also told us that the expense involved in fulfilling the application requirements, including the costs of hiring additional staff and private grant consultants, discourages some project sponsors with fewer resources from applying for New Starts funding. Time required to complete the New Starts process: More than half of the project sponsors we interviewed said that the application process is too time-consuming or leads to project delays. One project sponsor we interviewed told us that constructing a project with New Starts funding (as opposed to without) delays the time line for the project by as much as several years, which in turn increases project costs as inflation raises labor and materials expenses over the delay. The lengthy nature of the New Starts process is due, at least in part, to the rigorous and systematic evaluation and rating process established by law, which we have previously noted could serve as a model for other transportation programs. In addition, FTA officials noted that most project delays are caused by the project sponsor, not FTA. Project sponsors also cited other reasons for the decrease in the pipeline, including that sponsors are finding other ways to fund projects, such as using other federal funds or seeking state, local, or private funding. 
One project sponsor remarked that sponsors try to avoid the New Starts process by obtaining a congressional designation, so that they can skip the cumbersome New Starts application process and construct their project faster. In addition, three other project sponsors we interviewed said that since the New Starts process is well established and outcomes are predictable, many potential project sponsors do not even enter the pipeline because they realize their projects are unlikely to receive New Starts funding. Our survey results also reflect many of the reasons for the decline in the New Starts pipeline. Among the project sponsors we surveyed with completed transit projects, the most common reasons given for not applying to the New Starts program were that the process is too lengthy or that the sponsor wanted to move the project along faster than could be done in the New Starts process. About two-thirds of these project sponsors reported that their most recent project was eligible for New Starts, yet more than one-fourth of them did not apply to the program. Instead, these project sponsors reported using other federal funding and state, local, and private funding—with other federal and local funding being the most commonly used and private funding the least commonly used—to fund their most recently completed project. Further, we found that two-thirds of the large project sponsors we surveyed applied to the New Starts program for their most recently completed project, while only about one-third of medium and smaller project sponsors did. Other reasons these project sponsors cited for not applying include sufficient funding from other sources to complete the project, concern about jeopardizing other projects submitted for New Starts funding, and difficulty understanding and completing the process and the program's eligibility requirements. 
FTA is considering and implementing different ideas on how to improve the New Starts process—many of which would address the concerns identified by project sponsors. For example, FTA has recognized that the process can be lengthy, and in 2006 it commissioned a study to examine, among other issues, opportunities for accelerating and simplifying the process for implementing the New Starts program. According to FTA officials, one of the study's recommendations was to implement project development agreements to solidify New Starts project schedules and improve FTA's timeline for reviews. FTA officials told us that they are implementing this recommendation and have already implemented project schedules for three New Starts projects in the pipeline. In addition, in February 2007, FTA proposed the elimination of a number of reporting requirements. FTA's Administrator stated that FTA will continue to look for ways to further improve the program. Our survey of project sponsors indicates that there will be future demand for New Starts, Small Starts, and Very Small Starts funding. About 45 percent (75 of 166) of the project sponsors we surveyed reported that they had a total of 137 planned transit projects, which we defined as those currently undergoing an alternatives analysis or other corridor-based planning study. According to the project sponsors, they anticipate seeking New Starts, Small Starts, or Very Small Starts funding for 100 of these 137 planned projects. More specifically, they anticipate seeking New Starts funding for 57 of the planned projects; Small Starts funding for 29 of the planned projects; and Very Small Starts funding for 14 of the planned projects (see fig. 4). Although the project sponsors we surveyed indicated that they were considering a range of project type alternatives in their planning, the most commonly cited alternatives were bus rapid transit and light rail. 
All of the Small Starts and Very Small Starts project sponsors we interviewed view the new Small Starts and Very Small Starts programs favorably. These project sponsors told us that they appreciate the emphasis FTA has placed on smaller transit projects through its new programs and the steps FTA has taken to streamline the application process for the programs. The project sponsors also told us that the Small Starts and Very Small Starts programs address a critical and unmet funding need, and that they believe their projects will be more competitive under these programs than under the New Starts program because they are vying for funding with projects and agencies of similar size. FTA officials told us that they have been responsive in providing assistance on the program when contacted. Our survey results also indicate that, through its Small Starts and Very Small Starts programs, FTA is attracting project sponsors that would not have otherwise applied for the New Starts program or have not previously applied to the New Starts program. For example, project sponsors indicated that they would not have applied for New Starts funding for 14 of the 18 Small Starts and Very Small Starts projects identified in our survey if the Small Starts and Very Small Starts programs had not been established. In addition, of the 28 project sponsors that intend to seek Small Starts or Very Small Starts funding for their planned projects, 13 have not previously applied for New Starts, Small Starts, or Very Small Starts funding. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information on this testimony, please contact Katherine Siggerud at (202) 512-2834 or [email protected]. 
Individuals making key contributions to this testimony include Nikki Clowers, Assistant Director; Elizabeth Eisenstadt; Carol Henn; Bert Japikse; Amanda Miller; SaraAnn Moessbauer; Nitin Rao; Tina Won Sherman; Bethany Claus Widick; and Elizabeth Wood. Public Transportation: New Starts Program Is in a Period of Transition. GAO-06-819. Washington, D.C.: August 30, 2006. Public Transportation: Preliminary Information on FTA’s Implementation of SAFETEA-LU Changes. GAO-06-910T. Washington, D.C.: June 27, 2006. Public Transportation: Opportunities Exist to Improve the Communication and Transparency of Changes Made to the New Starts Program. GAO-05-674. Washington, D.C.: June 28, 2005. Mass Transit: FTA Needs to Better Define and Assess Impact of Certain Policies on New Starts Program. GAO-04-748. Washington, D.C.: June 25, 2004. Mass Transit: FTA Needs to Provide Clear Information and Additional Guidance on the New Starts Ratings Process. GAO-03-701. Washington, D.C.: June 23, 2003. Mass Transit: Status of New Starts Program and Potential for Bus Rapid Transit Projects. GAO-02-840T. Washington, D.C.: June 20, 2002. Mass Transit: FTA’s New Starts Commitments for Fiscal Year 2003. GAO-02-603. Washington, D.C.: April 30, 2002. Mass Transit: FTA Could Relieve New Starts Program Funding Constraints. GAO-01-987. Washington, D.C.: August 15, 2001. Mass Transit: Implementation of FTA’s New Starts Evaluation Process and FY 2001 Funding Proposals. GAO/RCED-00-149. Washington, D.C.: April 28, 2000. Mass Transit: Status of New Starts Transit Projects With Full Funding Grant Agreements. GAO/RCED-99-240. Washington, D.C.: August 19, 1999. Mass Transit: FTA’s Progress in Developing and Implementing a New Starts Evaluation Process. GAO/RCED-99-113. Washington, D.C.: April 26, 1999. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Through the New Starts program, the Federal Transit Administration (FTA) identifies and recommends new fixed-guideway transit projects for funding--including heavy, light, and commuter rail; ferry; and certain bus projects. The Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) authorized the New Starts program through fiscal year 2009 and made a number of changes to the program, including creating a separate program commonly called Small Starts. This program is intended to offer an expedited and streamlined evaluation and rating process for smaller-scale transit projects. FTA subsequently introduced a separate eligibility category within the Small Starts program for "Very Small Starts" projects. Very Small Starts projects are simple, low-risk projects that FTA has determined qualify for a simplified evaluation and rating process. This testimony discusses GAO's preliminary findings on (1) FTA's implementation of SAFETEA-LU changes to the New Starts program, (2) the extent to which the New Starts pipeline (i.e., projects in the preliminary engineering and final design phases) has changed over time, and (3) future trends for the New Starts and Small Starts pipelines. To address these objectives, GAO surveyed 215 project sponsors and interviewed FTA officials, 15 project sponsors, and 3 industry groups. Our survey response rate was 77 percent. FTA has made progress in implementing SAFETEA-LU changes, but more work remains. Project sponsors frequently identified two key issues for FTA to consider as it moves forward in implementing SAFETEA-LU changes: (1) further streamline the Small Starts program and (2) fully incorporate economic development as a criterion in the New Starts and Small Starts evaluation and rating processes. According to our analysis of the number and types of requirements for New Starts and Small Starts application processes, the Small Starts process has fewer requirements. 
However, project sponsors said that FTA should further streamline the process by, for example, eliminating requests for duplicate information requested in required worksheets. SAFETEA-LU added economic development to the list of project justification evaluation criteria that FTA must use to evaluate and rate projects. However, FTA currently assigns a weight of 50 percent each to cost-effectiveness and land use in calculating a project's overall rating--the other 4 statutorily identified criteria, including economic development, are not weighted. We previously reported that FTA's reliance on two evaluation criteria to calculate a project's overall rating is drifting away from the multiple-measure evaluation and rating process outlined in statute. Further, without a weight for economic development, project sponsors say, the evaluation and rating process does not reflect an important benefit of certain projects. FTA officials said they are currently working to develop an appropriate economic development measure as part of their upcoming rulemaking. The New Starts pipeline--that is, projects in different stages of planning--has changed in size and composition since the fiscal year 2001 evaluation and rating cycle, and a variety of factors have contributed to these changes. Since then, the number of projects in the New Starts pipeline has decreased by more than half. Additionally, the types of projects in the pipeline have changed during this time frame, as bus rapid transit projects are now more common than commuter or light rail projects. FTA officials attributed the decrease in the pipeline to their increased scrutiny of applications to help ensure that only the strongest projects enter the pipeline, and to their efforts to remove projects from the pipeline that were not advancing or did not adequately address identified problems. 
Project sponsors GAO interviewed provided other reasons for the pipeline's decrease, including that the New Starts process is too complex, time-consuming, and costly. Our survey results reflect many of these same reasons for the decline in the pipeline. Despite these concerns, GAO's survey of project sponsors indicates future demand for New Starts, Small Starts, and Very Small Starts funding. The sponsors GAO surveyed reported having 137 planned projects and intend to seek New Starts, Small Starts, or Very Small Starts funding for almost three-fourths of these projects. Project sponsors GAO surveyed also reported considering a range of project type alternatives in their planning. The most commonly cited alternatives were bus rapid transit and light rail.
CMS has made progress strengthening provider enrollment to try to better ensure that only legitimate providers and suppliers are allowed to bill Medicare. However, CMS has not completed other actions that could help prevent individuals intent on fraud from enrolling, including implementation of some relevant PPACA provisions. Our previous work found persistent weaknesses in Medicare's enrollment standards and procedures that increased the risk of enrolling entities intent on defrauding the Medicare program. We, CMS, and the HHS Office of Inspector General (OIG) have previously identified two types of providers whose services and items are especially vulnerable to improper payments and fraud—home health agencies (HHA) and suppliers of durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS). We found weaknesses in oversight of these providers' and suppliers' enrollment. For example, in 2008, we identified weaknesses when we created two fictitious DMEPOS companies, which were subsequently enrolled by CMS's contractor and given permission to begin billing Medicare. In 2009, we found that CMS's contractors were not requiring HHAs to resubmit enrollment information for re-verification every 5 years as required by CMS. To strengthen the Medicare enrollment process, in 2006 CMS began requiring all providers and suppliers, including those that order HHA services or DMEPOS for beneficiaries, to be enrolled in Medicare. The agency also required all providers and suppliers to report their National Provider Identifiers (NPI) on enrollment applications, which can help address fraud because providers and suppliers must submit either their Social Security number or their employer identification number and state licensing information to obtain an NPI. In 2007, CMS initiated the first phase of a Medicare competitive-bidding program for DMEPOS. This program requires suppliers' bids to include new financial documentation for the year prior to submitting the bids. 
Because CMS can now disqualify suppliers based in part on new scrutiny of their financial documents, competitive bidding can help reduce fraud. Finally, in 2010, CMS also required that all DMEPOS suppliers be accredited by a CMS-approved accrediting organization to ensure that they meet certain quality standards. Such accreditation also increased scrutiny of these businesses. PPACA authorized CMS to implement several actions to strengthen provider enrollment. As of April 2012, the agency has completed some of these actions. Screening Provider Enrollment Applications by Risk Level: CMS and OIG issued a final rule with comment period in February 2011 to implement some of the new screening procedures required by PPACA. CMS designated three levels of risk—high, moderate, and limited—with different screening procedures for categories of Medicare providers at each level. Providers in the high-risk level are subject to the most rigorous screening. To determine which providers to place in these risk levels, CMS considered issues such as past occurrences of improper payments and fraud among different categories of providers. Based in part on our work and that of the OIG, CMS designated newly enrolling HHAs and DMEPOS suppliers as high risk and designated other providers at lower levels. (See table 1.) Providers at all risk levels are screened to verify that they meet specific requirements established by Medicare, such as having current licenses or accreditation and valid Social Security numbers. High- and moderate-risk providers are additionally subject to unannounced site visits. Further, depending on the risks presented, PPACA authorizes CMS to require fingerprint-based criminal history checks and the posting of surety bonds for certain providers. CMS may also provide enhanced oversight for specific periods for new providers and for initial claims of DMEPOS suppliers. 
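The tiered screening just described is cumulative: each higher risk level adds procedures on top of the baseline checks applied to all providers. The mapping below is a hypothetical illustration of that structure; the procedure names paraphrase the text, and the data structure itself is invented, not drawn from CMS systems.

```python
# Baseline checks applied to providers at every risk level, per the
# screening scheme described in the text.
BASELINE_CHECKS = [
    "verify current license or accreditation",
    "validate Social Security number",
]

# Each higher risk level layers additional procedures on the baseline.
# The fingerprint check is authorized for certain providers depending on
# the risks presented, so it is shown here only as an optional high-risk
# procedure.
SCREENING_BY_RISK = {
    "limited": list(BASELINE_CHECKS),
    "moderate": BASELINE_CHECKS + ["unannounced site visit"],
    "high": BASELINE_CHECKS + [
        "unannounced site visit",
        "fingerprint-based criminal history check (as authorized)",
    ],
}

def screening_procedures(risk_level):
    """Return the screening procedures for a given risk level."""
    return SCREENING_BY_RISK[risk_level]
```

A lookup like `screening_procedures("high")` then yields every baseline check plus the site-visit and fingerprint procedures, reflecting the cumulative design of the rule.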
CMS indicated that the agency will continue to review the criteria for its screening levels on an ongoing basis and would publish changes if the agency decided to update the assignment of screening levels for categories of Medicare providers. This may become necessary because fraud is not confined to HHAs and DMEPOS suppliers. We are currently examining the types of providers involved in fraud cases investigated by the OIG and the Department of Justice (DOJ), which may help illuminate risk to the Medicare program from different types of providers. Further, in their 2011 annual report on the Health Care Fraud and Abuse Control Program, DOJ and HHS reported convictions or other legal actions, such as exclusions or civil monetary penalties, against several types of Medicare providers other than DMEPOS suppliers and HHAs, including pharmacists, orthopedic surgeons, infusion and other types of medical clinics, and physical therapy services. CMS also has established triggers for adjustments to an individual provider’s risk level. For example, CMS regulations state that an individual provider or supplier at the limited- or moderate-risk level that has had its billing privileges revoked by a Medicare contractor within the last 10 years and is attempting to re-enroll would move to the high-risk level for screening. New National Enrollment Screening and Site Visit Contractors: In a further effort to strengthen its enrollment processes, CMS contracted with two new entities at the end of 2011 to assume centralized responsibility for automated screening of provider and supplier enrollment and for conducting site visits of providers. Automated-screening contractor. 
In December 2011, the new contractor began to establish systems to conduct automated screening of providers and suppliers to ensure they meet Medicare eligibility criteria (such as valid licensure, accreditation, a valid NPI, and no presence on the OIG list of providers and suppliers excluded from participating in federal health care programs). Prior to the implementation of this new automated screening, such screening was done manually for the 30,000 enrollees each month by CMS’s Medicare Administrative Contractors (MAC), which enroll Medicare providers, and the National Supplier Clearinghouse (NSC), which enrolls DMEPOS suppliers. According to CMS, the old screening process was neither efficient nor timely. CMS officials said that in 2012, the automated-screening contractor began automated screening of the licensure status of all currently enrolled Medicare providers and suppliers. The agency said it expects the automated- screening contractor to begin screening newly enrolling providers and suppliers later this year. CMS expects that the new, national contractor will enable better monitoring of providers and suppliers on a continuous basis to help ensure they continue to meet Medicare enrollment requirements. The new screening contractor will also help the MACs and the NSC maintain enrollment information in CMS’s Provider Enrollment Chain and Ownership System (PECOS)—a database that contains details on enrolled providers and suppliers. In addition, CMS officials said the automated-screening contractor is developing an individual risk score for each provider or supplier, similar to a credit risk score. Although these individual scores are not currently used to determine an individual provider’s placement in a risk level, CMS indicated that this risk score may be used eventually as additional risk criteria in the screening process. Site visits for all providers designated as moderate and high risk. 
Beginning in February 2012, a single national site-visit contractor began conducting site visits of moderate- and high-risk providers to determine if sites are legitimate and the providers meet certain Medicare standards. The contractor collects the same information from each site visit, including photographic evidence that will be available electronically through a Web portal accessible to CMS and its other contractors. The national site-visit contractor is expected to validate the legitimacy of these sites. CMS officials told us that the contractor will provide consistency in site visits across the country, in contrast to CMS relying on different MACs to conduct any required site visits. Implementation of other enrollment screening actions authorized by PPACA that could help CMS reduce the enrollment of providers and suppliers intent on defrauding the Medicare program remains incomplete, including: Surety bond—PPACA authorizes CMS to require a surety bond for certain types of at-risk providers, which can be helpful in recouping erroneous payments. CMS officials expect to issue a proposed rule to require surety bonds as conditions of enrollment for certain other types of providers. Extending the use of surety bonds to these new entities would augment a previous statutory requirement for DMEPOS suppliers to post a surety bond at the time of enrollment. CMS issued final instructions to its MACs, effective February 2012, for recovering DMEPOS overpayments through surety bonds. CMS officials reported that as of April 19, 2012, they had issued notices to 20 surety bond companies indicating intent to collect funds, but had not collected any funds as of that date. Fingerprint-based criminal background checks—CMS officials told us that they are working with the Federal Bureau of Investigation to arrange contracts to help conduct fingerprint-based criminal background checks of high-risk providers and suppliers. 
On April 13, 2012, CMS issued a request for information regarding the potential solicitation of a single contract for Medicare provider and supplier fingerprint-based background checks. The agency expects to have the contract in place before the end of 2012. Provider and supplier disclosure—CMS officials said the agency is reviewing options for regulations requiring increased disclosure of prior actions taken against providers and suppliers enrolling or revalidating enrollment in Medicare, such as whether the provider or supplier has been subject to a payment suspension from a federal health care program. In April 2012, agency officials indicated that they were not certain when the regulation would be published. CMS officials noted that the additional disclosure requirements are complicated by provider and supplier concerns about what types of information will be collected, what CMS will do with it, and how the privacy and security of this information will be maintained. Compliance and ethics program—CMS officials said that the agency was studying criteria found in OIG model plans as it worked to address the PPACA requirement that the agency establish the core elements of compliance programs for providers and suppliers. As of April 2012, CMS did not have a projected target date for implementation. Increased efforts to review claims on a prepayment basis can better prevent payments that should not be made, while improving systems used to review claims on a post-payment basis could better identify patterns of fraudulent billing for further investigation. Having robust controls in claims payment systems to prevent payment of problematic claims can help reduce loss. As claims go through Medicare’s electronic claims payment systems, they are subjected to automated prepayment controls called “edits,” instructions programmed in the systems to prevent payment of incomplete or incorrect claims. 
Some edits use provider enrollment information, while others use information on coverage or payment policies, to determine if claims should be paid. Most of these controls are fully automated; if a claim does not meet the criteria of the edit, it is automatically denied. Other prepayment edits are manual; they flag a claim for individual review by trained staff who determine if it should be paid. Due to the volume of claims, CMS has reported that less than 1 percent of Medicare claims are subject to manual medical record review by trained staff. Having effective pre-payment edits that deny claims for ineligible providers and suppliers depends on having timely and accurate information about them, such as whether the providers are currently enrolled and have the appropriate license or accreditation to provide specific services. We previously recommended that CMS take action to ensure the timeliness and accuracy of PECOS—the database that maintains Medicare provider and supplier enrollment information. We noted that weaknesses in PECOS data may result in CMS making improper payments to ineligible providers and suppliers. These weaknesses are related to the frequency with which CMS’s contractors update enrollment information and the timeliness and accuracy of information obtained from outside entities, such as state licensing boards, the OIG, and the Social Security Administration’s Death Master File, which contains information on deceased individuals that can be used to identify deceased providers in order to terminate those providers’ Medicare billing privileges. These sources vary in the ease with which CMS contractors have been able to access their data and the frequency with which they are updated. 
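The distinction between fully automated edits (which deny a claim outright) and manual edits (which flag a claim for review by trained staff) can be illustrated with a short sketch. The claim fields, the enrollment lookup, and the "medically unlikely" threshold below are hypothetical assumptions for exposition, not Medicare's actual edit logic.

```python
# Minimal sketch of prepayment "edits": enrollment-based edits
# automatically deny, while a policy-based edit flags a claim for
# manual review. All field names and thresholds are illustrative.

def run_prepayment_edits(claim, enrolled_providers):
    """Return ('deny', reason), ('manual_review', reason), or ('pay', None)."""
    provider = enrolled_providers.get(claim["provider_id"])
    # Enrollment-based automated edits: deny claims from providers
    # who are not currently enrolled or lack a valid license.
    if provider is None:
        return ("deny", "provider not enrolled")
    if not provider["license_valid"]:
        return ("deny", "provider license invalid")
    # Policy-based manual edit: an atypically high quantity is flagged
    # for individual review by trained staff rather than auto-denied.
    if claim["units"] > 100:  # hypothetical "medically unlikely" threshold
        return ("manual_review", "medically unlikely quantity")
    return ("pay", None)
```

As the testimony notes, only a small fraction of claims can be routed to manual review in practice, which is why the accuracy of the enrollment data feeding the automated edits matters so much.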
CMS has indicated that its new national-screening contractor should improve the timeliness and accuracy of the provider and supplier information in PECOS by centralizing the process, increasing automation of the process, continuously checking databases, and incorporating new sources of data, such as financial, business, tax, and geospatial data. However, it is too soon to tell if these efforts will better prevent payments to ineligible providers and suppliers. Having effective edits to implement coverage and payment policies before payment is made can also help to deter fraud. The Medicare program has defined categories of items and services eligible for coverage and excludes from coverage items or services that are determined not to be “reasonable and necessary for the diagnosis and treatment of an illness or injury or to improve functioning of a malformed body part” (42 U.S.C. § 1395y(a)(1)(A)). CMS and its contractors set policies regarding when and how items and services will be covered by Medicare, as well as coding and billing requirements for payment, which also can be implemented in the payment systems through edits. We have previously found Medicare’s payment systems did not have edits for items and services unlikely to be provided in the normal course of medical care. CMS has since implemented edits to flag such claims—called Medically Unlikely Edits. We are currently assessing Medicare’s prepayment edits based on coverage and payment policies, including the Medically Unlikely Edits. Additionally, suspending payments to providers suspected of fraudulent billing can be an effective tool to prevent excess loss to the Medicare program while suspected fraud is being investigated. For example, in March 2011, the OIG testified that payment suspensions and pre-payment edits on 18 providers and suppliers stopped the potential loss of more than $1.3 million submitted in claims by these individuals. 
Furthermore, HHS recently reported that it imposed payment suspensions on 78 home health agencies in conjunction with arrests related to a multimillion-dollar health care fraud scheme. While CMS had the authority to impose payment suspensions prior to PPACA, the law specifically authorized CMS to suspend payments to providers pending the investigation of credible allegations of fraud. CMS officials reported that the agency had imposed 212 payment suspensions since the regulations implementing the PPACA provisions took effect. Agency officials indicated that almost half of these suspensions were imposed this calendar year, representing about $6 million in Medicare claims. CMS is replacing its legacy Program Safeguard Contractors (PSC) with seven ZPICs. While the PSCs were responsible for program integrity for specific parts of Medicare, such as Part A, the ZPICs are responsible for Medicare’s fee-for-service program integrity in their geographic zones. For simplicity, we refer to these program integrity contractors as ZPICs throughout the testimony. CMS’s new Fraud Prevention System (FPS) uses analytic methods to identify suspicious patterns or abnormalities in Medicare provider networks, claims billing patterns, and beneficiary utilization. According to CMS, FPS may enhance CMS’s ability to identify potential fraud because it analyzes large numbers of claims from multiple data sources nationwide simultaneously before payment is made, thus allowing CMS to examine billing patterns across geographic regions for those that may indicate fraud. The results of FPS are used by the ZPICs to initiate investigations that could result in payment suspensions, implementation of automatic claim denials, identification of additional prepayment edits, or the revocation of Medicare billing privileges. CMS began using FPS to screen all FFS claims nationwide prior to payment as of June 30, 2011, and CMS has been directing the ZPICs to investigate high-priority leads generated by the system. 
Because FPS is relatively new and we have not completed our work, it is too soon to determine whether FPS will improve CMS’s ability to address fraud. Questions have also been raised about CMS’s ability to adequately assess ZPICs’ performance, and we have been asked to examine CMS’s management of the ZPICs, including criteria used by CMS to evaluate their effectiveness. “Bust-out” fraud schemes, in which providers or suppliers suddenly bill very high volumes of claims to obtain large payments from Medicare, could be addressed by adding a prepayment edit. Such an edit would set thresholds to stop payment for atypically rapid increases in billing, helping to stem losses from these schemes. In our prior work on DMEPOS, we recommended that CMS require its contractors to develop thresholds for unexplained increases in billing and use them to develop pre-payment controls that could suspend these claims for further review before payment. CMS officials told us that they are currently considering developing analytic models in FPS that could help CMS and ZPICs identify and address billing practices suggestive of bust-outs. Further actions are needed to improve use of two CMS information technology systems that could help CMS and program integrity contractors identify fraud after claims have been paid. The Integrated Data Repository (IDR) became operational in September 2006 as a central data store of Medicare and other data needed to help CMS’s program integrity staff, ZPICs, and other contractors prevent and detect improper payments of claims. However, we found that IDR did not include all the data that were planned to be incorporated by fiscal year 2010 because of technical obstacles and delays in funding. Further, as of December 2011 the agency had not finalized plans or developed reliable schedules for efforts to incorporate these data, which could lead to additional delays. 
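The threshold-based prepayment control recommended for "bust-out" schemes amounts to comparing a supplier's current billing volume against its own recent history. The following sketch is purely illustrative: the baseline window and multiplier are hypothetical parameters, not values CMS uses.

```python
# Illustrative sketch of a threshold control for "bust-out" billing:
# suspend claims for further review when current-month billing rises
# atypically fast relative to the supplier's own baseline.
# The multiplier and window are hypothetical, not CMS parameters.

def flag_bustout(monthly_billing, multiplier=5.0):
    """Flag the current month if its billing exceeds `multiplier` times
    the average of the preceding months.

    `monthly_billing` is a list of dollar totals, oldest first;
    the last entry is the current month.
    """
    if len(monthly_billing) < 2:
        return False  # no billing history to compare against
    history, current = monthly_billing[:-1], monthly_billing[-1]
    baseline = sum(history) / len(history)
    return baseline > 0 and current > multiplier * baseline
```

A flagged supplier's claims would be held for review rather than paid, which is how such a control could stem losses before payment is made.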
One Program Integrity (One PI) is a Web portal intended to provide CMS staff, ZPICs, and other contractors with a single source of access to data contained in IDR, as well as tools for analyzing those data. While One PI is operational, we reported in December 2011 that CMS had trained few program integrity analysts and that the system was not being widely used. We recommended that CMS take steps to finalize plans and reliable schedules for fully implementing and expanding the use of both IDR and One PI. Although the agency told us in April 2012 that it had initiated activities to incorporate some additional data into IDR and expand the use of One PI, such as training more ZPIC and other staff, it has not fully addressed our recommendations. Having mechanisms in place to resolve vulnerabilities that lead to improper payments is critical to effective program management and could help address fraud. A number of different types of program integrity contractors are responsible for identifying and reporting vulnerabilities to CMS. However, our work and the work of the OIG have shown weaknesses in CMS’s processes to address vulnerabilities identified by these contractors. CMS’s Recovery Audit Contractors (RAC) are specifically charged with identifying improper payments and vulnerabilities that could lead to such payment errors. However, in our March 2010 report on the RAC demonstration program, we found that CMS had not established an adequate process during the demonstration or in planning for the national program to ensure prompt resolution of such identified vulnerabilities in Medicare; further, the majority of the most significant vulnerabilities identified during the demonstration were not addressed. 
We recommended that CMS develop and implement a corrective action process that includes policies and procedures to ensure the agency promptly (1) evaluates findings of RAC audits, (2) decides on the appropriate response and a time frame for taking action based on established criteria, and (3) acts to correct the vulnerabilities identified. (See GAO, Medicare Recovery Audit Contracting: Weaknesses Remain in Addressing Vulnerabilities to Improper Payments, Although Improvements Made to Contractor Oversight, GAO-10-143 (Washington, D.C.: Mar. 31, 2010).) The OIG also reported weaknesses in how CMS ensured that vulnerabilities identified by other contractors were resolved. CMS had not resolved or taken significant action to resolve 38 of 44 vulnerabilities (86 percent) reported in 2009 by ZPICs. Only 1 vulnerability had been fully resolved by January 2011. The OIG made several recommendations, including that CMS have written procedures and time frames to assure that vulnerabilities were resolved. CMS has indicated that it is now tracking vulnerabilities identified from several types of contractors through a single vulnerability tracking process. We are currently examining aspects of CMS’s vulnerability tracking process and will be reporting on it soon. Although CMS has taken some important steps to identify and prevent fraud, including implementing provisions in PPACA and the Small Business Jobs Act, more remains to be done to prevent making erroneous Medicare payments due to fraud. In particular, we have found CMS could do more to strengthen provider enrollment screening to avoid enrolling those intent on committing fraud, improve pre- and post-payment claims review to identify and respond to patterns of suspicious billing activity more effectively, and identify and address vulnerabilities to reduce the ease with which fraudulent entities can obtain improper payments. It is critical that CMS implement and make full use of new authorities granted by recent legislation, as well as incorporate recommendations made by us and the OIG in these areas. 
Moving from responding once fraud has already occurred to preventing it from occurring in the first place is key to ensuring that federal funds are used efficiently and for their intended purposes. As all of these new authorities and requirements become part of Medicare’s operations, additional evaluation and oversight will be necessary to determine whether they are implemented as required and have the desired effect. We have several studies underway that assess efforts to fight fraud in Medicare and that should continue to help CMS refine and improve its fraud detection and prevention efforts. Notably, we are assessing the effectiveness of different types of pre-payment edits in Medicare and of CMS’s oversight of its contractors in implementing those edits to help ensure that Medicare pays claims correctly the first time. We are also examining the use of predictive analytics by CMS and the ZPICs to improve fraud prevention and detection. ZPICs play an important role in detecting and investigating fraud and identifying vulnerabilities, and FPS will likely play an increasing role in how ZPICs conduct their work. Additionally, we have work under way to identify the types of providers and suppliers currently under investigation and those that have been found to have engaged in fraudulent activities. These studies may enable us to point out additional actions for CMS that could help the agency more systematically reduce fraud in the Medicare program. Due to the amount of program funding at risk, fraud will remain a continuing threat to Medicare, so continuing vigilance to reduce vulnerabilities will be necessary. Individuals who want to defraud Medicare will continue to develop new approaches to try to circumvent CMS’s safeguards and investigative and enforcement efforts. Although targeting certain types of providers that the agency has identified as high risk may be useful, it may allow other types of providers committing fraud to go unnoticed. 
We will continue to assess efforts to fight fraud and provide recommendations to CMS, as appropriate, that we believe will assist the agency and its contractors in this important task. We urge CMS to continue its efforts as well. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you or other members of the committee may have. For further information about this statement, please contact Kathleen M. King at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Thomas Walke, Assistant Director; Michael Erhardt; Eden Savino; and Jennifer Whitworth were key contributors to this statement. Medicare Program Integrity: CMS Continues Efforts to Strengthen the Screening of Providers and Suppliers. GAO-12-351. Washington, D.C.: April 24, 2012. Improper Payments: Remaining Challenges and Strategies for Governmentwide Reduction Efforts. GAO-12-573T. Washington, D.C.: March 28, 2012. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Expand Efforts to Support Program Integrity Initiatives. GAO-12-292T. Washington, D.C.: December 7, 2011. Medicare Part D: Instances of Questionable Access to Prescription Drugs. GAO-12-104T. Washington, D.C.: October 4, 2011. Medicare Part D: Instances of Questionable Access to Prescription Drugs. GAO-11-699. Washington, D.C.: September 6, 2011. Medicare Integrity Program: CMS Used Increased Funding for New Activities but Could Improve Measurement of Program Effectiveness. GAO-11-592. Washington, D.C.: July 29, 2011. Improper Payments: Reported Medicare Estimates and Key Remediation Strategies. GAO-11-842T. Washington, D.C.: July 28, 2011. 
Fraud Detection Systems: Additional Actions Needed to Support Program Integrity Efforts at Centers for Medicare and Medicaid Services. GAO-11-822T. Washington, D.C.: July 12, 2011. Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Ensure More Widespread Use. GAO-11-475. Washington, D.C.: June 30, 2011. Medicare and Medicaid Fraud, Waste, and Abuse: Effective Implementation of Recent Laws and Agency Actions Could Help Reduce Improper Payments. GAO-11-409T. Washington, D.C.: March 9, 2011. Medicare: Program Remains at High Risk Because of Continuing Management Challenges. GAO-11-430T. Washington, D.C.: March 2, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Medicare Part D: CMS Conducted Fraud and Abuse Compliance Plan Audits, but All Audit Findings Are Not Yet Available. GAO-11-269R. Washington, D.C.: February 18, 2011. Medicare Fraud, Waste, and Abuse: Challenges and Strategies for Preventing Improper Payments. GAO-10-844T. Washington, D.C.: June 15, 2010. Medicare Recovery Audit Contracting: Weaknesses Remain in Addressing Vulnerabilities to Improper Payments, Although Improvements Made to Contractor Oversight. GAO-10-143. Washington, D.C.: March 31, 2010. Medicare Part D: CMS Oversight of Part D Sponsors’ Fraud and Abuse Programs Has Been Limited, but CMS Plans Oversight Expansion. GAO-10-481T. Washington, D.C.: March 3, 2010. Medicare: CMS Working to Address Problems from Round 1 of the Durable Medical Equipment Competitive Bidding Program. GAO-10-27. Washington, D.C.: November 6, 2009. Improper Payments: Progress Made but Challenges Remain in Estimating and Reducing Improper Payments. GAO-09-628T. Washington, D.C.: April 22, 2009. Medicare: Improvements Needed to Address Improper Payments in Home Health. GAO-09-185. Washington, D.C.: February 27, 2009. Medicare Part D: Some Plan Sponsors Have Not Completely Implemented Fraud and Abuse Programs, and CMS Oversight Has Been Limited. GAO-08-760. 
Washington, D.C.: July 21, 2008. Medicare: Covert Testing Exposes Weaknesses in the Durable Medical Equipment Supplier Screening Process. GAO-08-955. Washington, D.C.: July 3, 2008. Medicare: Thousands of Medicare Providers Abuse the Federal Tax System. GAO-08-618. Washington, D.C.: June 13, 2008. Medicare: Competitive Bidding for Medical Equipment and Supplies Could Reduce Program Payments, but Adequate Oversight Is Critical. GAO-08-767T. Washington, D.C.: May 6, 2008. Improper Payments: Status of Agencies’ Efforts to Address Improper Payment and Recovery Auditing Requirements. GAO-08-438T. Washington, D.C.: January 31, 2008. Improper Payments: Federal Executive Branch Agencies’ Fiscal Year 2007 Improper Payment Estimate Reporting. GAO-08-377R. Washington, D.C.: January 23, 2008. Medicare: Improvements Needed to Address Improper Payments for Medical Equipment and Supplies. GAO-07-59. Washington, D.C.: January 31, 2007. Medicare: More Effective Screening and Stronger Enrollment Standards Needed for Medical Equipment Suppliers. GAO-05-656. Washington, D.C.: September 22, 2005. Medicare: CMS’s Program Safeguards Did Not Deter Growth in Spending for Power Wheelchairs. GAO-05-43. Washington, D.C.: November 17, 2004. Medicare: CMS Did Not Control Rising Power Wheelchair Spending. GAO-04-716T. Washington, D.C.: April 28, 2004. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO has designated Medicare as a high-risk program. Since 1990, every two years GAO has provided Congress with an update on its high-risk program, which highlights government operations that are at high risk for waste, fraud, abuse, or mismanagement, or are in need of broad reform. Medicare has been included in this program in part because its complexity makes it particularly vulnerable to fraud. Fraud involves an intentional act or representation to deceive with the knowledge that the action or representation could result in gain. The deceptive nature of fraud makes its extent in the Medicare program difficult to measure in a reliable way, but it is clear that fraud contributes to Medicare’s fiscal problems. Reducing fraud could help rein in the escalating costs of the program. This statement focuses on the progress made and important steps to be taken by CMS and its program integrity contractors to reduce fraud in Medicare. These contractors perform functions such as screening and enrolling providers, detecting and investigating potential fraud, and identifying improper payments and vulnerabilities that could lead to payment errors. This statement is based on relevant GAO products and recommendations issued from 2004 through 2012 using a variety of methodologies, such as analyses of Medicare claims, review of relevant policies and procedures, and interviews with officials. The Centers for Medicare & Medicaid Services (CMS)—the agency that administers Medicare—has made progress in implementing several key strategies GAO identified in prior work as helpful in protecting Medicare from fraud; however, important actions that could help CMS and its program integrity contractors combat fraud remain incomplete. Provider Enrollment: GAO’s previous work found persistent weaknesses in Medicare’s enrollment standards and procedures that increased the risk of enrolling entities intent on defrauding the program. 
CMS has strengthened provider enrollment—for example, in February 2011, CMS designated three levels of risk—high, moderate, and limited—with different screening procedures for categories of providers at each level. However, CMS has not completed other actions, including implementation of some relevant provisions of the Patient Protection and Affordable Care Act (PPACA). Specifically, CMS has not (1) determined which providers will be required to post surety bonds to help ensure that payments made for fraudulent billing can be recovered, (2) contracted for fingerprint-based criminal background checks, (3) issued a final regulation to require additional provider disclosures of information, and (4) established core elements for provider compliance programs. Pre- and Post-payment Claims Review: GAO had previously found that increased efforts to review claims on a prepayment basis can prevent payments from being made for potentially fraudulent claims, while improving systems used by CMS and its contractors to review claims on a post-payment basis could better identify patterns of potentially fraudulent billing for further investigation. CMS has controls in Medicare’s claims-processing systems to determine if claims should be paid, denied, or reviewed further. These controls depend on timely and accurate information about providers, which GAO has previously recommended that CMS strengthen. GAO is currently examining CMS’s new Fraud Prevention System, which uses analytic methods to examine claims before payment to develop investigative leads for Zone Program Integrity Contractors (ZPIC), the contractors responsible for detecting and investigating potential fraud. Additionally, CMS could improve its post-payment claims review to identify patterns of fraud by incorporating prior GAO recommendations to develop plans and timelines for fully implementing and expanding two information technology systems it developed. 
Robust Process to Address Identified Vulnerabilities: Having mechanisms in place to resolve vulnerabilities that lead to erroneous payments is critical to effective program management and could help address fraud. Such vulnerabilities are service- or system-specific weaknesses that can lead to payment errors—for example, providers receiving multiple payments as a result of incorrect coding. GAO has previously identified weaknesses in CMS’s process for addressing identified vulnerabilities, and the Department of Health and Human Services’ Office of Inspector General recently reported on CMS’s inaction in addressing vulnerabilities identified by its contractors, including ZPICs. GAO is evaluating the current status of the process for assessing and developing corrective actions to address vulnerabilities.
The DOD supply chain is a global network that provides materiel, services, and equipment to the joint force. DOD’s supply-chain responsiveness and reliability affect the readiness and capabilities of military forces and are critical to the overall success of joint operations. Inventory management, a key component of the DOD supply chain, is the process of determining requirements and procuring, managing, cataloging, distributing, overhauling, and disposing of materiel. DOD manages more than 5 million secondary inventory items, with a reported value of approximately $98 billion as of the end of fiscal year 2013. Management and oversight of DOD inventory is a responsibility shared among the Under Secretary of Defense for Acquisition, Technology and Logistics within the Office of the Secretary of Defense; DLA; and the military services. The Under Secretary of Defense for Acquisition, Technology and Logistics and its subordinate, the Assistant Secretary of Defense for Logistics and Materiel Readiness, are responsible for developing materiel-management policies and ensuring their implementation in a uniform manner throughout the department, while DLA and the services are responsible for implementing DOD policies and procedures for materiel management. As of the end of fiscal year 2013, the Army, Navy, and Air Force were responsible for about $78 billion of DOD’s secondary inventory, while DLA was responsible for inventory valued at about $19 billion. DLA manages, integrates, and synchronizes suppliers and supply chains to provide materiel to the military services, allies, and multinational partners. DLA manages mostly consumable items—those that are normally expended or intended to be used up beyond recovery or repair—for the military services. DLA provides support across nine diverse supply chains: aviation, clothing and textile, construction and equipment, energy, land, maritime, medical, industrial hardware, and subsistence. 
To carry out its responsibilities, DLA manages a global network of distribution depots that receive, store, and issue a wide range of commodities owned by the military services, General Services Administration, and DLA. DLA functions through the use of a working capital fund that relies on sales revenue rather than direct appropriations to finance its continuing operations. DOD guidance requires DLA to assess the ability of the inventory to meet the military services’ requirements and ensure that surplus inventories are kept only if warranted. The guidance also requires the services and DLA to group their item inventories into several specific categories, according to the purpose for which they are held. The categorization is designed to provide visibility of DOD inventory requirements, assets (on-hand and on-order), demand, and overages or shortfalls. As specified in DOD guidance, the key inventory categories include the approved acquisition objective, including WRM, and three categories that exceed the approved acquisition objective—economic retention stock, contingency retention stock, and potential reutilization stock.
Approved acquisition objective: The quantity of an item authorized for peacetime and wartime requirements to equip and sustain U.S. and allied forces, including inventory categorized as WRM.
Economic retention stock: Materiel that has been calculated to be more economical to keep than to dispose of and repurchase because it will likely be needed in the future.
Contingency retention stock: Materiel retained to support specific contingencies, such as supporting foreign military sales, future military operations, disaster relief or civil emergencies, or mitigating risk associated with diminished manufacturing sources or nonprocurable stock.
Potential reutilization stock: Items that have been identified for possible disposal but have potential for reuse and are under review for transfer to DLA Disposition Services.
The military departments are responsible for supplying, organizing, training, and equipping the force. To carry out this responsibility, they are to procure and manage inventory to support the maintenance of their equipment and to equip the force. Each of the four services has its own organizations responsible for managing inventory. Similarly, the services are responsible for managing and funding their WRM programs and procuring certain WRM items, while they rely on DLA to provide certain items it manages that could be needed for a military operation. DOD guidance states that service-owned WRM items are to be stored as either starter stocks or swing stocks. DOD guidance does not specifically define the period that is to be supported by WRM, and the services use various periods, such as 60 days, for planning purposes, but this figure can vary by service and item. (App. I provides an overview of the services’ WRM programs.) MREs are a type of individual combat, or operational, food ration that is designed to sustain servicemembers engaged in heavy activity. The MRE is considered a primary food ration for the military, as it sustains troops in the early stages of a military operation, especially before supply lines are well established. These rations consist of a full meal packed in a flexible meal bag, which is lightweight yet durable for use in difficult environments. While the entree may be eaten cold, it can be heated in various ways and comes with a flameless heater inside the bag. Figure 1 shows an MRE and the contents of an MRE pouch. Once field feeding can begin during a military operation, the military services seek to transition from MREs to other types of rations, such as group rations that can be used to heat and serve meals for 50 individuals per pack, and then later to dining facility-prepared meals, once those capabilities exist. As a result, MREs may be crucial for the early stages of a military operation. We found in April 2005 that U.S.
forces in Iraq experienced temporary shortages of MREs during the deployment and major combat phases in early 2003 before dining facilities were established, and data showed that both the Army and Marine Corps were at risk of running out of food if supply distribution was hindered. We found that these shortages resulted both from ineffective distribution, specifically a lack of sufficient logistics resources that hindered DOD’s efforts to move MREs promptly from ports to the units that had ordered them, and from inadequate supply forecasts. During peacetime, MREs are typically consumed during training, such as field exercises. MREs have a limited shelf life (typically 3 years), so stocks must be regularly rotated and used to minimize disposals. As such, MREs are a special category of WRM that is managed differently from other DLA-managed items. All MREs owned and managed by DLA are considered WRM while at the same time being issued to support peacetime needs such as training. In 2004, the Deputy Secretary of Defense designated the Director of DLA as the Executive Agent for the department’s subsistence supply chain, which includes MREs. According to DLA officials, the Director of DLA delegated this authority to DLA Troop Support. As the Executive Agent, DLA is to plan for, procure, manage, distribute, and ensure the wholesomeness of subsistence products throughout the supply chain, as well as to deliver items as needed. Further, DLA is to maintain war reserve subsistence stocks. DLA purchases MREs from three primary U.S. vendors through its working capital fund. A DLA study from July 2013 described the contracting arrangements with these three companies, stating that each year, MRE production percentages by vendor are readjusted to ensure maximum production capability among the three vendors. DLA officials stated that each vendor is guaranteed at least a 20-percent award of the total annual quantity, but no firm will receive an award in excess of 50 percent.
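The award constraints described above (at least 20 percent and at most 50 percent of the annual quantity per vendor) amount to a simple feasibility check on any proposed split. The following sketch is illustrative only; it is not DLA's actual contracting system, and the share values in the example are hypothetical.

```python
# Illustrative check of a proposed annual MRE award split among the three
# vendors: each vendor must receive at least 20 percent of the total annual
# quantity, and no vendor may receive more than 50 percent.

MIN_SHARE = 0.20  # guaranteed minimum award per vendor
MAX_SHARE = 0.50  # maximum award per vendor

def valid_award_split(shares):
    """Return True if the per-vendor shares satisfy the stated constraints."""
    return (len(shares) == 3
            and abs(sum(shares) - 1.0) < 1e-9
            and all(MIN_SHARE <= s <= MAX_SHARE for s in shares))

def cases_awarded(total_cases, shares):
    """Translate shares into case quantities for a given annual buy."""
    return [round(total_cases * s) for s in shares]

# Hypothetical example: a 50/30/20 split of a 2.5-million-case annual buy
# is feasible; a 60/20/20 split would exceed the 50-percent cap.
```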
DLA-owned MREs are stored in temperature-controlled facilities to maximize shelf life. DLA rotates MREs on a regular basis by issuing the stock with the least remaining shelf life to the services for use in training exercises or other needs, to maximize the use of the product and associated resources. The sale of MREs to the services replenishes the DLA working capital fund; the rates that DLA charges the services are higher than the purchase prices from its vendors to recoup DLA’s expenses for contract fees, transportation, storage, and other overhead costs. In fiscal year 2014, the rate that DLA charged the services for an MRE case was about $100. DLA has established an annual process with the military services to obtain their WRM requirements for most DLA-managed items and to assess the extent to which it has inventory available to help meet those requirements. After DLA issues a data call, the services identify their WRM requirements for DLA-managed items, which are determined primarily using operational plans and other related inputs. DLA compares the service-identified WRM requirements against its assets to identify the level of available inventory, including any potential shortfalls, and communicates this information back to the military services, which in turn use this information to make procurement decisions regarding WRM. Figure 2 shows the processes for determining WRM requirements and available inventory for DLA-managed items. The military services determine their WRM requirements for most DLA-managed items based on operational plans that support warfighting scenarios approved by the Joint Chiefs of Staff and other inputs such as deployment schedules and equipment usage data. DOD guidance directs how WRM requirements are to be determined and requires that the military services calculate war reserve requirements annually, based on current defense strategic guidance. Service-specific guidance provides further detail on the requirements process.
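The working-capital-fund pricing described above can be illustrated with a simple sketch. The roughly $100-per-case fiscal year 2014 rate comes from this report; the individual cost components below are hypothetical placeholders, since the report does not break the rate down.

```python
# Hedged illustration of working-capital-fund pricing: DLA's rate to the
# services equals the vendor purchase price plus recouped expenses for
# contract fees, transportation, storage, and other overhead.

def customer_rate(vendor_price, contract_fees, transportation, storage, overhead):
    """Rate charged to the services: vendor price plus recouped expenses."""
    return vendor_price + contract_fees + transportation + storage + overhead

# Hypothetical component values, chosen only so the total matches the
# approximately $100-per-case rate reported for fiscal year 2014.
rate = customer_rate(vendor_price=80.0, contract_fees=5.0,
                     transportation=7.0, storage=5.0, overhead=3.0)
# rate == 100.0
```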
For example, the Army’s regulation requires that DOD guidance be used to provide the war-fighting scenarios necessary to guide WRM requirements determination. Marine Corps guidance specifies that other information, such as time-phased force-deployment data, shelf-life information, and equipment-usage data, among other factors, be used to determine requirements for certain classes of materiel. For the requirements determination process for DLA-managed items, the Army, Marine Corps, and Air Force begin their coordination with DLA when DLA issues an annual data call requesting them to submit their WRM requirements for DLA-managed items. Navy officials stated that the Navy does not participate in this data call; materiel needed during deployment is stocked on deployed ships as part of the Navy’s allowancing process (see app. I). DLA officials stated that the data call is typically sent to the services around November or December each year. For fiscal year 2014, DLA sent the data call to the services in December 2013 and asked that they provide their WRM requirements to DLA by January 2014. In implementing DOD and service guidance, officials from the Army, Marine Corps, and Air Force stated that approved operational plans are the primary drivers for service WRM requirements for items that are covered by DLA’s annual data call. Operational plans are developed by military planners to support war-fighting scenarios set forth in broad defense strategic guidance provided by the President, the Secretary of Defense, and the Chairman of the Joint Chiefs of Staff. Service logistics officials stated that operational plans generate the selection of military units, which in turn have associated personnel and equipment, and thus are the major driver of WRM requirements. Associated information, such as time-phased force-deployment data that support an approved operational plan, is likewise used for determining requirements. 
For example, Air Force logistics officials stated that operational plans are used as the basis to determine requirements for WRM, and that three theater working groups determine the personnel and equipment necessary to support those operational plans in each of the component commands by reviewing, validating, and planning the movement of WRM globally. Similarly, Marine Corps officials stated that annual discussions are held during which leadership from each Marine Expeditionary Force determines WRM requirements based on the personnel and equipment needed to support selected operational plans. Army logistics officials stated that operational plans drive the entire process of determining both Army prepositioning and WRM requirements. Each service that participates in the annual data call uses modeling programs to develop and compute WRM requirements needed to support operational plans. These models contain information such as historical usage data and equipment maintenance information. Army logistics officials stated that officials responsible for operations and planning determine what types and numbers of units would be necessary to carry out a given operational plan, and these units would have a defined level of personnel and equipment that would make up the parameters of the data entered into modeling programs. The Army has various modeling programs to determine WRM needs for prepositioned and sustainment stocks. To develop sustainment WRM requirements, for example, it maintains a modeling program that uses actual data from training and contingencies that are uploaded to the model every 90 days. Air Force officials stated that the decisions regarding units and equipment made during theater working-group discussions are input into an Air Force data system that computes WRM requirements.
Marine Corps logistics officials stated that to develop WRM requirements, they use technical data, such as equipment and personnel data, that come from authoritative data sources and service systems of record, and a computer modeling program computes the WRM requirements. Some service officials stated that changes to troop end strength, force posture, and force structure could over time be reflected in operational plans, and ultimately affect WRM requirements, but these factors are more long-term influences than the primary drivers of the annual WRM requirements that are the focus of DLA’s annual data call process. Regarding troop end strength, Marine Corps logistics officials stated that missions do not change as a result of increases or decreases to end strength. One official added that operational plans are largely independent of end strength, since forces will always be first devoted to contingencies, with the services assuming risk in other areas as a result of decreases in end strength. Air Force logistics officials stated that factors such as troop end strength or force posture do not necessarily affect WRM requirements, especially in the short term, although over time they may do so. For example, if changes in force posture were to lead to the closure of bases in Central Command, then the Air Force would eventually most likely review moving excess WRM from that location to other locations or, if that were cheaper than moving it, choose to divest of the WRM through disposal or sale. Further, the officials stated that the availability of funding and transportation and storage costs affect WRM considerably, as these costs factor into decisions about how the Air Force could best support operational plans. As they determine their WRM requirements for DLA-managed items, some service officials stated that they review their service’s inventory levels and prepositioned stocks to determine what inventory is already in place.
The services can then determine and submit their WRM requirements to DLA for analysis. DLA compares the service-identified WRM requirements against its assets to identify the level of available inventory, including any potential shortfalls, and communicates this information back to the military services. DLA guidance states that it will identify to the services the DLA-managed WRM assets available within its stocks or from industry. The guidance states that DLA also will identify potential shortfalls in DLA-managed assets so that the services can budget to procure additional stocks if they choose. After the military services submit their WRM requirements, typically by January each year, DLA screens the total service WRM requirements against its own inventory levels. DLA also considers inventory that is available through surge clauses within existing contracts and reviews possible acceptable substitutes for inventory with identified potential shortfalls. Based on this analysis, DLA identifies the amount of each service’s WRM potential shortfalls, if any, by inventory item and provides this information to each of the services by the end of March. Service officials told us that DLA’s information on potential WRM shortfalls is the starting point for their decision-making on potential procurements. The officials said they review the potential shortfalls and the criticality of the items and determine what level of risk is acceptable, which then drives a procurement budgetary decision. The military services must either accept the risk associated with the WRM shortfall or address the potential shortfall by investing resources in acquiring additional inventory.
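The screening step described above, in which DLA compares each service's item-level WRM requirement against available inventory (on-hand stock, surge quantities under existing contracts, and acceptable substitutes) and reports any potential shortfall, can be sketched as follows. Item names and quantities are illustrative, not actual DLA data.

```python
# Illustrative sketch of DLA's annual screening of service WRM requirements
# against available assets to identify potential shortfalls by item.

def screen_requirements(requirements, on_hand, surge, substitutes):
    """Return {item: shortfall} for items whose requirement exceeds assets."""
    shortfalls = {}
    for item, required in requirements.items():
        available = (on_hand.get(item, 0)       # DLA's own inventory
                     + surge.get(item, 0)       # surge clauses in contracts
                     + substitutes.get(item, 0))  # acceptable substitutes
        if required > available:
            shortfalls[item] = required - available
    return shortfalls

# Hypothetical example: item_a is 200 units short; item_b is fully covered.
example = screen_requirements(
    requirements={"item_a": 1000, "item_b": 500},
    on_hand={"item_a": 600, "item_b": 700},
    surge={"item_a": 200},
    substitutes={},
)
# example == {"item_a": 200}
```

The services would then use such item-level shortfall information to decide whether to accept the risk or budget for additional procurement.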
DOD guidance states that the military services are responsible for programming and funding the acquisition of WRM when requirements exceed assets and no offset agreement can be reached, and also for completing a risk assessment to identify any negative impacts on readiness resulting from non-programmed and unfunded requirements or when they choose to reallocate resources to other priorities. Service-specific guidance details the offices and organizations responsible for determining requirements and assessing the criticality of items against potential shortfalls. For example, Army logistics officials stated that Army budgetary priorities can change dramatically based on the potential WRM shortfalls identified by DLA. As the DOD Executive Agent for subsistence, DLA monitors various types of data on MREs, but it lacks other analysis and information that could be useful for managing inventory levels. Among the types of data DLA monitors are its purchases from the MRE industry and its sales to the military services, factors that cause MRE inventory levels to fluctuate over time. Military service officials expressed concerns that in light of changing needs, it may be difficult for the services to consume MREs in the future at a rate that will prevent disposals due to expiring shelf life. However, DLA has not conducted recent analysis to determine the level of purchases needed annually to sustain the current industrial base while retaining the ability to meet a surge in requirements. In addition, while the military services provide DLA with their estimated future demand for MREs, DLA does not obtain information from the services, as part of existing coordination efforts, about potential changes to consumption and disposals that could affect future demand. Such analyses and information could be useful to DLA in managing MRE inventory levels. 
DLA’s MRE inventory levels are not set through the annual data call but are managed against an identified war reserve requirement, with DLA monitoring various types of data to set exact purchase levels. DOD can experience a surge in MRE requirements for various reasons, including military operations, natural disasters, and other emergencies. To satisfy a surge requirement if needed, DLA maintains a certain level of MREs as WRM inventory that is owned and managed by DLA. DLA has identified an MRE war reserve level of 5 million cases. The current level was established in fiscal year 2005 at a time when U.S. military operations were ongoing in Iraq and Afghanistan. DLA officials stated that DLA purchases up to the WRM level, meeting the 5 million case level through a combination of on-hand assets and assets that are due to be delivered within a 12-month period. According to a 2013 DLA study, while DLA is not obligated to purchase a specific amount of MREs per year from its suppliers, it currently has an annual purchase objective of 2.5 million MRE cases. This purchase objective represents an annual minimum target for MRE purchases from industry. As DLA manages MREs within the WRM level of 5 million cases, it monitors various types of data to determine the exact amount of MRE purchases that will be made and when. For example, DLA monitors yearly sales estimates that are provided by the military services, information provided by the services about their ability to consume MREs through training, and agreements in place with the military services about storage and transportation of MREs. While operational plans are the primary driver for WRM requirements determination for other DLA-managed items, numerous factors are at play as DLA plans MRE purchases to buy towards the level of WRM. For example, service officials told us that end strength is not a key factor for deciding WRM requirements for other items. 
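The buy-to-level approach described above, in which on-hand MRE stock plus deliveries due within 12 months are counted against the 5-million-case war reserve level and only the difference is purchased, reduces to a simple computation. The 5-million-case level is from this report; the quantities in the example are illustrative.

```python
# Sketch of DLA's buy-to-level logic for MREs: on-hand assets and assets
# due to be delivered within a 12-month period together count toward the
# war reserve level, and purchases cover only the remaining gap.

WAR_RESERVE_LEVEL = 5_000_000  # cases, per the report

def cases_to_buy(on_hand, due_in_12_months):
    """Additional cases needed to reach the war reserve level (never negative)."""
    return max(0, WAR_RESERVE_LEVEL - on_hand - due_in_12_months)

# Hypothetical example: 4.0 million cases on hand and 0.6 million on order
# leave 0.4 million cases to purchase.
```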
In contrast, end strength is important for MRE WRM levels because end strength levels directly affect the number of MREs that can be consumed in training exercises and thus the amount of MREs that can be rotated in a given year and that would be replaced by future purchases. DLA monitors purchases from industry and sales to the services for planning purposes. According to DLA officials, the goal is to ensure that the services’ MRE requirements are met in the most efficient and effective manner possible. DLA also seeks to ensure that appropriate plans are in place to rotate the WRM stocks of MREs to prevent the need for disposals. For example, DLA officials stated that the war reserve stocks of MREs with the shortest shelf life are to be rotated out first to prevent disposal. Therefore, DLA uses information about service training cycles to help plan the rotation of the oldest stock. This information affects the amount of MREs that will need to be replaced as WRM through additional purchases from industry; throughout this process, DLA monitors on-hand and on-order MRE inventory levels against the WRM level. DLA officials stated that DLA uses information such as its knowledge of DOD priorities, service training cycles, and updates on service training and operational activities obtained through monthly telephone calls and other regular interaction with the services to forecast service demand for MREs and further inform planning. Further, DLA has performance-based agreements with the Army, Navy, Marine Corps, and Air Force that detail the specific amount of MRE war reserve stocks that DLA will deliver to a specific location within a defined time frame. This information is also tracked by DLA and can further affect inventory levels and the requirement to purchase additional MREs.
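The rotation rule described above, issuing the war reserve stocks with the shortest remaining shelf life first, is essentially a first-expired, first-out policy. The following sketch is illustrative; the lot identifiers, expiration dates, and quantities are hypothetical.

```python
# Illustrative first-expired, first-out rotation: issue the MRE lots with
# the least remaining shelf life first, so stock is consumed in training
# before it expires rather than being disposed of.

def plan_rotation(lots, cases_needed):
    """Given (lot_id, expiration, quantity) tuples, return the (lot_id, cases)
    issues that satisfy the demand, earliest expiration first."""
    issues = []
    for lot_id, expires, qty in sorted(lots, key=lambda lot: lot[1]):
        if cases_needed <= 0:
            break
        take = min(qty, cases_needed)
        issues.append((lot_id, take))
        cases_needed -= take
    return issues

# Hypothetical lots: a 60,000-case training demand is filled from the
# soonest-expiring lots first.
lots = [("FY13-A", "2016-06", 50_000),
        ("FY12-B", "2015-09", 30_000),
        ("FY14-C", "2017-01", 80_000)]
# plan_rotation(lots, 60_000) == [("FY12-B", 30_000), ("FY13-A", 30_000)]
```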
For example, the performance-based agreement between DLA Troop Support and the Army signed in March 2013 specifies the total number of MRE cases stored by DLA for the Army in locations in the continental United States, Europe, Hawaii, Japan, Korea, and Southwest Asia, as well as aboard Army prepositioning ships. The agreement sets forth responsibilities of DLA Troop Support, such as to (1) stock and store the levels of MREs in DLA- or DLA-commercial-provided storage, (2) rotate the MREs through the DLA customer base, (3) maintain the capability to deliver the MREs in certain time frames depending on location and situation (for crisis or contingency needs as opposed to peacetime needs), and (4) maintain records of MRE stocks, among other duties. Similarly, the agreement details the responsibilities of the Army, which include the need to notify DLA Troop Support upon execution of contingency operation plans and peacetime exercises to allow for the release of stocks and to submit requisitions to DLA Troop Support for MREs with specific lead times that vary by location and situational need. DLA officials stated that as with other types of WRM, the Navy does not operate in the same manner as the other services. Through the allowancing process described earlier, the Navy outfits its ships with stocks in advance of a deployment. DLA and the Navy entered into a performance-based agreement in 1984 that has remained unchanged since that time. Under the agreement, DLA is to be able to provide about 20,000 cases of MREs for the Navy in the continental United States. DLA’s MRE inventory levels fluctuate based on the flow of purchases from industry and sales to the services, as shown in figure 3. Based on data from fiscal years 2007 through 2014, on average DLA purchased 3.3 million MRE cases from industry per year and sold 3.56 million cases per year to the services. The average on-hand inventory level was about 4.66 million cases.
In addition, the data show that the average yearly MRE sales to the services decreased from about 4 million cases in fiscal year 2010 to 3 million cases in fiscal year 2012 and remained relatively constant at around 3 million cases from fiscal years 2012 through 2014. There were also decreases in annual purchases from suppliers starting in fiscal year 2010. The services have different requirements for, and therefore purchase different amounts of, MREs. As shown in figure 4, the Army is the largest purchaser of MREs. In addition to monitoring sales and purchase data, DLA has conducted several reviews since 2003 to determine whether the war reserve level is appropriate based on potential contingency requirements, historical demand, and industry capability. A DLA study conducted in 2003 reviewed the processes used to determine war reserve requirements for MREs and found that the war reserve level could be maintained at 4.1 million cases. A DLA study conducted in 2007 analyzed historical sales data that included a worst-case planning scenario and found that the war reserve level could be within the range of 3.2 million to 5 million cases. A DLA study completed in July 2013, using a methodology similar to that of previous studies, found that the level of MRE on-hand inventory could be at a range between 3.45 million and 3.96 million cases of MREs to support contingency requirements, efficiently manage resources, and sustain the industrial base. The 2013 study stated that reasons for the proposed decrease in MRE on-hand inventory included less demand for MREs from fiscal years 2010 through 2012; the withdrawal of U.S. troops from Iraq and Afghanistan; the possible effects of sequestration on force structure; and an increased capacity reported by the MRE industry to meet surge requirements. Further, the Federal Emergency Management Agency moved from using military-grade MREs to commercially available alternatives, meaning that the demand for MREs is further reduced.
In addition, the study noted that other troop feeding options such as new types of rations and dining facilities that can be quickly established could reduce dependence on MREs during contingency operations. Although DLA’s 2013 study supported a lower war reserve inventory level for MREs, DLA issued an MRE strategic plan in September 2013 stating that after subsequent analysis, DLA leadership decided to maintain the current level of 5 million cases and revisit this level after 2014. As of March 2015, DLA officials stated that an update to the MRE strategic plan was under review. According to DLA officials, higher-than-expected demand for MREs occurred in fiscal years 2013 and 2014. Prior estimates for those 2 years were 2.3 million to 2.7 million cases, but actual sales were around 2.9 million cases each year. Officials attributed the higher-than-anticipated demand to increased training needs during peacetime and to small-scale deployments in response to crises such as the Ebola virus epidemic in Africa and tensions in Syria and Ukraine. The 2013 MRE strategic plan projected purchases of 2.3 million to 2.5 million cases a year through fiscal year 2016, a level that, according to the plan, will allow DLA to uphold the war reserve level of 5 million cases. Military service officials who manage MREs expressed concerns that in light of changing needs resulting from budgetary impacts and smaller end strengths, it may be difficult for the services to consume MREs in the future at a rate that will allow them to maintain sufficient rotations of MREs and prevent disposals due to expiring shelf life. Army and Marine Corps officials stated that they did not believe reduced purchases of MREs would result in shortages, but they also cautioned that since MREs are critical for military operations, it is important that the MRE industry be able to increase production to meet a surge in need.
According to service officials, each service estimates the amount of MREs needed for training and other purposes. These estimates are largely based on training plans and are submitted to DLA annually. According to service officials, these estimates have ranged from about 3.2 million to 3.6 million cases per year over the past several years. Service officials stated that they are responsible for monitoring MRE consumption and ensuring that their service works towards purchasing the estimated amounts from DLA. Service officials also stated that they track use of MREs to the unit level for training or operational needs. However, they acknowledged that they may not always know if MREs are consumed or disposed of. For example, an Air Force subsistence official stated that the Air Force has recently begun to track MRE disposals and he knew that disposals occurred, but he could not identify how many disposals had occurred. A Marine Corps subsistence official stated that some MREs are disposed of due to issues such as accidental improper storage, and the Marine Corps’ subsistence office would only know this information if the unit decided to report the disposal. An Army subsistence official stated that he receives reports on MRE disposals, but that the reports may not always be complete and reliable. The performance-based agreements between DLA and each service establish the policies, procedures, and responsibilities concerning operational rations support to the service by DLA. According to these agreements, DLA is financially responsible for inventory losses unless the losses result from a service’s inability to rotate inventory within the required time limits; then the loss is the responsibility of the service. Some service subsistence officials stated that it can be challenging to meet the rotation demands in these agreements through training and operational use of MREs. 
Service subsistence officials also stated that disposal of unused MREs is costly, and that MREs cost almost as much to dispose of as to purchase because they must be disposed of in certain ways due to the flameless heating component. With regard to the current MRE war reserve level of 5 million cases, Army and Marine Corps officials stated that they have been able to meet the required rotation demands in recent years, and that they believe that MRE disposals have not been a major issue. However, officials stated that, in the coming years, decreasing troop end strength will likely result in fewer service members to train. Because many MREs are consumed during field training, reductions in the number of service members therefore are likely to decrease the services’ overall demand for MREs. For example, the DLA study conducted in 2013 stated that the Army planned to reduce its MRE war reserve levels by 200,000 cases due to decreases in troop end strength. Army officials told us that they expect that planned reductions in troop end strength will reduce MRE consumption. Army and Marine Corps officials also stated that, while MREs are of critical importance because they are a primary food ration for sustaining military forces during the early phases of military operations, their experience shows that the current war reserve level of 5 million cases per year is probably not necessary and that they would support DLA maintaining a lower inventory level of MREs. However, these officials stated that the level would ultimately depend on industry capabilities to meet surge needs. Officials from the Army’s subsistence program stated that they have been concerned for several years that withdrawals from operations in Iraq and Afghanistan would result in substantial amounts of MREs that the Army could not consume. 
One official stated that although MRE disposals due to lack of consumption have not yet been an issue for the Army, two events likely prevented the need for disposals in recent years. First, according to Army and DLA officials, in 2012 a warehouse fire in Afghanistan destroyed around 125,000 cases of MREs. Second, the 2011 tsunami in Japan destroyed around 100,000 cases of MREs. Army subsistence officials also stated that due to concerns about the Army being able to use enough of the MRE stock stored in Japan, DLA and the Army chose to restock the Japan levels to 4,800 cases instead of the 100,000 cases that were stored there prior to the tsunami. Further, service and DLA officials stated that there are 300,000 cases of MREs from fiscal year 2011 stored in a cold-storage warehouse facility that had been held back from consumption to undergo extensive testing to ensure the cases were not infested with a certain type of beetle. While the MREs were ultimately deemed safe for consumption through 2015, service officials expressed concerns that the MREs were at the end of their shelf life and, consequently, they did not want to accept them from DLA. However, they added that it may be necessary to accept the MREs to prevent a large-scale disposal. An Army official stated that while this issue stemmed from a possible infestation that required MREs to be taken out of rotation and tested, it had altered the flow of MREs from various fiscal years, and that problems with meeting rotational levels would continue. DLA officials stated that this was a one-time occurrence and they did not share the Army’s concern. In addition, an official from the Air Force’s subsistence program stated that Air Force demand for MREs may decrease in the coming years as a result of shortening the duration of basic training.
While shorter training may result in less consumption of MREs, the official stated that the Air Force uses fewer MREs than the Army and Marine Corps and is therefore not as concerned about the ability of the Air Force to rotate MREs if the war reserve level remains at 5 million cases, provided that funding for the Air Force’s rotation program remains available. The official stated that Air Force war reserve levels for MREs may decrease in certain areas such as Europe due to a focus on operations elsewhere, but that the Air Force’s overall war reserve requirement was unlikely to decrease dramatically in the coming years. Although DLA monitors various data on MREs, as previously discussed, it lacks analysis on the level of MRE purchases needed to sustain the industrial base while maintaining surge capability. More specifically, DLA has not assessed whether its annual purchase objective of 2.5 million cases is valid. Further, DLA’s ability to forecast the number of MREs the services will need is limited to some extent because DLA does not obtain information from the services about their usage of MREs, including consumption and possible disposals. Standards for Internal Control in the Federal Government state that an agency needs relevant, reliable, and timely information to effectively and efficiently run its operations and make appropriate decisions. Further, DOD guidance states that the department’s materiel management shall operate as a high-performing and agile supply chain responsive to customer requirements during peacetime and war while balancing risk and total cost. DLA has not conducted a recent detailed analysis to determine the level of MRE purchases from industry necessary to sustain the current industrial base while retaining the ability to meet a surge capability. 
Although DLA has an annual purchase objective of at least 2.5 million cases, officials stated that this number reflects an unofficial agreement with the MRE industry and is based on limited information from industry rather than on a DLA analysis of industry capabilities. Further, DOD has acknowledged that DLA’s future acquisitions of MREs may need to address a reduced annual purchase objective to avoid disposal of unused stock. DOD noted that reduced acquisitions could challenge the department’s ability to meet surge requirements. However, the 2013 MRE strategic plan states that it is a challenge for DOD to determine what quantities of MRE sales will keep each of the MRE suppliers producing at a level that sustains the industrial base to meet the needs of the department as well as respond to a surge requirement that could occur. The strategic plan further states that a minimum sustaining rate study could be conducted to develop this type of analysis. A 2013 review conducted by DLA to assess the appropriateness of the MRE war reserve level states that such a study would require the participation and cooperation of industry. DLA officials stated that this type of analysis was conducted more than 20 years ago regarding the MRE industrial base, but noted that this dated analysis would not reflect the current industrial base. DLA officials told us that this type of analysis has been conducted on other supply chains, and that while this type of study can be performed by an outside party such as a contractor, an office within DLA routinely conducts these studies. For example, in 2013, DLA conducted a minimum sustaining rate study and other industrial capability assessments of the three manufacturers of a certain type of parachute that had high demand during the height of military operations in Afghanistan, but now has far less demand as a result of the drawdown. 
DLA subsequently reported that information collected about its own demand patterns and the capabilities of the manufacturers was used to determine the most cost-effective industrial solution for that particular item. According to DLA officials, DLA has not conducted a minimum sustaining rate study or other similar analysis for MREs because industry would have to provide access to financial and production records. Access to such records is provided by companies for any minimum sustaining rate study conducted by DLA, as was the case for the study on parachute manufacturers discussed previously. However, DLA officials stated that DOD cannot require a supplier to provide the financial data needed to complete a minimum sustaining rate study. DLA may request that suppliers agree to such a financial audit, as it did for the parachute study. While DLA officials told us that they obtain some information from MRE suppliers on their capabilities, DLA stated in both its 2013 MRE study and the 2013 strategic plan that it is difficult to know the capabilities of the MRE industry without more detailed information provided by industry, since, as stated in its strategic plan, the companies have expanded product lines and customer bases. For example, the three suppliers are expanding their commercial business to make MRE-like rations and other products available to the public (such as shelf-stable meals and pouched foods) and producing other types of operational rations, such as first strike rations and unitized group rations. DLA has conducted recent analyses of the supply chains for other types of operational rations. In 2014, DLA conducted an analysis of the first strike rations and unitized group rations that have low or zero peacetime demand. DLA’s analysis focused on possible strategies to assist the subsistence industry in quickly ramping up production of these rations. 
DLA officials noted that this analysis included a review of and data from the operational ration industry that produces MREs. However, several companies that do not produce MREs were part of this analysis, and one major MRE supplier was not included in this analysis. Additionally, DLA conducted an analysis in 2010 that assessed various acquisition strategies for MREs, from continuing its current strategy of relying on three selected MRE suppliers to pursuing full and open competition among possible suppliers. As part of this analysis, DLA reviewed the industrial capabilities that exist among MRE suppliers relative to projected DOD surge needs. However, the analysis did not determine the level of MRE purchases from the current three suppliers that can sustain the industrial base, and it did not include an assessment of the validity of the current 2.5 million case purchase objective. Without conducting an analysis to determine the amount of annual MRE purchases needed to sustain the industrial base and respond to surge needs, DLA lacks information that would be useful for managing MRE inventory, including an assessment of the validity of the 2.5 million case purchase objective and an understanding of the potential consequences of falling below the annual purchase objective. DLA officials told us that it is important to appropriately balance risk and cost and ensure that the services have MREs as needed while best using resources. Further, as previously discussed, DLA conducted reviews to assess the appropriateness of the MRE war reserve levels in 2003, 2007, and 2013. All of these reviews concluded either to hold purchases constant or to reduce them, but they were based mostly on DOD’s projected demand for MREs and not on a detailed analysis of industry capabilities. Such analysis could be useful to DLA in managing MRE inventory. 
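The kind of output a minimum sustaining rate study would produce can be illustrated with a simple sketch. All supplier figures below are hypothetical (only the 2.5 million-case purchase objective comes from this report), and this is not DLA's method; the point of such a study is to compare each supplier's minimum viable production rate against planned purchases:

```python
# Hypothetical illustration of the "minimum sustaining rate" concept.
# Supplier rates are invented; only the purchase objective is from the report.

ANNUAL_PURCHASE_OBJECTIVE = 2_500_000  # cases per year (from the report)

# Assumed minimum cases/year each of the three suppliers must produce
# to keep its MRE lines and workforce viable (hypothetical figures).
min_sustaining_rates = {
    "supplier_a": 700_000,
    "supplier_b": 600_000,
    "supplier_c": 500_000,
}

min_annual_buy = sum(min_sustaining_rates.values())  # 1,800,000 cases

# A study like this would show whether the purchase objective covers the
# industrial base's floor, and how much room exists to reduce purchases.
objective_covers_base = min_annual_buy <= ANNUAL_PURCHASE_OBJECTIVE
slack = ANNUAL_PURCHASE_OBJECTIVE - min_annual_buy
print(objective_covers_base, slack)  # True 700000
```

Under these invented rates, purchases could in principle fall by up to 700,000 cases before threatening the base; an actual study would replace the invented rates with audited supplier data.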
In addition, although DLA coordinates with the military services to obtain information on MRE requirements and demands as previously discussed, DLA does not obtain consumption and disposal information from the services. DLA states in its 2013 MRE strategic plan that it is vital to collaborate and share information with the military services to continually improve processes. DLA officials stated that DLA does not obtain information from the services on consumption and disposals because DLA’s responsibility ends once it sells MREs to the services. Service officials agreed that it is the responsibility of the services to incorporate information related to usage of MREs in determining their requirements for MRE purchases from DLA. However, DLA acknowledges in the strategic plan that sharing information about the military services’ demand and usage patterns will be vital to making purchase decisions. Officials from the Army, Marine Corps, and Air Force stated that they monitor the consumption of MREs. Further, service officials stated that while they do not monitor total MRE disposals at this time, they do collect and track some information regarding disposals when this information is provided by units and could provide this information to DLA if requested. Such information, along with the information DLA already tracks, could provide further insight on potential changes in the services’ future demand for MREs during peacetime and help DLA in managing its MRE inventory. DLA reports in its MRE strategic plan that it intends to continue monthly coordination phone calls with the services and to include effects of sequestration and training budgets as items for discussion. As there is already considerable existing coordination between DLA and the military services, sharing additional information related to changes in MRE consumption and disposals could provide additional insight to DLA on service demand and usage patterns. 
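As a rough illustration of how shared consumption and disposal figures could sharpen DLA's demand picture, the following is a naive sketch with hypothetical numbers, not DLA's forecasting system: cases the services purchased but later disposed of overstate recurring demand, so reported disposals could be netted out of a sales-based forecast.

```python
# Illustrative only: all figures are hypothetical.

def forecast_demand(sales_history, disposals_history):
    """Average recent sales to the services, then subtract average
    disposals, since disposed cases represent purchases that did not
    reflect real consumption and are unlikely to recur."""
    avg_sales = sum(sales_history) / len(sales_history)
    avg_disposals = sum(disposals_history) / len(disposals_history)
    return max(0.0, avg_sales - avg_disposals)

sales = [2_400_000, 2_200_000, 2_000_000]   # cases sold per year (hypothetical)
disposals = [100_000, 150_000, 200_000]     # cases disposed per year (hypothetical)

print(forecast_demand(sales, disposals))  # 2050000.0
```

Without the disposal series, a sales-only average (2.2 million cases here) would overstate demand by the 150,000 cases per year that were being thrown away.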
In the absence of such information, DLA’s information regarding future service demand for MREs may be limited, making it difficult to optimize the MRE supply chain across DOD. Obtaining this additional information from the services would help DLA ensure that, consistent with DOD guidance, its MRE supply chain is agile and responsive to customer requirements during peacetime and war while balancing risk and total cost. The difficulty of forecasting demand for items has been a recurring inventory-management problem across DOD. Previous GAO reports have cited difficulty with demand forecasting at each of the services and DLA. In addition, we have identified DOD supply-chain management as a high-risk area since 1990 due in part to weaknesses in accurately forecasting the demand for spare parts. Furthermore, DOD issued its Comprehensive Inventory Management Improvement Plan to focus, in part, on improving the accuracy of demand forecasts. Finally, the Office of the Deputy Assistant Secretary of Defense for Supply Chain Integration commissioned a study on forecasting across the department that recommended that DLA and the services should tailor their approach based on an item’s demand pattern. DLA uses various supply-chain strategies to balance cost with readiness in meeting the need for items identified as WRM and needed for surges associated with new contingencies or crises. DLA is working to reduce its on-hand inventory to reduce costs, but will continue to stock certain types of items, such as those that are military-unique or of limited availability. On the other hand, DLA seeks to contract for fast access to those items that are readily available on the commercial market that would be costly to stock, such as medical supplies. Further, DLA for many years has been seeking to facilitate and improve access to certain WRM items through its Warstopper Program. 
DLA has sought to reduce its on-hand inventory, including WRM items, to gain certain benefits, but will continue to stock items that are, among other things, military-unique or of limited availability. In a June 2014 report, we found that DLA had set an internal goal in 2012 for reducing inventory and disposing of nearly $4 billion of on-hand inventory, and that it accomplished this goal in fiscal year 2013. According to DLA headquarters officials, the benefits of reducing on-hand inventory are (1) achieving cost savings by reducing the warehouse infrastructure needed to store on-hand inventory and (2) preventing reductions to DLA’s working-capital fund obligational authority, which might reduce supply availability for the military customers. As part of its responsibilities to perform storage and distribution functions for WRM in support of operational requirements, DLA keeps on-hand assets that are above current needs (in other words, the assets are in excess of the approved acquisition objective and being held as retention stock) if the materiel will help to meet a service WRM requirement. DLA recategorizes such stock as WRM in its inventory stratification reporting, and this stock becomes part of the approved acquisition objective. This recategorization protects the WRM stocks from disposal that could otherwise occur. However, these stocks are not otherwise managed or stored separately specifically for the purpose of being available in the event of a military operation. Rather, the inventory is comingled with regular stocks, and DLA issues these stocks in response to customer requisitions, whether for peacetime needs or in response to a surge requirement related to a military operation. DLA officials stated that there are certain types of items that need to be stocked in order to ensure they are available for surge needs, such as those items that are military-unique or of limited availability. 
Further, small items that are low cost and without shelf-life constraints can be stocked. The officials stated that decisions to stock items are made on a case-by-case basis depending on the assessment of the responsible supply-chain managers, which includes an assessment of the surge needs associated with an item. Some examples of military-unique items stocked by DLA are camouflaged bandages, chemical protection suits, and operational rations, including MREs. As another example, supply-chain managers from the construction and equipment supply chain stated that DLA stocks a type of matting (called AM2 matting) that can be configured into landing pads for expeditionary aircraft because it is of limited availability and military-specific. Industrial hardware supply-chain managers told us that fasteners, nuts, and bolts are stocked in DLA depots because many are low cost and without shelf-life constraints. Further, DLA officials told us that these items do not take up considerable storage space in DLA storage depots. DLA also has sought to comingle its stocks with service stocks to gain efficiencies and reduce storage and transportation costs. These initiatives are not focused specifically on WRM inventories since WRM stocks are not maintained or stored separately from other stocks, but do include items that have WRM requirements. For example, DLA and the Army have a joint initiative under way to transfer sustainment stocks of DLA-managed items previously owned by the Army to DLA. When transferred, these items would be comingled in DLA storage facilities. Army officials stated that because of efficiencies that DLA can provide, such as improved forecasting and better rotation of inventory, the Army will have to buy and stock less prepositioned inventory and will have fewer disposals due to shelf-life issues. 
To maximize use of funds and minimize inventory storage costs, DLA attempts to meet some WRM requirements by contracting for fast access to materiel that is readily available from commercial sources when practicable. DLA makes use of commercial practices, such as prime-vendor contracts, to obtain commercial goods and supplies. The intent of such commercial practices is to decrease the need for and costs of maintaining government inventory. DLA also acquires items through contingency contracts with industry as an alternative to stocking items on shelves. A DLA official stated that the agency’s long-term contracts with industry attempt to leverage the commercial marketplace to acquire surge coverage to offset the need to stock items as WRM. As such, contracts can include surge and sustainment clauses to provide access to items with WRM requirements when the need for them arises. For example, according to DLA officials, since medical items are costly to stock and have limited shelf life, DLA’s medical supply chain uses contingency contracts to acquire the majority of its items. In recent years, DLA has sought to stock fewer items within the medical supply chain and relied more on contingency contracts. A DLA official stated that this strategy had improved delivery times from over 200 days prior to operations in Iraq and Afghanistan to a current delivery time of 3 to 5 days. This official stated that when DLA contracts for access to medical inventory and obtains the inventory when needed, it does not have to be stocked on DOD shelves, which prevents wasting money on stocked items that must be later destroyed due to expired shelf life and ensures that medical professionals and their patients obtain high-quality products. DLA also incorporates surge clauses into its contracts for some items that it maintains in its stocks. For example, MREs are stocked, but DLA may obtain additional supplies of these if needed through surge clauses. 
DLA officials stated that the current contract between DLA and MRE-producing companies outlines specific guidelines for each company to produce MREs in the event of a surge requirement that results from various scenarios, including new or escalating military operations, natural disasters, or other emergencies. In addition to stocking certain items and seeking to establish contracts that can rapidly provide items to meet surge needs, DLA has a Warstopper Program that is aimed at facilitating and improving access to certain items by enabling DLA to maintain an industrial base for critical “go-to-war” items. DLA uses Warstopper funds to address weaknesses in certain supply chains by making targeted investments in industry that guarantee DLA access to materiel and enable industry to increase production when needed. The Warstopper Program was authorized by Congress in the early 1990s as a result of critical shortages for certain items that occurred during Operation Desert Storm in 1991. These items generally had high demand during operations, but low demand during peacetime. DLA was designated the lead for the program, and Congress began funding the program in fiscal year 1992. DLA’s guidance regarding the Warstopper Program states that its purpose is to fund initiatives that ensure materiel availability when DLA’s normal peacetime procurements, inventory, and service prepositioned war reserve stocks are not adequate to meet the services’ go-to-war shortfalls for critical materiel. Warstopper investments are intended to facilitate the acceleration of production for critical items and maintain critical industrial capability. Further, the guidance states that DLA manages the Warstopper Program to ensure the transition from peacetime to wartime is supported by a viable industrial base despite the variable demand patterns, technology inhibitors, skill retention, and general industry issues that may exist for DLA-managed go-to-war items. 
Items must meet certain mission, demand, or production characteristic criteria to be funded through the Warstopper Program. Regarding mission characteristics, at the establishment of the program, Congress identified certain items to be included as part of the program due to the critical shortages that emerged during Operation Desert Storm, such as operational rations (including MREs), nerve-agent antidote auto-injectors, chemical protective gloves, chemical protective suits, combat boots, and barrier materials. Aside from these congressionally-identified items, other mission characteristics include life-saving or life-preserving items and items that, if unavailable, can severely affect a strategic warfighting capability. For example, medical items and personal protection items with high value to the preservation of a servicemember’s life (examples include helmets, body armor, fire-retardant garments, medical patient movement items, and surgical equipment) are important for DOD’s warfighting capability, according to the Warstopper Program guidance. Similarly, the guidance includes examples of energy items and repair parts capable of stopping a strategic warfighter capability if they are unavailable, such as lithium batteries and helicopter windshields. Demand characteristics that can facilitate an item’s inclusion in the Warstopper Program include a validated WRM requirement and low peacetime but high wartime demand. Lastly, production characteristics that could cause wartime requirements to exceed industrial capability, such as long lead times or short shelf lives, could make an item eligible for Warstopper funding. DLA conducts research and analysis to determine whether an investment is needed to help ensure availability of items and materiel that meet the criteria for the program. 
DLA guidance states that studies, data collection, and reports are outputs of the program that provide information to assess the state of the industrial base and develop industrial solutions, as opposed to buying more stocks to store. The office responsible for managing a particular item or supply chain develops a proposal for the investment and submits the proposal to DLA headquarters for approval. The investment may take the form of one of several types of contingency contracts or other means of investment, such as the purchase of critical raw materials or the purchase of government-provided equipment to speed up or modernize industrial processes. For example, DLA’s subsistence office developed a proposal to provide operational ration manufacturers with two types of machines to assist with the cooking and filling and sealing process for operational rations, including MREs. Subsistence officials stated that manufacturers can use the equipment from the government for commercial use so that the equipment does not fall into disrepair from lack of use. DLA is to review Warstopper investments annually. According to DLA officials, since 1993, a cumulative investment total of $856 million in Warstopper funding has resulted in cost avoidances for the department of about $5.9 billion that would have been spent on stocking items and other related costs. Past investment items include operational rations, certain types of batteries, fiber used in flame-retardant items, specialty steels for repair parts, nerve-agent antidote auto-injectors, and certain types of military specific barriers, among others. DLA officials provided examples of how the program has increased availability of items that are needed to meet surge wartime needs. Regarding the fiber used in fire-retardant items, $1.37 million of Warstopper funding was invested to increase surge output by up to 54 percent in the first 180 days of surge needs. 
Similarly, $6.1 million was invested in long-lead-time components of AM2 matting used to create landing pads for expeditionary aircraft, which subsequently increased surge output by 85 percent in the first 180 days. Numerous items within the medical supply chain also are funded by Warstopper investments. DLA’s medical supply chain was budgeted for over $36 million in Warstopper funds in fiscal year 2015, and it executes approximately 55 percent of DLA’s Warstopper budget annually. Through the use of over 155 contingency contracts funded by the Warstopper Program in the medical supply chain, DLA officials stated that the department has purchased access to over $280 million of medical and pharmaceutical supplies at a cost of $24 million for contract fees. DLA officials stated that these contingency contracts exist with manufacturers, distributors, and prime vendors. The ready availability of war reserve materiel items is central to ensuring that U.S. forces can be sustained in the early stages of operations, before regular supply chains are established. By utilizing a variety of contracting strategies for items that are readily available, stocking some military-essential items, and acting to support the integrity of its supply chains through the Warstopper Program, DLA is working to meet warfighter needs while also pursuing efficiencies in its operations at a time of budget constraints and uncertainty. DLA has also put effort into managing the MRE inventory and, in doing so, monitors various types of data. However, DLA lacks other analysis and information that could be useful in managing this inventory. For instance, without conducting an analysis that provides more information on industry capabilities than its previous studies, DLA does not have reasonable assurance that it is balancing readiness and budget priorities with the need to sustain the industrial base in the most efficient way. 
Similarly, without obtaining information from the military services about potential changes to consumption and disposals of MREs that could affect future demand, DLA may be limited in its ability to optimize the supply chain across the department. Forecasting demand for supplies has been a long-standing challenge for DOD in managing its inventories, and additional information sharing among DLA and the services could help to reduce uncertainty about the future demand for MREs. Such analysis and information will help ensure that DLA, consistent with DOD guidance, is acquiring, sizing, and managing MRE war reserve stocks to maximize flexibility while minimizing investment. To obtain information useful to DLA’s decision making regarding MRE inventory levels, we recommend that the Assistant Secretary of Defense for Logistics and Materiel Readiness direct the Director, DLA, to take the following two actions:

• Conduct an analytical study of the MRE industry’s capabilities that provides information on the level of MRE purchases needed to sustain the industrial base, including the ability to respond to a surge requirement. Specifically, the analysis should assess the validity of the current annual purchase objective of 2.5 million cases.

• Request that the military services, as part of existing coordination efforts, share information on potential changes to MRE consumption and disposals that could affect future demand.

We provided a draft of this report to DOD for comment. In written comments, DOD concurred with our two recommendations aimed at improving the information used as part of DLA’s decision making regarding MRE inventory levels. DOD’s comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated into the report as appropriate. 
DOD concurred with our first recommendation that DLA conduct an analytical study of the MRE industry’s capabilities that provides information on the level of MRE purchases needed to sustain the industrial base, including the ability to respond to a surge requirement, and that this analysis assess the validity of the current annual purchase objective of 2.5 million cases. DOD stated in its written response that DLA will conduct an analytical study on the level of MRE purchases needed to sustain the industrial base. DOD noted that this effort would require the participation and cooperation of industry, as we acknowledge in our report. DOD did not explicitly state that this analysis would assess the validity of the current purchase objective of 2.5 million cases, and therefore we encourage DLA to incorporate this assessment as part of the analysis it conducts. DOD concurred with our second recommendation that DLA request that the military services, as part of existing coordination efforts, share information on potential changes to MRE consumption and disposals that could affect future demand. We note that we made minor revisions to this recommendation after the draft was provided to DOD for its comment in order to clarify the recommendation in response to DOD’s technical comments. These revisions did not alter the original intent of the recommendation. In its written response, DOD stated that it concurs and that the military services will share potential changes in usage of MREs in the existing quarterly reviews with DLA. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Assistant Secretary of Defense for Logistics and Materiel Readiness; the Director of DLA; and the Secretaries of the Army, Navy, and Air Force. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staffs have any questions about this report, please contact Johana Ayers at (202) 512-5741 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The military services have different approaches for managing their war reserve materiel (WRM) programs. Army: The Army categorizes WRM as part of its Army Prepositioned Stock program. These stocks consist of major end items to replace combat losses and war reserve secondary items to replace supplies consumed in battle. Army Materiel Command manages the Army Prepositioned Stock program and is required to coordinate with the Defense Logistics Agency (DLA) and other organizations that provide equipment or stock for the program, such as operational rations and medical supplies. Army logistics officials said revised Army guidance will consolidate information from several regulations that previously set policy and procedural guidance for management, use, and storage of prepositioned stock, as well as standardize terminology within the prepositioning program. Navy: The Navy’s deployed ships carry the items necessary for operations during their deployment at sea. The Navy has an “allowancing” process to outfit ships with the correct stocks for a deployment, and it monitors metrics on the performance of these allowances. Naval supply guidance states that certain items are prepositioned as war reserve stocks, such as major assemblies, components, and equipment related to nuclear items and materials (U.S. Navy, Navy Supply Manual, vol. II, ch. 6, “Supply System Management”). Air Force: The Air Force is currently undertaking a reorganization of its WRM program. In the past, each Air Force major command had its own WRM program. 
Air Force officials stated that leadership decided to centralize and streamline the program to be more transparent and have “one voice” for WRM. In January 2015, the Air Force issued revised WRM guidance that details changes to the program and its management, including the development of a WRM global strategy and the appointment of a global manager of WRM. The revised guidance designates the Air Force Materiel Command as the global manager for WRM, which in turn assigned its subordinate Air Force Sustainment Center as global manager. Within the Air Force Sustainment Center, the 635th Supply Chain Operations Wing at Scott Air Force Base will execute centralized WRM global management. Offices that manage unique WRM items, such as ammunition and medical items, must coordinate with the global manager. Functional managers of items such as fuel and subsistence must coordinate with DLA for items it manages; however, the global manager coordinates with DLA for other items. In addition to the individual named above, key contributors to this report were Thomas Gosling (Assistant Director), Charlene Calhoon, Timothy Carr, Martin De Alteriis, Suzanne M. Perkins, Amie Steele, Sabrina C. Streagle, and Erik Wilkins-McKee.
The Department of Defense (DOD) maintains WRM to reduce reaction time and sustain forces in future military operations. WRM is managed by DLA and the military services. WRM is intended to meet short-term needs until supply pipelines are established. Cost-effective management of WRM that maintains war-fighting capabilities is important as the department faces budget constraints and changes in force structure. Senate Report 113-176, accompanying S. 2410, a proposed bill for the National Defense Authorization Act for Fiscal Year 2015, included a provision for GAO to review the management of DOD's WRM. This report examines (1) how DOD determines WRM requirements for DLA-managed items, (2) the extent to which DLA has the information needed for MRE inventory decision making, and (3) any strategies DLA pursues to balance cost with readiness in supplying WRM. GAO obtained information from the services on their processes for identifying WRM requirements, reviewed DLA's inventory-management processes and related guidance, and interviewed DLA and military service officials. The military services determine their war reserve materiel (WRM) requirements for Defense Logistics Agency (DLA)-managed items based on operational plans that support warfighting scenarios and other inputs such as deployment schedules and equipment-usage data. WRM can include repair parts, construction equipment and supplies, and chemical protection suits, among other items. Service officials stated that changes to troop end strength, force posture, and force structure could over time be reflected in operational plans, but these factors are more long-term influences than the primary drivers of service WRM requirements for DLA-managed items. DLA compares service WRM requirements against its assets to identify the level of available inventory, including any potential inventory shortfalls, and communicates this information to the military services, which use it to inform their procurement decisions. 
DLA monitors various types of data to manage Meals Ready to Eat (MRE), but it lacks other analysis and information that could be useful for managing this category of WRM. DLA monitors data such as purchases from industry and sales to the military services and currently has a yearly purchase objective of 2.5 million MRE cases. Service officials have expressed concerns that in light of changing needs resulting from budgetary effects and reduced end strengths, it may be difficult for the services to consume MREs in the future at a rate that will prevent disposals due to expiring shelf life. However, DLA has not conducted an analysis of the MRE industry to determine the level of purchases needed annually to sustain the industrial base while retaining the ability to meet a surge in requirements. Without conducting an analysis that provides more information on industry capabilities, DLA does not have reasonable assurance that it is balancing readiness and budget priorities with the need to sustain the industrial base in the most efficient way. DLA acknowledges in its strategic plan for MRE inventory that sharing information about the military services' usage patterns among DLA and the services will be vital to making purchase decisions. While the military services provide DLA with their estimated future demand for MREs, DLA does not obtain information from the services, as part of existing coordination efforts, about potential changes to MRE consumption and disposals that could affect future demand. Without obtaining this information from the military services, DLA may be limited in its ability to optimize the supply chain across the department. DLA uses various supply-chain strategies to balance cost with readiness in meeting the need for items identified as WRM and needed for surges associated with new contingencies or crises. 
For instance, DLA continues to stock certain types of items, such as those that are military-unique or of limited availability, but seeks to contract for fast access to those items that are readily available on the commercial market, such as medical supplies. Further, for many years DLA has sought to facilitate and improve access to certain items through its Warstopper Program, which addresses weaknesses in certain supply chains, such as MREs, by making targeted investments in industry that guarantee DLA access to materiel and enable industry to increase production when needed. To assist with DOD's decision making regarding MRE inventory levels, GAO recommends that DLA conduct an analysis to obtain information on MRE industry capabilities and request that information on MRE consumption and disposals be shared among DLA and the services as part of existing coordination efforts. DOD concurred with GAO's recommendations.
Children who have suffered a severe and potentially life-threatening physical injury as a result of an event such as a motor vehicle crash or a fall need specialized care because of their unique anatomical, physiological, and psychological characteristics. Trauma centers—a key part of a region’s trauma system—have specialized resources to care for traumatically injured patients, with pediatric trauma centers having dedicated resources specific to the treatment of traumatically injured children. Responsibility for developing and operating emergency care systems, including trauma systems, primarily rests at the state and local level, with some involvement at the federal level. Children typically require specialized resources—both equipment and personnel—wherever they receive care due to unique anatomical, physiological, and psychological needs. For example, the use of specially sized equipment or the adjustment of medication dosages based on a child’s weight are required when treating children with traumatic injuries. In its 2006 report on emergency care for children, the Institute of Medicine recommended that all emergency departments appoint certain personnel who would address the resources that children need. Specifically, it recommended that all emergency departments have two part-time pediatric emergency coordinators—one a physician—who would have a number of responsibilities, including ensuring that fellow emergency department and other providers have adequate skills and knowledge to treat children, overseeing pediatric care quality improvement initiatives, and ensuring the availability of pediatric medications, equipment, and supplies. Additionally, the National Pediatric Readiness Project found that emergency departments with a pediatric emergency coordinator were more than twice as likely to have important policies in place related to treating children. Trauma centers have specialized resources to care for traumatically injured patients.
Most emergency departments across the United States do not qualify as trauma centers because they do not have the optimal resources to treat severely injured patients. Trauma center levels. Trauma centers across the United States are designated as one of five levels, which refer to the kinds of resources available in the trauma center and the number of patients admitted yearly. Making this designation is the responsibility of state or sometimes local entities, such as a state’s office of emergency medical services. While the criteria used to designate a trauma center’s level can vary from state to state, most states have adopted guidelines that are either the same as or similar to the guidelines developed by the American College of Surgeons Committee on Trauma (ACS-COT). Table 1 summarizes the general criteria for trauma centers based on the ACS-COT guidelines. Types of trauma centers. There are two types of trauma centers—pediatric and adult. Some trauma centers are only an adult trauma center, some are only a pediatric trauma center, and some are both. Pediatric trauma centers have dedicated resources to treat injured children and can be either stand-alone children’s hospitals or distinct units within larger hospitals. A pediatric trauma center must meet all the same requirements that an adult trauma center must meet, as well as additional requirements. For example, according to ACS-COT guidelines, a level I pediatric trauma center must have at least two surgeons who are board certified in pediatric surgery and must admit 200 or more injured children younger than 15 annually; and a level II pediatric trauma center must have at least one board-certified pediatric surgeon and must admit 100 or more injured children younger than 15 annually.
Pediatric trauma centers are expected to provide trauma care for the most severely injured children and have a leadership role in education, research, and planning with other trauma centers and non-trauma center hospitals in their geographic area with regard to care for injured children. ACS-COT recommends that pediatric trauma centers be used to the fullest extent feasible to treat traumatically injured children; however, due to the limited number and geographic distribution of these centers, ACS-COT recognizes that adult trauma centers or non-trauma centers must provide initial care for injured children in areas where specialized pediatric resources are not available. Research shows that even in states that designate trauma centers, nearly half of injured children—45 percent—are treated at non-trauma centers. Many of these non-trauma centers where injured children receive treatment do not treat a high volume of pediatric patients and may not have the equipment recommended for treating children. The National Pediatric Readiness Project’s 2013 assessment of over 4,100 hospitals across the United States found that about 69 percent of hospitals see fewer than 14 children per day and that at least 15 percent of hospitals lacked one or more specific pieces of equipment recommended for treating children. Within trauma systems, coordinated trauma care activities occur across a broad continuum, ranging from injury prevention activities and pre-hospital care to hospital-based trauma care and rehabilitation (see fig. 1). Trauma care is an essential component of emergency care, which encompasses all services involved in emergency medical care—both injury and illness. A comprehensive trauma system may involve public health officials and departments, emergency medical services personnel, emergency departments and trauma centers, stakeholder and advocacy groups, and families, among others.
Such a system typically organizes the delivery of trauma care across the continuum at the local, regional, state, or national level. Responsibility for developing and operating trauma systems and the broader emergency care efforts of which they are a part primarily rests at the state and local level, with some support from federal programs. Generally, federal involvement in trauma care has addressed trauma care system development or research. For example, the Department of Health and Human Services (HHS) Secretary can make grants and enter into cooperative agreements and contracts to conduct and support research, training, evaluations, and demonstration projects related to trauma care and to foster the development of trauma care systems. Additionally, in 2006, HHS’ Health Resources and Services Administration (HRSA) released the Model Trauma System Planning and Evaluation document, a guide for trauma system development across the United States. The guide has helped provide a foundation to create and maintain systems of trauma care for communities, regions, and states. We found that 57 percent of children in the United States lived within 30 miles of a high-level pediatric trauma center during the period 2011-2015. Some of the studies we reviewed suggest that children treated at pediatric trauma centers have a lower risk of mortality compared to children treated at other types of facilities, while other studies found no difference in mortality. Our analysis of data from the American Trauma Society and the Census Bureau’s American Community Survey shows that 57 percent, or 41.9 million, of the estimated 73.7 million children in the United States lived within 30 miles of a high-level pediatric trauma center during the period 2011-2015. These centers have the dedicated resources necessary to treat all injuries, regardless of severity. 
Among states, the proportion of children who lived within 30 miles of a high-level pediatric trauma center varied widely, ranging from no children in eight states to more than 90 percent of children in four states (see fig. 2). While an estimated 41.9 million children lived within 30 miles of a high-level pediatric trauma center, an estimated 31.8 million children did not. In areas without high-level pediatric trauma centers, children may have to rely on adult trauma centers with the resources to treat injured patients, even though these facilities are not specialized to treat children. When we consider both adult and pediatric trauma centers, the percentage of children living within 30 miles of the nearest high-level trauma center increases to 80 percent. When we consider all high- and mid-level trauma centers, the percentage of children living within 30 miles of one of these facilities increases to 88 percent, or 65.1 million. The proportion of children who lived within 30 miles of high- or mid-level trauma centers during the period 2011-2015 varied by state (see fig. 3). The findings from our analysis of children’s proximity to trauma centers are similar to the findings from other assessments of access to trauma care for all U.S. residents (adults and children). For example, one study found that in 2005, about 84 percent of residents could reach a high-level trauma center within an hour, and about 89 percent could reach a high- or mid-level trauma center in this time. Five of the studies we reviewed, including studies based on national data, suggest that children treated at pediatric trauma centers have a lower risk of mortality compared to children treated at other types of facilities. Three studies, which each analyzed data from a different state, found no significant differences in mortality. In addition, seven studies examined other outcome measures, such as imaging use or the rates of certain surgical procedures for severely injured children.
However, some of the studies we reviewed and stakeholders we interviewed suggested that data on pediatric outcomes is limited and that more information is needed on outcomes other than mortality for children treated at pediatric trauma centers. More information is needed, in part, because mortality can be a limited measure since overall mortality is low among severely injured children. Mortality at pediatric trauma centers compared to other types of facilities. Five of the studies that we reviewed show that children treated at pediatric trauma centers had a lower risk of mortality compared with children treated at adult trauma centers or children transferred to a pediatric trauma center for treatment after initial treatment at another facility. For example, a 2015 study that examined hospitalizations nationwide among children ages 18 and under found that children treated at pediatric trauma centers had a lower risk of mortality compared with children treated at adult trauma centers or mixed trauma centers. Another study from 2016 that examined hospitalizations nationwide for injured adolescents aged 15 to 19 had a similar finding. A third study, from 2008, found that treatment in a pediatric trauma center compared to an adult trauma center was associated with an almost 8 percent reduction in the likelihood of mortality among pediatric trauma patients in Florida. Finally, two studies examined whether there were differences in outcomes based on whether children were transported directly to a pediatric trauma center following injury. Both studies found that after adjusting for injury severity, mortality was lower for children who were taken directly to a pediatric trauma center compared with children who were initially taken to a local hospital. In contrast, three of the studies we reviewed did not find a significant difference in the risk of mortality for children treated at pediatric trauma centers compared to children treated at adult trauma centers. 
All three of these studies were state-level analyses rather than analyses based on a national sample. For example, two studies, which each examined data for adolescents from a single state, did not identify significant differences in mortality among adolescents treated at pediatric and adult trauma centers. While the third study found no difference in mortality among children treated at pediatric and adult trauma centers, it also found that children treated at trauma centers had a 0.79 percentage point decrease in mortality compared to children treated at non-trauma hospitals. Data on other outcomes. Seven studies examined outcomes other than mortality, but according to some of the studies we reviewed and stakeholders we interviewed, more information is needed on outcomes other than mortality for children treated at pediatric trauma centers. Further, as some studies note, mortality can be a limited measure for determining quality of care or a trauma center’s contribution to survival, because overall mortality is low among severely injured children. One 2015 study found that adding a pediatric trauma center in Delaware decreased the frequency of pediatric splenectomies—a procedure that removes a child’s spleen. Another study found that pediatric trauma centers performed less imaging than adult trauma centers when treating severely injured adolescents. Information on other outcomes was limited. One study from 2016 that we reviewed noted that in the pediatric trauma literature there are no longitudinal studies on the long-term effects—both physical and psychological—of trauma on children. In addition, another study we reviewed indicated that the selection of outcome measures for analysis was constrained by what was available in the dataset used for the study. 
One stakeholder we interviewed told us that mortality is one of the few outcomes related to pediatric trauma that is captured in databases, because most trauma registries and other databases were initially developed to capture data for adult patients. Moreover, a few stakeholders told us that the pediatric trauma system is not as well developed as the adult trauma system and that both pediatric trauma care and research have tended to occur in isolation. One of these stakeholders said that because of this fragmentation, it has been difficult for researchers to use or build on the outcome measures that other researchers have developed in their work. Hospital-based pediatric trauma care activities are supported primarily through grants from two agencies within HHS—HRSA and the National Institutes of Health (NIH). Officials from these agencies reported that activities related to pediatric trauma care are coordinated through an interagency group focused broadly on emergency care, as well as through arrangements between individual agencies. Two agencies within HHS—HRSA and NIH—have grant programs and other activities that support hospital-based pediatric trauma care (see table 2). Within HRSA, the Emergency Medical Services for Children (EMSC) program, established in 1984, provides funding to states and academic medical institutions. It does so primarily through six grant programs and cooperative agreements that aim to enhance the capacity of emergency care—including hospital-based trauma care—to address the needs of children. The program’s annual appropriation is authorized at $20.2 million per fiscal year from fiscal years 2015 through 2019. According to HRSA officials, EMSC is the only federal program that focuses specifically on improving emergency care for children. Within NIH, the Pediatric Trauma and Critical Illness Branch supports research and training focused on preventing, treating, and reducing all forms of childhood trauma, injury, and critical illnesses. 
According to NIH officials, the Branch—which is part of the Eunice Kennedy Shriver National Institute of Child Health and Human Development—was established in 2012 to help unify research in pediatric trauma. Our analysis of data on all NIH-funded research in fiscal year 2015 shows that the Branch provided nearly $9 million in funding for 32 grants related to injuries. Beyond these two agencies, a few other federal efforts more broadly address emergency care, including trauma care. While they are not focused on pediatric trauma care, these efforts may indirectly address the needs of pediatric populations. For example, the Emergency Care Coordination Center within the HHS Office of the Assistant Secretary for Preparedness and Response (ASPR) aims to strengthen the day-to-day emergency care system to better prepare the nation for times of crisis and to support the federal coordination of in-hospital emergency medical care activities. Agency officials reported that the Center was funded at $820,000 per fiscal year in fiscal years 2015 and 2016. According to officials, the Center recently worked on two initiatives related to trauma—preparing a report requested by Congress on the nation’s capacity to respond to mass casualty events and issuing a request for proposals to award a contract for the development of an inventory of emergency departments, trauma centers, and burn centers and their capabilities across the United States. Recent federal funding to specifically support hospital-based trauma care activities or to develop trauma care systems has been limited as well. The Patient Protection and Affordable Care Act both continued existing and established new discretionary trauma care grant programs to help develop trauma care systems. However, according to HHS officials, no appropriations were made for these new programs and no grants have been made under these new authorities.
HRSA and NIH officials reported that activities related to hospital-based trauma care and other emergency care, including pediatrics, are coordinated through an interagency group and through arrangements between individual agencies (see table 3). Both HRSA and NIH representatives are executive committee members of the Council on Emergency Medical Care, a federal interagency group led by ASPR’s Emergency Care Coordination Center, a center specially created as the policy lead for emergency care activities across the federal government. ASPR officials told us that the Council on Emergency Medical Care is the central meeting place for agencies across the federal government on issues related to emergency care, including pediatric care. HRSA and NIH officials reported using a variety of arrangements to collaborate with other federal agencies on hospital-based pediatric trauma, such as supporting a program liaison position within another agency, establishing interagency agreements, and presenting at conferences and meetings. We provided a draft of this report to HHS for comment. The department provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Various types of training and resources are available for physicians and nurses on the delivery of pediatric trauma care.
The training and resources are provided by stakeholder groups, such as professional, research, and advocacy organizations. The training and resources from these groups supplement any training that physicians and nurses may receive during medical or nursing school or during any residencies or fellowships that may include or be completely focused on pediatric care. The available training includes standardized courses that stakeholder groups have developed as well as more ad hoc training on pediatric trauma care topics of interest. Stakeholder groups also have developed resources on pediatric trauma care that physicians and nurses can access and consult when needed. The resources available include both policy statements that detail the infrastructure or resources needed to provide pediatric trauma care at the facility level and other more individualized and clinical practice resources for physicians and nurses about the delivery of pediatric trauma care. To identify examples of the training and resources available to physicians and nurses on the delivery of pediatric trauma care, we interviewed stakeholder group representatives or received written responses from the following stakeholder groups: the American Academy of Pediatrics, the American Association of Neurological Surgeons/Congress of Neurological Surgeons, the American College of Emergency Physicians, the American College of Surgeons, the Childress Institute for Pediatric Trauma, the Emergency Nurses Association, the Pediatric Orthopaedic Society of North America, the Pediatric Trauma Society, and the Society of Trauma Nurses. We selected the groups to represent the perspectives of trauma care physicians and nurses, pediatric specialists, and research and advocacy organizations involved in or focusing on hospital-based pediatric trauma care.
We asked all stakeholder groups a series of open-ended questions and, to the extent possible, corroborated statements with information available on stakeholder group websites. Many of the stakeholder groups we interviewed have developed standardized training courses on the evaluation, management, and treatment of trauma patients. In addition to standardized courses, stakeholder groups also offer other training opportunities related to pediatric trauma care on an ad hoc basis (see table 4). These courses are generally available to all providers, but whether a provider must take any of these courses depends on the facility or system in which the provider works and its specific education or credentialing requirements. However, stakeholder representatives stated that these are all courses that any provider who treats trauma patients, including pediatric patients, generally should, and most likely will, take. For example, the American College of Surgeons Committee on Trauma (ACS-COT) publication, Resources for Optimal Care of the Injured Patient, states that courses like the Advanced Trauma Life Support course, the Trauma Nursing Core Course, and the Advanced Trauma Care for Nursing course, among others, have become basic trauma education for providers. These courses have both classroom-based lectures and interactive components. Most of the courses are general trauma courses with pediatric elements rather than courses that are specific to pediatric trauma. One stakeholder representative noted that all providers should learn the baseline principles of trauma care from these courses and then build on that baseline to learn principles that are specific to pediatric trauma. Representatives from stakeholder groups said that these courses usually include a lecture and a trauma simulation exercise for a pediatric patient, even if the overall focus of the course is on emergency or trauma care for the general adult patient population.
For example, representatives from the Emergency Nurses Association told us that the Trauma Nursing Core Course includes a participant skill station that is specific to pediatric trauma. In addition, the manual for this course includes a chapter on pediatric trauma. In addition to training, stakeholder groups have also developed resources for physicians and nurses related to pediatric trauma. The resources that these groups have developed, often in collaboration with each other, include (1) policy statements detailing the system-level infrastructure that must be in place to ensure that providers and facilities are prepared to care for injured children; and (2) other more individualized clinical resources, such as checklists, forums, and journal articles, that physicians and nurses can access to improve their individual knowledge and readiness to treat pediatric patients (see table 5). In addition to the contact named above, Karin Wallestad, Assistant Director, Alison Goetsch, Analyst-in-Charge, and Summar Corley made key contributions to this report. Also contributing were Leia Dickerson, Krister Friday, Giselle Hicks, Vikki Porter, and Jennifer Whitworth.
Pediatric trauma—a severe and potentially disabling or life-threatening injury to a child resulting from an event such as a motor vehicle crash or a fall—is the leading cause of disability for children in the United States. More children die of injury each year than from all other causes combined. GAO was asked to examine issues related to pediatric trauma care. This report examines (1) what is known about the availability of trauma centers for children and the outcomes for children treated at different types of facilities, and (2) how, if at all, federal agencies are involved in supporting pediatric trauma care and how these activities are coordinated. GAO analyzed data on the number of pediatric and adult trauma centers in the United States relative to the pediatric population under 18 years of age. GAO used 2015 data on trauma centers from the American Trauma Society's Trauma Information Exchange Program and 5-year population estimates for 2011-2015 from the U.S. Census Bureau's American Community Survey, which were the latest available data at the time of GAO's analysis. GAO also reviewed the existing peer-reviewed, academic literature on outcomes for pediatric trauma patients, interviewed stakeholder group representatives and federal agency officials involved in activities related to hospital-based pediatric trauma care, and reviewed available agency documentation. HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate. GAO estimates that 57 percent of the 73.7 million children in the United States during the period 2011-2015 lived within 30 miles of a pediatric trauma center that can treat all injuries regardless of severity. Among states, the proportion of children who lived within 30 miles of these pediatric trauma centers varied widely. In areas without pediatric trauma centers, injured children may have to rely on adult trauma centers or less specialized hospital emergency departments for initial trauma care.
Some studies GAO reviewed, including nationwide studies, found that children treated at pediatric trauma centers have a lower mortality risk compared to children treated at adult trauma centers and other facilities, while other state-level studies GAO reviewed found no difference in mortality. Further, some studies GAO reviewed and stakeholders GAO interviewed suggest that more information is needed on outcomes other than mortality for children treated at pediatric trauma centers because mortality can be a limited outcome measure, as overall mortality is low among severely injured children. Two agencies within the Department of Health and Human Services (HHS)—the Health Resources and Services Administration (HRSA) and the National Institutes of Health (NIH)—have grant programs and other activities that support hospital-based pediatric trauma care. For example, HRSA's Emergency Medical Services for Children Program provides grants to integrate pediatric emergency care—which encompasses care for both traumatic injury and illness—into states' larger emergency medical services systems. GAO also found that federal activities related to hospital-based pediatric trauma care and other emergency care are coordinated through an interagency group and arrangements among agencies. For example, HRSA and NIH staff participate in the Council on Emergency Medical Care, an interagency group established to coordinate emergency care activities across the federal government by promoting information sharing and policy development.
Of the 1,700 drug court programs operating or planned as of September 2004, about 1,040—nearly 770 operating and about 270 being planned—were adult drug court programs, according to data collected by the Office of Justice Programs’ Drug Court Clearinghouse and Technical Assistance Project. The primary purpose of these programs is to use a court’s authority to reduce crime by changing defendants’ substance abuse behavior. In exchange for the possibility of dismissed charges or reduced sentences, eligible defendants who agree to participate are diverted to drug court programs in various ways and at various stages in the judicial process. These programs are typically offered to defendants as an alternative to probation or short-term incarceration. Drug court programs share several general characteristics but vary in their specific policies and procedures because of, among other things, differences in local jurisdictions and criminal justice system practices. In general, judges preside over drug court proceedings, which are called status hearings; monitor defendants’ progress with mandatory drug testing; and prescribe sanctions and rewards as appropriate in collaboration with prosecutors, defense attorneys, treatment providers, and others. Drug court programs also vary in terms of the substance abuse treatment required. However, most programs offer a range of treatment options and generally require a minimum of about 1 year of participation before a defendant completes the program. In order to determine defendants’ eligibility for participation, drug court programs typically screen defendants based on their legal status and substance use. The screening process and eligibility criteria can vary across drug court programs.
According to the literature, eligible drug court program participants ranged from nonviolent offenders charged with drug-related offenses who had substance addictions to relatively medium-risk defendants with fairly extensive criminal histories and failed prior substance abuse treatment experiences. Participants were also described as predominantly male with limited employment and educational achievement. Appendix IV presents additional information about the general characteristics of drug court programs and participants in the evaluations we reviewed. Research on drug court programs has generally focused on program descriptions and process measures, such as program completion rates, and presented limited empirical evidence about the effectiveness of drug court programs in reducing recidivism and substance use. In 1997, we reported on 12 evaluations that met minimum research standards and concluded that the evaluations showed some positive results but did not firmly establish whether drug court programs were successful in reducing offender recidivism and substance use relapse. More recently, two syntheses of multiple drug court program evaluations have drawn positive conclusions about the impact of drug court programs. One synthesis concluded that criminal activity and substance use are reduced relative to other comparable offenders while participants are engaged in the drug court program, and that program completion rates ranged from 36 to 60 percent. Further, the other synthesis reported that drug offenders participating in a drug court program are less likely to re-offend than similar offenders sentenced to traditional correctional options, such as probation. Some of the evaluations included in these two syntheses had methodological limitations, such as the lack of strong comparison groups and the lack of appropriate statistical controls. 
Some did not use designs that compared all drug court program participants—including graduates, those still active, and dropouts—with similar nonparticipants. For example, they compared the outcomes of participants who completed the program with the outcomes of those who did not (that is, dropouts). These evaluations, upon finding that program graduates had better outcomes than dropouts, have concluded that drug court programs are effective. This is a likely overestimation of the positive effects of the intervention because the evaluation is comparing successes to failures, rather than all participants to nonparticipants. Additionally, other evaluations did not use appropriate statistical methods to adjust for preexisting differences between the program and comparison groups. Without these adjustments, variations in measured outcomes for each group may be a function of the preexisting differences between the groups, rather than the drug court program. In most of the evaluations we reviewed, adult drug court programs led to recidivism reductions during periods of time that generally corresponded to the length of the drug court program—that is, within-program. Our analysis of evaluations reporting recidivism data for 23 programs showed that lower percentages of drug court program participants than comparison group members were rearrested or reconvicted. Program participants also had fewer incidents of rearrests or reconvictions and a longer time until rearrest or reconviction than comparison group members. These recidivism reductions were observed for any felony offense and for drug offenses, whether they were felonies or misdemeanors. However, we were unable to find conclusive evidence that specific drug court program components, such as the behavior of the judge, the amount of treatment received, the level of supervision provided, and the sanctions for not complying with program requirements, affect participants’ within-program recidivism. 
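The pitfall described above can be made concrete with a small numeric sketch; all counts below are hypothetical and are not drawn from the evaluations reviewed. Comparing only graduates with dropouts suggests a far larger program effect than the appropriate comparison of all participants with similar nonparticipants.

```python
# Hypothetical rearrest counts illustrating the graduate-vs-dropout pitfall.
graduates  = {"n": 60, "rearrested": 12}   # program completers
dropouts   = {"n": 40, "rearrested": 24}   # participants who left early
comparison = {"n": 100, "rearrested": 45}  # similar nonparticipants

def rearrest_rate(group):
    """Fraction of a group rearrested during the follow-up period."""
    return group["rearrested"] / group["n"]

# Misleading contrast: successes vs. failures (0.20 vs. 0.60).
biased_gap = rearrest_rate(dropouts) - rearrest_rate(graduates)

# Appropriate contrast: all participants vs. the comparison group.
all_participants = {
    "n": graduates["n"] + dropouts["n"],
    "rearrested": graduates["rearrested"] + dropouts["rearrested"],
}
fair_gap = rearrest_rate(comparison) - rearrest_rate(all_participants)

print(round(biased_gap, 2), round(fair_gap, 2))  # 0.4 0.09
```

With these hypothetical numbers, the graduate-vs-dropout comparison shows a 40 percentage point difference, while the participants-vs-comparison-group contrast shows only 9 points, which is why the former design overestimates program effects.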
Post-program recidivism reductions were measured for up to 1 year after participants completed the drug court program in several evaluations, and in these the evidence suggests that the recidivism differences observed during the program endured. A more detailed description of the recidivism reduction results is included in appendix V. Evidence about the effectiveness of drug court programs in reducing participants’ substance use relapse is limited and mixed. The evidence included in our review on substance use relapse outcomes is limited to data available from eight drug court programs. The data include drug test results and self-reported drug use; both measures were reported for some programs. Drug test results generally showed significant reductions in use during participation in the program, while self-reported results generally showed no significant reductions in use. Appendix VI presents additional information about the evaluations we reviewed that reported on substance use relapse outcomes. Completion rates, defined as the number of individuals who successfully completed a drug court program as a percentage of the total number admitted, ranged from 27 to 66 percent in the programs we reviewed that assessed completion. As might be expected, program completion was associated with participants’ compliance with program requirements. Specifically, evaluations of 16 adult drug court programs that assessed completion found that participants’ compliance with procedures was consistently associated with completion. These program procedures include attending treatment sessions, engaging in treatment early in the program, and appearing at status hearings. No other program factor, such as the severity of the sanction that would be invoked if participants failed to complete the program or the manner in which judges conducted status hearings, predicted participants’ program completion. 
Several characteristics of the drug court program participants themselves were also associated with an increased likelihood of program completion. These characteristics include lower levels of prior involvement in the criminal justice system and age, as older participants were more likely to complete drug court programs than younger ones. Appendix VII presents additional information about the evaluations we reviewed that reported on program completion. A limited number of evaluations in our review discussed the costs and benefits of adult drug court programs. Four evaluations of seven drug court programs provided sufficient cost and benefit data to estimate their net benefits (that is, the benefits minus costs). The cost per drug court program participant was greater than the cost per comparison group member in six of these drug court programs. However, all seven programs yielded positive net benefits, primarily from reductions in recidivism affecting both judicial system costs and avoided costs to potential victims. Net benefits ranged from about $1,000 per participant to about $15,000 in the seven programs. These benefits may underestimate drug court programs’ true benefits because the evaluations did not include indirect benefits (such as reduced medical costs of treated participants). Financial cost savings for the criminal justice system (taking into account recidivism reductions) were found in two of the seven programs. We provide additional information about the reported costs and benefits of drug court programs we reviewed in appendix VIII. Overall, positive findings from relatively rigorous evaluations in relation to recidivism, coupled with positive net benefit results, albeit from fewer studies, indicate that drug court programs can be an effective means to deal with some offenders. 
These programs appear to provide an opportunity for some individuals to take advantage of a structured program to help them reduce their criminal involvement and their substance abuse problems, as well as potentially provide a benefit to society in general. Although not representative of all drug court programs, our review of 27 relatively rigorous evaluations provides evidence that drug court programs can reduce recidivism compared to criminal justice alternatives, such as probation. These results are consistent with those of past reviews of drug court evaluations. Positive results concerning recidivism are closely associated with program completion. Specifically, while drug court participation is generally associated with lower recidivism, the recidivism of program completers is lower than for participants in comparison or control groups. Thus, practices that encourage program completion may enhance the success of drug court programs in relation to recidivism. While our review sheds little light on the specific aspects of these programs that are linked to positive recidivism outcomes, both participant compliance with drug court procedures and some participant characteristics seem to be related to success. To the extent that research can help to discern best practices for drug courts, the models for effective programs can be enhanced. Specifically, to the extent that drug court program managers can learn more about methods to retain participants for the duration of the program, they may be able to further enhance the positive impacts of drug court programs. We requested comments on a draft of this report from the Attorney General and the Director of ONDCP. Department of Justice officials informed us that the agency had no comments on the report. ONDCP officials informed us that the agency generally concurred with our findings. 
We are sending copies of this report to other interested congressional committees, the Attorney General, and the Director of the Office of National Drug Control Policy. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or by e-mail at [email protected] or William J. Sabol, Assistant Director, at (202) 512-3464, or [email protected]. Key contributors to this report are listed in appendix IX. The 21st Century Department of Justice Appropriations Authorization Act requires that we assess drug court program effectiveness. Our objectives were to assess the results of methodologically sound, published empirical evaluations of adult drug court programs, particularly relating to (1) recidivism outcomes of participants and other comparable offenders, (2) substance use relapse of participants and other comparable offenders, (3) program completion of participants, and (4) costs and benefits of drug court programs. To identify the universe of evaluations to include in our review, we used a three-stage process. First, we (1) conducted key-word searches of criminal justice and social science research databases; (2) searched drug court program-related Web sites, such as those of the National Drug Court Institute and the National Association of Drug Court Professionals; (3) reviewed bibliographies, published summaries, meta-analyses, and prior GAO reports on drug court programs; and (4) asked drug court researchers and officials in agencies that fund drug court research to identify evaluations. 
Our literature search identified over 230 documents, which consisted of published and unpublished outcome evaluations, process evaluations that described program objectives and operations, manuals and guides related to drug court program operations, commentary from drug court practitioners, and summaries of multiple program evaluations. Next, we reviewed these documents and identified 117 evaluations of adult drug court programs in the United States that (1) were published between May 1997 and January 2004 and (2) reported recidivism, substance use relapse, or program completion outcomes. Finally, to select the evaluations we used in our in-depth review, we screened them to determine whether they met additional criteria for methodological soundness. Specifically, to assess recidivism and substance use relapse, we selected evaluations that used either an experimental design, in which (1) eligible offenders were randomly assigned to different programs or conditions and (2) there was an acceptable level of attrition, or a quasi-experimental design, in which (1) all drug court program participants were compared with an appropriate group of comparable offenders who did not participate in the drug court program and (2) appropriate statistical methods were used to adjust, or control, for group differences. If random assignment was not used, in an attempt to ensure that the groups were similar aside from program participation (the intervention), the comparison group(s) should have been as alike as possible on a range of important characteristics. Statistical analyses can be used to further minimize differences between the program and comparison groups. Typically, statistical analyses to control for differences such as these are not necessary when study participants are randomly assigned to groups. 
To assess program completion, we also selected evaluations that compared the outcomes of participants (such as program graduates and those who dropped out) within a drug court program in order to determine what factors, if any, are associated with program completion. We selected those evaluations that used appropriate statistical methods to control for differences between the participant groups. Of the 117 evaluations we screened, we selected 27 evaluations for our in-depth review. The 27 evaluations we selected for our review reported information on 39 unique adult drug court programs that were implemented between 1991 and 1999. Table 1 lists the drug court programs evaluated, the researchers, and the outcomes we used in our assessment of drug court program effectiveness. All of the evaluations we reviewed, as well as others consulted, are included in the bibliography. To obtain information on our outcomes of interest—that is, recidivism, substance use relapse, and program completion—we used a data collection instrument to systematically collect information about the methodological characteristics of the evaluations, the participants and components of the drug court programs, and the outcomes of the participants and other comparable groups. To assess the methodological strength of the 27 evaluations, we used generally accepted social science principles. For example, we assessed elements such as whether data were collected during or after program completion and the appropriateness of outcome measures, statistical analyses, and any reported results. Each evaluation was read and coded by a senior social scientist with training and experience in evaluation research methods. A second senior social scientist and other members of our evaluation team then reviewed each completed data collection instrument to verify the accuracy of the information included. 
Part of our assessment also focused on the quality of the data used in the evaluations as reported by the researchers and our observations of any problems with missing data, any limitations of data sources for the purposes for which they were used, and inconsistencies in reporting data; we incorporated any data problems noted into our appraisals. To assess the cost-benefit analyses of drug court program evaluations, we reviewed all of the evaluations selected for our structured review that reported cost or benefit information. Of the 27 evaluations included in our in-depth review, 8 reported information about program costs and 4 about benefits. The 8 evaluations we included in our cost review are shown in table 2. Four of the 8 evaluations reported sufficient data on both benefits and costs, which allowed us to assess the reported net benefits of the drug court programs. Specifically, we were able to determine whether the reduction of recidivism—the benefit—would outweigh the additional costs of a program. We used standard cost-benefit criteria to screen and assess these evaluations that reported cost and benefit information. Additionally, we reviewed methodologies describing approaches for conducting cost analyses of drug court programs. We selected the evaluations in our review based on their methodological strength; therefore, our results cannot be generalized to all drug court programs or their evaluations. Although the findings of the evaluations we reviewed are not representative of the findings of all evaluations of drug court programs, the evaluations consist of those published evaluations we could identify that used the strongest designs to assess drug court program effectiveness. Finally, we interviewed drug court program researchers and officials at the Department of Justice, the National Institute on Drug Abuse, and the Office of National Drug Control Policy. 
We conducted our work from October 2003 through February 2005 in accordance with generally accepted government auditing standards. Most of the evaluations included in our review used quasi-experimental comparison groups. The use of quasi-experimental comparison groups can result in comparisons between drug court participants and comparison group members that differ on key variables related to recidivism. Systematic differences between comparison groups and treatment groups, called selection bias, threaten the validity of evaluation findings. Though design and analysis strategies differed, each of the quasi-experimental evaluations in our review used some combination of design and statistical methods to address the issue of selection bias. Approaches to address selection bias in design generally included attempts to choose comparison groups that share key commonalities with treatment groups (usually including likelihood of eligibility to participate in a drug court were it available) and therefore were less likely to differ on observed and unobserved characteristics related to recidivism. The evaluations in our review also included attempts to minimize the effects of selection bias through the application of various statistical methods. The observed differences in recidivism that we discuss in this report could arise from measured and unmeasured sources of variation between drug court participants and comparison group members. If comparison group members differed systematically from drug court participants on variables or factors that are also associated with recidivism and these variables were not accounted for by the design or analysis used in the evaluation, then the observed differences in recidivism could be due to these sources of variation rather than participation in the drug court program. 
For example, if successful drug court participants differed systematically in their substance abuse addiction problems from the members of the group against which they are compared, and if the differences in substance abuse addiction are not explicitly assessed in the evaluation, these differences, and not necessarily participation in the drug court program, could explain any observed recidivism differences. Similarly, if participants differed from comparison group members in their motivation to complete the drug court program, recidivism differences could arise from these motivational differences. Evaluations generally do not have measures of variables, such as motivation, that can be explicitly included in the analysis of drug court outcomes. One way to address issues of selection bias is in the design of the comparison groups. Generally, random assignment of eligible participants to treatment and control groups addresses selection bias by randomly distributing individual differences between the treatment and control groups. Another way to address selection bias is statistically—by forming comparison groups that consist of individuals who are as similar as possible to drug court participants and by using statistical techniques to control for observed differences, including sophisticated two-step procedures that attempt to address differences in selection into the drug court program. For the recidivism outcomes we reported, the evaluations in our review used design and statistical methods to address selection bias. Evaluations in our review were either experiments or quasi-experiments. The experiments randomly assigned eligible defendants to a drug court program or a control group of defendants who received conventional case processing. 
The quasi-experiments used one of two types of comparison groups: a historical comparison group, formed from individuals who received conventional case processing during a period of time shortly before the drug court program was implemented, or a contemporaneous comparison group, formed from defendants (1) who were eligible for drug court but received conventional case processing during the same time period as the drug court program participants, (2) who were from a district within a court’s jurisdiction from which arrestees were not eligible to participate in the drug court program, or (3) who had similar charges and were matched on characteristics. If implemented as designed, experiments can provide strong evidence for the effectiveness (or lack thereof) of a drug court intervention. A key presumption of random assignment is that any individual factor that could be associated with the outcome of interest (recidivism) is randomly distributed between the experimental and control groups. Hence, if none of these factors are correlated systematically with assignment into groups, then the threat of selection bias is reduced. Presuming that the randomization is complete and that the experiment was carried out without high levels of attrition of subjects from either group, the observed differences in recidivism between the two groups would likely arise from the intervention, rather than observed or unobserved individual factors. Several of the evaluations used quasi-experimental designs in which the comparison group members were chosen from defendants who appeared before the court during a period of time shortly before the drug court program was introduced and, based on certain observable characteristics, were deemed to be eligible to participate in the drug court program. 
Theoretically, if there were no significant changes in processes that led to defendants appearing in a court during the pre- and drug court periods, and if their eligibility for drug court participation could be determined, a historical comparison group would consist of individuals who share characteristics with drug court defendants. Alternatively, if there were a significant difference in processing between the pre-drug court and drug court periods (for example, a shift in prosecution priorities toward particular types of offenses or a change in the nature of a community’s drug problem), then the historical comparison group could consist of defendants who differed systematically from the drug court participant group. The historical comparison group in the evaluations we reviewed generally consisted of defendants who were arrested and arraigned in the court that offered the drug court program in a period immediately prior to the implementation of the drug court. In one case, the comparison group members were chosen from defendants on probation in the period prior to the drug court. To minimize possible bias that could arise from changes in court practices or the composition of defendants entering a court in the period prior to the implementation of a drug court, in the evaluations we reviewed, the prior periods generally ended within 4 months to 1 year before the implementation of the drug court program. Another subset of the evaluations we included in our review used quasi-experiments with contemporaneous comparison groups. One comparison group consisted of defendants who met the drug court program’s eligibility requirements but who were arrested in areas within a court’s jurisdiction that did not enroll individuals in the drug court. Another comparison group was formed by selecting members from other, comparable jurisdictions. 
A third consisted of individuals who, while otherwise eligible to enroll in the drug court, were unable to participate because of logistical reasons. Others consisted of defendants charged with similar offenses as drug court participants and who were then matched on key variables. All of the quasi-experiments that we reviewed, whether they used historical or contemporaneous comparison groups, used various statistical methods to control for individual differences between drug court participants and comparison group members on observable variables. Several evaluations used sophisticated statistical methods that not only adjusted results for individual level differences but also attempted to correct for selection bias by modeling unobserved differences between drug court and comparison group members. These methods reduce but do not completely eliminate potential bias from unobserved confounding factors. The extent to which they reduce bias depends upon the richness and quality of the control variables that are used to estimate the models. These methods essentially use statistical models to predict individual probabilities of participation in the drug court program (regardless of actual participation). The models predict the probability, sometimes called a propensity score, based on variation in individual level characteristics (for example, criminal histories and demographic attributes) that offenders in a given group would participate. Some of the evaluations that we reviewed incorporated this technique as the first stage of a two-stage model that estimated recidivism differences. Others used it to identify the individuals to include in a comparison group. Other studies matched comparison group members to drug court participants on a more limited set of demographic and criminal justice variables (such as type of charge and prior criminal history) and then, post-matching, used various methods to control for differences on unmatched variables. 
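As a hedged sketch of the propensity-score idea described above (the offender records, strata, and resulting scores are entirely hypothetical): within each stratum of observed characteristics, the propensity score can be estimated as the fraction of offenders who entered the drug court program.

```python
from collections import defaultdict

# Hypothetical offender records: (prior-felony bucket, age bucket, participated).
records = [
    ("0-1", "18-29", True), ("0-1", "18-29", True), ("0-1", "18-29", False),
    ("0-1", "30+",   True), ("0-1", "30+",   False),
    ("2+",  "18-29", False), ("2+",  "18-29", False), ("2+",  "18-29", True),
    ("2+",  "30+",   True), ("2+",  "30+",   False),
]

# Count participants and totals within each covariate stratum.
counts = defaultdict(lambda: [0, 0])  # stratum -> [participants, total]
for felonies, age, participated in records:
    counts[(felonies, age)][1] += 1
    if participated:
        counts[(felonies, age)][0] += 1

# Propensity score: estimated probability of participation per stratum.
propensity = {stratum: p / n for stratum, (p, n) in counts.items()}
for stratum in sorted(propensity):
    print(stratum, round(propensity[stratum], 2))
```

The evaluations reviewed estimated these probabilities with richer statistical models (such as logistic regression on criminal histories and demographic attributes) and then used them either as the first stage of a two-stage outcome model or to select comparison group members with similar scores.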
A presumption behind these matching methods is that by selecting for inclusion in the comparison group only those defendants who matched drug court participants on these observed characteristics, the evaluation would create a comparison group that was similar in composition to the drug court participants. A limited set of matching variables that is based largely upon demographic variables may be unlikely to capture all of the individual-level sources of variation in recidivism that could account for differences between the drug court participants and comparison group members. Therefore, these evaluations attempted to control for differences in key, nonmatch variables (such as criminal justice risk-level differences) in their analysis. Some of the evaluations attempted to control for individual-level differences between the drug court participants and the comparison group members using various linear and nonlinear (e.g., logistic) regression methods. This approach relies on the assumption that the observed characteristics included in the regression are the key variables that predict recidivism. The regression models adjust the results for differences between the two samples on these characteristics and allow the researcher to avoid making incorrect inferences about recidivism differences because one group (either treatment or comparison) differs systematically from the other in the presence of the key variables that are associated with recidivism. Evaluations that used regression methods generally made efforts to control for variables that are known to be related to recidivism, such as criminal history, type and number of charges, and age. Table 3 describes the comparison groups used in the evaluations we reviewed and the methods used by these evaluations to address selection bias. 
In this appendix, we outline the criteria we used in assessing the cost-benefit analyses of drug court program evaluations, and we discuss how the evaluations we reviewed followed these criteria in their analyses. Cost-benefit analysis determines the costs associated with implementing or operating a program and weighs those costs against any benefits expected from the program. The results of a cost-benefit analysis can be represented as either a net benefit—calculated as total benefits minus total costs—or a benefit-to-cost ratio—calculated as total benefits divided by total costs. We used the net benefit—benefits minus costs—measure to represent the cost-benefit analyses of the drug court programs we reviewed. Net benefit can be used to evaluate the cost savings—that is, the benefits—for each participant in a drug court program relative to the cost savings for each offender processed by conventional case processing in the same jurisdiction. To assess the cost-benefit analyses of drug court program evaluations, we reviewed all of the evaluations selected for our structured review that reported cost or benefit information. Of the 27 evaluations included in the structured review, 8 reported information about program costs and 4 about benefits. The 8 evaluations we included in our cost-benefit review are shown in table 4. Four of the 8 evaluations reported sufficient data on both costs and benefits, which allowed us to assess the reported net benefits of the drug court programs. Specifically, we were able to determine whether the reduction of recidivism—the benefit—would outweigh the additional costs of a program. Conducting a cost-benefit analysis is theoretically straightforward—determine the monetary value of a program’s benefits and compare that value with the monetary value of the program’s costs. 
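The two summary measures defined above can be computed directly. The per-participant dollar figures below are hypothetical and are not taken from the evaluations reviewed; they simply show how the arithmetic works.

```python
def net_benefit(total_benefits, total_costs):
    """Net benefit: total benefits minus total costs."""
    return total_benefits - total_costs

def benefit_cost_ratio(total_benefits, total_costs):
    """Benefit-to-cost ratio: total benefits divided by total costs."""
    return total_benefits / total_costs

# Hypothetical per-participant figures relative to conventional processing.
program_cost = 5_000            # extra cost of drug court participation
avoided_justice_costs = 4_500   # savings to criminal justice agencies
avoided_victim_costs = 3_500    # avoided victimization costs

benefits = avoided_justice_costs + avoided_victim_costs
print(net_benefit(benefits, program_cost))                   # 3000
print(round(benefit_cost_ratio(benefits, program_cost), 2))  # 1.6
```

Note that in this hypothetical example the program shows a positive net benefit overall even though the criminal justice agencies alone do not recover the program cost (4,500 < 5,000), mirroring the pattern reported for several of the programs reviewed.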
However, the analysis is more complicated in practice because of decisions that have to be made about whom to include as recipients of the benefits and how to measure costs and benefits. On the basis of the general principles of cost-benefit analysis, we identified five criteria that we used in assessing the cost-benefit analyses of the drug court programs we reviewed. Table 5 describes these criteria; where further explanation is needed, we discuss them in the text that follows. A cost-benefit analysis should identify what the baseline program is. In the case of drug court programs, the analysis should state what would happen to an offender if the drug court program did not exist. The costs and benefits of this alternative program or case processing, called the baseline, are the standard by which the drug court program costs and benefits are judged. There is no single baseline against which all drug court programs are compared. For example, one jurisdiction may offer drug court program participation to defendants as an alternative to jail, another as an alternative to probation. A cost-benefit analysis should enumerate and assess all relevant costs. A cost-benefit analysis of a drug court program should consider two sets of costs—those associated with the program’s operation and those associated with the baseline. For both sets of costs, the analysis should determine which costs are relevant and then measure them. Drug court programs often require additional expenses from criminal justice system agencies, although these may differ from program to program. The true cost of a drug court program is the opportunity cost of these expenses—the value the resources would have had in their next best alternative use had they not been used for the program. Drug court programs have several basic elements, one of which is ongoing monitoring of the participants by a judge or other personnel in the criminal justice system. 
Others are regular status hearings and drug testing, substance abuse treatment, and the prescription of sanctions for noncompliance with program requirements. While an analysis should consider all of these costs, it should not consider costs that were incurred before the program began (that is, sunk costs) since they are not relevant to the current decision to fund or continue funding the program. In addition to the program’s costs, the analysis should include the costs of the baseline. These depend on the types of offenders who are eligible for the drug court program in each jurisdiction and the alternative program or processing available for them. The baseline costs may include the cost of traditional adjudication (including a judge’s time during the early phases of judicial processing), the costs of jail time served, or costs of probation (such as the salaries of probation officers or any required monitoring). The various methods of measuring or estimating costs require varying amounts of time, data, and analysis. The most straightforward way of measuring the costs of both a drug court program and the baseline is to use budget or expenditure figures from the agencies involved to calculate average costs. While this method has the advantage of simplicity, average cost may be a misleading measure of resource costs if it does not accurately portray the resources defendants actually used. For example, a few very high-cost program participants could skew the average to such an extent that the cost of a typical participant appears larger than it actually is. This is especially important in assessing average costs associated with the baseline program. The average baseline defendant in a jurisdiction may be very different from participants in a drug court program. Therefore, comparing costs associated with the average defendant with those of the average drug court participant may not be valid. 
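The distortion described above is easy to see with a handful of hypothetical per-defendant costs: one unusually expensive case pulls the average far above what a typical defendant costs, while the median is largely unaffected.

```python
import statistics

# Hypothetical per-defendant costs; the last case is an outlier.
costs = [2_000, 2_200, 2_300, 2_400, 2_500, 40_000]

print(round(statistics.mean(costs)))  # 8567: skewed upward by the outlier
print(statistics.median(costs))       # 2350.0: closer to the typical case
```

This is why an average computed from aggregate budget figures can misrepresent the resources a typical defendant actually used.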
A less simple approach, requiring a more intensive use of time and data, is to conceptualize the cost of drug court program participation as a series of transactions within the criminal justice system agencies. Since a defendant interacts with a number of these agencies, this method requires obtaining data about each defendant from each agency. These data can be aggregated for individual defendants to determine the total amount of resources used per participant, and these resources can then be multiplied by their price. This approach has the advantage of allowing a better determination of the true cost of drug court participation, relative to participation in an alternative program. However, it is more labor intensive, and jurisdictions may not have records organized in a way that allows individuals to be tracked across agencies. As with costs, all relevant benefits of the drug court program should be assessed relative to the baseline. The typical benefit attributed to drug court programs is avoided costs from reductions in recidivism. Other benefits that could be considered in the analysis include reduced medical costs and increased worker productivity stemming from reduced drug dependency. Reductions in recidivism can lead to benefits, or cost savings, for criminal justice agencies and for potential victims of crime. A reduction in the number of arrests would result in a reduction in expenditures for the agencies, including police, prosecutors, courts, corrections departments, and probation agencies. Potential victims also benefit from reduced recidivism. Benefits accrued because of reduced victimization include avoided direct monetary costs, such as the value of stolen property and medical expenses, and avoided quality-of-life costs related to pain and suffering. Not including pain and suffering costs would underestimate the true cost of crime, but some researchers exclude these intangibles because it is difficult to assign appropriate dollar values to them.
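The transaction-based costing approach described above (aggregating each defendant's recorded resource use across agencies and valuing it at a unit price) might be sketched as follows. The agencies, event types, unit prices, and defendant records here are all hypothetical and chosen only to illustrate the bookkeeping.

```python
# Illustrative sketch (hypothetical agencies, events, and unit prices):
# transaction-based costing totals each defendant's recorded resource use
# across criminal justice agencies and values it at a per-unit price.

# Assumed unit price per (agency, event) transaction.
UNIT_PRICE = {
    ("court", "status_hearing"): 150.0,
    ("treatment", "counseling_session"): 80.0,
    ("lab", "drug_test"): 25.0,
    ("jail", "bed_day"): 110.0,
}

# Per-defendant transaction counts gathered from each agency's records.
transactions = [
    ("D001", "court", "status_hearing", 12),
    ("D001", "treatment", "counseling_session", 40),
    ("D001", "lab", "drug_test", 30),
    ("D002", "court", "status_hearing", 8),
    ("D002", "jail", "bed_day", 5),
]

def cost_per_defendant(records):
    """Total resource cost per defendant, summed across all agencies."""
    totals = {}
    for defendant, agency, event, count in records:
        totals[defendant] = totals.get(defendant, 0.0) + count * UNIT_PRICE[(agency, event)]
    return totals

print(cost_per_defendant(transactions))
```

Because each defendant's actual use of hearings, treatment, testing, and jail days is priced individually, per-defendant totals can differ substantially even within one program, which is exactly the variation a single budget-based average hides.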
This results in a more conservative estimate of the benefit of the drug court program. In addition, the analysis should, to the extent possible, assess benefits to society from a reduction in substance abuse. These benefits may include avoided medical care costs, such as medical services a treated drug addict did not require. Additionally, if drug court programs are effective, the labor market outcomes of the participants may improve. For example, successful participants may be unemployed less often or may earn higher wages because of increased productivity. To the extent that they may therefore pay higher taxes, such taxes are benefits to taxpayers in general. The simplest approach to measuring the benefits of reduced recidivism is using arrest data. This approach has the appeal that an arrest is usually the beginning of expenditures by a criminal justice agency. Then, the value of this benefit per arrest can be estimated in the same way that costs are estimated—by using agency budgets to calculate the average savings from each avoided arrest. It is possible to measure criminal victimization by using arrest data as well. However, this approach requires the assumption that each criminal act results in an arrest. To the extent that this is not true, victimization may be underestimated by a significant margin. Another approach to measuring victimization would be to use self-reported criminal behavior by the drug court program and comparison group participants. However, this approach relies on the forthrightness of the defendants, which makes it potentially unreliable. Since both of these methods have drawbacks, any measurement of criminal victimization is much more uncertain than measures of expenditures by the agencies. Measuring the cost of crime may also be problematic. Estimates of the costs of crimes at the national level are available in the literature.
For example, these can provide an estimate of the cost of a burglary, and a researcher can apply the cost to the number of burglaries by the participant. Ideally, an analysis would also include the costs of crimes specific to the drug court program’s jurisdiction to account for regional differences. For example, the average value of a car stolen in Miami might be different from the average value of a car stolen in Tallahassee. The uncertainty in most cost and benefit estimates is the result of imprecision in the underlying data used in the analysis and the assumptions on which the analysis is built. Assessing uncertainty can enhance confidence in the evaluation’s estimates. Useful information for an analysis to report includes the estimates of costs and benefits and the sensitivity of the cost and benefit estimates to assumptions made in the analysis. Estimates of costs and benefits should take into account how likely it is that a particular outcome (for example, arrest for a crime that will warrant participation in the drug court program) will occur, in addition to the value (for example, the cost to victims, the criminal justice expenditures) of that outcome. Taking account of the likelihood of occurrence allows the researcher to provide a better assessment of how reliable the data—the costs and benefits—are. Uncertainty also derives from assumptions made in an analysis. In an assessment of an analysis, it is helpful to know which assumptions can be changed without altering the conclusion of the analysis. Sensitivity analysis, or systematically varying the assumptions to see what effect variations have on estimated outcomes, can be applied to several components of a drug court program’s cost analysis. For example, one assumption in the criminal justice area is the numerical relationship between the commission of a crime and the arrest rate.
Not every crime results in an arrest, but a cost-benefit analysis may have been able to examine only arrest data, given the lack of other data available. Using arrest data assumes, in effect, that every crime does result in an arrest—an assumption that is likely to underestimate the change in crime that the public experiences. A sensitivity analysis could examine this possible understatement by increasing the study’s assumed rate of crime per arrest (for example, the Washington State study assumes that 20 percent of robberies result in an arrest) to better approximate the actual crime rate. We reviewed eight evaluations of 10 drug court programs for their use of the criteria. At least five of these evaluations made a concerted effort to assess all relevant costs as well as benefits. Only two evaluations assessed uncertainty. However, the two evaluations that conducted sensitivity analyses were among the three that did not present costs and benefits separately in a way that would allow an assessment of net benefits. Table 6 summarizes the evaluations we reviewed and their application of the five criteria in their assessments. All eight evaluations provided a statement of the purpose of the drug court program, although the purpose of the programs varied somewhat from one to another. Evaluations of some of the programs—for example, those in Kentucky, Washington, D.C., and Multnomah County—explicitly stated that the purpose was to address the source of criminal behavior—drug addiction, dependency, and the like—in order to reduce recidivism. None of the evaluations, however, included reduced drug dependency in their calculation of net benefits. Of the eight evaluations we reviewed, five compared drug court program participants with the baseline of conventional case processing in the criminal courts in their jurisdictions. The evaluation of the D.C. Superior Court Drug Intervention Program used a different baseline for comparison.
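The arrest-rate sensitivity analysis just described can be sketched numerically. The 20 percent robbery arrest rate is the figure cited above from the Washington State study; the number of avoided arrests and the victim cost per robbery are hypothetical values chosen only for illustration.

```python
# Sensitivity sketch: vary the assumed arrest rate (the share of crimes
# that lead to an arrest) and see how the estimated victimization benefit
# moves. The 20% robbery figure is from the Washington State study cited
# in the text; the other numbers are hypothetical.

AVOIDED_ROBBERY_ARRESTS = 10   # hypothetical program effect
COST_PER_ROBBERY = 8000.0      # hypothetical victim cost per robbery

def victimization_benefit(avoided_arrests, arrest_rate, cost_per_crime):
    """Each avoided arrest stands in for 1 / arrest_rate avoided crimes."""
    return (avoided_arrests / arrest_rate) * cost_per_crime

for arrest_rate in (1.0, 0.5, 0.2):
    benefit = victimization_benefit(AVOIDED_ROBBERY_ARRESTS, arrest_rate,
                                    COST_PER_ROBBERY)
    print(f"assumed arrest rate {arrest_rate:.0%}: benefit ${benefit:,.0f}")
```

Moving from the naive one-crime-per-arrest assumption to the 20 percent rate multiplies the estimated benefit fivefold, which is why reporting an estimate's sensitivity to this assumption matters.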
This program was an experiment that randomly assigned drug felony defendants to one of three court dockets. The standard docket offered defendants weekly drug testing, judicial monitoring, and encouragement to seek treatment in the community. The sanctions docket offered a program of graduated sanctions with weekly drug testing, judicial monitoring of drug use, and referral to community-based treatment. The treatment docket offered weekly drug testing and an intensive daily treatment program based in D.C. Superior Court. The evaluation compared the outcomes of the sanctions and treatment docket participants with those of the standard docket participants. The eight evaluations varied in the types of costs reported. All eight included only costs directly attributable to drug court programs. None attempted to allocate any shared costs incurred jointly by a drug court program and another activity not directly related to the program (such as policing). Further, no evaluation included sunk costs. All eight evaluations included direct program costs such as urine analysis. However, they varied in reporting costs for adjudication and sanctions. Six of the eight evaluations included costs of adjudication or sanctions. All of the evaluations except the Kentucky state evaluation attempted to include the cost of the baseline in the analysis; because it did not, the Kentucky evaluation could not report a net benefit value. The evaluations varied in how they estimated costs. The most common method was to compute average costs. The Multnomah County evaluation differed in that it determined the cost to the agencies directly from the administrative records on the drug court program participants and the comparison group. In addition, the researchers on this evaluation followed a smaller sample of drug court program and control group participants through the system, tracking the use of resources in the various transactions associated with them.
For example, the researchers used a stopwatch at arraignments to determine how much time the judge and other drug court staff spent in this process. In their estimation of benefits attributable to the drug court programs, five of the evaluations included estimates of drug court program and criminal justice system costs and avoided victimization costs (Breaking the Cycle, the D.C. Superior Court Drug Intervention Program dockets, Kentucky, Multnomah County, and Washington State). The D.C. Superior Court and Breaking the Cycle evaluations differed substantially from the others in both the method of measuring victimization and the victim costs. Rather than using arrest as the indicator of crimes committed, the evaluations used self-reported criminal activity. In addition, the studies did not use “quality of life” costs in their measure of cost of crime. One evaluation estimated just drug court program costs avoided (Douglas County, Nebraska). One evaluation (Los Angeles County) included only criminal justice system costs avoided during the treatment period, but it did not cover recidivism. The Maricopa County evaluation only reported average costs. No evaluation we reviewed presented the likelihood of estimated costs and benefits, and only two evaluations conducted sensitivity analysis. In the Los Angeles County Drug Court evaluation, participants were partitioned into low-risk, medium-risk, and high-risk groups, on the basis of criminal history and other risk factors. The Kentucky Drug Court Program evaluation placed its estimates of the benefit-to-cost ratio within ranges that depended on whether accounting or economic costs were included, and it included earnings improvements. This appendix provides a general description of drug court program components and describes program participants in the evaluations we reviewed. Drug court programs rely on a combination of judicial supervision and substance abuse treatment to motivate defendants’ recovery. 
Judges preside over drug court proceedings, which are called status hearings; monitor defendants’ progress with mandatory drug testing; and prescribe sanctions and rewards, as appropriate, in collaboration with prosecutors, defense attorneys, treatment providers, and others. Drug court programs can vary in terms of the substance abuse treatment required. However, most programs offer a range of treatment options and generally require a minimum of about 1 year of participation before a defendant completes the program. Drug court program participants can vary across programs according to differences in eligibility requirements and jurisdictions. The participants in the drug court programs we reviewed were predominantly male, generally unemployed at the time of program entry, and had prior involvement in the criminal justice system. This section describes typical drug court program approaches, screening processes and participant eligibility requirements, completion requirements, treatment components, and sanctions. Drug court programs generally have taken two approaches to processing cases: (1) deferred prosecution (diversion) and (2) post-adjudication. In the diversion model, the courts defer prosecution contingent on the offender’s agreement to participate in the drug court program. Deferred adjudication models do not require the defendant to plead guilty. Instead, the defendant enters the drug court before pleading to a charge. Defendants who complete the treatment program are not prosecuted further or their charges are dismissed. Failure to complete the program results in prosecution for the original offense. This approach is intended to capitalize on the trauma of arrest and offers defendants the opportunity to obtain treatment and avoid the possibility of a felony conviction. In contrast, offenders participating in a post-adjudication (post-plea) drug court program plead guilty to the charge(s) and their sentences are suspended or deferred.
Upon successful completion of the program, sentences are waived and in many cases records are expunged. This approach provides an incentive for the defendant to rehabilitate because progress toward rehabilitation is factored into the sentencing determination. Both of these approaches provide the offender with a powerful incentive to complete the requirements of the drug court program. Some drug court programs use both deferred prosecution and post-adjudication approaches and assign defendants to an approach depending on the severity of the charge. Additionally, drug court programs may also combine aspects of these models into a hybrid, or combined, approach. Defendants reach the drug court program from different sources and at varying points in case processing. Screening defendants to determine eligibility for a drug court program generally includes screening them for legal and clinical eligibility. Initially, defendants are screened for legal eligibility, based on criminal history and current case information. Depending on the program, an assistant district or prosecuting attorney, court clerk, or drug court coordinator typically conducts the review. Criteria for legal eligibility typically include charging offense, prior convictions, pending cases, and supervision status. Drug courts generally accept defendants charged with drug possession or other nonviolent offenses such as property crimes. Some drug court programs allow defendants who have prior convictions to participate, and others do not. Federal grants administered under Title II of the 21st Century Department of Justice Appropriations Authorization Act are not supposed to be awarded to any drug court program that allows either current or past violent offenders to participate in its program. After defendants are determined to be legally eligible for the program, treatment providers or case managers will typically determine defendants’ clinical eligibility.
This can be determined through structured assessment tests, interviews, or even preliminary drug test results. While drug courts generally only accept defendants with substance abuse problems, they vary in the level of addiction or type of drug to which defendants are addicted. For example, some programs do not accept defendants who only have addictions to marijuana or alcohol, while others do. Clinical eligibility can also include factors such as medical or mental health barriers and motivation or treatment readiness. In several drug court programs in our review, the drug court judge’s satisfaction with or assessment of an offender’s motivation and ability to complete the program was a factor used to screen defendants. Drug court programs typically require defendants to complete a 1-year treatment program in order to graduate from or complete the program. Some programs impose other conditions that participants must meet in addition to treatment. These conditions could include remaining drug-free for a minimum amount of time, not being arrested for a specified period of time, maintaining employment or obtaining an educational degree or certification, or performing community service. The central element of all drug court programs is attendance at the regularly scheduled status hearings at which the drug court judge monitors the progress of participants. Monitoring is based on treatment provider reports on such matters as drug testing and attendance at counseling sessions. The judge is to reinforce progress and address noncompliance with program requirements. The primary objectives of the status hearing are to keep the defendant in treatment and to provide continuing court supervision. More broadly, judicial supervision includes regular court appearances and direct in-court interaction with the judge, as well as scheduled case manager visits. Monitoring participants’ substance use through mandatory and frequent testing is a core component of drug court programs. 
Programs vary in the specific policies and procedures regarding the nature and frequency of testing. For example, in some programs in our review participants were required to call to find out whether they had to be tested in a given period or on a randomly selected day of the week. The frequency of testing generally varied depending on the stage or phase of the program that participants were in. In most drug court programs, treatment is designed to last at least 1 year and is generally administered on an outpatient basis with limited inpatient treatment, as needed, to address special detoxification or relapse situations. Many of the programs operate with the philosophy that because drug addiction is a disease, relapses can occur and that the court must respond with progressive sanctions or enhanced treatment, rather than immediate termination. Treatment services are generally divided into three phases. Detoxification, stabilization, counseling, drug education, and therapy are commonly provided during phases I and II, and in some instances, throughout the program. Other services relating to personal and educational development, job skills, and employment services are provided during phases II and III, after participants have responded to initial detoxification and stabilization. Housing, family, and medical services are frequently available throughout the program. In some instances, a fourth phase consisting primarily of aftercare-related services is provided. The objectives of drug court program treatment are generally to (1) eliminate the program participants’ physical dependence on drugs through detoxification; (2) treat the defendant’s craving for drugs through stabilization (referred to as the rehabilitation stage), during which frequent group or individual counseling sessions are generally employed; and (3) focus on helping the defendant obtain education or job training, find a job, and remain drug free.
Drug court programs can also either directly provide a variety of other services and support or refer participants to them; these may include medical or health care, mentoring, and educational or vocational programs. The use of community-based treatment self-help groups, such as Alcoholics Anonymous (AA) and Narcotics Anonymous (NA), and aftercare programs also varies across drug court programs. Judges generally prescribe sanctions and rewards as appropriate in collaboration with prosecutors, defense attorneys, treatment providers, and others. Typical sanctions for program noncompliance include oral warnings from the judge; transfer to an earlier stage of the program; attendance at more frequent status hearings, treatment sessions, or drug tests; and serving jail time for several days or weeks. The approach or philosophy for how a drug court judge prescribes sanctions can vary. For example, some judges use a graduated sanctions approach, where sanctions are applied in increasing severity. Other judges may use discretion in prescribing sanctions, assessing participants’ noncompliance on a case-by-case basis. Drug court programs typically use various criteria for ending a defendant’s participation in the program before completion. These criteria may include a new felony offense, multiple failures to comply with program requirements such as not attending status hearings or treatment sessions, and a pattern of positive drug tests. Before terminating a defendant for continuing to use drugs, drug court programs generally will use an array of treatment services and available sanctions. There are no uniform standards among all programs on the number of failed drug tests and failures to attend treatment sessions that lead to a participant’s termination.
Drug court program judges generally make decisions to terminate a program participant on a case-by-case basis, taking into account the recommendations of others, including the treatment provider, prosecutor, and defense counsel. Relapses are expected, and the extent to which noncompliance results in terminations varies from program to program. Once a defendant is terminated, he or she is usually referred for adjudication or sentencing. All of the evaluations we reviewed reported some basic substance use or demographic data about the drug court program participants. However, not every evaluation reported the same types of information or provided equivalent levels of detail. The types of drugs participants reported using varied. Cocaine (crack or powder) was selected by the highest percentage of participants as the primary drug of choice in most of the programs reporting these data. However, in Baltimore, 77 percent reported heroin as their primary drug of choice, whereas 43 percent of participants in the Tacoma, Washington, Breaking the Cycle program reported using methamphetamine, suggesting regional differences in drugs of choice. Participants in evaluations we reviewed did not always report “hard” drugs as their primary substances of choice. In several evaluations we reviewed, participants reported that alcohol or marijuana was their primary drug of choice. For example, in Chester County, 47 percent of participants reported marijuana as their primary drug of choice, and 84 percent of participants in Maricopa County reported alcohol. The participants in the drug court programs we reviewed were generally in their early 30s, predominantly male, and generally unemployed at the time of program entry. Participants’ average age at program entry ranged from 24 years (in a misdemeanor-only drug court program in New Castle County, Delaware) to 36 years (in the Baltimore City Drug Treatment Court).
In the majority of these evaluations, participants were, on average, between 30 and 35 years old. Participants were also predominantly male. The percentage of male defendants participating in drug court programs we reviewed ranged from 46 percent in Bakersfield, California, to 88 percent in the Jacksonville, Florida, Breaking the Cycle program. Generally, however, about 60 to 80 percent of participants in these programs were male, which corresponds to other reviews estimating that across multiple drug court programs about 70 percent of participants are male. In about half of the evaluations we reviewed that reported data about participants’ race or ethnicity, the majority (50 percent or more) of the participants were white. However, in some programs, participants were predominantly (that is, over 75 percent of the sample) of one racial or ethnic background. For example, in five of the six Washington state drug court programs, between 79 and 92 percent of participants were white; and in the Baltimore and Washington, D.C., drug court programs, between 89 and 99 percent of participants were black. Some other programs reported that the participants were not predominantly of one racial or ethnic background. For example, in Los Angeles County, 23 percent of participants were white, 30 percent black, and 43 percent Hispanic or other. Drug court participants in the evaluations we reviewed that reported data on employment and educational status were generally unemployed with less than a high school education at program entry. Between 16 and 82 percent of participants were employed at the time of entry into the program. However, in about two-thirds of the programs reporting these data, less than half of the participants were employed at the time of entry into the drug court program.
Similarly, in evaluations that reported information about participants’ educational status, 24 to 75 percent of participants reported that they had less than a high school education at the time of entry into the program. Generally, about 30 to 60 percent had less than a high school education at the time of entry into the program. For example, in the New York State evaluation, across the programs, the median percentage of participants who had received a general equivalency diploma (GED) or high school diploma was 45 percent; similarly, the median percentage employed or in school was 34 percent. Participants’ prior involvement in the criminal justice system, as reported in evaluations we reviewed, was generally considerable. Most participants were not first-time offenders. However, the types and severity of involvement varied. In the evaluations that reported these data, the average number of prior arrests (of any kind) ranged from about 1 to about 13 per participant over different time periods, ranging from 1 to 5 years before entry into the drug court program. Additionally, in the six New York State drug court programs, the percentage of participants with prior convictions ranged from 19 percent in Queens, which reports that it does not accept participants with prior felony convictions, to 69 percent in Syracuse. However, the researchers note that less than one-third of prior convictions in all courts were drug-related, which indicates that participants are involved in a wider range of criminal activity. This description of drug court participants’ criminal justice system involvement in the evaluations we reviewed was similar to descriptions reported by other sources. For example, the Drug Court Clearinghouse and Technical Assistance Project’s 2001 survey of responding adult drug court programs reported that 9 percent of participants had no prior felony convictions and 56 percent had been previously incarcerated.
In most of the evaluations we reviewed, adult drug court programs led to statistically significant recidivism reductions during periods of time that generally corresponded to the length of the drug court program—that is, within-program. For the remaining programs, evaluations showed no significant differences in recidivism. Our analysis of the evaluations showed that lower percentages of drug court program participants than of comparison group members were rearrested or reconvicted. Program participants also had fewer recidivism events—that is, incidents of rearrests or reconvictions—and a longer time until rearrest or reconviction than comparison group participants. Recidivism reductions were observed for any felony offense and for drug offenses, whether they were felonies or misdemeanors. However, the evaluations did not provide conclusive evidence that specific drug court program components, such as the judge’s characteristics or behavior, the amount of treatment received, the level of supervision provided, and the sanctions for noncompliance with program requirements, affect participants’ within-program recidivism. In most of the programs that reported post-program data, recidivism reductions occurred for some period of time after participants completed the drug court program. The 27 evaluations we reviewed provided within-program recidivism comparisons between drug court program participants and an appropriate control or comparison group of non-drug court defendants in 23 different drug court programs. Regardless of the type of comparison group used, within-program recidivism reductions occurred for various measures of recidivism, such as rearrests and reconvictions, and prevailed across different types of offenses. Less clear, however, was the effect that certain drug court program components, such as treatment options or sanctions, had on participants’ recidivism outcomes.
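The evaluations we reviewed used a variety of statistical tests to judge significance. As one concrete illustration of how a gap in recidivism rates can be judged statistically significant, the sketch below applies a standard two-proportion z-test to hypothetical counts; the choice of test and all of the numbers are ours, not drawn from any particular evaluation.

```python
# Illustrative sketch (hypothetical counts): a standard two-proportion
# z-test, one common way to judge whether a gap in rearrest rates between
# drug court participants and a comparison group is statistically significant.
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two sample proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 60 of 200 participants rearrested vs. 90 of 200 comparison
# group members, a 15-percentage-point gap.
z = two_proportion_z(60, 200, 90, 200)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 0.05 level (two-tailed)
```

With these sample sizes a 15-percentage-point gap comfortably clears the conventional significance threshold; with far smaller samples, the same gap might not.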
Drug court programs included in our review were associated with reductions in overall rearrest rates—that is, the percentage of a group arrested for any new offense (felony or misdemeanor) in a given period of time. Thirteen drug court programs reported overall rearrest data. Ten of these found statistically significant reductions in overall rearrest rates for drug court program participants. Across studies showing rearrest reductions, rates of drug court program participants generally ranged from about 10 to 30 percentage points below those of the comparison group. Table 7 shows these differences in rearrest rates by the length of the time frame covered. In two drug court programs (Breaking the Cycle program in Jacksonville and the D.C. Superior Court treatment docket), the program did not lead to significant differences in overall within-program recidivism. The absence of a significant difference suggests that these drug court programs did not necessarily lead to recidivism reductions. In one of these programs (the D.C. Superior Court), two types of drug court program interventions were examined—(1) a treatment docket, consisting of an intensive treatment-only intervention and (2) a sanctions docket, consisting of an intervention that combined referrals to treatment with graduated sanctions. The evaluation showed statistically significant reduction in rearrests for the sanctions docket. However, results for the treatment docket (treatment alone) were not statistically significant. A third drug court program in Escambia County showed mixed results. The drug court program showed no differences for overall rearrest rates (both felonies and misdemeanors) but showed a significant reduction in felony rearrest rates. The evaluations we reviewed showed lower reconviction rates—the percentage of a group convicted for a new offense in a given period of time—for drug court program participants than for comparison group members. 
Almost all of the programs (10 of 12) with these data reported a statistically significant reduction in reconviction rates for drug court program participants in one of the time frames covered. The two programs that did not show statistically significant reductions nonetheless reported results in the direction of reduced reconvictions, but not at a statistically significant level. Table 8 displays the differences in reconviction rates between drug court participants and comparison group members within 1 year and within 2 to 3 years of entry into their respective programs. The significant differences in reconviction rates between drug court participants and comparison group members ranged from 8 to 21 percentage points within 1 year of entry into the drug court program. For the eight drug court programs in which follow-up data were provided for more than 1 year, statistically significant differences in reconviction rates ranged between 5 and 25 percentage points. In five of these drug court programs, the differences in reconvictions generally increased or remained about the same as the length of the follow-up period increased. Consistent with reducing the percentage of participants rearrested or reconvicted within a given period, drug court program effects may occur by reducing the number of recidivism events—the number of arrests or convictions within a particular period—that participants commit and by increasing the time until participants commit an offense leading to an arrest or reconviction. In 8 of 12 programs, drug court participants had fewer rearrests and in 7 of 9 programs they had fewer reconvictions than comparison group members. Per 100 drug court program and comparison group participants, drug court program participants had between 9 and 90 fewer arrests during the program, or between 18 and 89 fewer convictions during the program, depending on the recidivism measure reported.
Drug court program participants also generally had longer times to first arrest or conviction than comparison group members. In 11 of 16 programs in which the time to first event was reported, drug court program participants had longer times to the first recidivism event. Evidence showed recidivism reductions for different types of offenses, specifically for felonies and drug offenses, which may indicate decreased involvement in substance abuse. In almost all (9 of 12) of the programs that reported data on felony offense recidivism rates (either rearrest or reconviction), drug court participants had lower felony recidivism rates or fewer recidivism events. Similarly, in 11 of 14 programs that reported data on drug offense recidivism, drug court participants were either rearrested or reconvicted for drug offenses at lower rates. One drug court program that found no significant reduction in rearrests did find a reduction in specific offense types. Specifically, the evaluation of the drug court program in Escambia County found that there were no significant differences in overall rearrests between drug court participants and comparison group members within 2 years. However, the evaluation found significant reductions in felony rearrests that amounted to a 28-percentage-point reduction in felony rearrest rates for drug court program participants. Because of relatively low participation rates, the researchers cautioned against interpreting such a large reduction in felony rearrests as indicative of the amount of a reduction that could be expected from the drug court program were it to be implemented for larger numbers of defendants. A limited number of evaluations reported information about whether specific drug court program components affect participants’ recidivism, and their results were mixed. 
Drug court program judges, the treatment programs prescribed for participants, and the sanctions used to enforce compliance with drug court program procedures are three of the basic components of drug court programs. Two evaluations that we reviewed provided data on the effects of treatment and supervision on drug court program participants’ within-program recidivism outcomes. None of the evaluations we reviewed explicitly studied the effect of the judge—considered to be a critical component of a drug court program— on participants’ within-program recidivism. In evaluations of two drug court programs, the effects of two basic components of drug courts—substance abuse treatment and sanctions— were assessed, and their results varied. The specific types of substance abuse treatment provided to participants differ among programs. In general, treatment, depending on the needs of the participant, can include detoxification, individual and group counseling on an outpatient basis, substance use prevention education, as well as other health, educational, vocational, or medical services. Similarly, the types and severity of sanctions that judges use to enforce compliance with program requirements vary. These sanctions can include writing an essay, observing drug court proceedings for several days from the jury box, community service, or short jail stays. Drug court programs may use a graduated sanctions approach, where successive infractions are met with increasingly severe sanctions. In other programs, judges may have discretion to apply sanctions, as needed, according to the specifics of the case. To assess the effect of treatment, two evaluations examined recidivism differences between drug court program participants who attended more treatment sessions than other participants. 
In one of the two evaluations that provided the strongest designs for measuring treatment effects—the evaluation of the Baltimore City Drug Treatment Court—treatment contributed to reductions in recidivism. However, in the other—the evaluation of the D.C. Superior Court treatment docket—there were no differences between the treatment docket and the control group, who were assigned to a standard docket, consisting of some level of supervision, monitoring, and access to services. In both of these evaluations, drug court program participants and control group members participated in treatment, but the drug court program participants received more treatment and were engaged in a more structured treatment regime than were control group members. Findings from these two evaluations also indicate that treatment requirements combined with supervision and sanctioning contribute the most to recidivism reductions. While the evaluation of the Baltimore City Drug Treatment Court measured a distinct difference between those who participated more heavily in treatment, it also found that those who received treatment combined with supervision had the lowest recidivism rates. The D.C. Superior Court sanctions docket, which also combined judicial supervision with treatment, specifically engaged participants in a program in which judges applied an established set of graduated sanctions for failing to comply with drug court program requirements. The sanctions docket led to fewer rearrests for participants as compared with the control group assigned to the standard docket. The Baltimore City and D.C. Superior Court sanctions docket results support the contention that treatment requirements that are combined with supervision and sanctioning contribute the most to recidivism reductions. 
While fewer evaluations reported post-program, rather than within-program, recidivism results, those that did indicated that recidivism reductions endure beyond the time that participants are engaged in the program. Evaluations we reviewed reported post-program recidivism over longer periods for 17 drug court programs. In 13 of these 17 programs, drug court program participants had lower rearrest or reconviction rates than comparison group members. One evaluation of 6 programs defined explicit post-program periods of about 1 year, and drug court program participants had lower recidivism than comparison group members in 5 of the 6 programs. Among drug court program participants, graduates had lower post-program recidivism than dropouts. Significantly lower rearrest and reconviction rates were reported for participants in 13 of the 17 drug court programs that reported these data. As shown in table 9, for the 6 drug court programs reporting significant reductions, the differences in rearrest rates between drug court program participants and comparison group members ranged from 4 to 20 percentage points. The evaluation of the Multnomah County drug court program reported only the mean number of arrests (recidivism events), but it also showed a post-program reduction in recidivism using this measure. For the 9 drug court programs reporting significant reductions, the differences in reconviction rates between drug court participants and comparison group members ranged from 5 to 25 percentage points. Evidence that a gap in post-program recidivism can increase over time comes from the evaluation of the Maricopa County drug court program. In Maricopa County, there were no reported differences in recidivism (specifically rearrest rates) during the 1-year within-program time frame. However, a 3-year follow-up evaluation found a significant (11 percentage point) difference in rearrest rates between the drug court program participants and control group members. 
The difference arose largely from an increase in the percentage of control group members that had been rearrested by the end of the 3-year follow-up; 32 percent of the control group were rearrested by 1 year, but by 3 years, 44 percent were rearrested. For the drug court participants, 31 percent had been rearrested at the end of 1 year, and only an additional 2 percent of the sample were rearrested within 3 years. For the six programs with explicit post-program periods, participants had significantly lower recidivism than comparison group members in all but one program, as shown in table 10. In five of the six New York state drug court programs, drug court participants had significantly lower post-program recidivism rates than did their comparison group members. The differences in recidivism, measured by the percentage with a new arrest leading to a conviction for 1 year after drug court participants left the program, ranged from 6 to 13 percentage points, as shown in table 10. These findings indicate that, for some programs at least, drug court programs can be effective at reducing criminal behavior both while participants are engaged in the program and after they leave it. The evaluation of these drug court programs reports evidence that the effects of drug court programs on recidivism do not diminish over time, but notes that, because the post-program period was only 1 year, improved understanding of the longer-run effects of drug courts on recidivism would benefit from additional research that extends the post-program follow-up time frames. Several evaluations compared post-program recidivism outcomes for drug court program graduates with those of dropouts—those participants who were terminated from the program either by their withdrawal or by sanctioning. 
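The widening Maricopa County gap can be traced with simple arithmetic; this sketch only restates the percentages reported above.

```python
# Rearrest percentages reported for the Maricopa County evaluation.
control_y1, control_y3 = 32, 44        # control group, by years 1 and 3
participant_y1 = 31                    # drug court participants, year 1
participant_y3 = participant_y1 + 2    # only 2 more points by year 3

gap_y1 = control_y1 - participant_y1   # within-program gap: 1 point
gap_y3 = control_y3 - participant_y3   # 3-year follow-up gap: 11 points
print(gap_y1, gap_y3)  # → 1 11
```

The near-zero gap at 1 year grows to the reported 11 points at 3 years almost entirely because control group rearrests kept accumulating while participant rearrests largely did not.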
These evaluations show that the post-program recidivism differences observed between all drug court program and comparison group participants arise primarily from the large recidivism differences between drug court program graduates and dropouts. Evaluations that reported these comparisons showed large differences in the recidivism rates of drug court program graduates compared with both dropouts and comparison group members. For example, in three New York state drug court programs, dropouts were four to seven times more likely to be reconvicted than graduates. Specifically, in the Bronx Treatment Court, 29 percent of dropouts were reconvicted during the first year after the program as compared with only 4 percent of graduates. In addition, the post-program recidivism rates of dropouts were generally no different from (or in some cases greater than) the recidivism rates of comparison group members. Evidence about the effectiveness of drug court programs in reducing participants’ substance use relapse is limited and mixed. The evidence on substance use relapse outcomes is limited to data available from eight drug court programs included in our review. The data include drug test results and self-reported drug use; both measures were reported for some programs. Drug test results generally showed significant reductions in use during participation in the program, while self-reported results generally showed no significant reductions in use. Evaluations of these eight drug court programs used two different measures of substance use—drug test results and self-reported use—to compare relapse between participants and comparison group members. Three of these programs used both measures to assess relapse. 
Table 11 shows the comparison group used in each program’s evaluation and the results for the eight drug court programs that presented substance use relapse data, grouped by the type of drug use measure used. All of the drug test results on substance use relapse were limited to those obtained while participants were still engaged in the drug court program. After drug court program participants exit the control of the criminal justice system, researchers must obtain participants’ voluntary consent to obtain substance use relapse data, regardless of whether the data are self-reported or based upon urinalysis or other testing methods. The results on substance use relapse are mixed. Four of the five drug court programs that used drug test results reported reductions in use. However, self-reported data on substance use relapse show contradictory results. In both of the D.C. Superior Court dockets, prior to sentencing, drug court program participants had better drug test results than did defendants who had been randomly assigned to the control group. Similarly, in the Chester County Drug Court, within-program drug relapse was assessed using drug test results. Drug court program participants had fewer positive urinalysis results than did a comparison group of eligible offenders who had been placed on probation in the 10 months prior to the implementation of the drug court program. These results indicate decreased relapse among participants despite the fact that drug court program participants were tested more frequently than were the members of the comparison group. However, in Maricopa County, there were no significant differences in the percentage of drug court participants and the combined control groups that tested positive. One evaluation we reviewed provided limited evidence that judicial status hearings reduce within-program substance use for certain types of drug court program participants. 
This evaluation, an experiment in a drug court program in New Castle County, randomly assigned eligible drug court program defendants to one of two forms of judicial status hearing conditions: (1) biweekly, regularly scheduled status hearings or (2) status hearings scheduled on an as-needed basis. During the 14-week drug court program, participants with antisocial personality disorder (APD) and those who had prior drug treatment episodes had significantly lower levels of substance use—as measured by drug test results—when they were assigned to regularly scheduled biweekly judicial status hearings as compared with when they were assigned to status hearings on an as- needed basis. Four of the six programs with self-reported drug use results reported no significant reductions in use. For example, after sentencing, neither of the D.C. Superior Court dockets’ participants reported lower rates of substance use. Alternatively, in two of the three Breaking the Cycle programs, participants reported lower rates of substance use than did comparison group members. These differences in self-reported substance use persisted even after the researchers controlled for sample differences and selection. The use of self-reported data presents challenges related to underreporting that we have discussed in a prior report. As discussed in appendix V, participants who completed the drug court program (that is, graduates) had much lower recidivism rates than those who dropped out. Further, the recidivism rates for drug court program dropouts are comparable to the rates of comparison group members. Completion rates—an indicator of the extent to which participants successfully complete their drug court program requirements—for participants in selected programs we reviewed ranged from about 30 percent to about 70 percent. 
In evaluations of 16 drug court programs in which completion was assessed, one factor—drug court program participants’ compliance with program procedures—was consistently associated with program completion. These program procedures include attending treatment sessions, producing drug-free urinalysis test results, and appearing at status hearings. No other program factor, such as the severity of the sanction that would be imposed if participants failed to complete the program or the manner in which judges conducted status hearings, predicted participants’ program completion. Several characteristics of the drug court program participants themselves were also associated with an increased likelihood of program completion. These characteristics, while not assessed in all of the programs, include lower levels of prior involvement in the criminal justice system, and age, as older participants were more likely to complete drug court programs than younger ones. Completion rates ranged from 27 percent to 66 percent for 16 drug court programs included in evaluations that assessed program completion. These rates, while consistent with rates reported in other reviews of multiple drug court programs, are not directly comparable because drug court programs have different program completion requirements, the rates were measured over varying time periods, and study designs can affect the completion measures. Participants are not only required to attend treatment, but also to appear in court on a regular basis and follow other rules that are specific to the drug court program. These rules can involve, but are not limited to, policies on drug testing, case manager or probation officer visits, school or job attendance, support groups, and graduation requirements. Table 12 shows completion rates for the drug court programs whose evaluations assessed program completion. As noted earlier, drug court programs vary in their specific program completion requirements. 
Similarly, programs may terminate defendants’ program participation for a variety of reasons. Drug court program participants can be terminated from the program if they are arrested for a new offense, especially for felony offenses; if they regularly fail to appear at status hearings or treatment sessions; or if they repeatedly do not comply with program procedures. Consecutive positive drug test results do not always lead to program termination; in some programs, this could lead to a change in the treatment services provided or sanctions. Drug court participants’ compliance with drug court procedures, such as appearing at treatment sessions and remaining drug-free during the program, was generally a strong predictor of program completion. The level of program compliance is typically indicated by the degree to which participants follow rules and procedures determined by the drug court program, attend required meetings and treatment sessions, and generally progress toward recovery. Given that program completion is related to compliance with drug court procedures, we sought to assess whether the evaluations included in our review provided conclusive evidence about specific aspects of compliance that were associated with program completion. For example, in some drug court programs greater levels of participation were associated with greater likelihood of completing the drug court program. Similarly, participants having more within-program arrests, more instances of warrants for failure to appear, and more positive drug tests were generally less likely to complete the program than those having fewer of these. However, for some of the specific aspects of compliance, the findings were not consistent across drug court programs and in some cases they were even contradictory. For example, one evaluation of four drug court programs (Bakersfield, Creek County, Jackson County, and St. 
Mary’s Parish) found an inconsistent relationship between drug test results and program completion across the four drug court programs. This cross-site evaluation assessed how various aspects of compliance predicted completion in each program using the same measures of compliance in each drug court program. In relation to the association between drug test results and completion, the evaluation found that in one program (Creek County) drug test results did not have a significant effect on completion. In another drug court program (St. Mary’s Parish), drug court program participants that had no positive drug test results were more likely to complete the program than were participants who had comparatively “low” or “moderate” numbers of positive test results, whereas there were no differences in program completion between those participants who had comparatively “high” numbers of positive test results and those who had no positive test results. In the two remaining programs (Bakersfield and Jackson County), participants with positive test results in the low range actually had a higher likelihood of completing the program than participants with no positive test results, while in Jackson County, participants with positive test results in the high range were less likely than those with no positive test results to complete the program. This evaluation suggests that there may not be consistency across every drug court program in the relationship between drug test results and program completion. Some research studies indicate that drug court participants’ first few weeks in treatment are predictive of success. Early engagement in the drug court program has been measured by whether participants attended treatment during the first few weeks after program entry and whether participants have fewer indications of noncompliance (such as failure to appear or warrants) during the first month of program participation. 
One evaluation of five drug court programs in New York State assessed the role of participants’ early engagement in the drug court program on their chances of completing the program. It found, consistently across the drug court programs, that early engagement by participants, measured by whether the participant absconded from program contact within 30 days of program entry, significantly predicted program completion. Across the five New York drug court programs, participants who received warrants within 30 days of program entry were from about three to eight times more likely to fail to complete the drug court program than were participants who did not receive a warrant within 30 days of program entry. Further, in the Brooklyn Treatment Court, participants’ compliance with drug court program procedures early in the program also contributed to participants’ completing 90 days of drug treatment. Those participants who disappeared from contact, prompting the issuance of a police warrant, had lower chances of completing 90 days of treatment than those participants who did not. Additionally, those participants who attended at least 1 day of treatment within 30 days of entering the program also had greater chances of completing 90 days of treatment than those who did not. Other aspects of compliance, such as treatment attendance during the entire program, appearance at status hearings, within-program arrests, and failure to appear, were generally but not consistently associated with completion. For example, the percentage of expected treatment sessions attended increased the likelihood of completion in five of seven drug court programs. On the other hand, in two of the four drug court programs in which they were measured, within-program arrests and failure to appear decreased the likelihood of completion but had no effect in the other two drug court programs. 
Alternatively, in both drug court programs (Multnomah and Clark Counties) in which the effects of within-program sanctions were assessed, an increase in the number of sanctions ordered led to a decrease in the likelihood of completion. In several drug court programs, the effects of various drug court program components were examined to determine the factors that predict program completion. Among the components that were examined were various consequences of program failure and the role of the judge and judicial status hearings. In a few drug court programs, efforts were made to examine the role of individual motivation to participate in drug court programs. In addition, evaluations of a few drug court programs examined the role of social factors in predicting program completion. Specifically, these evaluations assessed variables that measured individuals’ attachments to other individuals and social institutions. Several drug court program evaluations in our review assessed the effects of different sanctions, or legal consequences of failure, on program completion. For example, if a drug court program participant fails to complete the program, he or she may be sentenced to a predetermined incarceration alternative. The results of these evaluations were mixed and not directly comparable. For example, in the Brooklyn Treatment Court, participants with a more serious treatment mandate—that is, those participants who faced longer jail or incarceration terms if they failed to comply with program requirements—were more likely to complete the drug court program than those with a less serious treatment mandate. The predetermined jail sentences for Brooklyn participants who failed to complete the program ranged from 6 months to 4½ years, depending on the severity of the charge (felony or misdemeanor) and prior criminal history (whether it was a first felony offense or not). 
Alternatively, in the drug court program in Suffolk County, the length of the incarceration sentence faced by those who failed to complete the drug court program did not contribute to program completion. The predetermined sentence (minimum length of the most common prison alternative) in Suffolk County ranged from 6 months to 1 year. The researchers who conducted the evaluation did not provide an explanation for these differing effects. The judge has been described in the research as a key component of a drug court program. The presence of consistent judicial monitoring of participants is also described as a distinguishing component of drug court programs, compared with other court-based treatment programs, such as probation. Several evaluations in our review examined the effect of drug court judges on program completion. Each demonstrated that judges can play an important role in contributing to participants’ program completion. However, the effect of the judge on program completion is difficult to distinguish from the requirement to attend judicial status hearings. For example, one evaluation of drug court program completion in Broward County attempted to assess the types of comments that judges made and the effect of these comments on program completion. Court monitoring comments made by judges that the researchers classified as supportive were found to contribute to program completion. On the other hand, in one evaluation of two drug court programs (Clark and Multnomah Counties), the number of appearances before the drug court judge was found to increase the likelihood of program completion. A third evaluation (in New Castle County, Delaware) assessed the effect of the regularity of judicial status hearings on program completion. Participants were randomly assigned to one of two different forms of judicial status hearings: (1) biweekly hearings and (2) hearings on an as-needed basis, as determined by court officials. 
There were two main findings from this evaluation (findings that were also supported by the two replications of the experiment). First, there was no “direct” effect of the different status hearing schedules on program completion rates. Second, there was an interaction between client characteristics and program completion. Drug court program participants who either had prior treatment experiences or were diagnosed as having APD were more likely to complete the program when they were assigned to the regular biweekly status hearings as compared with when they were assigned to status hearings on an as-needed basis. Conversely, participants who were not diagnosed as having APD or who had no prior treatment experiences were more likely to complete the program when they were assigned to status hearings on an as-needed basis, as compared with the biweekly condition. The results from the New Castle County experiment show that the regularity of the schedule of status hearings can contribute to program completion for distinct subpopulations of drug court program participants and that there may not be a “one size fits all” approach to scheduling court appearances for all participants. The evaluations of several drug court programs examined some of the attributes of drug court participants and how those factors were related to program completion. Attributes such as prior substance abuse treatment, prior criminal history, type of drug used, demographic characteristics, and employment and education were assessed. Prior criminal history, whether measured by the number of arrests or convictions prior to program entry, and age were related to program completion; older participants were generally more likely to complete their drug court programs than were younger participants. 
The other attributes were generally found not to be significant predictors of completion, although in a minority of the drug court programs in which they were assessed, they were significant predictors. For example, prior substance abuse treatment was a significant predictor of completion in three of seven drug court programs, but it was not significant in the other four drug court programs. Similarly, other attributes such as type of drug use, race or ethnicity, gender, or employment or education level were not observed to consistently predict completion. The evaluations that assessed participants’ attributes were correlational—that is, they examined the associations among these factors and the probability of program completion. Although not all of the evaluations included all of the same factors in their analyses, some general patterns about the correlates of program completion emerged. Attributes of drug court program participants that were associated with an increased likelihood of program completion include the following: (1) lower levels of prior criminal history; (2) substance use other than cocaine or heroin; (3) employment or school attendance at the time of program intake, along with higher levels of education; and (4) age, as older participants were more likely to complete the programs. One evaluation attempted to measure the effect of motivation and readiness for treatment on program completion. It found that those participants who were better able to recognize their problems, recognize external problems, and were ready for treatment were more likely to complete the drug court program. Four evaluations in our review included sufficient information about seven drug court programs’ costs and benefits to estimate their net benefits—that is, benefits less costs. All but one of the evaluations found that drug court programs were more expensive than conventional case processing. 
The costs to operate drug court programs above the costs of providing conventional case processing services ranged from about $750 to about $8,500 per participant. However, taking into account the drug court programs’ benefits, especially the reduced costs of crime associated with reductions in recidivism, all four evaluations we reviewed reported net benefits ranging from about $1,000 per participant to about $15,000, mostly because of reduced victimization. Additionally, these benefits may underestimate drug court programs’ true benefits because the evaluations did not include indirect benefits (such as reduced medical costs of treated participants). All but one of the evaluations we reviewed found drug court programs more expensive than conventional case processing. The combination of judicial supervision, monitoring, and treatment services that drug court programs typically provide to their participants—services in addition to those that offenders processed with conventional procedures receive—results in additional expense to criminal justice agencies. Table 13 shows the net costs of drug court programs—costs above normal court costs—including supervision, monitoring, and treatment, ranging from somewhat less than $800 to about $8,700 per participant. The Multnomah County drug court program cost about $1,400 less than normal court procedures. Rather than relying on administrative records and budgets, the program’s evaluation used a methodology that closely followed participants through treatment and court adjudication. It calculated the amount of time that each drug court participant and comparison group participant spent in different activities, such as treatment and court hearings. The evaluation found that the judge spent less time with offenders in the drug court program than in normal court processing and that this led to the estimated decreased costs. 
The monetary benefits of reduced recidivism can be placed in two categories: (1) reduced future expenditure by criminal justice agencies and (2) reduced future victimization. Any arrest is a cost to a number of criminal justice agencies, including police, prosecutors, courts, corrections departments, and probation agencies. Reducing arrests by reducing recidivism would benefit these agencies. The justice system’s benefits in the seven drug court programs we reviewed ranged from none to about $3,800 per participant. Some of the range is due to methodological differences among the evaluations, but some may be due to differences among the communities served by the courts. A reduction in recidivism also benefits people who might otherwise be victimized. The costs to potential victims of crime that are thus avoided include direct monetary costs, such as the value of property that is not stolen and expenses for health care that are not incurred, and quality-of-life costs, such as costs for pain and suffering that are not experienced. Benefits to potential victims reported in the evaluations we reviewed ranged from about $500 to $24,000 per participant. The evaluations we reviewed monetized the benefits from averted crime inconsistently, but the differences in method do not explain the range of benefits. For example, excluding the cost of pain and suffering from victimizations that do not occur underestimates the true cost of crime. However, the two evaluations that did not include these costs—D.C. Superior Court and Breaking the Cycle—found the highest and lowest per participant dollar values of reduced recidivism, at $24,000 and $500, respectively. Although six of the seven drug court programs were more costly than conventional case processing, the monetary value of the benefits from reduced recidivism—to the justice system and potential victims—was greater than the costs, producing positive net benefits in all seven programs. 
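The net-benefit arithmetic just described can be summarized as follows. The dollar amounts in the example are hypothetical placeholders, not values from any specific evaluation in this report.

```python
# Hedged sketch of the net-benefit calculation:
# net benefit = (benefit to the justice system + avoided victimization
#                costs) - (program cost above conventional processing).
# All dollar amounts below are hypothetical.

def net_benefit(justice_system_benefit, victim_benefit, net_cost):
    """Benefits from reduced recidivism minus the program's net cost."""
    return justice_system_benefit + victim_benefit - net_cost

# Even a program costing $5,000 more than conventional case processing
# can yield a positive net benefit when avoided victimization is large.
print(net_benefit(justice_system_benefit=2000,
                  victim_benefit=10000,
                  net_cost=5000))  # 7000
```

As the example suggests, a program can show a positive overall net benefit even when the justice system component alone does not cover the program's net cost, which is the pattern reported for five of the seven programs reviewed.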
The net benefits of the seven drug court programs we reviewed ranged from about $1,000 to about $15,000 per participant. Table 14 presents benefits to the justice system and to potential victims, the net costs of the programs (costs of the drug programs above conventional case processing), and net benefits, or benefits minus net costs. However, as shown in the last two columns of table 14, the financial cost savings due to reductions in recidivism for the criminal justice agencies were not always positive. Positive financial cost savings for the criminal justice agencies were indicated for only two programs—Breaking the Cycle Program in Birmingham and the drug court program in Multnomah County. None of the evaluations included indirect, or secondary, benefits to society derived from a reduction in participants’ substance abuse. Indirect benefits might include costs avoided because treated drug addicts did not use medical services that would otherwise have been required. After successful drug treatment, such individuals might have fewer periods of unemployment and might be more productive, earning higher wages. To the extent that they pay higher taxes as a result, these are benefits to taxpaying members of society. While these benefits are difficult to quantify in assessing a drug court program, their absence suggests that reported net benefits are understated. In addition to those named above, Mary Catherine Hult, David P. Alexander, Michele C. Fejfar, Benjamin A. Bolitzer, Harold J. Brumm, Jr., Wayne A. Ekblad, Ann H. Finley, Ronald La Due Lake, Jean L. McSween, Albert Schmidt, Barry J. Seltser, Douglas M. Sloane, Shana B. Wallace, and Kathryn G. Young made key contributions to this report. Anspach, D. F., and A. S. Ferguson. Assessing the Efficacy of Treatment Modalities in the Context of Adult Drug Courts: Final Report. Portland, Maine: University of Southern Maine, Department of Sociology, 2003. Aos, S., P. Phipps, R. Barnoski, and R. Lieb. 
The Comparative Costs and Benefits of Programs to Reduce Crime, A Review of National Research Findings with Implications for Washington State. Olympia, Wash.: Washington State Institute for Public Policy, May 1999. Banks, D., and D. C. Gottfredson. “The Effects of Drug Treatment and Supervision on Time to Rearrest among Drug Treatment Court Participants.” Journal of Drug Issues, vol. 33, no. 2 (2003): 385-414. Barnoski, R., and S. Aos. Washington State’s Drug Courts for Adult Defendants: Outcome Evaluation and Cost-Benefit Analysis. Olympia, Wash.: Washington State Institute for Public Policy, 2003. Belenko, S. Research on Drug Courts: A Critical Review 2001 Update. The National Center on Addiction and Substance Abuse at Columbia University, June 2001. Breckenridge, J. F., L. T. Winfree, J. R. Maupin, and D. L. Clason. “Drunk Drivers, DWI ‘Drug Court’ Treatment, and Recidivism: Who Fails?” Justice Research and Policy, vol. 2, no. 1 (2000): 87-105. Brewster, M. P. “An Evaluation of the Chester County (PA) Drug Court Program.” Journal of Drug Issues, vol. 31, no. 1 (2001): 177-206. Carey, S., and M. Finigan. A Detailed Cost Analysis in a Mature Drug Court Setting: A Cost-Benefit Evaluation of the Multnomah County Drug Court. Portland, Ore.: Northwest Professional Consortium, July 2003. Craddock, A. North Carolina Drug Treatment Court Evaluation: Final Report. Washington, D.C.: U.S. Department of Justice, Office of Justice Programs, Drug Court Program Office, 2002. Croxton, F. E., D. J. Cowden, and S. Klein. Applied General Statistics, 3rd edition. Englewood Cliffs, N.J.: Prentice-Hall, 1967. Deschenes, E. P., L. Cresswell, V. Emami, K. Moreno, Z. Klein, and C. Condon. Success of Drug Courts: Process and Outcome Evaluations in Orange County, California, Final Report. Submitted to the Superior Court of Orange County, California, September 20, 2001. Deschenes, E. P., I. Imam, T. L. Foster, L. Diaz, V. Moreno, L. Patascil, D. Ward, and C. Condon. 
Evaluation of Orange County Drug Courts for Orange County Superior Courts. Richmond, Calif.: The Center for Applied Local Research, 1999. Deschenes, E. P., I. Imam, E. Castellonos, T. L. Foster, C. Ha, D. Ward, C. Coley, and K. Michaels. Evaluation of Los Angeles County Drug Courts. Richmond, Calif.: The Center for Applied Local Research, 2000. Deschenes, E. P., S. Turner, P. Greenwood, and J. Chiesa. An Experimental Evaluation of Drug Testing and Treatment Interventions for Probationers in Maricopa County, Arizona. Santa Monica, Calif.: RAND, July 1996. Festinger, D. S., D. B. Marlowe, P. A. Lee, K. C. Kirby, G. Bovasso, and A. T. McLellan. “Status Hearings in Drug Court: When More Is Less and Less Is More.” Drug and Alcohol Dependence, vol. 68 (2002): 151-157. Fielding, J. E., G. Tye, P. L. Ogawa, I. J. Imam, and A. M. Long. “Los Angeles County Drug Court Programs: Initial Results.” Journal of Substance Abuse Treatment, vol. 23 (2002): 217-224. Finigan, M. An Outcome Program Evaluation of the Multnomah County S.T.O.P. Drug Diversion Program. Portland, Ore.: Northwest Professional Consortium, 1998. Goldkamp, J. S., M. D. White, and J. B. Robinson. “Do Drug Courts Work? Getting Inside the Drug Court Black Box.” Journal of Drug Issues, vol. 31, no. 1 (2001): 27-72. Goldkamp, J. S., M. D. White, and J. B. Robinson. From Whether to How Drug Courts Work: Retrospective Evaluation of Drug Courts in Clark County (Las Vegas) and Multnomah County (Portland)—Phase II Report from the National Evaluation of Drug Courts. Philadelphia, Pa.: Crime and Justice Research Institute, 2001. Goldkamp, J. S., M. D. White, and J. B. Robinson. Retrospective Evaluation of Two Pioneering Drug Courts: Phase I Findings from Clark County, Nevada, and Multnomah County, Oregon: An Interim Report of the National Evaluation of Drug Courts. Philadelphia, Pa.: Crime and Justice Research Institute, 2000. Gottfredson, D. C., S. S. Najaka, and B. Kearley. 
“Effectiveness of Drug Treatment Courts: Evidence from a Randomized Trial.” Criminology and Public Policy, vol. 2, no. 2 (2003): 171-196. Gottfredson, D. C., and M. L. Exum. “The Baltimore City Drug Treatment Court: One-Year Results from a Randomized Study.” Journal of Research in Crime and Delinquency, vol. 39, no. 3 (2002): 337-356. Harrell, A., S. Cavanagh, and J. Roman. Final Report: Findings from the Evaluation of the D.C. Superior Court Drug Intervention Program. Washington D.C.: Urban Institute, 1998. Harrell, A., and J. Roman. “Reducing Drug Use and Crime among Offenders: The Impact of Graduated Sanctions.” Journal of Drug Issues, vol. 31, no. 1 (2001): 207-232. Harrell, A., O. Mitchell, J. Merrill, and D. Marlowe. Evaluation of Breaking the Cycle. Washington D.C.: The Urban Institute, February 2003. Harrell, A., O. Mitchell, A. Hirst, D. Marlowe, and J. Merrill. “Breaking the Cycle of Drugs and Crime: Findings from the Birmingham BTC Demonstration.” Criminology and Public Policy, vol. 1, no. 2 (2002): 189-216. Listwan, S. J., J. L. Sundt, A. M. Holsinger, and E. J. Latessa. “Effect of Drug Court Programming on Recidivism: The Cincinnati Experience.” Crime & Delinquency, vol. 49, no. 3 (2003): 389-441. Logan, T. K., W. Hoyt, and C. Leukefeld. Kentucky Drug Court Outcome Evaluation: Behavior, Costs, and Avoided Costs to Society. Lexington, Ky.: Center on Drug and Alcohol Research, University of Kentucky, October 2001. Marlowe, D. B., D. S. Festinger, and P. A. Lee. “The Judge Is a Key Component of Drug Court.” National Drug Court Institute Review, vol. IV, no. 2 (2004): 1-34. Marlowe, D. B., D. S. Festinger, P. A. Lee, M. Schepise, J. E. R. Hazzard, J. C. Merrill, F. D. Mulvaney, and A. T. McLellan. “Are Judicial Status Hearings a Key Component of Drug Court? During-Treatment Data from a Randomized Trial.” Criminal Justice and Behavior, vol. 30, no. 2 (2003): 141-162. Martin, T. J., C. Spohn, R. K. Piper, and E. Frenzel-Davis. 
Phase III Douglas County Drug Court Evaluation: Final Report. Omaha, Neb.: Institute for Social and Economic Development, May 2001. Miethe, T. D., H. Lu, and E. Reese. “Reintegrative Shaming and Recidivism Risks in Drug Court: Explanations for Some Unexpected Findings.” Crime & Delinquency, vol. 46, no. 4 (2000): 522-541. Office of Management and Budget Circular A-94, “Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs.” October 29, 1992, revised January 29, 2002. Peters, R. H., and M. R. Murrin. “Effectiveness of Treatment-Based Drug Courts in Reducing Criminal Recidivism.” Criminal Justice and Behavior, vol. 27, no. 1 (2000): 72-96. Peters, R. H., and M. R. Murrin. Evaluation of Treatment-Based Drug Courts in Florida’s First Judicial Circuit. Tampa, Fla.: Department of Mental Health Law and Policy, Louis de la Parte Florida Mental Health Institute, University of South Florida, 1998. Peters, R. H., A. L. Haas, and M. R. Murrin. “Predictors of Retention and Arrest in Drug Courts.” National Drug Court Institute Review, vol. II, no. 1 (1999): 30-57. Rempel, M., and C. DeStefano. “Predictors of Engagement in Court-Mandated Treatment: Findings from the Brooklyn Treatment Court, 1996-2000.” Journal of Offender Rehabilitation, vol. 33, no. 4 (2001): 87-124. Rempel, M., D. Fox-Kralstein, A. Cissner, R. Cohen, M. Labriola, D. Farole, A. Bader, and M. Magnani. The New York State Adult Drug Court Evaluation: Policies, Participants, and Impacts. New York: Center for Court Innovation, 2003. Roman, J., and A. Harrell. “Assessing the Costs and Benefits Accruing to the Public from a Graduated Sanctions Program for Drug-Using Defendants.” Law and Policy, vol. 23, no. 2 (2001): 237-268. Roman, J., J. Woodard, A. Harrell, and S. Riggs. Final Report: A Methodology for Measuring Costs and Benefits of Court-Based Drug Intervention Programs Using Findings from Experimental and Quasi-experimental Evaluations. 
Washington D.C.: The Urban Institute, December 1998. Schiff, M., and W. C. Terry, III. “Predicting Graduation from Broward County’s Dedicated Drug Treatment Court.” The Justice System Journal, vol. 19, no. 3 (1997): 291-310. Senjo, S. R., and L. A. Leip. “Testing and Developing Theory in Drug Court: A Four-Part Logit Model to Predict Program Completion.” Criminal Justice Policy Review, vol. 12, no. 1 (2001): 66-87. Spohn, C., R. K. Piper, T. Martin, and E. D. Frenzel. “Drug Courts and Recidivism: The Results of an Evaluation Using Two Comparison Groups and Multiple Indicators of Recidivism.” Journal of Drug Issues, vol. 31, no. 1 (2001): 149-176. Stokey, E., and R. Zeckhauser. A Primer for Policy Analysis. New York: W.W. Norton and Company, 1978. Truitt, L., W. M. Rhodes, N. G. Hoffman, A. M. Seeherman, S. K. Jalbert, M. Kane, C. P. Bacani, K. M. Carrigan, and P. Finn. Evaluating Treatment Drug Courts in Kansas City, Missouri and Pensacola, Florida: Final Reports for Phase I and Phase II. Cambridge, Mass.: Abt Associates Inc., 2002. Turner, S., P. Greenwood, T. Fain, and E. Deschenes. “Perceptions of Drug Court: How Offenders View Ease of Program Completion, Strengths and Weaknesses, and the Impact on Their Lives.” National Drug Court Institute Review, vol. II, no. 1 (1999): 58-81. Wilson, D. B., O. Mitchell, and D. L. MacKenzie. “A Systematic Review of Drug Court Effects on Recidivism.” Forthcoming. Wolfe, E., J. Guydish, and J. Termondt. “A Drug Court Outcome Evaluation Comparing Arrests in a Two Year Follow-Up Period.” Journal of Drug Issues, vol. 2, no. 4 (2002): 1155-1172.
Drug court programs, which were established in the late 1980s as a local response to increasing numbers of drug-related cases and expanding jail and prison populations, have become popular nationwide in the criminal justice system. These programs are designed to reduce defendants' repeated crime (that is, recidivism) and substance abuse by engaging them in judicially monitored substance abuse treatment. However, determining whether drug court programs are effective at reducing recidivism and substance use has been challenging because much of the available empirical evidence is methodologically weak. The 21st Century Department of Justice Appropriations Authorization Act requires that GAO assess drug court program effectiveness. To meet this mandate, GAO conducted a systematic review of drug court program research, from which it selected 27 evaluations of 39 adult drug court programs that met its criteria for, among other things, methodological soundness. This report describes the results of that review of published evaluations of adult drug court programs, particularly relating to (1) recidivism outcomes, (2) substance use relapse, (3) program completion, and (4) the costs and benefits of drug court programs. The Department of Justice (DOJ) reviewed a draft of this report and had no comments. The Office of National Drug Control Policy reviewed a draft of this report and generally agreed with the findings. Most of the adult drug court programs assessed in the evaluations GAO reviewed led to recidivism reductions during periods of time that generally corresponded to the length of the drug court program. 
GAO's analysis of evaluations reporting these data for 23 programs showed the following: (1) lower percentages of drug court program participants than comparison group members were rearrested or reconvicted; (2) program participants had fewer recidivism events than comparison group members; (3) recidivism reductions occurred for participants who had committed different types of offenses; and (4) there was inconclusive evidence that specific drug court components, such as the behavior of the judge or the amount of treatment received, affected participants' recidivism while in the program. Recidivism reductions also occurred for some period of time after participants completed the drug court program in most of the programs reporting these data. Evidence about the effectiveness of adult drug court programs in reducing participants' substance use relapse is limited to data available from eight drug court programs. Evaluations of these eight drug court programs reported mixed results on substance use relapse. For example, drug test results generally showed significant reductions in use during participation in the program, while self-reported results generally showed no significant reductions in use. Completion rates, that is, the percentage of individuals who successfully completed a program, ranged from 27 to 66 percent in the selected adult drug court programs. Other than participants' compliance with drug court program procedures, no other program factor (such as the severity of the sanction that would be invoked if participants failed to complete the program) consistently predicted participants' program completion. A limited number of evaluations—four evaluations of seven adult drug court programs—provided sufficient cost and benefit data to estimate their net benefits. 
Although the cost of six of these programs was greater than the costs to provide criminal justice services to the comparison group, all seven programs yielded positive net benefits, primarily from reductions in recidivism affecting judicial system costs and avoided costs to potential victims. Financial cost savings for the criminal justice system (taking into account recidivism reductions) were found in two of the seven programs.
In any real estate transaction, the lender providing the mortgage needs a guarantee that the buyer will have clear ownership of the property. Title insurance is designed to provide that guarantee by generally agreeing to compensate the lender (through a lender’s policy) or the buyer (through an owner’s policy) up to the amount of the loan or the purchase price, respectively. Lenders also need title insurance if they want to sell mortgages on the secondary market, since they are required to provide a guarantee of ownership on the home used to secure the mortgage. As a result, lenders require borrowers to obtain title insurance for the lender as a condition of granting the loan (although the buyer, the seller, or some combination of both may actually pay for the lender’s policy). Lenders’ policies are in force for as long as the loan is outstanding, but end when the loan is paid off (e.g., through a refinancing transaction); however, owners’ policies remain in effect as long as the purchaser of the policy owns the property. Title insurance is sold primarily through title agents, although insurers may also sell policies themselves. Before issuing a policy, a title agent checks the history of a title by examining public records, such as deeds, mortgages, wills, divorce decrees, court judgments, and tax records. If the title search reveals a problem, such as a tax lien that has not been paid, the agent arranges to resolve the problem, decides to provide coverage despite the problem, or excludes it from coverage. The title policy insures the policyholder against any claims that might have existed at the time of the purchase but were not identified in the public record. The title policy does not require that title problems be fixed, but compensates policyholders if a covered problem arises. Except in very limited instances, title insurance does not generally insure against title defects that arise after the date of sale. 
Title searches are generally carried out locally because the public records to be searched are usually only available locally. Title agents or their employees conduct the searches. The variety of sources that agents must check during a title search has fostered the development of privately owned, indexed databases called “title plants.” These plants contain copies of the documents obtained through searches of public records, and they index the copies by property address and update them regularly. Insurers, title agents, or a combination of entities may own a title plant. In some cases, owners allow other insurers and agents access to their plants for a fee. Title insurance premiums are paid only once, at the time of sale or refinancing, to the title agent. In what is called a premium split, agents retain or are paid a portion of the premium amount as a fee for conducting the title search and related work and for their commission. Agents have a fiduciary duty to account for premiums paid to them, and insurers generally have the right to audit the agents’ relevant financial records. The party responsible for paying for the title policies varies by state and even by areas within states. In many cases, the seller pays for the owner’s policy and the buyer pays for the lender’s policy, but the buyer may also pay for both policies or split some or all of the costs with the seller. In most cases, the owner’s and lender’s policies are issued simultaneously by the same insurer, so that the same title search can be used for both policies. The price that the consumer pays for title insurance is determined by applying a rate set by the underwriter or state to the loan value (for the lender’s policy) and home price (for the owner’s policy). 
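The rate-based pricing just described can be sketched as follows. The per-$1,000 rates used here are hypothetical, since actual rates are set by underwriters or by states and vary by jurisdiction.

```python
# Hedged sketch of title insurance pricing: the premium is a rate
# applied to the insured amount (the loan value for a lender's policy,
# the home price for an owner's policy). Rates below are hypothetical.

def title_premium(insured_amount, rate_per_thousand):
    """Premium = rate applied per $1,000 of the insured amount."""
    return insured_amount / 1000 * rate_per_thousand

owners = title_premium(200000, 4.00)   # rate applied to the home price
lenders = title_premium(180000, 3.50)  # rate applied to the loan value
print(owners, lenders)  # 800.0 630.0
```

When the two policies are issued simultaneously from the same title search, many rate schedules charge substantially less than the sum of the two stand-alone premiums; the structure of any such discount is also jurisdiction-specific.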
In a recent nationwide survey, the average cost for simultaneously issuing lender’s and owner’s policies on a $200,000 loan, plus other associated title costs, was approximately $859, or about 28 percent of the average total loan origination and closing fees. Title insurance differs from other types of insurance in key ways. First, in most property and casualty lines, losses incurred by the underwriter account for most of the premium. For example, property-casualty insurers’ losses and loss adjustment expenses accounted for approximately 73 percent of written premiums in 2005. In contrast, losses and loss adjustment expenses incurred by title insurers as a whole were approximately 5 percent of the total premiums written, while the amount paid to or retained by agents (primarily for work related to title searches and examinations and for commissions) was approximately 70 percent. Second, title agents’ roles and responsibilities differ from those of agents for other lines of insurance. Agents in lines of insurance other than title insurance primarily serve as salespeople, while title agents’ work can be a labor-intensive process of searching, examining, and clearing property titles as well as underwriting and traditional sales and marketing. Title agents access and examine numerous public documents, among them tax records, liens, judgments, property records, deeds, encumbrances, and government documents, and then clear or exclude from coverage any title problems that emerge. Depending on the level of technology used, the accessibility of public documents, the relative efficiency of local government recorders’ offices, and other factors, this process can take from a few minutes up to a few weeks or more. In some states, title agents also are responsible for claims up to a specific dollar amount. Most title agents also handle the escrow and closing processes and document recordation after the closing. 
In general, title agents issue the actual insurance policy and, after deducting expenses, remit the title insurer’s portion of the premium. Third, unlike premiums for other types of insurance, title insurance premiums are nonrecurring. That is, title insurers have only one chance to capture the cost of the product from the consumer, unlike other types of insurers that collect premiums at regular intervals for providing ongoing coverage. The title insurance premium amount must cover losses for any future problems that were either not uncovered in the title agent’s search or, for a small number of policies, problems that emerge after the day of closing. Fourth, title insurance has a different coverage period than other types of insurance. With title insurance, coverage begins on the day of closing and goes back in time. Most policies cover events that occurred in the past, including unpaid tax liens, judgments, issues with missing heirs, and forgeries in the document chain of title. The purpose of the title agent’s search is to turn up these problems before closing so that they can be cleared or excluded from coverage. However, if a problem occurred in the past but only emerged after the day of closing and was not excluded from coverage, then the policy would offer protection to the lender and home owner. The comprehensiveness of the agent’s search can be a factor in minimizing such losses. For this reason, title insurance is often referred to as loss prevention insurance, in contrast to other types of insurance that attempt to prospectively minimize exposure to claims. Finally, the title insurance market’s business cycle is more closely related to the real estate market and to interest rates than the business cycle for other types of insurance. 
Typically, this relationship is inverse, so that the revenues of title companies rise when interest rates fall, largely because lower interest rates usually lead to a surge in home buying and refinancing and thus increase demand for title services and products. Under current federal law, the regulation of insurance, including title insurance, is primarily the responsibility of the states. However, title insurance entities are also subject to RESPA, a federal law intended to improve the settlement process for residential real estate. Section 8 of RESPA generally prohibits the giving or accepting of kickbacks and referral fees among persons involved in the real estate settlement process. Section 8 also lays out the conditions under which affiliated business arrangements (ABAs) are permissible. First, the affiliation must be disclosed to the consumer, along with a written estimate of charges. Second, ABA representatives may not require consumers to use a particular settlement service provider. Third, the only thing of value that ABA owners may receive, other than payment for services rendered, is a return on their ownership interest. In addition, HUD has issued policy statements that describe multiple factors, including what it considers to be core title services, that HUD will use in determining if an entity is a bona fide provider of settlement services. HUD is responsible for administering section 8 of RESPA, but its enforcement authority is limited to seeking injunctions against potential violations. Unlike other sections of RESPA (e.g., section 10, which authorizes HUD to assess civil money penalties for certain violations by entities that fail to provide escrow account statements), section 8 of RESPA does not authorize HUD to levy civil money penalties for violations. 
Title insurance markets can be described by various characteristics, such as the following: While high market concentration exists among national title insurers, they market insurance through large numbers of independent and affiliated agents, with the mix varying across states. The use of ABAs—in which a real estate professional, such as a real estate agent, owns a share of a title agency—varied. Processes used by agents to conduct searches and examinations in some states were more efficient than others, and the responsibilities of title agents also varied. Premiums across states are difficult to compare, but they appear to vary significantly. Nationally, five title insurers, or underwriters, captured about 92 percent of the market in 2005 (see fig. 1). Most states were dominated by a group of two or three insurers, sometimes including a regional insurer. For example, in California, about 66 percent of the market share in 2005 was split nearly evenly between the largest two insurers—First American and Fidelity. The remaining approximately 33 percent of the market was predominantly split among the other three national insurers (25 percent) and five regional independent insurers (8 percent). Although they are national insurers, these five major underwriters sell and market title insurance in local markets through networks of direct operations, partial or full ownership of affiliates, and contracts with independent agents. According to the annual reports of the four largest title insurers, they each use between 8,000 and 11,000 agencies to sell their insurance nationwide. Most state markets have two types of title agents: affiliated and independent. Title insurers use both types of agents, depending on conditions in the local market, including local tax policies and established market practices, as well as the level of service the underwriter provides to the agents. 
Affiliated agents carry higher fixed costs to the insurer as owner, and underwriters told us that these costs were especially challenging when the market softened and the insurer’s tax liability for affiliated agents rose. However, insurers also said that with affiliated agents they had more control over the premium split and, because the agents were closely aligned with the underwriter, did not have to provide as much in services, such as training. Underwriters noted that they also benefited from contracting with independent agents because doing so kept their fixed costs low and allowed them to benefit from some tax advantages. However, according to the insurers, contracting has its disadvantages, by obliging the insurers to negotiate a competitive premium split (in nonpromulgated states) or risk having the agent establish a relationship with another underwriter. Independent agents, who work with several underwriters, also may not provide the guaranteed flow of business, and thus the same revenue stream, as affiliated agents. Underwriters balance these benefits and risks when determining which agents they will use in each state. Two underwriters told us that they strive to maintain about an equal balance between affiliated agents and independent agents. Other insurers told us that, because their expenses can be higher by virtue of their ownership interest in affiliated agents, they were reluctant to take on too many affiliated agents and preferred to contract with independent agents, especially when market conditions declined. However, several industry participants told us that underwriters’ purchase and use of affiliated agents in some states had increased significantly over the last 5 years. As shown in figure 2, affiliated agents dominated the market in California, the state with the largest total of premiums written, while independent agents captured the majority of the markets in Colorado, Illinois, and New York. 
Conversely, the Texas market was relatively more evenly balanced, with insurers, affiliated agents, and independent agents sharing the number of premiums written. In Iowa, the state-run Title Guarantee Division of the Iowa Finance Authority has a slight majority of the market and independent agents have most of the remainder. We found that the use of ABAs varied by insurer and location. ABAs generally involved a referring entity, such as a real estate or mortgage professional, or builder, having full or partial ownership of an agency (see fig. 3). For example, a mortgage lender and a title agent might form a new jointly owned title agency, or a builder might buy a portion of a title agency. The owners of ABAs are to split the revenues in proportion to their ownership shares to satisfy antirebating laws. Nationally, the use of ABAs appears to be growing. For example, according to a study done for the Real Estate Services Providers Council (RESPRO), affiliated title agents accounted for approximately 26 percent of title-related closing costs in 2005, up from about 22 percent in 2003. Although precise data showing state-by-state growth were not available, industry participants in some states—especially Colorado, Illinois, Minnesota, and Texas—told us that the number of ABAs in their states had grown significantly. We found that while the basic title search and examination process shared certain elements across states, the process was more efficient in some states than in others. Figure 4 describes the common elements of the title search and examination process, which begins with a request from the consumer’s representative and intake by the title agent. The agent then performs the search, and a title examiner hired by the title agent analyzes the collected documents to identify any potential problems to be cleared. 
Once any identified problems are cleared, exempted from coverage, or insured over, the title agent prepares the closing documents and collects and disburses checks at the closing. Finally, the agent deposits collected funds in escrow accounts, records the deed or title with the relevant local government offices, and submits the title commitment to the insurer for policy issuance. Agents in some states use primarily automated processes, either owning or purchasing access to a title plant. Because of these plants, the title search process in these states can be very efficient, which can decrease the amount of time required to issue a title insurance policy. Some of the most advanced of these title plants have documents scanned from local government sources, indexed and cross-referenced by various types of identifying information. Four of the title data centers we visited had electronic records going back 20 years or more. During a tour of one title plant in Texas, we observed a title examiner obtain nearly all documents pertinent to the title search and examination in electronic format within seconds. If the title examiner did not have immediate access to a necessary document, she would e-mail the owner of that information and have it sent electronically or through the mail from one of the search services to which the plant subscribed, usually within 1 day or less. For this plant, typical turnaround time for a completed title search, examination, and commitment for a title examiner simultaneously working on several titles was 2 to 3 days. In another highly automated plant located in a large urban center, we were told that the typical title search and examination took about 25 minutes. One of the nation’s largest title insurers, First American, recently announced that with new software developments, its agents could produce a fully insured title commitment in 60 seconds for many refinance transactions. 
In contrast, in a less-efficient process, agents in some states must physically search public records, which can add to the time required to issue a policy. In New York, for example, title plants are rare, and title agents commonly employ abstractors and independent examiners who must go to various county offices and courthouses to manually conduct searches. Including the process of clearing title problems and attorney review, one underwriter told us that in New York, the typical title insurance issuance took 90 to 120 days for a purchase and 30 to 45 days for a refinance. Most historical title data are proprietary to each underwriter and are based on previously insured titles. At an underwriter-owned title plant in an East Coast city, described as typical for the region, we saw that although the plant held approximately 1.5 million records of previously insured titles, few records were updated when a new search came in on that same property. Personnel at the plant said that it was too labor-intensive to consolidate all of the files, although not updating the files resulted in a large number of redundancies in records across the plant. Also, in some states, industry participants told us that delays in recording and processing at local government offices contributed greatly to inefficiencies in the issuance process. We found that the extent of title agents' responsibility for claims losses, involvement in the closing process, and ability to set premiums varied widely across states. For example, in some states, agents are responsible for a specific portion of losses on claims. In California and Colorado, the underwriter-agent agreement stipulates that title agents are responsible for up to the first $5,000 of a title claim. Underwriters said that this deductible gave agents an incentive to conduct more diligent searches and examinations. 
In other states, agents are not responsible for a specific portion of a claim but may take responsibility for some part or all of it, especially if the claim is small. According to agents in New York and Minnesota, it is faster, more efficient, and more customer-friendly for the agent to handle smaller claims rather than passing them on to the underwriter. An industry organization said that current, informal agent claims practices show that agents generally take responsibility for claims under $2,500. Independent agents told us that the industry is moving toward having agents bear more of the risk. In fact, agent application and review documents that we obtained from underwriters showed that the number and amount of claims an agent was responsible for were criteria insurers used when deciding whether to retain independent agents. One underwriter told us that although its agents did not have deductibles, the insurer was able to recover about $10 million from agents on claims the underwriter had already paid, through aggressive follow-up on and investigation into possible errors on previously paid claims. Some agents are also involved in more aspects of the closing process. We found that some agents handled the entire closing process, including the escrow, while others did not handle the escrow portion. These practices varied within as well as across states. In California, for example, title agencies have both underwriter- and agent-controlled escrow companies that handle the full escrow process and actively market those services. These agencies offer a full package of closing services, from title search, examination, and clearance to document preparation and disbursement of funds at the closing. Other title agents were independent from escrow companies. 
In some states, such as New York, where it is customary for the home buyer and seller to have a lawyer present at the closing, title agents employ closers, whose chief duty is to handle the checks for taxes and escrow and to record the deed. Similarly, in Illinois, the lawyers actually serve as attorney-agents and are prohibited by the underwriter from handling the escrow. Finally, in some states, title agents determine the amount to charge consumers for the search and examination portion of the premium, while in other states, they do not. The states where they do are referred to as “risk-rate” states because only the insurance, or risk-based, portion of the premium is regulated. In these states, state regulators review underwriters’ rates for the risk-based portion of the premium, but the agents set the fees for search and examination services (generally the larger part of the cost to consumers) without regulatory review. According to ALTA, 30 states plus the District of Columbia are considered risk-rate states. The rest of the states, excluding Iowa, are considered to be all-inclusive because they incorporate charges for the risk-based portion of title insurance and other fees, such as those for the search and examination, in the regulated premium. The premium may or may not include settlement and closing costs. In these all-inclusive states, agents are not able to determine the price they will charge for searches and examinations, because they are required to charge the rates set by the state or the underwriter. Insurers set their premium rates based on their own expected costs and how much of the premium they have agreed to split with the agent. Because title insurance premium rates depend on the amount of the loan or value of the home being insured, premiums differ widely across states. Figure 5 shows the premium rates for median-priced homes in major cities in our sample states. 
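The difference between the two rate structures described above can be summarized in a brief sketch. All dollar amounts below are hypothetical and chosen only for illustration; they are not drawn from any state's filed rates, and the function name is invented for this example:

```python
def consumer_title_cost(regulated_premium, search_exam_fee, state_type):
    """Illustrative total title cost to a consumer (settlement and
    closing fees, which vary by state, are omitted)."""
    if state_type == "risk-rate":
        # Only the risk-based premium is regulated; the agent sets
        # the search and examination fee without regulatory review.
        return regulated_premium + search_exam_fee
    if state_type == "all-inclusive":
        # A single regulated premium already incorporates the search
        # and examination charges.
        return regulated_premium
    raise ValueError(f"unknown state type: {state_type}")

# Hypothetical: a $400 risk premium plus a $600 agent-set fee in a
# risk-rate state, versus a $1,000 all-inclusive regulated rate.
print(consumer_title_cost(400, 600, "risk-rate"))      # → 1000
print(consumer_title_cost(1000, 0, "all-inclusive"))   # → 1000
```

Note that in the risk-rate case, the component the report identifies as generally the larger part of the consumer's cost, the search and examination fee, is precisely the part set without regulatory review.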
One reason title insurance premium rate comparisons are difficult is because, as we previously mentioned, items included in the premium varied by state. A study from insurance regulators in Florida, where rates are promulgated and include the risk portion only, noted that what all-inclusive rates include varies even among the all-inclusive states. According to the study, in Texas and Pennsylvania, the premium includes the risk portion, search and examination costs, and settlement fees, while in California, the all-inclusive rate does not include settlement and closing costs. The Florida study also noted that one state (Utah) includes closing costs but not searches and examinations, and another state (Illinois) allows the entire rate to be determined competitively as either risk-based or all-inclusive. A national survey conducted by Bankrate.com in 2006 also showed significant differences in title premiums across states. This survey of the 50 states and the District of Columbia compiled average mortgage closing costs, including title insurance, search and examination and settlement costs, and origination fees, using data obtained from as many as 15 of the largest national lenders' online quote systems. The survey calculated costs for a standard $200,000 loan in one Zip Code of the largest urban center in each state. The data showed costs ranging from a high of $3,887 to a low of $2,713, with a national average of $3,024. Bankrate.com representatives attributed most of the difference across states to wide disparities in the cost of title insurance, which they found varied almost 64 percent, from a high of $1,164 to a low of $418. The average was $663. However, these data must be viewed with caution because they do not account for differences in what could be included in the premium. Moreover, since these data came from only one Zip Code per state, they may not be representative of other localities. 
Industry officials said that rates vary because of differences in what was included in the rate and in standard business costs in each area. Nearly all of the industry participants we spoke with emphasized that title insurance is a local business, varying both within and across states. They said that state property, trustee, probate, and estate laws could partially explain the rate differences. In some states, these requirements make it much more expensive to do the search and examination work and clear all of the risks through the examination process. Experts told us that trying to compare rates across states would not be meaningful because of the differences in the components of the premium. Among the factors raising questions about the existence of price competition and the resulting prices paid by consumers within the title insurance industry are the following: (1) consumers find it difficult to shop for title insurance and therefore put little pressure on insurers and agents to compete based on price; (2) title agents do not market to consumers, who pay for title insurance, but to those in a position to refer consumers to particular title agents, thus creating potential conflicts of interest; (3) a number of recent investigations by HUD and state regulatory officials have identified instances of alleged illegal activities within the title industry that appear to reduce price competition and could indicate excessive prices; (4) as property values or loan amounts increase, prices paid for title insurance by consumers appear to increase faster than insurers' and agents' costs; and (5) in states where agents' search and examination services are not included in the premium paid by consumers, it is not clear that additional amounts paid to title agents are fully supported by underlying costs. 
Disagreement exists between title industry officials and regulators over the actual extent of price competition within title insurance markets, with industry officials asserting that such competition exists and a number of regulators stating that a lack of competition ultimately results in excessive prices paid by consumers. For several reasons, consumers find it difficult to shop for title insurance based on price, raising questions about the existence of price competition in title insurance markets. First, most consumers buy real estate—and with it, title insurance—infrequently. As a result, they are not familiar with what title insurance is, what reasonable prices might be, or which title agents might provide the best service. According to a study commissioned by the Fidelity National Title Group, Inc., in response to proposed regulatory changes in California, it is typically not worth an individual’s time to become more educated about title insurance, because any resulting savings would likely be relatively small. That is, the cost to consumers of becoming sufficiently educated to make an informed decision is potentially higher than the risk of paying more to a title agent suggested by a real estate or mortgage professional. However, one potential consequence of a failure to shop around was noted by several of the state insurance regulatory officials that we spoke with, who expressed concern that consumers may not be getting the discounts for which they are eligible. For instance, insurers may give (1) discounts on mortgage refinance transactions because the previous search and examination were fairly recent and (2) discounts to first-time home buyers or senior citizens. Several title industry officials agreed that consumers might not be aware of such discounts and may, in some cases, not be receiving discounts to which they are entitled. 
Second, consumers may have difficulty comparing price information from different title agents because many title agents also charge for services that are not included in the premium rate, such as fees related to real estate closing and other administrative fees. In states where title agents charge separately for search and examination services, such charges can be as large as the title insurance premium itself. Thus, even if consumers collected and compared premium rates, which are posted on some states’ Web sites, they might not get an accurate picture of all the title-related costs they might pay when using a particular agent. Third, title insurance is a smaller but required part of a larger transaction that consumers are generally unwilling to disrupt or delay. As we have seen, lenders generally require home buyers to purchase title insurance as part of any real estate purchase or mortgage refinancing transaction. However, purchasing title insurance is a relatively small part of such transactions. For example, according to an analysis by the Fidelity National Title Group, Inc., in 2005 in California, on a transaction with a sales price of $500,000 and a loan amount of $450,000, title insurance costs, on average, amounted to only 4 percent of total closing costs, including the real estate agent’s commission (see fig. 6). Even when the seller pays the real estate agent’s commission, title insurance costs are still small compared with the size of the buyer’s transaction. In addition, it appears that by the time consumers receive an estimate from the lender of their title insurance costs as part of the Good Faith Estimate, a title agent has already been selected, and the title search has already been requested or completed. To shop around for another title insurer at that point in the process could also threaten to delay the scheduled closing. 
According to a number of title industry officials and state insurance regulators we spoke with, most consumers place a higher priority on completing their real estate transaction than on disrupting or delaying that transaction to shop around for potentially small savings. HUD publishes an informational booklet designed to help fulfill RESPA’s goal of helping consumers become better shoppers for mortgage settlement services, including title insurance. Although this document provides much useful information, it is generally distributed too late in the home-buying process to help consumers with respect to title insurance, and it lacks some potentially useful information. RESPA currently requires lenders to provide the booklet to consumers within 3 days of the loan application. HUD officials recognize the need to get this information to consumers earlier and recommended in a 1998 study that real estate agents, as well as lenders, provide the information at first contact. Furthermore, RESPA only requires the information to be distributed in a transaction involving a real estate purchase, and not in other transactions, such as mortgage refinances, where title insurance is also required by lenders. The usefulness of the informational booklet is further limited by the absence of information on the discounts most title insurers provide and on potentially illegal ABAs. Because consumers may not have access to potentially useful information when purchasing title insurance, they may not be able to make well- informed decisions on the purchase of title insurance. Specifically, consumers may face difficulty in independently collecting information on all amounts charged by title agents in order to comparison shop. In addition, the limitations in the content of HUD’s information booklet and when consumers receive it can result in consumers’ getting information too late in the process, thereby hindering their ability to influence the selection of a title agent or insurer. 
Moreover, several state insurance regulators expressed concern that consumers might not be getting all available discounts because they do not know they are available or that they are entitled to the discounts. In addition, HUD officials said that the use and complexity of ABAs in the title industry have increased, and consumers could benefit from additional information in this area. Another factor that raises questions about the existence of price competition is that title agents market to those from whom they get consumer referrals, and not to consumers themselves, creating potential conflicts of interest where the referrals could be made in the best interest of the referrer and not the consumer. Because of the difficulties faced by consumers in shopping for title insurance, consumers almost always rely on a referral from a real estate or mortgage professional. In fact, some insurance regulatory officials we spoke with said they are concerned that consumers may not even be aware they are able to choose their own title agent and insurer. According to title industry officials, because of consumers' unfamiliarity with and infrequent purchases of title insurance, it is not cost-effective to market to them. Rather, title agents market to and compete for referrals from real estate and mortgage professionals. According to title industry officials, competition among title agents for consumer referrals is very intense and motivates them to provide excellent service to real estate and mortgage professionals. This is because if they do not provide good service, those professionals will send their future referrals elsewhere. Both title and real estate industry officials told us that such professionals have a strong interest in customers' having a good experience with respect to the portion of a closing conducted by a title agent, because customers' experiences there will reflect back on the professional. 
As a result, they said, such competition on the basis of service benefits consumers. However, this competition among title agents for consumer referrals is also characterized by potential conflicts of interest, since those making the referrals may have the motivation to do so based on their own best interests rather than consumers' best interests. Real estate and mortgage professionals interact more regularly with title agents and insurers than do consumers and, thus, are likely to have better information than consumers on the prices and quality of work of particular title agents and insurers. To the extent the interests of those professionals are aligned with those of the consumers they are referring, the knowledge and expertise of those professionals can benefit consumers. However, conflicts of interest may arise when the professional making the referral has a financial interest in directing the consumer to a particular title agent. Under such circumstances, the real estate or mortgage professional may be motivated to make a consumer referral not based on the customer's best interests but on the professional's best interests. For example, a real estate professional may be a partial or full owner of a title agency, such as through an ABA, and therefore receive a share of the profits earned by that agency. As such, the professional may have an incentive to refer customers to that title agency. Examples of Allegedly Illegal Referral Fees Described in Investigations by HUD and State Insurance Regulators: A title agent provided trips, entertainment, and catering for entities involved in real estate transactions. A title agent contributed to a pool of funds that was given away in drawings among real estate agents. 
In recent years, HUD and state insurance regulators have identified a number of allegedly illegal activities related to the marketing and sale of title insurance that appear to be designed to obtain consumer referrals and, thus, raise questions about competition and, in some cases, the prices paid by consumers (see sidebar). In addition, several title insurers and agents told us that they lost market share because they did not provide some compensation for consumer referrals. The payment or receipt of compensation for consumer referrals potentially reduces competition because the selection of title insurer or agent might not be based on the price or quality of service provided, but on the benefit provided to the one making the referral. The giving or receiving of anything of value in return for referral of consumers’ title insurance business is a potential violation of RESPA and many state laws. For example, it might be illegal for a title insurer to provide free business services to a realtor in exchange for that realtor’s referring consumers to the title agent. It might also be illegal for the realtor to accept those services. Nonetheless, state and federal regulators have identified a number of alleged instances of such payments, resulting in those involved paying over $100 million in fines, penalties, or settlement agreements. Table 1 summarizes these investigations. From 2003 to 2006, insurance regulators in three of our six sample states had concluded at least 20 investigations related to the alleged payment of referral fees, involving over 52 entities, including title insurers, title agents, and builders. As a result of these investigations, the entities involved were ordered to pay or agreed to pay approximately $90.6 million in the form of consumer refunds, fines, and settlements. Over the same period, HUD concluded at least 38 enforcement actions resulting in settlements related to alleged referral fee violations. 
These actions involved at least 62 entities and resulted in those entities’ being ordered to pay or agreeing to pay approximately $10.7 million. Several insurance regulators in states outside of our sample states, while not completing enforcement actions or reaching settlement agreements, expressed concerns over activities related to referral fees. For example, in October 2006, the Washington State Office of the Insurance Commissioner published the results of its investigations into referral practices in the title industry in Washington. According to the report, the use of inducements and incentives by title companies to obtain title insurance business appeared to be “widespread and pervasive,” and these inducements were used to influence referrals by real estate agents, banks, lenders, builders, developers, and others. The inducements included, among other things, the provision of advertising services, open houses, entertainment, and educational classes. According to the report, the regulator decided not to take any enforcement actions on the basis of the activities they identified because of the expense of doing so and because the regulator accepted some responsibility for allowing such a situation to develop. However, the report also stated that the regulator would put the industry on notice that there would be consequences for any future violations. In Illinois, the state title insurance regulator issued a series of bulletins and informational handouts in 2005 and 2006 that expressed concerns over potentially illegal referral fees and inappropriate ABAs. The regulator had found that some title agents were using title service companies (owned by title insurers) that in some cases performed almost all title-related work, such that all the title agent had to do was sign and return some documents in exchange for receiving part of the premium. 
According to the regulator, such arrangements would violate state law requiring title agents to perform certain minimal activities in return for fees received from consumers. The regulator told us that the companies involved in these activities were cooperative in ceasing such activities and, as a result, the regulator was not pursuing any enforcement actions. Such arrangements, however, (1) may constitute an illegal referral fee under RESPA and (2) appear to be very similar to activities that were the subject in Illinois of state and HUD investigations in 1990 and 1991, resulting in a $1 million settlement between HUD and the title insurer involved. Finally, in April 2006, the state title insurance regulator in Alaska published a summary of title insurance examinations in which they expressed concern that title agents and real estate service providers were entering into business arrangements that blurred the line between legitimate transactions and illegal kickbacks. Such arrangements, the report noted, may undermine competition and be an indication that premium rates are excessively high. The report stated that the insurance regulator is contemplating new regulations regarding the legality of these arrangements, but the regulator will first obtain industry input through public hearings. Overall, the alleged referral fee arrangements identified in the state and HUD investigations could potentially indicate that those making consumer referrals did so based on their own interests, and may not have resulted in obtaining the best prices for consumers. In one multistate settlement that involved 26 state insurance regulators, regulators alleged that a title insurer and home builders created a captive reinsurance arrangement. Under the arrangement, the insurer deducted a processing fee of $350 from the premium, then paid 50 percent of the remainder to a reinsurer for assuming 50 percent of the policy risk. The reinsurer, in turn, provided referrals to the title insurer. 
For example, in Colorado, a party to the settlement, the premium charged by one of the companies involved for an owner's and lender's policy on a $250,000 loan and purchase price was $1,614. In 2005, the combined loss ratio for all insurers in Colorado was approximately 4.5 percent. Under the arrangement described by regulators, on a hypothetical $250,000 transaction, the reinsurer would collect approximately $632 for assuming an expected loss of about $36 (its 50 percent share of expected losses of 4.5 percent of the $1,614 premium), for a net profit of about $596. In other words, about 37 percent of the $1,614 paid by the consumer would allegedly go to the reinsurer as compensation for its builder, lender, or real estate broker-owners allegedly referring business to the insurer. From 2003 through 2006, state and HUD investigations of captive reinsurance arrangements, a potential form of referral fees, resulted in payments by insurers and other entities of approximately $66.8 million, as previously shown in table 1. Specifically, we identified 13 investigations involving 37 entities that were related to captive reinsurance arrangements, with 1 multistate settlement agreement involving activities in 26 states. In such arrangements, a home builder, real estate broker, lender, title insurance company, or some combination of these entities forms a reinsurance company that works in conjunction with a title insurer (see sidebar). The insurer agrees to "reinsure" all or part of the business it receives from the reinsurer's owners with the reinsurer by paying the company a portion of the premium (and allegedly transferring a portion of the risk) for each title transaction. Investigators alleged that the amounts received by these reinsurers exceeded the risk they assumed—particularly because virtually no claims were filed with either the insurer or the reinsurer—and considered these arrangements as a way to pay for referrals, allegedly violating RESPA's prohibitions on such payments. 
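The settlement arithmetic in the Colorado example can be checked directly. The sketch below uses the figures described in the settlement (the $350 processing fee, 50 percent ceding split, and 4.5 percent combined loss ratio); the function name and the use of the statewide loss ratio as a proxy for expected claims are illustrative:

```python
def captive_reinsurance_split(premium, processing_fee=350.0,
                              ceded_share=0.50, loss_ratio=0.045):
    """Approximate the economics of the alleged captive reinsurance
    arrangement: the insurer deducts a flat processing fee, then cedes
    half of the remaining premium to a reinsurer that assumes half of
    the policy risk."""
    ceded_premium = (premium - processing_fee) * ceded_share
    # Expected loss transferred: the reinsurer's share of expected
    # claims, using the statewide combined loss ratio as a proxy.
    expected_loss = premium * loss_ratio * ceded_share
    net_profit = ceded_premium - expected_loss
    # Net profit as a share of what the consumer paid -- the portion
    # regulators characterized as compensation for referrals.
    profit_share = net_profit / premium
    return ceded_premium, expected_loss, net_profit, profit_share

# Colorado example: $1,614 premium on a $250,000 transaction.
ceded, loss, profit, share = captive_reinsurance_split(1614.0)
print(f"ceded ${ceded:.0f}, expected loss ${loss:.0f}, "
      f"profit ${profit:.0f}, {share:.0%} of premium")
# → ceded $632, expected loss $36, profit $596, 37% of premium
```

The calculation reproduces the figures regulators described: roughly $632 collected against about $36 of expected loss, so that about 37 percent of the consumer's premium allegedly flowed to the reinsurer's owners.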
In settlement agreements with a lender and several home builders in 2006, HUD stated that there is almost never a bona fide need or business purpose for title reinsurance on a single-family residence, especially from an entity or an affiliate of an entity that is in a position to refer business to the title insurer. In addition, HUD stated that when the payments to the captive reinsurer far exceed the risk borne by the builders, lenders, or real estate brokers, there is strong evidence that such an arrangement was created to pay referral fees and, therefore, is illegal. Figure 7 provides an example of a captive reinsurance arrangement described in a multistate settlement administered by the Colorado Division of Insurance in 2005. According to several state insurance regulators, the activities involved in such captive reinsurance arrangements suggest that title insurance premiums paid by consumers may be substantially higher than the cost of providing that insurance. The arrangements generally involved a title insurer taking the premium from a consumer, subtracting a certain amount to cover the cost of a title search and examination, then splitting the remainder with the reinsurer. On the basis of details provided in a multistate settlement, insurers were allegedly giving away one-third or more of the premiums consumers paid in order to obtain consumer referrals. In 2005, industrywide loss and loss adjustment expenses totaled only about 5 percent of the total premiums written. The regulators stated that insurers' willingness to pay such a large portion of the premium to obtain consumers' title insurance business suggested that insurers overcharged consumers for this insurance. A number of investigations found that ABAs were allegedly being used to compensate ABA owners—often real estate or mortgage professionals—for consumer referrals, raising additional questions about competition in the title insurance industry. 
RESPA allows ABAs, provided that (1) a disclosure is made to the consumer being referred that describes the nature of the relationship, including financial interests, between the real estate settlement service provider and the person making the referral; (2) compensation for the referral is limited to a return on the ownership interest; and (3) the consumer being referred is not required to use a particular title agent. HUD has also issued a policy statement setting forth factors it uses to determine whether an ABA is a sham under RESPA or a bona fide provider of settlement services. These factors include whether the entity actually performs substantial services in return for fees received, the entity has its own employees to perform these services, and the entity has a separate office. Nonetheless, federal and state investigations identified a number of ABAs that were alleged to be “shell” title agencies that either had no physical location, employees, or assets or did not actually perform any title services. Regulators alleged their primary purpose was to serve as a pass-through for payments or preferential treatment given by the title agent to real estate agents and brokers, home builders, attorneys, or mortgage brokers for business referrals. Over the past 4 years, HUD has completed at least 9 investigations of ABAs, involving at least 17 entities and resulting in approximately $1.8 million being paid by those entities in settlements and refunds. A Colorado investigation found that a single licensed title agent was owner or part owner of 13 sham title agencies that were allegedly used to pay referral fees to mortgage brokers. A number of regulators and industry participants we spoke with expressed concerns about the growing use of ABAs in the title industry. 
For example, HUD officials have said that while properly structured ABAs may provide some consumer benefits, they also create an inherent conflict of interest as the owner of an ABA is in a position to refer a consumer to that same ABA. They expressed concern that ABAs could be used as a means to mask referral fees, which are generally illegal under RESPA, and that they were seeing more complex arrangements in which it was becoming increasingly difficult to trace the flow of money and to determine if the agents involved in ABAs were actually performing core title services. Several state insurance regulators we spoke with expressed similar concerns. For example, Colorado insurance regulatory officials were concerned over the extent of sham ABAs in Colorado that were potentially being used as a means to pay referral fees. Those officials also said that, on the basis of their work with NAIC’s Title Insurance Working Group, other state insurance regulators that had begun to examine ABAs were also finding potentially illegal activities. For instance, in a September 2005 settlement in Florida, 60 sham title agencies affiliated with 1 underwriter were alleged to have been fronts for referral fees. Some title industry participants expressed concern that ABAs might also restrict competition. They said that when a real estate or mortgage brokerage firm, for example, owns an ABA, other title agents are generally barred from marketing their services to individuals working for that firm. In addition, they said that most or all of the consumer referrals from a brokerage that is an owner of an ABA generally go to that ABA. As a result of this guaranteed order flow, they said, the title agents at that ABA might not be as interested in competing on price or service. In contrast, some title industry officials said ABAs can be beneficial because they provide consumers with better service and potential cost savings. 
According to an industry organization, ABAs can increase consumer satisfaction through the convenience of one-stop shopping. Furthermore, they benefit their owners and consumers by giving owners greater accountability and control over quality. Industry participants also stated that because of the ability to take advantage of efficiencies, ABAs can result in potential cost savings for the consumer. A recent study sponsored by RESPRO, an industry group that promotes ABAs, concluded that title agents that are part of an ABA do not charge consumers any more than title agents that are not part of an ABA. ABA proponents, and others, also stated that ABA owners, such as real estate or mortgage brokers, often have little leverage in encouraging their real estate agents and brokers to refer consumers to the ABA title agent. They said that these individuals compete based on their reputation, and that recommending a title agent that provided poor service would damage that reputation. As a result, they will only refer consumers to an ABA title agent if it provides good service. Industry organizations we spoke with said that they did not collect data on the percentage of business ABA title agents get from their owners’ businesses. Overall, the concerns expressed by regulators and some industry participants over ABAs raise questions about the potential effects of some ABAs on consumers. Specifically, concerns about some ABAs being used as a means of paying illegal referral fees raise questions about whether referrals are always being made in consumers’ best interests. In addition, concerns about some ABAs potentially restricting competition among title agents raise questions about the extent of competition that is beneficial to consumers. 
Another factor that raises questions about the prices consumers pay for title insurance is that as the purchase price or loan amount on which a policy is issued increases, the amount paid by consumers appears to increase faster than the costs incurred by insurers and agents in producing that policy. A number of title insurers and agents we spoke with said that they made more money on high-priced transactions than on low-priced transactions because, while premiums increased with price, insurers’ losses rose only slightly and agents’ search and examination costs generally either did not increase or, in many cases, fell. In fact, several title insurers and agents said that transactions involving less-expensive properties often cost agents more to complete because they required agents to correct more title defects than on more expensive transactions. As a result of this pricing structure, writing title insurance on higher-value purchases and mortgages can be quite profitable for title insurers and agents. Title industry officials told us that while high-value transactions could be quite profitable for title insurers and agents, this profit was necessary to subsidize the lower profits or even losses from smaller transactions. These officials also told us that if insurers charged consumers on the basis of the cost of the actual work done, consumers buying relatively inexpensive properties would pay more than they currently did. However, while we asked title industry officials for data to support their assertion that they often lost money on low-priced transactions, they said that they did not collect financial information that would allow them to provide such data. Thus, we could not determine whether insurers or agents were actually losing money on any transactions. 
According to industry officials, insurers and regulators purposely designed the current premium rate structures with an element of subsidization built in—that is, premiums for high-priced transactions were intended to subsidize the costs associated with lower-priced transactions. Among the six state insurance regulators we spoke with, although most agreed that insurers made more money on higher-priced transactions, only one told us that subsidization of consumers on lower-priced transactions was intentional on the part of the state. Among the rest, three said that there was no intentional subsidization, and two said that they did not know. Recent high profits within the title insurance industry have raised additional questions about the prices being paid by consumers. Several title insurance industry officials acknowledged that insurers' profits had been good over the past several years as a result of increased home prices and large numbers of consumers refinancing their home mortgages, but these officials said that such profits made up for very low profits during weaker markets. However, we found that title insurers' financial performance, as measured by return on equity, has been positive since at least 1992 and, in every year except one, has been above that of the property-casualty insurance industry as a whole. As shown in figure 8, the combined return on equity for the largest five title insurers has been at or above 9 percent in every year except one over the period from 1992 to 2005, and in most years it was above 12 percent. Over that same period, only one insurer had a year with a negative return on equity. In addition, during 2006 public conference calls with financial analysts, several of the largest insurers said that they expected business to be profitable even during the weakest real estate markets. 
An industry-sponsored study stated that several insurers had reduced title insurance rates in the last several years, and that such reductions provided evidence of price competition, at least in California. We were able to obtain historical premium rate information in five of our six sample states. Between 2000 and 2005, premium rates for the median-priced home went down in three of those five states, stayed the same in one state, and increased by only 2 percent in the other state (see fig. 9). However, because total premiums are determined by applying that rate to the home price or loan amount, and median home prices increased substantially over that period, total premiums paid by consumers in most of our sample states also increased substantially. For example, among these five sample states, consumers’ premiums fell in one state, but rose in the remaining four states, sometimes dramatically. Specifically, premiums decreased by 12 percent in one state but increased 93 percent in another, and in one state where premium rates fell by 29 percent, actual premiums paid rose by 75 percent. Historical information on possible additional amounts charged by title agents in our sample states was not available. One more factor that raises questions concerning the prices consumers pay for title insurance is that in states where agents’ charges for their search and examination services are not included in the premium paid by the consumer (i.e., agents charge separately for these services), it is unclear whether consumers may be overpaying for those services. The lack of clarity stems from the way in which title insurers determine premium rates that consumers will pay. Officials from title insurance companies told us that they generally determined their premium rates on the basis of their expected expenses, which include losses from claims, as well as the amounts retained by the title agents that write insurance for them. 
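The distinction above between premium rates and the total premiums consumers actually pay can be illustrated with a small arithmetic sketch. All figures below are hypothetical, chosen only to reproduce the pattern described in the text (a 29 percent rate reduction paired with a 75 percent increase in premiums paid); they are not data from our sample states:

```python
# Total premium = rate (per $1,000 of home price) x home price.
# Because median home prices rose sharply, total premiums can rise
# substantially even while the premium rate itself falls.

def total_premium(rate_per_1000, home_price):
    """Title insurance premium for a given rate and home price."""
    return rate_per_1000 * home_price / 1000

# Hypothetical figures, not actual GAO data:
rate_2000, price_2000 = 5.00, 200_000   # $5.00 per $1,000 of price
rate_2005, price_2005 = 3.55, 493_000   # rate 29 percent lower, price ~2.5x

p2000 = total_premium(rate_2000, price_2000)   # $1,000.00
p2005 = total_premium(rate_2005, price_2005)   # about $1,750

rate_change = (rate_2005 - rate_2000) / rate_2000   # about -29 percent
premium_change = (p2005 - p2000) / p2000            # about +75 percent
print(f"rate change: {rate_change:+.0%}, premium change: {premium_change:+.0%}")
```

Because the rate is applied to a price that more than doubled in this sketch, the consumer's actual bill grows even as the posted rate falls, which is why rate reductions alone are weak evidence of price competition.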
Title insurers know what share of consumers' premiums the title agents that write policies for them will retain—generally around 80 to 90 percent—and what share the insurer will receive. Insurers then set their premium rates at a level sufficient to ensure that their share of the premiums will be enough to cover their expected costs and earn them a reasonable profit. These calculations take into account the portion of the premiums that title agents retain, but not whether that amount reflects the agents' actual costs. Officials of insurance companies and title agencies told us that the split was negotiated between the insurer and agent on the basis of a number of factors, including the agent's volume of business, the quality of the agent's past work, and the insurer's desire to increase its share of business in a certain geographic area. Among our sample states, the amount retained by title agents ranged from around 80 percent in one state to 90 percent in another (see fig. 10). Some insurance company officials told us that they had an idea of what agents' costs should be based on their experience with their own direct agents, but these officials said that they did not analyze how the amounts retained by agents compared with those costs. Insurers that we spoke with also told us that they generally share the same percentage of the premium with their agents, around 80 to 90 percent, regardless of whether those agents were in states where consumers were to pay for agents' search and examination services within the premium rate—known as all-inclusive states—or whether they were in states where agents can charge consumers separately for those services—known as risk-rate states. However, if title agents are charging separately for their search and examination services, outside of the premium, one would generally expect the percentage of the premium retained by agents to be lower because they would not need to recover the costs for those services from the premium. 
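The concern can be made concrete with a numerical sketch of the split. The dollar amounts below are illustrative assumptions, not figures from our review; they simply apply the 80 to 90 percent retention range described above to a hypothetical transaction:

```python
# Hypothetical sketch (illustrative figures, not data from the report) of
# why an identical premium split raises questions in risk-rate states.
premium = 1000.00        # title insurance premium paid by the consumer
agent_share = 0.85       # agents typically retain 80 to 90 percent

# All-inclusive state: the agent's retained share of the premium must
# also cover the cost of the title search and examination.
all_inclusive_agent_revenue = premium * agent_share            # $850.00

# Risk-rate state: the agent retains the same share of the premium AND
# bills the consumer a separate search and examination fee, which HUD
# says can sometimes be as large as the premium itself.
separate_search_fee = 800.00                                   # hypothetical
risk_rate_agent_revenue = premium * agent_share + separate_search_fee  # $1,650.00

print(all_inclusive_agent_revenue, risk_rate_agent_revenue)
```

Under these assumptions the agent in the risk-rate state collects nearly twice as much for the same work, which is the possibility the following discussion raises.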
Because insurers told us that the percentage of the premium given to the agent does not depend on whether the title agent is in a risk-rate or all-inclusive state, this practice raises the possibility that in some risk-rate states, title agents may be (1) retaining 80 to 90 percent of the premium—a percentage meant to be sufficient to cover agents’ search and examination costs in all-inclusive states—and (2) charging the consumer a separate, additional amount intended to pay for those same services. According to HUD officials, in risk-rate states, the amount consumers pay title agents for their search and examination services, which is in addition to the title insurance premium, can sometimes be as large as the premium itself. However, reliable data did not exist to determine whether consumers in risk-rate states consistently paid more, in total, than those in all-inclusive states. While many title industry officials acknowledge that competition in title insurance markets is based primarily on service rather than price, disagreement exists between the industry and regulators over the extent of actual price competition. According to some of the title industry officials we spoke with, price competition does exist within the title insurance industry. While these officials acknowledged that consumers generally rely on referrals from real estate and mortgage professionals, they argued that these professionals could have an interest in obtaining lower-priced title services for their customers and, thus, could exert downward pressure on premium rates. Others cited various factors, such as changes in premium rates and increased levels of coverage, as evidence of price competition and have stressed the benefits for consumers of competition that is based on service. In contrast, insurance regulators in two of our sample states have concluded that premium rates are too high relative to costs, potentially due to a lack of price competition. 
In California, the state insurance regulator concluded in 2006 that title insurance markets were lacking competition, resulting in increased prices for consumers. The regulator there has also proposed lowering current title rates. In Texas, where title insurance premium rates are promulgated by the state insurance regulator, in each of the last two rate hearings, the regulator has proposed a premium rate reduction to account for a competitive structure that inflates prices for consumers. That is, the regulator has requested premium rate reductions to account for a market structure in which consumers pay for title insurance but others generally choose the title agent and insurer, which the Texas regulator says can result in unnecessary and unreasonable expenses. In the states we visited, we found that regulators did not assess title agents’ costs to determine whether they were in line with premium rates; had made only limited efforts to oversee title agents (including ABAs involving insurers and agents); and, until recently, had taken few actions against alleged violations of antikickback laws. In part, this situation has resulted from a lack of resources and limited coordination among different regulators within states. On the federal level, authority for alleged violations of section 8 of RESPA, including those involving increasingly complex ABAs, is limited to seeking injunctive relief. Some state regulators expressed frustration with HUD’s level of responsiveness to their requests for help with enforcement, and some industry officials said that RESPA rules regarding ABAs and referral fees need to be clarified. Industry and government stakeholders have proposed several regulatory changes, including RESPA reform, strengthened regulation of agents, a competitor right of action with no monetary penalty, and alternative title insurance models. 
Because consumers can do little to influence the price of title insurance, they depend on regulators to protect buyers from, for example, excessive premium rates. As they do with most lines of insurance, such as property-casualty coverage, regulators seek to ensure that title insurance premium rates are representative of the underlying risks and costs associated with the policies that are issued. In reviewing insurance rates, regulators generally focus on confirming that insurers' projections of their expected losses on claims are accurate, because for virtually all lines of insurance, the majority of consumers' premiums go to pay such losses. For property-casualty insurance in 2005, for example, 73 percent of total premiums were used to cover losses. For title insurers, however, only 5 percent of title insurance premiums went to cover losses (see fig. 11), while more than 70 percent went to title agents. Despite this difference, few regulators review the costs that title agents incur to determine whether they are in line with the prices charged. In fact, in the majority of states, agents' costs for search and examination services are not considered part of the premium and, thus, receive no review by regulators. In these states, title agents charge separately for their search and examination services, yet they receive about the same percentage of the premium as agents in states where these costs are included in the premium. In our six sample states, one regulator did not regulate premium rates for title insurance at all, and one state sold title insurance through a state-run program that did not regulate title search and examination costs. In the remaining four states, agents' search and examination costs were considered part of the premium, but regulators in only one of those states regularly reviewed title agents' costs as part of the rate review process. 
The other three regulators saw the amount retained by the agents as a cost to the insurer that they would review as justification for insurers’ premium rates. However, these states did not go beyond the insurer to review the agents’ costs. Furthermore, only two of the six regulators we reviewed collected financial and operational data on title agents, and regulatory officials in both those states said that the data that they currently collect were insufficient to analyze the appropriateness of current premium rates. For example, while officials from the California insurance regulator have concluded that a lack of competition exists and that premium rates are excessive, they have determined that they would need to collect a significant amount of additional information before they could assess the extent of overpricing. In July 2006, the officials proposed an extensive plan for collecting these data that involved gathering information at the individual transaction level. Similarly, the Texas insurance regulator has been collecting financial data on title agents, but officials there have concluded that these data, which are not organized by functional categories, are insufficient for determining the extent of potentially excessive costs. Because costs incurred by title agents receive such limited review, most state insurance regulators are limited in their ability to assess whether the amounts that consumers are charged for title insurance reflect the costs they are intended to cover. Appendix II describes the types of information that would be helpful in assessing title agents’ costs and operations. Some aspects of agent regulation, such as licensing, varied across our sample states, while other aspects, such as capitalization and education requirements, were minimal. Of our six sample states, four required agents to register or obtain a license. Iowa had no title agents, and New York had no licensing or registration requirements. 
Furthermore, state regulators rarely audited agents, and the audits that were done were usually limited to examining only accounts that title agents use to hold customers' money, known as escrow accounts. Audits of operating accounts were uncommon, although some industry participants said that these accounts were a source of agent defalcations. Table 2 summarizes some aspects of title agent regulation in our sample states. Moreover, few states we visited require strong insurer oversight of agents. The nature of such oversight is usually negotiated between the insurer and the agent and defined by contract. Typically, the insurers sign up agents based on the quality of their service and their reputation in a certain area and audit their escrow accounts every 18 to 36 months. Industry participants told us that contractual stipulations and questions of unfair competitive practices were among the factors that prevented insurers from looking into independent agents' operating accounts. When we asked the major title insurers that we spoke with for information on title agents' costs, they said that they did not collect data from title agents in a manner that would allow for an analysis of costs and profitability and, thus, could not provide us with such information. For example, these insurers said that while they reviewed the records of agencies that wrote policies for them, contracts with the agencies generally limited such reviews to escrow accounts and policy records—that is, only enough review to ensure that the insurer had received its share of premiums for the policies issued, but not enough review to evaluate the components of agent costs. Although insurers may not have access to all of the data they need from independent title agents (1) that write for several companies and (2) that do not want insurers to see financial information related to their entire business, the situation with affiliated title agents is generally different. 
In affiliated arrangements, the insurer has an ownership interest in the title agent and seemingly would have access to the agent's financial records—especially in cases where the insurer has a controlling interest in the agent and may be required to consolidate its affiliated agent's financial statements with its own. According to regulators, however, the industry has been resistant to calls for more extensive data collection because of the potential cost burden on the insurers and their agents. Regulators in California and Colorado have recently implemented or plan to implement stronger regulations for title agents, including more stringent qualifying examinations, higher capitalization requirements, criteria to identify sham business arrangements, and more detailed data calls focusing on the costs of providing title insurance. The regulators said that these stronger regulations would be key to preventing illegal actions by agents by eliminating both bad actors and questionable practices in the title industry. Until recently, state regulators had done little to oversee ABAs. Although three of our six sample states have some type of restriction on the amount of business a title company can get from an affiliated source, enforcement of these laws appeared to be limited. In California, the laws specify that a title company can get no more than 50 percent of its orders from a controlled source. In Colorado, until recently, an insurance licensee was prohibited from receiving more in aggregate premium from controlled business sources than from noncontrolled sources. However, one regulator told us that, until recently, it had not rigorously examined data from agents to verify their compliance with the percentage restrictions. Amid recent reports of enforcement actions taken by HUD and some states against allegedly inappropriate ABAs, some state insurance regulators told us that they had begun looking into these increasingly popular arrangements. 
Regulatory officials told us that they had found various problems, including the level of compliance with mandatory percentage restrictions from controlled sources; the existence of potentially illegal referral fees and kickbacks among ABA owners; and title work performed at some agencies that might not qualify as “core” title work for which liability arises (such as the evaluation of title to determine insurability, clearance of underwriting objections, issuance of the title commitment and policy, and conducting the title search and closing). In Colorado and Minnesota, officials estimated that the number of ABAs had doubled in the past few years. Colorado regulatory officials attributed some of the growth to lax agent-licensing requirements, including low capitalization requirements and minimal prelicense testing. In contrast, California regulatory officials credited the relative lack of ABAs in their state to more stringent licensing and capitalization requirements. Agents in California, referred to as Underwritten Title Companies, must raise between $75,000 and $400,000 in capital to conduct business, depending on the number of documents recorded and filed with the local recorder’s office. Furthermore, California has an extensive licensing process, including a review of the character, competency, and integrity of prospective owners; a financial assessment; and a review of the reasonableness of their business plan. As we previously noted, from 2003 to 2006, a growing number of federal and state investigations into ABAs alleged that these arrangements were being used to provide illegal referral fees and kickbacks. Colorado’s regulator has implemented stronger agent regulation, such as a stricter review of agents’ applications, mandated disclosure of any affiliated relationships, and higher capitalization and testing requirements. 
Regulatory officials said that these changes would help prevent future illegal actions by title agents, especially through the improper use of questionable ABAs. However, the more limited regulation and oversight of title agents and ABAs in other states could provide greater opportunity for potentially illegal marketing and sales practices. Kickbacks are generally illegal under both RESPA and most state insurance laws. Although the enforcement provisions of laws in five of the six states in our sample included suspension or revocation of agents' licenses and monetary penalties, state regulators and others did not see these sanctions as effective deterrents against kickbacks. One state regulator and some industry participants expressed concern that title insurers and agents saw the fines simply as a cost of doing business, since these businesses stood to gain much more in market share and revenue through illegal kickbacks than they would lose in state-assessed monetary penalties. From 2003 to 2006, officials in states we reviewed settled with insurers for over $90 million in penalties for alleged referral fee violations. In comparison, in 2005 alone net earnings for the five biggest title insurers totaled almost $2 billion. In addition, at least one group of industry participants told us that they interpreted regulators' past inaction to mean that they would not get caught if they engaged in illegal activity. RESPA specifies that states—through their attorneys general or insurance commissioners—may bring actions to enjoin violations of section 8 of RESPA. In nearly all of our sample states, title insurance laws contain antikickback and referral fee provisions similar to those in RESPA. Also, although RESPA provides for injunctive action by state regulators, they have hesitated to use this authority and have only recently begun to look into RESPA section 8 violations. 
In one state, regulators concluded that they were prevented by state law from seeking injunctive relief under section 8 of RESPA because their only available court for complaints was an administrative one that did not satisfy RESPA requirements. Moreover, some state insurance regulators said that they had limited enforcement options against those that they identified as the major contributors to the kickback problem: real estate agents, mortgage brokers, and other real estate professionals. Even though receiving kickbacks is generally illegal under RESPA, some state regulators told us that they had no authority to go after these entities, which were regulated by other state agencies. Meanwhile, the regulators that oversee these real estate professionals have shown little interest in or knowledge of potential violations by their licensees. In California and, until recently, in Colorado, regulators said that inconsistencies in laws governing kickbacks for title insurers and other real estate professionals have made it difficult to pursue recipients of illegal kickbacks. Furthermore, some state officials told us that they received little response when they forwarded potential kickback cases to HUD investigators. A lack of consistent enforcement of antikickback and referral fee provisions by all relevant state regulators, as well as HUD, could limit the effectiveness of enforcement efforts. Regulators at the state and federal levels told us that limited resources were available to address issues in title insurance markets. Title insurance is a relatively small line of insurance, and title insurers and agents often get even less than the usual limited market conduct scrutiny that state insurance regulators give other types of insurers. With little ongoing monitoring, selected regulators told us that their attention is drawn to problems largely through complaints from competitors. 
Complaints from consumers have been rare because, as we have discussed, they generally do not know enough about title insurance to know that they have a problem. Furthermore, the many entities besides title insurers and agents that are involved in the marketing and sale of title insurance often have their own regulators. These entities include real estate agents, mortgage brokers, lenders, builders, and attorneys, all of which may be regulated by different state departments. Our previous work has shown the benefits of coordinated enforcement efforts between state insurance regulators and other federal and state regulators in detecting and preventing illegal activity. According to some state officials, varying levels of cooperation exist among different state regulators, with some states demonstrating little or no cooperation and other states having more structured arrangements, such as a task force that might include the state insurance regulator, mortgage lending department, real estate commission, and law enforcement officials. Until a recent Colorado law was passed, however, these arrangements stopped short of being codified in legislation or regulation in any of our sample states. One such task force, in Texas, meets monthly to discuss current and potential fraud cases, and the regulators involved noted that it has helped them identify and investigate cases of which they would have otherwise been unaware. In our discussions with some noninsurance regulators, we observed that they had an apparently minimal understanding of violations of laws such as RESPA, and that they had taken few actions against their licensees for violations. Two of the state real estate regulators we spoke with, for instance, said that they were not aware that referral fees were illegal under their state laws or under RESPA. 
Another real estate regulator said that the department did not maintain a complaint category for RESPA violations against licensees and, thus, could not provide us with the number of RESPA-specific complaints the agency had received. In 3 years, this department had not revoked any licenses and could identify only one RESPA violation case in which licensees were publicly censured and fined. All of these sanctions were less severe than state law allowed. One difficulty for state insurance regulators may be that the state laws and regulations for mortgage brokers, real estate agents, and others may differ from those for title insurers and agents, and these laws and regulations may not view referral fees in the same way, thus making interdepartmental enforcement difficult. For example, Illinois and New York real estate law contains no reference to referral fees related to settlement service providers, although the title insurance laws prohibit these fees. However, given the lack of coordination we noted among regulators in the same state, it is not surprising that different regulatory agencies were not aware of differences in the way state laws and regulations treat certain activities. Without greater communication and coordination among the various state regulators, some potentially illegal activities carried out by those involved in the sale and marketing of title insurance could go undiscovered and uncorrected. The investigative actions HUD has taken have largely resulted in voluntary settlements without admission of wrongdoing by the involved parties. According to HUD officials, it is difficult to deter future violations without stronger enforcement authority, such as civil money penalties, because, as we previously mentioned, companies view small settlements as simply a cost of doing business. While HUD has obtained a number of voluntary settlements from 2003 to 2006, the average amount assessed by the department was approximately $302,000. 
During the same period, the combined net earnings of the five major national title insurers averaged about $1.6 billion each year. HUD officials expressed particular concern about the difficulty of investigating complex ABA relationships for possible section 8 violations. RESPA provides an exemption to the antikickback provision for compensation for goods or services actually provided. However, HUD officials told us that it was often difficult to establish what type of and how much work an entity actually did. In the past, the most common type of ABA was an entity, such as a real estate broker, that owned another entity, such as a title agent. More recently, arrangements have begun to involve three or more entities, making it difficult to trace the flow of money among entities and the responsibilities of each. HUD’s enforcement mechanism is also complaint-driven, but, as we previously noted, most consumers are not well-informed enough to bring complaints. Thus, violations could exist that HUD would not know about. HUD has few staff focused on RESPA issues, although their number has increased from 5 full-time employees in 2001 to more than 19 in 2006. According to other regulators, these employees are generally limited to responding to some complaints and pursuing a few large cases. Recently, HUD officials responsible for enforcing RESPA have begun training employees in HUD’s Office of the Inspector General on RESPA issues. The officials said that they have received some forwarded cases as a result of the training. In addition to staff specifically assigned to RESPA issues, resources in other parts of HUD, such as the Office of the General Counsel, also provide support, according to HUD officials. HUD also spends $500,000 per year on an investigative services contract to assist RESPA enforcement efforts. 
HUD tracks cases of alleged RESPA violations along with their disposition, staff assigned, closing date, and settlement, but we were unable to obtain this information before this report went to print. Some state regulators expressed frustration with HUD’s level of responsiveness, saying that the agency did not always follow up with them on forwarded cases, potentially limiting the success of investigative efforts. State regulators told us that they looked to HUD to enforce kickback provisions beyond what they had concluded was allowed by state insurance laws—for example, against mortgage brokers, real estate agents, and others that state insurance regulators do not oversee. Yet HUD officials and state regulators told us that there was no formal plan for coordinating with states, and that cooperation, where it existed, relied on requests and informal relationships. HUD officials cited several possible reasons for not communicating the results of forwarded cases to the states. Among these reasons were state and federal jurisdictional issues, constrained resources, and complaint-driven enforcement that limited HUD’s scope. As we mentioned, our previous work has shown the benefits of coordinated enforcement efforts between state insurance regulators and other federal and state regulators to detect and prevent illegal activity. A September 2000 report recommended that state insurance regulators improve information sharing by developing mechanisms for routinely obtaining data from other regulators and implementing policies and procedures for sharing regulatory concerns with other state insurance departments. Some industry officials also said that the rules under RESPA were not always clear and that HUD had not been responsive in answering their inquiries, potentially resulting in activities that HUD later deemed to be illegal. 
For example, in the case of captive reinsurance, two large underwriters told us that they had never received clear answers from HUD to inquiries about the legality of such arrangements, and that they entered into them as a result of competitive pressures. Eventually, these underwriters ended the arrangements after federal regulators investigated and deemed them improper. As a result, these underwriters and other entities paid over $66 million in settlements with states and HUD. Some industry participants, including HUD’s former general counsel, have suggested that HUD clarify RESPA by instituting a no-action letter process similar to the one that the SEC uses to address industry questions on potential activities and to the process that HUD uses in its Interstate Land Sales Program. Although clarifying regulations can provide benefits, without greater enforcement authority and more coordination with state regulators, HUD’s effectiveness at deterring, uncovering, and stopping potentially illegal title insurance activities may be limited. With knowledge gained from its recent investigations into the title insurance industry, and in line with its mission to increase access to affordable housing, HUD has developed a two-pronged approach to regulatory changes. First, HUD plans to propose reforms to the regulations that govern RESPA. Agency officials said that the reforms will help consumers shop for settlement services, and that, they hope, consumer-driven competition will put downward pressure on prices. However, agency officials have not yet made public the specifics of these reforms. Second, HUD plans to seek substantial authority to levy civil money penalties that it expects will deter future violations of section 8 of RESPA. HUD officials said that having the authority to levy civil money penalties would greatly enhance their RESPA enforcement efforts. 
HUD’s obtaining civil money penalty authority under section 8 of RESPA, however, would require a legislative change. Some state regulators also have proposed changes in oversight of the title insurance industry. Regulatory officials found that weak licensing regulations may have contributed to problems in the industry, and that a lack of data on title agents’ costs hindered their ability to analyze prices paid by consumers and to ensure such prices were not excessive. As a result, regulators have proposed or made the following changes:

In Colorado, state regulators have made changes that are primarily aimed at making the identification and, thus, the elimination of improper ABAs easier—for example, through mandatory disclosure of ownership structures on agent applications and higher capitalization requirements. At least one industry participant has welcomed the changes, which it said will help level the playing field for independent agents.

In California, state regulators have concluded that premium rates are excessive and have proposed premium rate rollbacks derived from a detailed evaluation of costs.

In Texas, state regulators are attempting to collect more detailed information on agent costs, shifting their emphasis to comprehensive data on functional categories that would allow them to more easily identify excess costs and illegal kickbacks.

In addition, the NAIC Title Working Group is looking at modifications to the model laws in an effort to align referral fee provisions with those of RESPA and enhance state regulators’ enforcement authority. Finally, some industry officials have said that state and federal regulators either lacked the ability or the will to address violations, which the officials said were the fault of only some in the industry. 
Other officials said that they had concluded that the industry would be better off policing itself, and some underwriters proposed giving insurers the right to seek private injunctive relief against competitors suspected of engaging in illegal activities, but with no monetary award. One underwriter official said such self-policing by the industry would help government enforcement and maintain honesty among industry participants. However, it was not clear whether such actions could be used punitively or as a way to stifle competition. Some industry stakeholders, however, see the current model of selling and marketing title insurance as irretrievably broken and have put forth two alternative title insurance models designed to benefit and protect consumers through lower prices and government intervention. The first alternative model would require lenders to pay for title insurance, on the theory that as regular purchasers of title insurance, lenders would be better informed and could potentially use their market power to obtain lower prices. However, some fear that this model would make the process less transparent, and that lenders would not pass on any cost savings. The second alternative model would be a system like Iowa’s, with state-run title underwriters. But it is not clear that this system would make the necessary changes to the current model or that it would save consumers money. For example, although title underwriters are barred from selling title insurance in Iowa, nothing prevents consumers from choosing to purchase it from them out of state, and these underwriters end up providing title insurance to about half of the market. Furthermore, while premium rates for Iowa Title Guaranty might be lower than rates in many other states (although not the lowest), the total costs that consumers pay for title searches, examinations, and clearing of any title problems might not differ substantially. 
In Bankrate.com’s survey of closing costs, Iowa’s total costs were about the same as those in Maryland, Nebraska, South Dakota, Washington State, and West Virginia, where private title underwriters are free to do business. Title insurance can provide real benefits to consumers and lenders by protecting them from undiscovered claims against property that they are buying or selling. However, multiple characteristics of current title insurance markets, as well as allegedly illegal activities by a number of those involved in the marketing of title insurance, suggest that normal competitive forces may not be working properly, raising questions about the prices consumers are paying. Compounding this concern is the apparently very limited role that most consumers play in the selection of a title insurer or agent, and the fact that consumers must purchase title insurance to complete a real estate purchase or mortgage transaction. This puts consumers in a potentially vulnerable situation where, to a great extent, they have little or no influence over the price of title insurance but, at the same time, they have little choice but to purchase that insurance. Furthermore, federal and state regulators have identified a number of recent allegedly illegal activities related to the marketing and sale of title insurance, which suggests that some in the title insurance industry are taking advantage of consumers’ vulnerability. To begin to better protect consumers, improvements need to be made in at least three different areas. First, price competition between title insurers and between agents, from which consumers would benefit, needs to be encouraged. Educating consumers about title insurance is critical to achieving this objective. Some state regulators have begun to encourage competition by attempting to educate consumers and improve transparency by publicizing premium rate information on their Web sites. 
While HUD’s existing home-buyer information booklet also provides some useful information on buying a home, the information on title agent ABAs and available title insurance discounts is outdated and fails to provide sufficient detail. As a result, home owners may not be making informed title insurance purchases. Moreover, although some in the industry complain about ambiguity in the regulations concerning referral fees associated with ABAs, the use of these arrangements has continued to grow even though the extent to which any resulting benefits are passed along to consumers is unknown. In addition, these arrangements can create potential conflicts of interest for the real estate and lending professionals involved that may disadvantage consumers. Second, to ensure that consumers are paying reasonable prices for title insurance, more detailed analysis is needed on the relationship between the prices consumers pay and the underlying costs incurred by title insurers and, especially, title agents. Because of the key role played by title agents, such analysis will not be possible until state regulators collect and analyze data on those agents’ costs and operations, including those operating as ABAs. Third, to ensure that consumers are not taken advantage of because of their limited role in the selection of a title insurer or agent, more needs to be done to detect and deter potentially illegal practices in the marketing and sale of title insurance, particularly among title agents. HUD and several state regulators have already begun to take steps in this area, but these efforts often face challenges, such as HUD’s limited enforcement authority, statutory limitations of RESPA, potentially confusing regulations, and a lack of coordination among multiple regulators. 
Greater regulatory scrutiny of the growing number of complex ABAs appears to be particularly important because, although only a few state regulators have looked at such arrangements in detail, those that have done so have discovered potentially illegal activities. Because entities other than insurance companies are integrally involved in these transactions, identifying approaches to increase cooperation among HUD, state insurance, real estate, and other regulators in the oversight of title insurance sales and marketing practices is also critical. Ultimately, because of the involvement of both federal and state regulators, including multiple regulators at the state level, effective regulatory improvements will be a challenge and will require a coordinated effort among all involved. Congress can also play a role in improving consumers’ position in the title insurance market by reevaluating certain aspects of RESPA. For example, HUD currently lacks the authority to assess civil money penalties for violations of section 8 of RESPA, generally forcing HUD to rely on voluntary settlements, which can be seen by some in the title insurance industry as simply a cost of doing business. In addition, RESPA dictates when and under what circumstances HUD’s home-buyer information booklet is to be distributed to prospective buyers and borrowers. Revisiting RESPA to ensure that consumers receive this information as soon as possible when they are considering any type of mortgage transaction, not just when purchasing real estate, could be beneficial. As part of congressional oversight of HUD’s ability to effectively deter violations of RESPA related to the marketing and sale of title insurance, Congress should consider exploring whether modifications are needed to RESPA, including providing HUD with increased enforcement authority for section 8 RESPA violations, such as the ability to levy civil money penalties. 
Congress also should consider exploring the costs and benefits of other changes to enhance consumers’ ability to make informed decisions, such as earlier delivery of HUD’s home-buyer information booklet—perhaps at a real estate agent’s first substantive contact with a prospective home buyer—and a requirement that the booklet be distributed with all types of consumer mortgage transactions, including refinancings. We are recommending that HUD take the following two actions, as appropriate. The Secretary of HUD should take action to (1) protect consumers from illegal title insurance marketing practices and (2) improve consumers’ ability to comparison shop for title insurance. Among the actions the Secretary should consider are the following: expanding the sections of the home-buyer information booklet on title agent ABAs and available title insurance discounts; evaluating the costs and benefits to consumers of title agents’ operating as ABAs; clarifying regulations concerning referral fees and ABAs; and developing a more formalized coordination plan with state insurance, real estate, and mortgage banking regulators on RESPA enforcement efforts. Likewise, we are recommending that state insurance regulators, working through NAIC where appropriate, take the following two actions. State regulators should take action to (1) detect and deter inappropriate practices in the marketing and sale of title insurance, particularly among title agents, and (2) increase consumers’ ability to shop for title insurance based on price. 
Among the actions they should consider are the following: strengthening the regulation of title agents through means such as establishing meaningful requirements for capitalization, licensing, and continuing education; improving the oversight of title agents, including those operating as ABAs, through means such as more detailed audits and the collection of data that would allow in-depth analyses of agents’ costs and revenues; increasing the transparency of title insurance prices to consumers, which could include evaluating the competitive benefits of using state or industry Web sites to publicize complete title insurance price information, including amounts charged by title agents; and identifying approaches to increase cooperation among state insurance, real estate, and other regulators in the oversight of title insurance sales and marketing practices. We requested comments on a draft of this report from HUD and NAIC. We received written comments from the Assistant Secretary for Housing of HUD and the Executive Vice President of NAIC. Their letters are summarized below and reprinted in appendixes III and IV, respectively. The Assistant Secretary for Housing at HUD generally agreed with our findings, conclusions, and recommendations. Specifically, he indicated that the report accurately assessed the issues that adversely affect consumers in the title insurance market. He also acknowledged the importance of protecting consumers and improving their ability to shop for title insurance. In response to our recommendation to expand the sections of the home-buyer information booklet on ABAs and discounts, he noted the importance of home-buyer education and amending the home-buyer’s booklet to include this information. Addressing our recommendation to evaluate the costs and benefits of ABAs, he said that while ABAs are currently legal, HUD is in the process of evaluating various ABA structures to ensure they operate as Congress intended. 
We also recommended that HUD clarify regulations about referral fees and ABAs. The Assistant Secretary stated that HUD will continue its efforts to clarify existing guidelines, as well as develop new guidelines, to address practices that negatively impact consumers. Furthermore, he generally agreed with our recommendation for greater coordination with state regulators, noting that such coordination is necessary and pointing out past instances of HUD coordination with state regulators on RESPA enforcement that have resulted in successful outcomes. Lastly, he emphasized the ongoing challenge of RESPA enforcement without civil money penalty authority, stating that consumers would benefit if such authority were granted to HUD. The Executive Vice President of NAIC agreed that our report identified concerns in the area of consumer protection. She also said that our recommendations are worthy of exploration, and that NAIC would continue to work to improve consumer education, consumer protections, and price transparency in the title insurance market. We also received separate technical comments from staff at HUD and NAIC. We have incorporated their comments into the report, as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to the Chairman, House Committee on Financial Services, and the Chairman and Ranking Member, Senate Committee on Banking, Housing, and Urban Affairs. We will also send copies to the Secretary of Housing and Urban Development, the President of the National Association of Insurance Commissioners, and each of the state insurance commissioners. We will make copies available to others upon request. The report will also be available at no charge on our Web site at http://www.gao.gov. Please contact me at (202) 512-8678 or [email protected] if you or your staff have any questions about this report. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. We previously provided a report and testimony identifying characteristics of current title insurance markets that merited additional study, including the extent to which title insurance premium rates reflect underlying costs and the extent of state oversight of title agents and other real estate professionals. This report focuses on issues related to (1) the characteristics of title insurance markets across states, (2) the factors that raise questions about prices and competition in the industry, and (3) the current regulatory environment and planned regulatory changes. Because title insurance regulation varies considerably from state to state, we chose six states in which to perform a detailed review of laws, regulations, and market practices: California, Colorado, Illinois, Iowa, New York, and Texas. We chose these states to obtain a broad variety of state title insurance activity across the following dimensions:

Proportion of the premiums written nationwide.

Differences in the process of purchasing title insurance and the real estate transaction, including the relative importance of attorneys and alternative systems for title insurance.

Domiciling of the largest national insurers and larger regional insurers.

Varying rate-setting regimes and total premiums.

The existence of ongoing or past Department of Housing and Urban Development (HUD) investigations in the state.

Different combinations of premium rates, annual home sales, and rate-setting regimes.

The activity of known proactive regulators in some states.

Except where noted, our analysis is limited to these states. We used the information obtained in the states to address each of our objectives, in addition to other work detailed in the following text. 
To gain an overall understanding of the characteristics of national and local title insurance markets, we reviewed available studies. These included the study on the California title insurance market (as well as numerous criticisms of that study) and recent studies conducted on behalf of the Fidelity National Title Group, Inc., and the Real Estate Settlement Providers Council (RESPRO). We discussed the studies’ results with the authors and raised questions about their methodology and conclusions to further broaden our knowledge of the varying approaches in analyzing title insurance markets. To better understand the effect consumers have on the price and selection of title insurance, we obtained information from title insurers, title agents, and state title industry associations about typical consumer behavior in the title insurance transaction. To deepen our understanding of the dynamics of the industry and current practices and issues within the title insurance industry that affect consumers, we gathered views from a variety of national organizations whose members are involved in the marketing or sale of title insurance or related activities. These organizations included the American Land Title Association (ALTA), RESPRO, the National Association of Realtors, the Mortgage Bankers Association of America, the American Bar Association, the National Association of Home Builders, and the National Association of Mortgage Brokers. To better understand the relationship between premium rates and underlying costs, we discussed these issues with insurers, agents, and title industry associations. We attempted to obtain cost data from agents and insurers, but they were not able to provide us with data that would allow analysis of agent costs. In some states, we toured title plant facilities and observed the title search and examination process to broaden our analysis of underlying title insurance costs. 
To gain a better understanding of how title insurance premiums are shared between insurance companies and agents, we reviewed annual financial data collected by the National Association of Insurance Commissioners (NAIC) from title insurance companies and, to some extent, data collected by the Texas Department of Insurance, the California Department of Insurance, and ALTA. We analyzed these data to deepen our understanding of title insurer and agent costs and revenues. We also consulted other publicly available financial information on title insurers and agents and spoke with agents. To determine how insurers account for premiums, we also looked at financial data filed with the Securities and Exchange Commission and spoke with officials from three of the largest title insurance underwriters. To assess the current state and federal regulatory environment, we reviewed laws and regulations, and interviewed key regulators. To determine the role that states play in overseeing the various parties involved in the title insurance industry, we reviewed laws and regulations governing title insurance, real estate, and mortgage banking in six selected states. We also spoke with insurance, banking, mortgage, and real estate regulators in each state. To obtain an understanding of the federal oversight role in the title insurance market, we interviewed officials from HUD and reviewed relevant laws and regulations. We also discussed these issues with officials at the Federal National Mortgage Association and the Federal Home Loan and Mortgage Corporation to better understand the relationship between the secondary mortgage market and title insurance. Furthermore, we interviewed staff and state regulators working with NAIC to get their views on the industry and to obtain information on the activities of their Title Insurance Working Group. 
We performed our work in Washington, D.C.; Chicago, Illinois; and selected sample states between February 2006 and March 2007 in accordance with generally accepted government auditing standards. Understanding title agents’ costs and how these costs relate to the title insurance premiums that consumers pay is important because title agents do or coordinate most of the work necessary for issuing title insurance policies, and they retain most of the premium. Understanding these costs would require state insurance regulators to gather and analyze financial data on title agents. The list below illustrates the types of data that might be gathered and analyzed. This would be a multistep process and could involve detailed analysis of some title agents, such as those whose finances look quite different from group averages (such as county or statewide averages). A reasonable explanation for such differences could shed light on agents’ costs, while the absence of one could raise questions about the legitimacy of such costs. We identified the following information on affiliated agents and direct operations that could be requested from insurers:

1. A complete list of underwriters’ affiliated title agents and title service companies that would include the company name and address and the year acquired or established by the underwriter.

2. Financial data on each affiliate that would include balance sheets and statements of changes in owners’ equity.

3. Revenue data that would include title premium revenues and production fees earned from others (e.g., search and examination, closing, and recording).

4. Title premium revenues and policies written that would be broken out between residential and commercial.

5. Personnel cost data that would include salaries, commissions, bonuses, benefits, and full-time equivalent employees, by function.

6. Other personnel data that would include average salaries, bonuses and benefits, and brief descriptions of any incentive pay systems, by job type and function.

7. Five years of other expense data that would include search and examination fees paid to contractors, advertising, entertainment, plant maintenance, rent, office supplies, and legal fees and settlements.

8. Expenses allocated to and from the underwriter.

9. For each affiliated title service company, the names of the 10 largest clients.

10. For each subsidiary of the underwriter, the names of any other underwriters, escrow companies, realtors, builders, developers, mortgage brokers, lenders, or other entities in the title, real estate, or mortgage industry that have ownership interests in the subsidiary, in which the subsidiary has an ownership interest, or that are vendors of the subsidiary and owned by subsidiary management.

Likewise, we identified the following information on independent title agents that could be requested from insurers:

1. The number of independent agents, by state.

2. The number of offices of each independent agent, by state.

3. Each agent’s title premiums written for the underwriter as a percentage of the agent’s total title premiums written.

4. Premiums written by each agent for this underwriter, by state.

5. Revenue data that would include title premium revenues and production fees earned from others (e.g., search and examination, closing, and recording).

6. Expense data that would include employee and owner salaries, commissions, bonuses, and benefits; director fees; search and examination fees paid to contractors; advertising; entertainment; plant maintenance; rent; office supplies; legal fees and settlements; and claim losses. 
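To illustrate how a regulator might organize and analyze the kinds of agent-level data listed above, the sketch below models a single agent's annual figures as a simple record. This is purely hypothetical: the field names, the example numbers, and the cost-ratio screen are illustrative assumptions, not an actual NAIC or state filing format.

```python
from dataclasses import dataclass

# Hypothetical annual record for one title agent; field names and the
# cost-ratio screen are illustrative assumptions, not a real filing format.
@dataclass
class TitleAgentFiling:
    agent_name: str
    state: str
    year: int
    premiums_written: float   # title premium revenues retained by the agent
    production_fees: float    # search/examination, closing, and recording fees
    personnel_costs: float    # salaries, commissions, bonuses, and benefits
    other_expenses: float     # advertising, rent, legal fees, and so on

    def total_revenue(self) -> float:
        return self.premiums_written + self.production_fees

    def total_costs(self) -> float:
        return self.personnel_costs + self.other_expenses

    def cost_ratio(self) -> float:
        # Share of revenue consumed by reported costs; an agent whose ratio
        # differs sharply from county or statewide averages could be flagged
        # for the kind of detailed follow-up analysis described above.
        return self.total_costs() / self.total_revenue()


agent = TitleAgentFiling("Example Agent", "TX", 2006,
                         premiums_written=900_000.0, production_fees=100_000.0,
                         personnel_costs=400_000.0, other_expenses=200_000.0)
print(f"{agent.agent_name}: cost ratio {agent.cost_ratio():.0%}")
```

Comparing each agent's cost ratio to a group average, and asking for explanations of outliers, mirrors the multistep screening process the appendix describes.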
In addition to the contact person named above, Lawrence Cluff, Assistant Director; Patrick Ward; Tania Calhoun; Emily Chalmers; Jay Cherlow; Nina Horowitz; Thomas McCool; Marc Molino; Donald Porteous; Carl Ramirez; and Melvin Thomas made key contributions to this report.
In a previous report and testimony, GAO identified issues related to title insurance markets, including questions about the extent to which premium rates reflect underlying costs, oversight of title agent practices, and the implications of recent state and federal investigations. This report addresses those issues by examining (1) the characteristics of title insurance markets across states, (2) factors influencing competition and prices within those markets, and (3) the current regulatory environment and planned regulatory changes. To conduct this review, GAO analyzed available industry data and studies, and interviewed industry and regulatory officials in a sample of six states selected on the basis of differences in size, industry practices, regulatory environments, and number of investigations. The U.S. title insurance market is highly concentrated at the insurer level, but market characteristics varied across states. In 2005, for example, five insurers accounted for 92 percent of the national market, with most states dominated by two or three large insurers. Variations across states included the way title agents conducted their searches as well as the number of affiliated business arrangements (ABA) in which real estate agents, brokers, and others have a stake in a title agency. Finally, premiums varied across states due to cost and market variations that can also make understanding and overseeing title insurance markets a challenge on the national level. Certain factors raise questions about the extent of competition and the reasonableness of prices that consumers pay for title insurance. Consumers find it difficult to comparison shop for title insurance because it is an unfamiliar and small part of a larger transaction that most consumers do not want to disrupt or delay for comparatively small potential savings. 
In addition, because consumers generally do not pick their title agent or insurer, title agents do not market to them but to the real estate and mortgage professionals who generally make the decision. This can create conflicts of interest if those making the referrals have a financial interest in the agent. These and other factors put consumers in a potentially vulnerable situation where, to a great extent, they have little or no influence over the price of title insurance yet have little choice but to purchase it. Furthermore, recent investigations by the Department of Housing and Urban Development (HUD) and state insurance regulators have identified instances of alleged illegal activities within the title industry that appeared to take advantage of consumers' vulnerability by compensating realtors, builders, and others for consumer referrals. Combined, these factors raise questions about whether consumers are overpaying for title insurance. Given consumers' weak position in the title insurance market, regulatory efforts to ensure reasonable prices and deter illegal marketing activities are critical. However, state regulators have not collected the type of data, primarily on title agents' costs and operations, needed to analyze premium prices and underlying costs. In addition, the efforts of HUD and state insurance regulators to identify inappropriate marketing and sales activities under the Real Estate Settlement Procedures Act (RESPA) have faced obstacles, including constrained resources, HUD's lack of statutory civil money penalty authority, some state regulators' minimal oversight of title agents, and the increasing number of complicated ABAs. Finally, given the variety of professionals involved in a real estate transaction, a lack of coordination among different regulators within states, and between HUD and the states, could potentially hinder enforcement efforts against compensation for consumer referrals. 
Because of the involvement of both federal and state regulators, including multiple regulators at the state level, effective regulatory improvements will be a challenge and will require a coordinated effort among all involved.
The DOD centrally billed travel card program is part of a governmentwide travel card program started in 1983 with the express purpose of increasing convenience to the traveler and lowering (1) the government’s cost of travel by reducing the need for cash advances to the traveler and (2) the associated administrative costs. The travel card program includes both the individually billed accounts—accounts held and paid by the individual cardholders—and the centrally billed accounts. In general, individual cardholders use the individually billed accounts to charge non-transportation-related expenses, while most DOD services and units use the centrally billed accounts to purchase transportation services such as airline and train tickets and to pay expenses incurred for group travel. According to Bank of America data, the net value of airline tickets charged during fiscal years 2001 and 2002 to DOD’s centrally billed accounts totaled over $2.4 billion. As shown in figure 1, five U.S. airlines—American, Delta, Northwest, United, and US Airways—together accounted for more than 82 percent of the dollar value of airline tickets purchased by DOD during fiscal years 2001 and 2002. More than 85 other airlines—both U.S. and foreign carriers—accounted for the remaining 18 percent of the value of total airline tickets DOD purchased in fiscal years 2001 and 2002. The $2.4 billion is made up of more than $2.6 billion in gross airline purchases net of credits totaling $233 million, or about 9 percent of gross airline purchases. Credits related to tickets that were unused, tickets issued erroneously, and charges identified as fraudulent. The airline tickets DOD purchased through the centrally billed accounts are generally acquired under the terms of the air transportation services contract that the General Services Administration (GSA) negotiates with U.S. airlines. 
Airline tickets purchased under this contract have no advance purchase requirements, have no minimum or maximum stay requirements, are fully refundable, and do not incur penalties for changes or cancellation. The revenue recognition policy for the airline industry is to recognize ticket sales as a liability until the transportation is provided, that is, when the ticket is used. The terms of the air transportation services contract also provide that contract carriers are to fully refund all unused portions of any government contract fare ticket to the activity paying for the ticket, the travel management center issuing the ticket, or the individual traveler, as appropriate. Because DOD travel regulations require that federal and military travelers on official business use a contract carrier for official airline travel unless a specific exception applies, airline tickets purchased by DOD are typically fully refundable. Federal agencies are authorized to recover payments made to airlines for tickets that agencies acquired but did not use. While generally there is a 6-year statute of limitations on the government’s ability to file an action for money damages based on a contractual right, the government also has up to 10 years to offset future payments for amounts it is owed. Several airlines sued GSA after it offset payments due to the airlines for the value of airline tickets that GSA claimed were unused. The court upheld GSA’s authority to administratively offset the payments despite the airlines’ assertion that provisions printed on the tickets themselves specified a shorter time limit in which the government could request a refund. The court further held that the government’s right to refunds could not be limited by terms unilaterally imposed by the airlines. 
As shown in table 1, data provided by five airlines and verified against Bank of America’s data showed that about 58,000 tickets with a value of $21.1 million were purchased with DOD’s centrally billed accounts but were unused and not refunded. The $21.1 million included more than 48,000 tickets valued at $19.2 million that were fully unused, and $1.9 million in the residual value of about 10,000 American Airlines partially used tickets, that is, tickets on which at least one leg had not been used. Based on our assessment of the limited data provided by the airlines, it is possible that since fiscal year 1997, DOD purchased more than $100 million in airline tickets that were not used and not processed for refunds and for which DOD may be entitled to refunds or offsets against other payments to those airlines. Table 2 provides further details on the $19.2 million identified as being fully unused. Fully unused tickets made up most of the known unused ticket value of $21.1 million, which also included the residual value of partially unused tickets. As shown in table 2, DOD spent $19.2 million on more than 48,000 airline tickets that were fully unused and not refunded. Since DOD was not aware of these unused tickets, and consequently did not know their number or dollar value, we requested these data from DOD’s five most frequently used airlines—American, Delta, Northwest, United, and US Airways. Although we asked each airline for consistent data on tickets purchased in fiscal years 2001 and 2002 that were unused and not refunded, we did not receive uniform, complete, or consistent responses. For example, although American Airlines and US Airways provided us with data on fully unused tickets for fiscal year 2002—tickets that were issued to DOD travelers but never traded in at a counter—they provided only partial data for fiscal year 2001. 
Delta Airlines provided us information on the status of all tickets DOD purchased with its centrally billed accounts during fiscal years 2001 and 2002, and guidance on how to identify those tickets that were fully and partially unused. For more detailed information on the breakdown of fully unused tickets by year, and further discussion of the types of data we received from the airlines, see appendix II. In addition to the $19.2 million in fully unused tickets, DOD failed to claim refunds on millions of dollars in airline tickets purchased with centrally billed accounts that were partially unused. As in the case of fully unused tickets, DOD was not aware of, and therefore did not maintain data on, partially unused tickets. Consequently, we had to request these data from the airlines. Partially unused tickets are those tickets that, although used, still have residual value as only portions of those tickets were traded in for travel. A DOD ticket may be partially unused for several reasons. For instance, a ticket can be partially unused when a DOD traveler used the ticket for the outbound flight but decided to drive home with another DOD traveler. A ticket can also be partially unused if, for example, a DOD traveler who was originally scheduled to travel from Seattle to Miami and then to San Juan, Puerto Rico, was ordered to return home from Miami because of weather problems. In this case, the portion of the airline ticket from Miami to Puerto Rico was unused. While the portions that have been used represent services rendered by the airlines, the portions that are partially used have a residual value that can be claimed as a refund. Table 3 summarizes the number and purchase price of partially unused tickets, as well as the residual value of partially unused tickets where available. 
Over the entire period for which four airlines provided data on partially unused tickets—American, Delta, Northwest, and United—we identified more than 91,000 tickets, costing about $68 million, that were only partially used. Although Delta, Northwest, and United provided data that identified over 81,000 partially unused tickets with a purchase price of more than $62 million, these three airlines informed us that their ticket data are not maintained in a format that would allow them to easily quantify the residual value of these partially unused tickets. To do so would require a complex process involving the repricing of each of the segments that made up the total purchase price—a process the airlines told us would be labor-intensive and costly. American Airlines was able to provide the residual value of its partially unused tickets—$1.9 million. Further, US Airways did not provide us with any data on these types of tickets. As indicated by the airlines, substantial work remained to be done to derive an estimate of the residual value of partially unused tickets. For more detailed information on the breakdown of partially unused tickets by year, see appendix II. In addition to the millions of dollars in known amounts of unused tickets or segments thereof that were not refunded, we used the limited airline data to estimate that the possible magnitude of outstanding unused tickets purchased from 1997 through 2003 was at least $100 million. As stated previously, federal agencies are authorized to recover payments made to airlines for tickets that agencies ordered but did not use. Generally a 6-year statute of limitations applies to the government’s ability to file an action for money damages based on a contractual right, but the government also has up to 10 years to offset future payments for amounts it is owed. However, airline representatives told us that they were concerned about the feasibility and costs of retrieving DOD’s unused ticket data from their archives. 
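The segment repricing the airlines described as labor-intensive can be illustrated with a simplified sketch: the refundable residual is the original purchase price minus the repriced value of the segments actually flown. The function, fares, and itinerary below are hypothetical illustrations, not actual airline pricing logic.

```python
# Hypothetical sketch of computing the residual (refundable) value of a
# partially used ticket: purchase price minus the repriced value of the
# segments that were actually flown.
def residual_value(purchase_price, used_segment_fares):
    """Residual value of a partially used ticket.

    purchase_price: total amount paid for the ticket.
    used_segment_fares: repriced fare for each segment that was flown.
    """
    used_value = sum(used_segment_fares)
    # Repricing flown segments individually can exceed the through fare
    # originally paid, so the residual is floored at zero.
    return max(purchase_price - used_value, 0.0)

# Illustrative example: a $900 Seattle-Miami-San Juan ticket where only
# the Seattle-Miami segment (repriced at $650) was flown.
print(residual_value(900.0, [650.0]))  # 250.0
```

The need to reprice every flown segment against the applicable fare is what makes this costly for the airlines at scale.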
As you requested, we used the data provided by the airlines to determine the possible magnitude of the tickets DOD purchased but did not use or claim as a refund since 1997. Using the data that the airlines provided for fiscal year 2002, we calculated the total value of fully unused tickets as a percentage of total tickets purchased using a centrally billed account. We also used the more limited data the airlines provided on partially unused tickets for fiscal year 2002 to gauge the residual value of partially used tickets as a percentage of the total purchase value of these tickets. We applied these combined results, which on a per airline basis ranged from 1.44 percent to 3.26 percent, to the total purchase value of tickets purchased with centrally billed accounts since 1997 (about $8 billion) and found—using the lowest estimate of 1.44 percent—that it is possible DOD purchased at least $100 million in airline tickets that were unused and not refunded during this period. (See app. II for further information on our calculations.) As discussed previously, DOD was not aware of, and consequently did not maintain data on, unused tickets and would therefore have to rely on the airlines to provide the relevant data needed to claim refunds. The inconsistent and incomplete responses we received from the airlines point to the difficulties in determining the total value of unused tickets. While the airlines readily provided us with at least 1 year of the data we requested, some airline representatives informed us that data on tickets purchased prior to the last 18 months have been moved to electronic archives, and retrieving data from these archives is costly and time-consuming. The process involves restoring from archives millions of records of tickets the airlines have issued before they can identify tickets purchased with the DOD centrally billed accounts that are fully and partially unused. 
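The extrapolation described above can be reproduced directly. The per-airline rates are the 1.44 to 3.26 percent range cited in this report, and the $8 billion figure is the approximate total of centrally billed ticket purchases since 1997; the variable names are ours.

```python
# Reproducing the lower-bound estimate of unused, unrefunded tickets.
# Per-airline rates of unused ticket value as a share of total centrally
# billed purchases ranged from 1.44 percent to 3.26 percent.
low_rate, high_rate = 0.0144, 0.0326
total_purchases_since_1997 = 8e9  # about $8 billion in centrally billed tickets

low_estimate = low_rate * total_purchases_since_1997
high_estimate = high_rate * total_purchases_since_1997

print(f"Lower bound: ${low_estimate / 1e6:.0f} million")
print(f"Upper bound: ${high_estimate / 1e6:.0f} million")
```

Even the most conservative rate applied to the purchase base yields more than $100 million, which is why the report states the estimate as "at least $100 million."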
Further, additional work would be necessary to determine the value of the unused portions of partially unused airline tickets. Finally, the airlines stated that some ticket data were not maintained electronically and that generating information related to these tickets would involve manually sifting through the airlines’ ticket coupons. However, unused tickets from these five airlines and the more than 85 other airlines that DOD uses represent a potentially substantial government claim. Millions of dollars of unused tickets have not been refunded because DOD did not have a systematic process to identify and process unused tickets. Effective internal controls are the first line of defense in safeguarding assets and preventing and detecting fraud, and are an integral part of an entity’s accountability for government resources. However, we found that DOD’s flawed process relied extensively on DOD personnel to report unused tickets to the travel offices. DOD had not systematically implemented procedures to identify instances in which travelers failed to notify the commercial travel offices (CTO) and their commands of unused tickets, or to ensure that refunds were processed once the CTOs received notifications. Although some units had instituted a process by fiscal year 2002 to more systematically identify instances of unused tickets, the process was not implemented DOD-wide and could only be used to identify unused electronic—not paper—tickets. Further, in locations where this process had been implemented, DOD did not have systematic procedures to verify that the CTOs identified all unused electronic tickets and processed these for refunds. Because our preliminary assessment determined that current operations used to identify and process unused tickets were flawed, we did not statistically test current processes and controls. During fiscal years 2001 and 2002, DOD relied on travelers to report unused tickets to the CTOs. 
DOD travel regulations state that the traveler must notify the CTO when a ticket is not used, and DOD’s Financial Management Regulation further stipulates that it is the traveler’s responsibility to return unused transportation tickets to the CTO for a refund. At some CTOs, each trip itinerary generated at the time an airline ticket was issued also contained a reminder to the traveler to return all unused tickets to the CTO. Unused ticket notification initiates a process whereby requests for refunds can be submitted to the airlines. As mentioned above, contract tickets purchased by the government are fully refundable. Timely processing of refunds ensures that scarce resources are returned to the government. However, DOD did not implement control procedures to systematically determine the extent to which DOD travelers adhered to the unused ticket requirements, and to identify instances in which they did not. According to bank data, DOD received credits amounting to about 9 percent of the airline tickets purchased through the CTOs during fiscal years 2001 and 2002. Although these data indicate that some DOD travelers followed the unused ticket requirements, DOD did not maintain data in such a manner as to allow the department to identify the extent of noncompliance. Figure 2 illustrates where control breakdowns can occur if travelers do not adhere to DOD requirements and report unused tickets to the CTOs. As shown in figure 2, once a ticket is charged to the centrally billed account and given to the traveler, DOD has no systematic controls to determine that the ticket was used—or remains unused—unless the traveler notifies the CTO that the ticket was not used. If the traveler does not notify the CTO of an unused ticket, the ticket would not be refunded unless the CTO monitored the status of airline tickets issued electronically and applied for refunds on all unused tickets. 
Figure 2 also shows that the failure to notify the CTO of an unused paper ticket would result in the ticket being unused and not refunded. In addition, if the CTO identifies or is notified of an unused ticket but fails to process a refund, the ticket will also be unused and not refunded. DOD services and agencies have sometimes identified unused tickets as an indirect result of the procedures they have put in place to monitor the status of their obligations. For example, budget officials at the Department of the Navy’s location in Keyport, Washington, informed us that their financial system is programmed to identify all travel orders for which corresponding travel vouchers had not been filed 5 days after travel was to have been completed. Upon identification of the missing travel vouchers, the system automatically produces a list of travelers who have not filed their travel claims. Once identified, the finance office sends an e-mail reminder to the traveler to file his or her travel voucher. According to these officials, the traveler responds by filing a voucher or notifying the finance office that the travel was canceled and, therefore, the ticket was not used. Rather than relying on the traveler to notify the CTO of the unused ticket, the finance office gives the notification so that a refund can be processed. Similarly, finance officials at Hickam Air Base regularly monitor the accounting system for open travel orders (unliquidated obligations) and notify the appropriate resource officers to work on these unliquidated obligations. These resource officers would in turn remind the travelers to file the requisite travel vouchers or notify the CTOs of unused tickets. However, the monitoring of open travel orders is only partially effective in identifying unused tickets, as it still relies, to a large extent, on the traveler providing notification to the CTO of canceled travel. 
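The Keyport approach, which flags travel orders with no corresponding voucher 5 days after travel was to have been completed, amounts to a simple periodic check like the sketch below. The record layout and field names are our illustration, not those of the Navy's financial system.

```python
from datetime import date, timedelta

# Hypothetical travel-order records: (order_id, travel_end_date, voucher_filed)
orders = [
    ("TO-1001", date(2002, 3, 1), True),
    ("TO-1002", date(2002, 3, 1), False),   # no voucher and past the window
    ("TO-1003", date(2002, 3, 28), False),  # still within the 5-day window
]

def missing_vouchers(orders, today, grace_days=5):
    """Return orders whose travel ended more than grace_days ago with no
    travel voucher on file -- candidates for follow-up, including possible
    canceled travel and unused centrally billed tickets."""
    cutoff = today - timedelta(days=grace_days)
    return [oid for oid, end, filed in orders
            if not filed and end < cutoff]

print(missing_vouchers(orders, today=date(2002, 3, 30)))  # ['TO-1002']
```

As the report notes, this kind of monitoring catches unused tickets only indirectly: it still depends on the traveler, once reminded, telling the finance office that travel was canceled.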
Further, in a report issued in January 2003, we noted that hundreds of millions of dollars in unliquidated obligations were not accounted for because Navy fund managers failed to follow DOD regulations that require the periodic review of unliquidated obligations exceeding $50,000. These Navy managers cited time constraints as one of the obstacles to reviewing these unliquidated obligations. In contrast to the unliquidated obligations exceeding $50,000 referred to in that report, travel obligations are often valued in the thousands and sometimes only in the hundreds of dollars. Consequently, it is likely that many of these smaller obligations would be of an even lower priority and therefore not reviewed. Similarly, we found that the monitoring of open travel orders at the Air Force is not effective in identifying unused tickets. The Air Force records all travel expenses—regardless of whether they were incurred with the centrally billed accounts or by the travelers directly—as one lump sum obligation. Once a voucher is filed, or a centrally billed charge is paid, whichever is sooner, the obligation would be liquidated, and the travel order removed from the list of unliquidated travel orders. Thus, the liquidation of an obligation is dependent on either the filing of a voucher or the payment of a centrally billed account, not whether the ticket is used or unused. Consequently, like the Navy, it would be difficult for the Air Force to consistently identify unused tickets through the monitoring of open travel orders. We also found that DOD did not have control procedures to provide assurance that tickets identified as unused were processed for refunds. At 9 of the 10 locations we visited, neither the CTO nor the government travel office (GTO) maintained centralized records of unused tickets submitted by the travelers. Without a centralized record of unused tickets, an unused ticket that had been lost or never processed for refund would not be detected. 
Therefore, neither the CTO nor the GTO could certify that all tickets turned in by the travelers were processed for refunds. We did note, however, that 6 of the 10 locations had implemented procedures to verify that unused tickets processed for refunds resulted in credits to the government. Our internal control standards identify reconciliation as a control activity that helps enforce management’s directives and ensures that actions are taken to address risks. Reconciliation should be performed routinely so that problems are detected and corrected promptly and differences are not allowed to age, thereby becoming increasingly difficult to research. However, we found that DOD did not implement reconciliation procedures that link airline tickets purchased with the centrally billed account to travel claims submitted by travelers. Specifically, when DOD purchased an airline ticket with a centrally billed account, DOD did not implement procedures to identify whether the traveler has or has not submitted a travel voucher. A lack of a travel voucher could indicate that the ticket was unused. Without reconciling these two types of records, DOD could not obtain reasonable assurance that centrally billed account charges represent airline tickets that were eventually used. DOD regulations require that travelers file travel claims, in the form of a travel voucher, within 5 days of the end of travel. The filing of a travel claim provides positive confirmation that the travel took place. A travel claim also represents the traveler’s assertion that the transportation mode indicated on the travel order—be it air or land—was used in the performance of official duty. Only in exceptional circumstances would the filing of a travel voucher fail to provide the confirmation that an airline ticket provided to the traveler through the centrally billed account was not used. 
For example, mechanical problems at one airline could result in the traveler buying another ticket on a different airline with an individually billed travel card or a personal credit card. In these instances, the ticket acquired through the centrally billed account would not be used, and this could have been detected through a reconciliation process. In addition, the positive confirmation provided by the filing of a travel voucher indicates that a DOD-wide reconciliation between travel vouchers and centrally billed account charges would lead to the identification of instances in which travel claims have not been filed for travel involving the issuance of a centrally billed airline ticket. Such identification would allow DOD to follow up with the travelers to determine whether travel was taken, and therefore whether the ticket was used. A failure to reconcile tickets that were centrally acquired to travel claims filed by travelers resulted in DOD not being able to determine whether the airline tickets it purchased were used. In the period under audit, some DOD units made improvements to their unused ticket processes to compensate for the lack of internal controls over the centrally billed account program. By fiscal year 2002, some DOD units had established procedures intended to systematically identify unused electronic tickets, thus allowing DOD to obtain refunds on tickets it otherwise might have missed. At these locations, the computer systems used to reserve and purchase flights allow the CTOs to search the databases of each airline that participates in electronic ticketing. Each CTO can customize the searches to generate a list of unused electronic tickets directly from these computer reservation systems, or manually review data on the status of each electronic ticket to identify tickets that are unused. These procedures enable the CTOs to systematically identify unused e-tickets without having to receive notification from the travelers. 
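In essence, such a search filters the reservation system's ticket records by status and keeps those still open. A minimal sketch follows; the status values and record layout are hypothetical stand-ins for what a computer reservation system would actually return.

```python
# Illustrative CTO query for unused electronic tickets. The "OPEN" status
# here is a hypothetical stand-in for "issued but not flown or refunded."
tickets = [
    {"ticket_no": "0011234567890", "status": "USED"},
    {"ticket_no": "0011234567891", "status": "OPEN"},
    {"ticket_no": "0011234567892", "status": "REFUNDED"},
    {"ticket_no": "0011234567893", "status": "OPEN"},
]

def unused_etickets(tickets):
    """E-tickets still open -- candidates for a refund request,
    requiring no notification from the traveler."""
    return [t["ticket_no"] for t in tickets if t["status"] == "OPEN"]

print(unused_etickets(tickets))  # ['0011234567891', '0011234567893']
```

The point of the control is that the query runs against the airlines' own ticket-status data, so it works even when a traveler never reports the canceled trip.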
CTOs can process requests for refunds directly in the computer reservation systems after unused tickets are identified. This on-line capacity allows DOD to obtain timely refunds and increases the organization’s ability to safeguard resources. An increasingly large portion of all tickets is issued electronically—data from one large U.S. airline indicated that 69 percent of tickets issued during fiscal year 2002 and as much as 84 percent of tickets issued during fiscal year 2003 were electronic tickets. Consequently, these procedures increase DOD’s ability to capture refunds on a large portion of unused tickets. This ability to independently identify unused electronic tickets, if implemented across the services consistently and properly supervised, would allow DOD to partially compensate for the lack of controls in ensuring that travelers notify the appropriate office of canceled travel. However, our work found that not all locations implemented this capability during fiscal year 2002. This is partly because DOD did not incorporate a requirement for this capability into all the contracts it issued to the CTOs during fiscal year 2002. For example, the CTO at Hickam Air Base did not monitor unused tickets electronically because its contract did not require it to do so. In contrast, some locations that did not have this requirement in their contracts nevertheless implemented this capability through an agreement between the CTO and the GTO. High-level DOD officials to whom we reported the status of unused tickets informed us that they will require this capability to be a part of the new contracts issued under the Defense Travel System (DTS). The DTS is intended to be the DOD-wide travel system and replace the more than 30 travel systems currently operating within the department. 
However, according to a 2002 DOD Office of Inspector General report, “the DTS was being substantially developed without the requisite requirements, cost, performance, and schedule documents and analyses needed as the foundation for assessing the effectiveness of the system and its return on investment.” The report further noted that DOD estimated that deployment would not be completed until fiscal year 2006. In general, we found that the GTOs at locations where the CTOs had put this process in place have not implemented control procedures to verify that the CTOs consistently identify and file for refunds on unused e-tickets. For instance, the CTOs at several locations we visited did not provide their respective GTOs with inventories of tickets they have identified as unused. Therefore, the transportation officers were unable to determine that all unused tickets were turned in for refunds. As noted above, we found that several transportation officers had procedures in place to confirm that the organization receives a credit for tickets that have been submitted for refunds. However, this process was not implemented at all locations we visited. Consequently, not all DOD units could provide assurance that all requests for refunds resulted in a credit to the government. Even if the CTOs can identify all e-tickets, they cannot independently identify nonelectronic tickets. Nonelectronic tickets are typically used for international travel because many non-U.S. carriers do not issue electronic tickets. International travel can sometimes make up as much as 25 percent of the total dollar value of a unit’s travel. Given the inability to identify unused paper tickets, and weaknesses in the control procedures over the CTOs, the reconciliation procedures we discussed previously will still need to be established to match purchases made with centrally billed accounts to travel vouchers that have been filed. 
Unless DOD establishes a procedure to verify whether all airline tickets are used, it will not have reasonable assurance that airline tickets purchased through the centrally billed account are used or refunded. In a series of testimonies and reports issued in fiscal years 2002 and 2003, we addressed problems that the Army, Navy, and Air Force had in managing the individually billed account travel cards. These testimonies and reports showed high delinquency rates and significant potential fraud and abuse related to DOD’s individually billed travel card program. However, recent improvements to the individually billed program point to the possibility of using this program as the principal means of acquiring tickets, thereby reducing the government’s risk of losses from unused tickets arising from the use of centrally billed accounts. In response to our testimonies and reports on the individually billed accounts, the Congress took actions in the fiscal year 2003 appropriations and authorization acts requiring (1) the establishment of guidelines and procedures for disciplinary actions to be taken against cardholders for improper, fraudulent, or abusive use of government travel cards; (2) the denial of government travel cards to individuals who are not creditworthy; (3) split disbursements for travel cardholders; and (4) offset of delinquent travel card debt against the pay or retirement benefits of DOD civilian and military employees and retirees. In response, DOD has implemented many of the legislatively mandated improvements—most notably the implementation of split disbursements and salary offsets and the reduction in the numbers of individuals with access to the travel cards. According to Bank of America, the delinquency rates we noted in our prior reports at the Army, Navy, and Air Force have decreased. 
For example, the Navy’s average monthly delinquency rate decreased from about 11 percent during fiscal year 2002 to less than 7 percent in fiscal year 2003. Similarly, during that same period the Army’s average monthly delinquency rate decreased from about 14 percent to about 9 percent. The benefits of using a well-controlled individually billed account program as the principal mechanism for acquiring airline tickets are twofold. First, in the individually billed account program, the cardholder is directly responsible for all charges incurred on his or her travel card. In contrast, improper charges to centrally billed accounts that are not detected and disputed, as well as authorized charges for airline tickets that are not used, result in direct financial losses to the government in the amount of the face value of the tickets. Second, with the use of individually billed accounts to purchase tickets, DOD travelers have greater incentive to turn in unused tickets because they are responsible for paying the ticket charges. The use of individually billed cards to acquire airline tickets would therefore help to limit the government’s financial exposure. However, the use of the individually billed accounts to acquire airline tickets would only minimize, not eliminate, the necessity of implementing internal controls over the centrally billed account program. DOD would still need to maintain a centrally billed account structure to purchase airline tickets for travelers who have been denied individually billed accounts, infrequent travelers whose individually billed credit cards have been canceled, and new employees who have not yet acquired individually billed accounts. The millions of dollars wasted on unused airline tickets provide another example of why DOD financial management is one of our “high-risk” areas, with DOD highly vulnerable to fraud, waste, and abuse. 
In implementing the centrally billed component of the travel card program, DOD relied on a weak process that depended on travelers reporting all unused tickets to CTOs. Although many DOD travelers informed the CTOs of unused tickets as required, the lack of specific internal control procedures to identify instances in which the travelers did not do so resulted in DOD paying for thousands of airline tickets that were not used and not processed for refunds. During fiscal year 2003, some DOD units began implementing procedures to more systematically identify unused airline tickets, and in fiscal year 2004, DOD started working to recover from the airlines the value of unused tickets we identified. DOD must build on these improvements and establish controls over unused tickets to improve its ability to control costs and ensure basic accountability over scarce resources. In addition, DOD should take immediate actions to recover the outstanding value of tickets that were fully or partially unused and not refunded. To improve the management of DOD’s travel resources, we are making the following 20 recommendations to DOD officials. To decrease the risks associated with the use of the centrally billed accounts, we recommend that the Secretary of Defense evaluate the feasibility of the following two actions: establishing the individually billed account travel card as the primary payment mechanism for transportation expenses, and limiting the use of the centrally billed account travel card to procuring transportation expenses for those employees without access to individually billed accounts, such as new employees who have not yet obtained individually billed account travel cards and employees who do not qualify for the use of individually billed account travel cards, and for other situations in which the use of an individually billed travel card is not practical. 
To enable DOD to systematically identify future unused airline tickets purchased through the centrally billed accounts, and improve internal controls over the processing of unused airline tickets for refunds, we recommend that the Secretaries of the Army, Air Force, and Navy and the heads of DOD agencies direct the appropriate personnel within services and agencies to take the following nine actions: evaluate the feasibility of implementing procedures to reconcile airline tickets acquired using the centrally billed accounts to travel vouchers in the current travel system; modify existing CTO contracts to include a requirement that the CTOs establish a capability to systematically identify unused e-tickets in their computer reservation systems, identify all unused tickets based on specified criteria before the unused ticket data are removed from the computer reservation systems, maintain daily schedules that identify unused tickets and how long they have been unused, routinely provide the GTOs with unused ticket reports, routinely process refunds for tickets identified as unused, and submit to the GTOs all requests for refunds that have been denied by the airlines; require the GTOs to routinely compare unused tickets processed by the CTOs to the credits on the Bank of America invoice; and require either the GTOs or the units responsible for monitoring the CTOs’ activities to determine whether the CTOs are consistently implementing the procedures to identify unused tickets and process these tickets for refunds. 
To enable DOD to more effectively monitor unused tickets under the DTS, we recommend that the Secretary of Defense direct the appropriate personnel to take the following four actions: use the DTS to remind travelers to claim refunds on all unused tickets; include, in future contracts issued for the DTS, a requirement that the CTOs establish the capability to systematically identify unused tickets and process these tickets for refunds; and establish, in the DTS, a capability to routinely match travel vouchers to tickets issued through the centrally billed accounts. To recover outstanding claims on unused tickets, we recommend that the Under Secretary of Defense (Comptroller) initiate the following five actions: immediately submit claims to the airlines to recover the $21 million in fully and partially unused tickets identified by the airlines and included in this report; calculate, with the assistance of the airlines, the residual value of the partially unused tickets identified by the airlines and included in this report; and work with the five airlines identified in this report and other airlines from which DOD purchased tickets with centrally billed accounts to identify the feasibility of determining the recoverability of other fully and partially unused tickets purchased with DOD centrally billed accounts, determine the value of the unused portions of those tickets, and initiate actions to obtain refunds. In written comments on a draft of this report, which are reprinted in appendix III, DOD concurred with all 20 of our recommendations and stated that it had taken actions or will take actions to address these recommendations. For example, with respect to actions already taken, DOD stated that it has implemented, in the DTS, a capability to routinely match travel vouchers to tickets issued through the centrally billed accounts. This capability is currently being tested at certain pilot sites. 
With respect to actions under way, DOD had submitted claims to the airlines on February 26, 2004, to recover the $21 million in fully and partially unused paper and electronic tickets identified by the airlines, and stated that it will work with the airlines from which it purchased tickets through the centrally billed accounts to identify the feasibility of determining the recoverability of other fully and partially unused tickets. As agreed with your offices, unless you announce the contents of this report earlier, we will not distribute it until 30 days from its date. At that time, we will send copies to interested congressional committees; the Secretary of Defense; the Under Secretary of Defense, Comptroller; the Secretary of the Army; the Secretary of the Navy; the Secretary of the Air Force; and the Director of the Defense Finance and Accounting Service. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Gregory D. Kutz at (202) 512-9505 or [email protected], John J. Ryan at (202) 512-9587 or [email protected], or John V. Kelly at (202) 512-6926 or [email protected] if you or your staffs have any questions concerning this report. Major contributors to this report are acknowledged in appendix IV. Pursuant to a joint request by the Chairman and Ranking Minority Member of the Permanent Subcommittee on Investigations, Senate Committee on Governmental Affairs; the Chairman of the Senate Committee on Finance; and Representative Schakowsky, we audited the controls over unused airline tickets purchased through the Department of Defense’s (DOD) centrally billed accounts. Our assessment covered the following: the extent of tickets charged to the centrally billed accounts that are unused and not refunded and whether DOD’s internal controls provided reasonable assurance that all unused tickets were identified and submitted for refunds. 
To assess the magnitude of tickets charged to the centrally billed accounts that are unused and not refunded, we obtained from Bank of America databases for fiscal years 2001 and 2002 travel transactions charged to DOD’s centrally billed travel card accounts. The databases contained transaction-specific information, including ticket fare, ticket number, name of passenger, date and destination of travel, and number of segments in each ticket. We reconciled these data files to control totals provided by Bank of America and to data reported by the General Services Administration as DOD’s centrally billed account activities. We also requested that the five airlines that DOD used most frequently provide us with data relating to tickets DOD purchased during fiscal years 2001 and 2002 that were unused and not refunded. These five airlines—American, Delta, Northwest, United, and US Airways—together accounted for more than 82 percent of the value of total airline tickets DOD purchased. To obtain assurance that the tickets the airlines reported as unused represented only airline tickets charged to DOD centrally billed accounts, we compared data provided by the airlines to transaction data provided by Bank of America. Because DOD does not track whether tickets purchased with centrally billed accounts were used, we were unable to confirm that the population of unused tickets that the airlines provided was complete, that is, it included all DOD tickets that are unused and not refunded. While the five airlines provided data on unused tickets that allowed us to identify which tickets were unused and not refunded, they did not provide uniform or consistent data, and their data did not always cover the same periods. Specifically, all five airlines provided data that enabled us to determine the total purchase price of fully unused tickets, and four of the five airlines provided data that enabled us to determine the total purchase price of partially unused tickets. 
Only one airline provided us the unused value (residual value) of partially unused tickets. Further, while all five airlines provided us information on fiscal year 2002 fully unused tickets, the data they provided on fully unused tickets covered from 1 to over 4 years. In addition, only four airlines provided partially unused ticket data, and the data they provided similarly covered from 1 to over 4 years. For further details on the type of data received, and our calculations of the value of fully unused and partially unused tickets, see appendix II. We also reviewed relevant statutes and court decisions related to the period of time federal agencies are allowed to claim refunds and apply administrative offsets to goods and services they purchased but did not receive. To assess controls over unused tickets, we obtained an understanding of the travel process by reviewing DOD’s travel regulations and interviewing officials from the Departments of the Army, Navy, and Air Force. We visited two Army units, three Navy units, three Air Force units, and two Marine Corps units to confirm our understanding of the travel process. We also interviewed DOD officials at the government travel offices and representatives of the commercial travel offices to obtain an understanding of the process used to identify unused tickets and claim refunds on those tickets. To assess the internal controls over unused tickets, we applied the fundamental concepts and standards set forth in our Standards for Internal Control in the Federal Government to the practices followed by these units to manage unused tickets. Because we determined that controls over unused tickets were ineffective, we did not assess or design statistical sampling tests to test these controls. 
We briefed DOD managers, including DOD officials in the Office of the Under Secretary of Defense (Comptroller), the Defense Finance and Accounting Service, and the Office of the Inspector General; Army officials in the Office of Deputy Chief of Staff for Logistics; Navy officials in the Office of the Assistant Secretary of the Navy for Financial Management and Comptroller; Air Force officials in the Office of the Deputy Chief of Staff for Installation and Logistics; and Marine Corps officials in the Office of Deputy Chief of Staff for Installations and Logistics concerning the results of our work. On December 19, 2003, we provided DOD officials with a partial list of fully and partially unused tickets we received as of that date. On February 10, 2004, we requested comments on a draft of this report from the Secretary of Defense or his designee. We conducted our audit work from March 2003 through January 2004, in accordance with U.S. generally accepted government auditing standards. To determine whether design flaws in controls over purchasing tickets using DOD centrally billed accounts resulted in a significant loss of federal tax dollars, we contacted DOD’s five most frequently used airlines and requested that they identify tickets DOD purchased with a centrally billed account that had not been used or refunded as of the date of our request. While those airlines did not provide consistent data concerning unused tickets, they did provide information sufficient to allow us to determine the unused value of some of their fully unused tickets and the original purchase price of some of the partially unused tickets. In addition, one airline provided information on the unused value of at least some of its partially unused tickets. 
Using the ratios of the known value of unused tickets to total tickets purchased, we were able to assess the potential magnitude of tickets purchased with DOD centrally billed accounts that were not used or refunded since 1997—the first year for which centrally billed account information is available from the General Services Administration. We asked DOD’s five most frequently used airlines—American, Delta, Northwest, United, and US Airways—to provide information on tickets purchased by DOD with centrally billed accounts during fiscal years 2001 and 2002 that had not been used or refunded. As shown in table 4, while all the airlines generally provided complete data on fiscal year 2002 fully unused tickets, the airlines did not provide uniform, complete, or consistent responses to our request for fiscal year 2002 partially unused tickets, or for fully or partially unused ticket data for fiscal year 2001. In addition, some airlines provided some information on fully and partially unused tickets that were purchased in fiscal years 2003, 2000, and 1999. While American, Delta, Northwest, and United provided some data on partially unused tickets, US Airways did not provide any. Further, as noted in table 4, while American, Delta, Northwest, and United provided data on the total purchase price of tickets that were partially unused, American Airlines was the only airline that also provided data on the unused value of its partially unused tickets. Table 4 also shows that there were inconsistencies among the airlines when it came to providing data on unused tickets purchased before, during, and after fiscal year 2002. The airlines cited difficulties with accessing their historical files as the reason for not being able to fully respond to our request. 
The airlines pointed out that to provide additional information, they would have had to access information that had been stored in archived computer files, and in some instances, the computer files had been eliminated and the only documentation that remained was paper records of the flights. As shown in table 5, airline data showed that DOD used centrally billed accounts to purchase about 58,000 tickets with a value of $21.1 million that were unused and not refunded. The $21.1 million included more than 48,000 tickets valued at $19.2 million that were fully unused and about 10,000 partially unused tickets (i.e., at least one leg had never been used) that had an unused value of about $1.9 million. Due to differences in the data provided by the airlines, we used three primary methods to identify the number and value of tickets that are fully unused. Three airlines—Northwest, United, and US Airways—provided us with data files that identified a ticket as being fully unused and its purchase price. For these airlines, we calculated the unused value of fully unused tickets by totaling the purchase price of each of the fully unused tickets. American Airlines provided us a file containing fully unused and partially unused tickets. To separate partially unused from fully unused tickets, we used Bank of America data. Delta Airlines provided us with data on the status of each airline ticket DOD purchased using the centrally billed accounts during fiscal years 2001 and 2002 and guidance on how to determine whether a ticket is fully used, partially unused, or fully unused. To identify fully unused Delta tickets from Delta databases, we first identified tickets reported by Delta that matched Bank of America data on the number of ticket legs purchased, then we used the guidance Delta provided to identify its fully unused tickets. 
Once we identified these tickets, we derived the unused value of fully unused Delta Airlines tickets by totaling the purchase price of each of the fully unused tickets, similar to the methodology we used for the other four airlines. We could not determine the total unused value of partially unused tickets for the five airlines we reviewed. Only American Airlines calculated the unused value of the partially unused tickets it identified. Three airlines—Delta, Northwest, and United—provided the original purchase price of the partially unused airline tickets, and US Airways did not provide any information on partially unused tickets. To determine the number and total purchase price of partially unused tickets purchased with DOD centrally billed accounts for which DOD did not claim a refund, we followed the same methodology used to derive the number and purchase price of fully unused tickets. For example, to derive the total purchase price of partially unused American, Northwest, and United tickets, we added the purchase price of each of the partially unused tickets. For Delta Airlines, we applied the guidance Delta representatives provided to identify partially unused tickets, then added the purchase price of each of these tickets to derive the total. As shown in table 6, our analysis of partially unused tickets indicated that DOD travelers did not use all of the segments on more than 91,000 tickets DOD purchased with centrally billed accounts. During the period from fiscal year 1997 through 2003, DOD spent about $8 billion on airline tickets purchased using centrally billed accounts. 
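The ticket-matching and totaling approach described above can be sketched in code. This is a minimal illustration only; the record layouts, field names, and figures below are hypothetical and do not reflect the airlines' or Bank of America's actual data formats.

```python
# Illustrative sketch of the reconciliation methodology (hypothetical data).
# Airline-reported unused tickets are matched to Bank of America centrally
# billed account charges by ticket number; unmatched tickets are excluded.

airline_unused = [  # tickets an airline reported as unused and not refunded
    {"ticket_no": "0012345", "status": "fully_unused"},
    {"ticket_no": "0012346", "status": "partially_unused"},
    {"ticket_no": "0099999", "status": "fully_unused"},  # no matching DOD charge
]

bank_of_america = {  # centrally billed charges, keyed by ticket number
    "0012345": {"fare": 450.00, "legs": 2},
    "0012346": {"fare": 900.00, "legs": 4},
}

# Keep only airline-reported tickets that match a centrally billed charge.
matched = [t for t in airline_unused if t["ticket_no"] in bank_of_america]

# Derive the unused value of fully unused tickets by totaling purchase prices.
fully_unused_value = sum(
    bank_of_america[t["ticket_no"]]["fare"]
    for t in matched
    if t["status"] == "fully_unused"
)
print(fully_unused_value)  # 450.0
```

For partially unused tickets, the same matching step applies, but only the purchase price can be totaled unless the airline also supplies the residual value of the unflown segments.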
To assess the potential magnitude of fully and partially unused airline tickets purchased using centrally billed accounts since 1997—the first year that centrally billed account information is available from the General Services Administration—we used known information related to fiscal year 2002 fully and partially unused tickets and applied that information to the tickets purchased from fiscal years 1997 through 2001 and in fiscal year 2003. We used fiscal year 2002 data because it was the only year for which the airlines provided us fairly complete data. This assessment indicates that the unused value of fully and partially unused tickets purchased from fiscal year 1997 to fiscal year 2003 with DOD centrally billed accounts could be at least $115 million. The value of fully unused tickets could be at least $53 million. As shown in table 7, the percentage of fully unused tickets (unused ticket value as a percentage of total ticket sales) in fiscal year 2002 for DOD’s five most frequently used airlines ranged from 0.66 percent for United Airlines to 1.11 percent for American Airlines and US Airways. The substantially lower ratio for United Airlines is attributed to the fact that fully unused ticket data from United Airlines did not include complete data on paper tickets purchased during the first 6 months of fiscal year 2002. If data on fully unused paper tickets were available, the ratio of fully unused tickets for United Airlines would be higher. To be conservative, we applied the lowest percentage of the value of fully unused tickets (United Airlines ratio of 0.66 percent) to the value of all airline tickets that DOD purchased from fiscal years 1997 through 2003 ($8 billion). If the ratio of the value of fully unused tickets to the total value of tickets purchased with a centrally billed account in fiscal year 2002 was consistent since 1997, the value of fully unused tickets could be at least $53 million. 
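The conservative extrapolation described above reduces to simple arithmetic; as a sketch using the figures reported here:

```python
# Extrapolation of fully unused ticket value, following the report's method.
total_purchases = 8_000_000_000     # FY 1997-2003 centrally billed ticket purchases
lowest_fully_unused_ratio = 0.0066  # United Airlines FY 2002 ratio (most conservative)

fully_unused_estimate = total_purchases * lowest_fully_unused_ratio
print(round(fully_unused_estimate))  # 52800000, i.e., at least about $53 million
```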
The potential magnitude of the unused value of partially unused tickets could increase the unused ticket value by at least $62 million. To arrive at this assessment, we used the data provided by the airlines on partially unused tickets for fiscal year 2002 to calculate the estimated unused value of partially unused tickets for each of the four airlines that provided partially unused ticket data. The first step was to estimate the unused value of partially unused tickets for each of the four airlines. We accomplished this by multiplying the total value of partially unused tickets by the fiscal year 2002 American Airlines ratio of the unused value of partially unused tickets to the total purchase price of those tickets. We then divided the estimated unused value of the partially unused tickets for the four airlines by the total fiscal year 2002 ticket sales for those airlines. As shown in table 8, if the American Airlines’ experience can be extrapolated to the other airlines, the unused value of partially unused tickets ranges from 0.78 percent to 2.25 percent of total purchases. Again, to be conservative, we applied the lowest ratio of unused value of partially unused tickets (United Airlines ratio of 0.78 percent) to the $8 billion of all airline tickets that DOD purchased from fiscal years 1997 through 2003. If the ratio of the unused value of partially unused tickets to the total value of tickets purchased with centrally billed accounts in fiscal year 2002 was consistent since 1997, the magnitude of unused value of partially unused tickets could be at least $62 million. To determine the possible total magnitude of the value of airline tickets DOD purchased with centrally billed accounts that were unused and not refunded, we added the minimum value of potential fully unused tickets to the minimum value of potential partially unused tickets. 
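The same arithmetic, applied to the partially unused ratio and combined with the fully unused estimate, yields the overall minimum:

```python
# Combined minimum estimate, following the report's method.
total_purchases = 8_000_000_000   # FY 1997-2003 centrally billed ticket purchases

# Most conservative FY 2002 ratios (both from United Airlines data):
fully_unused_estimate = total_purchases * 0.0066      # about $52.8 million
partially_unused_estimate = total_purchases * 0.0078  # about $62.4 million

combined_minimum = fully_unused_estimate + partially_unused_estimate
print(round(combined_minimum / 1e6))  # 115, i.e., at least about $115 million
```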
As a result, we determined that it was possible that DOD had purchased at least $115 million in tickets that were unused and not refunded. Staff making key contributions to this report were Beverly Burke, Francine DelVecchio, Aaron Holling, Jeffrey Jacobson, Julie Matta, John Ryan, and Sidney H. Schwartz.
Ineffective oversight and management of the Department of Defense's (DOD) travel card program, which GAO previously reported on, have led to concerns about airline tickets DOD purchased but did not use and for which it did not claim refunds. GAO was asked to (1) determine whether, and to what extent, airline tickets purchased through the centrally billed accounts were unused and not refunded and (2) determine whether DOD's internal controls provided reasonable assurance that all unused tickets were identified and submitted for refunds. Control breakdowns over the centrally billed accounts resulted in DOD paying for airline tickets that were not used and not processed for refund. DOD was not aware of this problem before our audit and did not maintain data on unused tickets. We determined, based on airline data, that DOD had purchased—primarily in fiscal years 2001 and 2002—about 58,000 tickets with a residual (unused) value of more than $21 million that remained unused and not refunded as of October 2003. We also identified more than 81,000 partially unused airline tickets with a purchase price of about $62 million that will require additional analysis to determine the residual value. Based on further analysis of the limited data, it is possible that DOD purchased at least $100 million in airline tickets that it did not use and for which it did not claim refunds from fiscal years 1997 through 2003. Although GAO asked DOD's five most frequently used airlines for fiscal year 2001 and 2002 unused ticket data, the airlines did not provide uniform, complete, or consistent responses. For example, one airline did not provide partially unused ticket data, another airline's fiscal year 2001 data covered only September 2001, while yet another airline provided data on electronic tickets dating back to November 1998. Although additional data on unused tickets may be available from the airlines' archives, our attempts to obtain additional information were unsuccessful. 
DOD's unused ticket problems were caused by a flawed process that relied extensively on DOD personnel to report unused tickets to the travel offices. Although it appears that many unused tickets were processed for a refund, the internal controls DOD had in place did not detect millions of dollars of unused airline tickets. Specifically, DOD did not systematically implement compensating procedures to identify instances in which DOD personnel did not report unused tickets, or reconcile the centrally billed accounts to travel claims to determine whether airline tickets were used. Although some units had instituted a process by fiscal year 2002 to more systematically identify instances of unused tickets, the process was not implemented DOD-wide, DOD did not verify that units were consistently implementing the process, and the process could only identify unused electronic—not paper—tickets.
HUD, through FHA, provides insurance that protects private lenders from financial losses stemming from borrowers’ defaults on mortgage loans for both single-family homes and multifamily rental housing properties for low- and moderate-income households. When a default occurs on an insured loan, a lender may “assign” the mortgage to HUD and receive payment from FHA for an insurance claim. According to the latest data available from HUD, FHA insures mortgage loans for about 15,800 multifamily properties. These properties contain just under 2 million units and have a combined unpaid mortgage principal balance of $46.9 billion. These properties include multifamily apartments and other specialized properties, such as nursing homes, hospitals, student housing, and condominiums. In addition to mortgage insurance, many FHA-insured multifamily properties receive some form of direct assistance or subsidy from HUD, such as below-market interest rates or Section 8 project-based assistance. HUD’s Section 8 program provides rental subsidies for low-income families. These subsidies are linked either to multifamily apartment units (project-based) or to individuals (tenant-based). Under the Section 8 program, residents in subsidized units generally pay 30 percent of their income for rent and HUD pays the balance. According to HUD, its restructuring proposals apply to 8,636 properties that both have mortgages insured by FHA and receive project-based Section 8 rental subsidies for some or all of their units. Data provided by HUD in April 1996 show that, together, these properties have unpaid principal balances totaling $17.8 billion and contain about 859,000 units, of which about 689,000 receive project-based Section 8 subsidies. According to HUD’s data, about 45 percent of the insured Section 8 portfolio (3,859 properties, 303,219 assisted units, and $4.8 billion in unpaid loan balances) consists of what are called the “older assisted” properties. 
These are properties that were constructed beginning in the late 1960s under a variety of mortgage subsidy programs, to which project-based Section 8 assistance (Loan Management Set Aside) was added later, beginning in the 1970s, to replace other subsidies and to help troubled properties sustain operations. About 55 percent of the insured Section 8 portfolio (4,777 properties, 385,931 assisted units, and $13.0 billion in unpaid loan balances) consists of what are called the “newer assisted” properties. These properties generally were built after 1974 under HUD’s Section 8 New Construction and Substantial Rehabilitation programs and received project-based Section 8 subsidies based on formulas with automatic annual adjustments, which tended to be relatively generous to encourage the production of affordable housing. There is great diversity among the properties in HUD’s insured Section 8 portfolio, as illustrated by 10 properties that we studied in greater depth as part of our current assignment (see app. I). These properties differ in a number of important respects, such as the amount of their remaining unpaid mortgage debt; the types and amounts of assistance they receive from HUD; and their financial health, physical condition, rents, types of residents served, and surrounding neighborhoods and rental housing markets. These factors can influence the effect that HUD’s or other reengineering proposals would have on the properties. The insured Section 8 portfolio suffers from three basic problems—high subsidy costs, high exposure to insurance loss, and in the case of some properties, poor physical condition. A substantial number of the properties in the insured Section 8 portfolio now receive subsidized rents above market levels, many substantially above the rents charged for comparable unsubsidized units. 
This problem is most prevalent in (but not confined to) the “newer assisted” segment of the portfolio, where it stems from the design of the Section 8 New Construction and Substantial Rehabilitation programs. The government paid for the initial development or rehabilitation of these properties under these programs by initially establishing rents above market levels and then raising them regularly through the application of set formulas that tended to be generous to encourage the production of new affordable housing. It has become difficult to continue the high subsidies in the current budget environment. A second key problem affecting the portfolio is the high risk of insurance loss. Under FHA’s insurance program, HUD bears virtually all the risk in the event of loan defaults. A third, closely related problem is the poor physical condition of many properties in the portfolio. A 1993 study of multifamily rental properties with FHA-insured or HUD-held mortgages found that almost one-fourth of the properties were “distressed.” Properties were considered to be distressed if they failed to provide sound housing and lacked the resources to correct deficiencies or if they were likely to fail financially. The problems affecting HUD’s insured Section 8 portfolio stem from several causes. These include (1) program design flaws that have contributed to high subsidies and put virtually all the insurance risk on HUD; (2) HUD’s dual role as mortgage insurer and rental subsidy provider, which has resulted in the federal government averting claims against the FHA insurance fund by supporting a subsidy and regulatory structure that has masked the true market value of the properties; and (3) weaknesses in HUD’s oversight and management of the insured portfolio, which have allowed physical and financial problems at a number of HUD-insured multifamily properties to go undetected or uncorrected. 
In May 1995, HUD proposed a mark-to-market process to address the three key problems and their causes by decoupling HUD’s mortgage insurance and project-based rental subsidy programs and subjecting the properties to the forces and disciplines of the commercial market. HUD proposed to do this by (1) eliminating the project-based Section 8 subsidies as existing contracts expired (or sooner if owners agreed), (2) allowing owners to rent apartments for whatever amount the marketplace would bear, (3) facilitating the refinancing of the existing FHA-insured mortgage with a smaller mortgage if needed for the property to operate at the new rents, (4) terminating the FHA insurance on the mortgage, and (5) providing the residents of assisted units with portable Section 8 rental subsidies that they could use to either stay in their current apartment or move to another one if they wanted to or if they no longer could afford to stay in their current apartment. Recognizing that many properties could not cover their expenses and might eventually default on their mortgages if forced to compete in the commercial market without their project-based Section 8 subsidies, the mark-to-market proposal set forth several alternatives for restructuring the FHA-insured mortgages in order to bring income and expenses in line. These alternatives included selling mortgages, engaging third parties to work out restructuring arrangements, and paying full or partial FHA insurance claims to reduce mortgage debt and monthly payments. The proposed mark-to-market process would likely affect properties differently, depending on whether their existing rents were higher or lower than market rents and on their funding needs for capital items, such as deferred maintenance. If existing rents exceeded market rents, the process would lower the mortgage debt, thereby allowing a property to operate and compete effectively at lower market rents. 
If existing rents were below market, the process would allow a property to increase rents, potentially providing more money to improve and maintain the property. HUD recognized, however, that some properties would not be able to generate sufficient income to cover expenses even if their mortgage payments were reduced to zero. In those cases, HUD proposed using alternative strategies, including demolishing the property and subsequently selling the land to a third party, such as a nonprofit organization or government entity. Various stakeholders who reviewed HUD’s proposal raised questions and concerns about it, including the effect that it would have on different types of properties and residents and its long-term financial impact on the government. In response to stakeholders’ concerns, HUD made several changes to its proposal and also renamed the proposal “portfolio reengineering.” The changes HUD made included (1) giving priority attention for at least the first 2 years to properties with subsidized rents above market; (2) allowing state and local governments to decide whether to continue Section 8 project-based rental subsidies at individual properties after their mortgages are restructured or switch to tenant-based assistance; and (3) allowing owners to apply for FHA insurance on the newly restructured mortgage loans. In addition, HUD stated a willingness to discuss with the Congress mechanisms to take into account the tax consequences related to debt forgiveness for property owners who enter into restructuring agreements. More recently, HUD has also suggested that action should be deferred on properties that would not be able to generate sufficient income to cover operating expenses after reengineering until strategies are developed that address the communities’ and residents’ needs relating to the properties.
On April 26, 1996, HUD received legislative authority to conduct a demonstration program to test various methods of restructuring the financing of properties in the insured Section 8 portfolio. Participation in the program is voluntary and open only to properties whose rents exceed HUD’s fair market rent (FMR) for their locality. The purpose of the demonstration is to test the feasibility and desirability of properties meeting their financial and other obligations with and without FHA insurance, with and without above-market Section 8 assistance, and using project-based assistance or, with the consent of the property owner, tenant-based assistance. The demonstration program is limited by law to mortgages covering a total of 15,000 units, or about 2 percent of the total units in the insured Section 8 portfolio. An appropriation of $30 million, which remains available until September 30, 1997, was provided to fund the cost of modifying loans under the program. HUD believes that this funding level could limit the number of properties that can be reengineered under the demonstration. On July 2, 1996, HUD issued a public notice announcing the program and providing initial guidance on how it plans to operate the program. On May 21, 1996, the Senate Committee on Banking, Housing, and Urban Affairs issued a Staff Discussion Paper to outline a general strategy for addressing the problems with HUD’s insured Section 8 portfolio. Among other things, the staff proposed to continue project-based Section 8 assistance and to subsidize rents at 90 percent of FMR (or at higher budget-based rents in certain cases if the FMR-based rents would not cover the costs of operation). On June 27, 1996, the Subcommittee on Housing Opportunity and Community Development held a hearing on the staff’s proposals, and as of mid-July the Subcommittee was drafting a restructuring bill.
In May 1995, when HUD proposed the mark-to-market initiative, the Department did not have current or complete information on the insured Section 8 portfolio upon which to base assumptions and estimates about the costs and impact of the proposal. For example, HUD lacked reliable, up-to-date information on the market rents the properties could be expected to command and the properties’ physical conditions—two variables that strongly influence how properties will be affected by the mark-to-market proposal. To obtain data to better assess the likely outcomes and costs of the mark-to-market proposal, HUD contracted with Ernst & Young LLP in 1995 for a study on HUD-insured properties with Section 8 assistance to (1) determine the market rents and physical condition of the properties and (2) develop a financial model to show how the proposal would affect the properties and to estimate the costs of subsidies and claims associated with the mark-to-market proposal. The study was conducted on a sample of 558 of 8,563 properties and extrapolated to the total population of 8,563 properties identified by HUD at that time as representing the population subject to its mark-to-market proposal. The sample was designed to be projectible to the population with a relative sampling error of no more than plus or minus 10 percent at the 90-percent confidence level. A briefing report summarizing the study’s findings was released by HUD and Ernst & Young on May 2, 1996. It provides current information on how the assisted rents at the properties compare with market rents, the physical condition of the properties, and how the properties are expected to be affected by HUD’s proposal as the proposal existed while the study was underway. As such, it is important to note that the study’s results do not reflect the changes that HUD made to its proposal in early 1996.
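The sampling design means that an estimate extrapolated from the 558 sampled properties carries a relative margin of error of no more than plus or minus 10 percent at the 90-percent confidence level. A minimal sketch of how such a relative error translates into an interval around a point estimate (the 63-percent figure below is illustrative, not a number from the study):

```python
def confidence_interval(point_estimate, relative_error=0.10):
    """Return the (low, high) bounds implied by a relative sampling error.

    A relative error of +/-10% at the 90% confidence level means the true
    population value is expected to lie within 10% of the point estimate
    in 90% of repeated samples.
    """
    return (point_estimate * (1 - relative_error),
            point_estimate * (1 + relative_error))

# Illustrative only: a hypothetical 63% point estimate of above-market properties
low, high = confidence_interval(0.63)
print(f"90% CI: {low:.0%} to {high:.0%}")  # 90% CI: 57% to 69%
```

The study's actual reported ranges (for example, 60 to 66 percent of properties with above-market rents) are narrower than this worst-case bound, which is consistent with the design target of "no more than" 10 percent relative error.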
Ernst & Young estimates that the majority of the properties have assisted rents exceeding market rents and that the properties have significant amounts of immediate deferred maintenance and short-term and long-term capital needs. Specifically, Ernst & Young’s study estimates that a majority of the properties—between 60 and 66 percent—have rents above market and between 34 and 40 percent are estimated to have below-market rents. Ernst & Young’s data also indicate a widespread need for capital—between $9.2 billion and $10.2 billion—to address current deferred maintenance needs and the short- and long-term requirements to maintain the properties. The study estimates that the properties have between $1.3 billion and $1.6 billion in replacement and cash reserves that could be used to address these capital needs, resulting in total net capital needs of between $7.7 billion and $8.7 billion. The average per-unit cost of the total capital requirements, less the reserves, is estimated to be between $9,116 and $10,366. Ernst & Young’s analysis also indicates that about 80 percent of the properties would not be able to continue operations unless their debt was restructured. Furthermore, for approximately 22 to 29 percent of the portfolio, writing the existing debt to zero would not sufficiently reduce costs for the properties to address their immediate deferred maintenance and short-term capital needs. The study estimates that between 11 and 15 percent of the portfolio would not even be able to cover operating expenses. The study was designed to use the information on market rents and the properties’ physical condition gathered by Ernst & Young, as well as financial and Section 8 assistance data from HUD’s data systems, in a financial model designed to predict the proposal’s effects on the portfolio as a whole. 
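The per-unit figures above are simple quotients of the aggregate estimates. A quick arithmetic sketch (the unit count is backed out from the reported totals and per-unit figures, so it is an approximation, not a number from the study):

```python
# Net capital needs reported by the Ernst & Young study (dollars):
# gross capital needs of $9.2-$10.2 billion, less $1.3-$1.6 billion
# in replacement and cash reserves.
net_capital_needs_low = 7.7e9
net_capital_needs_high = 8.7e9

# Approximate unit count: backed out from the study's reported per-unit
# figures ($9,116 to $10,366), so it is an assumption, not a study number.
units = 840_000

per_unit_low = net_capital_needs_low / units    # roughly $9,167
per_unit_high = net_capital_needs_high / units  # roughly $10,357
print(f"${per_unit_low:,.0f} to ${per_unit_high:,.0f} per unit")
```

The implied portfolio size of roughly 840,000 units is consistent with both ends of the reported per-unit range.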
Specifically, the model estimates the properties’ future cash flows over a 10-year period on the basis of the assumption that they would be reengineered (marked to market) when their current Section 8 contracts expire. The model classifies the loans into four categories—performing, restructure, full write-off, and nonperforming—that reflect how the properties would be affected by HUD’s proposal. Placement in one of the four categories is based on the extent to which income from the reengineered properties would be able to cover operating costs, debt service payments, deferred maintenance costs, and short-term capital expenses. Table 1 shows the results of Ernst & Young’s analysis of how properties would be affected by HUD’s proposal. We are currently evaluating Ernst & Young’s financial model and expect to issue our report late this summer. Our preliminary assessment is that the model provides a reasonable framework for studying the overall results of portfolio reengineering, such as the number of properties that will need to have their debt restructured, and for estimating the related costs of insurance claims and Section 8 subsidies. In addition, we did not identify any substantive problems with Ernst & Young’s sampling and statistical methodology. However, our preliminary assessment of the study indicates that some aspects of Ernst & Young’s financial model and its assumptions may not reflect the way in which insured Section 8 properties will actually be affected by portfolio reengineering. Also, some of the assumptions used in the model may not be apparent to readers of Ernst & Young’s May 1996 briefing report. For example, Ernst & Young’s assumptions about the transition period that properties go through in the reengineering process may be overly optimistic. During the transition, a reengineered property changes from a property with rental subsidies linked to its units to an unsubsidized property competing in the marketplace for residents.
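The four-way classification just described can be sketched as a simple decision rule: a property falls into a category according to how far its projected income goes toward covering successive layers of cost. The function below is an illustrative reading of the categories, not the Ernst & Young model's actual specification:

```python
def classify_loan(income, operating, debt_service, deferred_maint, short_term_capital):
    """Classify a reengineered property's loan, in the spirit of the four
    categories in the Ernst & Young model. Illustrative logic only."""
    if income >= operating + debt_service + deferred_maint + short_term_capital:
        return "performing"        # covers all obligations at existing debt
    if income >= operating + deferred_maint + short_term_capital:
        return "restructure"       # viable if the debt is partially reduced
    if income >= operating:
        return "full write-off"    # even zero debt leaves capital needs unmet
    return "nonperforming"         # cannot cover operating expenses at all

# Hypothetical annual figures (in thousands of dollars)
print(classify_loan(100, 60, 30, 15, 10))  # restructure
```

Under this reading, the study's "full write-off" share (22 to 29 percent) captures properties whose income covers operations but not capital needs, and "nonperforming" (11 to 15 percent) captures those that cannot cover operations at all.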
The model estimates that the entire transition will be completed within a year after the first Section 8 contract expires. In addition, the model assumes that during this year, the property’s rental income will move incrementally toward stabilization over 9 months. Lenders with whom we consulted on the reasonableness of the model’s major assumptions generally believed that a longer transition period of 1 to 2 years is more likely. They also anticipated an unstable period with less income and more costs during the transition rather than the smooth transition assumed in the model. An Ernst & Young official told us that the 9-month period was designed to reflect an average transition period for reengineered properties. While he recognized that some properties would have longer transition periods than assumed in the model, he believed that the transition periods for other properties could be shorter than 9 months. In addition, Ernst & Young’s May 1996 report does not detail all of the assumptions used in the firm’s financial model that are useful to understanding the study’s results. In particular, the model assumes that the interest subsidies some properties currently receive will be discontinued after the first Section 8 contract expires, including those in the performing category whose debts do not require restructuring. We are currently examining how the assumptions contained in the Ernst & Young study affect its estimates of the effects of portfolio reengineering. In addition, we are assessing how the use of alternative assumptions would affect the study’s results. We also observed that although Ernst & Young’s study provided information on the cost to the government of the portfolio reengineering proposal, the May report did not provide these results. We are currently examining Ernst & Young’s data and will provide cost estimates derived from Ernst & Young’s model covering changes in the Section 8 subsidy costs and FHA insurance claims. 
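The income-stabilization assumption discussed above can be sketched as a simple ramp: income moves incrementally from its starting level to the stabilized market level over 9 months of the transition year. The linear shape is an assumption for illustration; the model's exact interpolation is not detailed in the briefing report:

```python
def transition_income(start, stabilized, ramp_months=9, total_months=12):
    """Monthly rental income during the model's assumed transition year:
    income moves incrementally from `start` to `stabilized` over the ramp
    period, then holds steady. The linear ramp is an illustrative
    assumption, not the model's published specification."""
    path = []
    for m in range(1, total_months + 1):
        if m <= ramp_months:
            path.append(start + (stabilized - start) * m / ramp_months)
        else:
            path.append(stabilized)
    return path

# Hypothetical: income rises from $90k to a stabilized $100k per month
path = transition_income(90.0, 100.0)
```

The lenders' alternative view, a 1-to-2-year transition with an unstable dip in income, would replace this smooth ramp with a longer and lower path, raising estimated costs.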
Our preliminary review of this information indicates that the costs of claims will be significant. On average, the data indicate that mortgage balances for the properties needing mortgage restructuring—including those in the full write-off and nonperforming categories that would have their mortgages totally written off—would need to be reduced by between 61 and 67 percent. This reduction would result in claims against FHA’s multifamily insurance funds. As we discussed in our testimony before this Subcommittee last year, the Congress faces a number of significant and complex issues in evaluating HUD’s portfolio reengineering proposal. Since last year there has been considerable discussion on the issues we noted, but there is still disagreement on how many of them should be addressed. New issues have also been raised. Key issues include the following. One key cause of the current problems affecting the insured Section 8 portfolio has been HUD’s inadequate management of the portfolio. HUD’s original proposal sought to address this situation by subjecting properties to the disciplines of the commercial market by converting project-based subsidies to tenant-based assistance, adjusting rents to market levels, and refinancing existing insured mortgages with smaller, uninsured mortgages if necessary for properties to operate at the new rents. However, to the extent that the final provisions of reengineering perpetuate the current system of FHA insurance and project-based subsidies, HUD’s ability to manage the portfolio will remain a key concern. Thus, it will be necessary to identify other means for addressing the limitations that impede HUD’s ability to effectively manage the portfolio, particularly in light of the planned staff reductions that will further strain HUD’s management capacity. 
An issue with short-term—and potentially long-term—cost implications is whether HUD should continue to provide FHA insurance on the restructured loans and, if so, under what terms and conditions. If FHA insurance is discontinued when the loans are restructured as originally planned, HUD would likely incur higher debt restructuring costs because lenders would set the terms of the new loans, such as interest rates, to reflect the risk of default that they would now assume. The primary benefits of discontinuing insurance are that (1) the government’s dual role as mortgage insurer and rent subsidy provider would end, eliminating the management conflicts associated with this dual role, and (2) the default risk borne by the government would end as loans were restructured. However, the immediate costs to the FHA insurance fund would be higher than if insurance, and the government’s liability for default costs, were continued. If, on the other hand, FHA insurance were continued, another issue is whether it needs to be provided for the whole portfolio or could be used selectively. For example, should the government insure loans only when owners cannot obtain reasonable financing without this credit enhancement? Also, if FHA insurance were continued, the terms and conditions under which it is provided would affect the government’s future costs. Some lenders have indicated that short-term (or “bridge”) financing insured by FHA may be needed while the properties make the transition to market conditions, after which time conventional financing at reasonable terms would be available. Thus, the government could insure loans for 3 to 5 years, in lieu of the current practice of bearing default risk for 40 years. Finally, the current practice of the government’s bearing 100 percent of the default risk could be changed by legislation requiring state housing finance agencies or private-sector parties to bear a portion of the insurance risk. 
In addressing the problems of the insured Section 8 portfolio, one of the key issues that will need to be decided is whether to continue project-based assistance, convert the portfolio to tenant-based subsidy, or use some mix of the two subsidy types. On one hand, the use of tenant-based assistance can make projects more subject to the forces of the real estate market, which can help control housing costs, foster housing quality, and promote resident choice. On the other hand, by linking subsidies directly to property units, project-based assistance can help sustain those properties in housing markets that have difficulty in supporting unsubsidized rental housing, such as inner-city and rural locations. In addition, residents who would likely have difficulty finding suitable alternative housing, such as the elderly or disabled and those living in tight housing markets, may prefer project-based assistance to the extent that it gives them greater assurance of being able to remain in their current residences. If a decision is made to convert Section 8 assistance from project-based to tenant-based as part of portfolio reengineering, decisions must also be made about whether to provide additional displacement protection for current property residents. HUD’s April 1996 reengineering strategy contains several plans to protect the residents affected by rent increases at insured properties. For example, the residents currently living in project-based Section 8 units that are converted to tenant-based subsidy would receive enhanced vouchers to pay the difference between 30 percent of their income and the market rent for the property in which they live, even if it exceeds the area’s fair market rent ceiling. The residents of reengineered properties who currently live in units without Section 8 subsidy would receive similar assistance if the properties’ new rents require them to pay more than 30 percent of income. 
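The enhanced voucher described above reduces to a simple rule: the resident pays 30 percent of income toward rent, and the voucher covers the remainder of the property's market rent, even when that rent exceeds the area's FMR ceiling. A minimal sketch with hypothetical monthly figures:

```python
def enhanced_voucher(monthly_income, market_rent):
    """Subsidy under the enhanced voucher: the resident pays 30% of income
    and the voucher covers the rest of the property's market rent, even if
    that rent exceeds the area fair market rent ceiling. Figures are
    hypothetical monthly amounts."""
    tenant_share = 0.30 * monthly_income
    return max(0.0, market_rent - tenant_share)

# Hypothetical: $1,200 monthly income, $700 market rent
print(enhanced_voucher(1200, 700))  # 340.0
```

A resident whose 30-percent share already meets or exceeds the market rent receives no subsidy under this rule, which is why only residents pushed above the 30-percent-of-income threshold by reengineering would receive the new assistance.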
Such provisions are clearly important to help limit residents’ rent burdens and reduce the likelihood of residents being displaced, but they also reduce Section 8 savings, at least in the short run. The Ernst & Young study’s cost estimates assume that HUD would cover Section 8 assistance costs for existing residents, even if a property’s market rents exceed fair market rent levels set by HUD. However, it does not include any costs for providing Section 8 subsidy to residents who are currently unassisted. The decision about which properties to include in portfolio reengineering will likely involve trade-offs between addressing the problem of high subsidy costs and addressing the problems of poor physical condition and exposure to default. On one hand, reengineering only those properties with rents above market levels would result in the greatest subsidy cost savings. On the other hand, HUD has indicated that also including those properties with rents currently below market levels could help improve these properties’ physical and financial condition and reduce the likelihood of default. However, including such properties would decrease estimated Section 8 subsidy cost savings. Although HUD’s latest proposal would initially focus on properties with rents above market, it notes that many of the buildings with below-market rents are in poor condition or have significant amounts of deferred maintenance which will need to be addressed at some point. Selecting a mortgage restructuring process that is feasible and that balances the interests of the various stakeholders will be an important, but difficult, task. Various approaches have been contemplated, including payment of full or partial insurance claims by HUD, mortgage sales, and the use of third parties or joint ventures to design and implement specific restructuring actions at each property. 
Because of concerns about HUD’s ability to carry out the restructuring process in house, HUD and others envision relying heavily on third parties, such as state housing finance agencies (HFAs) or teams composed of representatives from HFAs, other state and local government entities, nonprofit organizations, asset managers, and capital partners. These third parties would be empowered to act on HUD’s behalf, and the terms of the restructuring arrangements that they work out could to a large extent determine the costs to, and future effects of restructuring on, stakeholders such as the federal government, property owners and investors, mortgage lenders, residents, and state and local government housing agencies. Some, however, have questioned whether third parties would give adequate attention to the interests of owners or to the public policy objectives of the housing. On the other hand, with the proper incentives, third parties’ financial interests could be aligned with those of the federal government to help minimize claims costs. Who should pay for needed repairs, and how much, is another important issue in setting restructuring policy. As discussed previously, Ernst & Young’s study found a substantial amount of unfunded immediate deferred maintenance and short-term capital replacement needs across the insured Section 8 portfolio, but particularly in the “older assisted” properties. Ernst & Young’s data indicate that between 22 and 29 percent of the properties in the portfolio could not cover their immediate deferred maintenance and short-term capital needs, even if their mortgage debt were fully written off. HUD proposes that a substantial portion of the rehabilitation and deferred maintenance costs associated with restructuring be paid through the affected properties’ reserve funds and through FHA insurance claims in the form of debt reduction.
Others have suggested that HUD use a variety of tools, such as raising rents, restructuring debt, and providing direct grants, but that per-unit dollar limits be set on the amount that the federal government pays, with the expectation that any remaining costs be paid by the property owners/investors or obtained from some other source. According to Ernst & Young’s assessment, between 22 and 29 percent of HUD’s insured portfolio would have difficulty sustaining operations if market rents replaced assisted rents. Furthermore, between 11 and 15 percent of the portfolio would not even be able to cover operating costs at market rents. If additional financial assistance is not provided to these properties, a large number of low-income residents would face displacement. While HUD has not yet developed specific plans for addressing these properties, it appears likely that different approaches may be needed, depending on a property’s specific circumstances. For example, properties in good condition in tight housing markets may warrant one approach, while properties in poor condition in weak or average housing markets may warrant another. Further analysis of these properties should assist the Department in formulating strategies for addressing them. HUD’s portfolio reengineering proposal is likely to have adverse tax consequences for some project owners. These tax consequences can potentially result from either reductions in the principal amounts of property mortgages (debt forgiveness) or actions that cause owners to lose the property (for example, as a result of foreclosure). We have not assessed the extent to which tax consequences are likely to result from portfolio reengineering. However, HUD has stated that it believes tax consequences can be a barrier to getting owners to agree to reengineer their properties proactively.
While HUD has not formulated a specific proposal for dealing with the tax consequences of portfolio reengineering, it has stated that it is willing to discuss with the Congress mechanisms to take into account tax consequences related to debt forgiveness for property owners who enter into restructuring agreements. The multifamily demonstration program that HUD recently received congressional authority to implement provides for a limited testing (on up to 15,000 multifamily units) of some of the aspects of HUD’s multifamily portfolio reengineering proposal. As such, the program can provide needed data on the impacts of reengineering on properties and residents, the various approaches that may be used in implementing restructuring, and the costs to the government before a restructuring program is initiated on a broad scale. However, because of the voluntary nature of the program, it may not fully address the broad range of impacts on the properties or the range of restructuring tools that the Department could use. For example, owners may be reluctant to participate in the program if HUD plans to enter into joint ventures with third-party entities because of concerns they may lose their properties and/or suffer adverse tax consequences. Another potential limitation on the program is that the funding provided to modify the multifamily loans may not be sufficient to cover the limited number of units authorized under the demonstration program. How these issues are resolved will, to a large degree, determine the extent to which the problems that have long plagued the portfolio are corrected and prevented from recurring and the extent to which reengineering results in savings to the government. HUD’s portfolio reengineering initiative recognizes a reality that has existed for some time—namely, that the value of many of the properties in the insured Section 8 portfolio is far less than the mortgages on the properties suggest. 
Until now, this reality has not been recognized and the federal government has continued to subsidize the rents at many properties above the level that the properties could command in the commercial real estate market. As the Congress evaluates options for addressing this situation, it will be important to consider each of the fundamental problems that have affected the portfolio, and their underlying causes. Any approach implemented should address not only the high Section 8 subsidy costs, but also the high exposure to insurance loss, poor physical condition, and the underlying causes of these long-standing problems with the portfolio. As illustrated by several of the key issues discussed above, questions about the specific details of the reengineering process, such as which properties to include and whether or not to provide FHA insurance, will require weighing the likely effects of various options and the trade-offs involved when proposed solutions achieve progress on one problem at the expense of another. Changes to the insured Section 8 portfolio should also be considered in the context of a long-range vision for the federal government’s role in providing housing assistance, and assistance in general, to low-income individuals, and how much of a role the government is realistically able to have, given the current budgetary climate. Addressing the problems of the portfolio will inevitably be a costly and difficult process, regardless of the specific approaches implemented. The overarching objective should be to implement the process in the most efficient and cost-effective manner possible, recognizing not only the interests of the parties directly affected by restructuring but also the impact on the federal government and the American taxpayer. 
As indicated earlier in our statement, we are continuing to review the results of Ernst & Young’s study and other issues associated with portfolio reengineering, and we will look forward to sharing the results of our work with the Subcommittee as it is completed.
GAO discussed the Department of Housing and Urban Development's (HUD) efforts to reengineer its multifamily rental housing portfolio. GAO noted that: (1) the insured Section 8 portfolio suffers from excessive subsidy costs, high exposure to insurance loss, and poor physical condition; (2) these problems stem from program design flaws, HUD's dual role as loan insurer and rental subsidy provider, and weaknesses in HUD's oversight and management; (3) in 1995, HUD proposed allowing property owners to set rents at market levels, terminating Federal Housing Administration (FHA) mortgage insurance, and replacing project-based rent subsidies with portable tenant-based subsidies; (4) although the proposal could lower mortgage debt, it would result in substantial FHA insurance claims; (5) HUD made several changes to the proposal in 1996 in response to concerns about the lack of data, effects on properties and existing residents, and the long-term financial impact on the government; (6) a 1996 contractor's report confirmed that most properties have assisted rents that are higher than estimated market rents and significant maintenance and capital improvement needs; (7) the study also indicates that most portfolio properties need to have their debt reduced to continue operating; and (8) reengineering issues requiring congressional consideration include HUD portfolio management problems, FHA insurance for restructured loans, project- versus tenant-based rent subsidies, protection for displaced households, inclusion of properties with below-market rents, mortgage restructuring, government financing of rehabilitation costs, and property owners' tax relief.
SEC is an independent agency created in 1934 to protect investors; maintain fair, honest, and efficient securities markets; and facilitate capital formation. The agency is headed by a five-member Commission whose members are appointed by the President, with the advice and consent of the Senate, and which is led by a Chairman designated by the President. The Commission oversees SEC’s operations and provides final approval of SEC’s interpretation of federal securities laws, proposals for new or amended rules to govern securities markets, and enforcement activities. Table 1 identifies several key SEC units and summarizes their roles and responsibilities. SEC’s current 2004-2009 strategic plan established four goals: (1) enforce compliance with the federal securities laws, (2) promote healthy capital markets through an effective and flexible regulatory environment, (3) foster informed investment decision-making, and (4) maximize the use of SEC resources. Enforcement and OCIE share joint responsibility for implementing the agency’s first strategic goal. The Commission and the Office of the Executive Director, which develops and implements all the agency’s management policies, are updating the agency’s strategic plan, which is to be issued in the summer of 2007. Enforcement personnel are located in SEC’s home office in Washington, D.C., as well as the agency’s 11 regional offices. Enforcement staff located in the home office include the director and one of two deputy directors, five investigative groups or Offices of Associate Directors, as well as internal support groups, including its Offices of Chief Counsel and Chief Accountant (see fig. 1). An associate director heads each Office of Associate Director and has one or more assistant directors. Branch chiefs report to assistant directors and supervise the work of investigative staff attorneys assigned to individual investigations, with review and support provided by division management.
SEC regional office staff are typically divided between Enforcement and OCIE personnel. Enforcement units in the regional offices have Office of Associate Director structures similar to those in the home office and report to the Director of Enforcement in Washington, D.C. The Sarbanes-Oxley Act of 2002 substantially increased SEC’s appropriations, and Enforcement subsequently increased its staffing levels. In 2002, Enforcement had 1,012 staff and, at the end of fiscal year 2006, 1,273 staff. As shown in figure 2, the number of investigative attorneys in Enforcement increased substantially, from 596 in 2002 to 740 in 2005. However, the number of staff in Enforcement, in particular its investigative attorneys, decreased from 2005 to 2006 because of a May 2005 hiring freeze (instituted across the agency in response to diminished budgetary resources) and subsequent attrition. Since October 2006, however, SEC has permitted Enforcement and other SEC divisions and offices to replace staff that leave the agency. Even so, the agency does not contemplate returning to early 2005 staffing levels. Appendix II provides additional information on Enforcement’s staffing resources and workload indicators. Figure 3 provides a general overview of Enforcement’s investigative process. At the initial stage of the investigative process, attorneys evaluate information that may indicate the existence of past or imminent securities laws violations. The information can come from sources such as tips or complaints from the public as well as referrals from other SEC divisions or government agencies. If Enforcement staff decide to pursue the matter, they will open either a Matter Under Inquiry (MUI) or an investigation. Staff open a MUI when more information is required to determine the merits of an investigation; otherwise, staff may open an investigation immediately.
Investigations can be conducted informally—without Commission approval—or formally, in which case the Commission must first approve a formal order if staff find it necessary to issue subpoenas for testimony or documentation. Based on the analysis of collected evidence, Enforcement will decide whether or not to recommend that the Commission pursue enforcement actions, which can be administrative or federal civil court actions (both of which must be authorized by the Commission). Enforcement has established a variety of controls over the enforcement action process, including reviews by senior division officials in Washington, D.C., and, ultimately, review and approval by the Commission. Enforcement has an information technology system—CATS—that tracks the progress of its MUIs, investigations, and enforcement actions. Enforcement also is responsible for implementing and overseeing the Sarbanes-Oxley Act's Fair Fund provision, which allows SEC to combine civil monetary penalties and disgorgement amounts collected in enforcement cases to establish funds for investors harmed by securities laws violations. Fair Funds may be created through either SEC administrative proceedings or litigation in U.S. District Court, and either SEC or the courts may administer the funds. However, SEC is responsible for general monitoring of all Fair Funds created. Typically, for SEC-ordered Fair Funds, the agency hires consultants to create Fair Fund distribution plans (independent distribution consultants) and oversee payments to harmed investors (fund administrators). However, in some cases, SEC staff will take care of all of the distribution responsibilities internally. The development of a Fair Fund plan can include estimating losses suffered by harmed investors. For court-ordered funds, SEC recommends a receiver or distribution agent, who creates a distribution plan that is presented for court approval. 
Enforcement’s approaches for planning, tracking, and closing investigations have had some significant limitations that have hampered its ability to effectively manage its operations, allocate limited staff resources, and ensure the fair treatment of individuals and companies under investigation. While SEC and Enforcement officials are aware of these limitations and have begun addressing them, some of their actions may not fully correct identified weaknesses. Specifically, Enforcement has not (1) established written procedures and criteria for its newly centralized review and approval process for new investigations, which could limit its ability to ensure its consistent implementation and reduce the Commission’s ability to oversee the division’s operations; (2) established controls to help ensure the reliability of the investigative data that division attorneys will be required to enter into a new information system, which could limit the usefulness of management reports generated by the system; and (3) established plans and procedures to ensure that all investigations that are no longer being actively pursued are closed promptly to reduce the negative impact on individuals and companies no longer under review. To establish overall investigative priorities, Enforcement officials said that they regularly communicate with senior SEC officials and their counterparts in other agency units. For example, Enforcement officials said that they hold weekly meetings with the SEC Chairman and other commissioners as appropriate. During the Chairman’s tenure, he has identified the pursuit of securities fraud against senior citizens as a key investigative priority for Enforcement and other agency offices, including OCIE. 
In addition to specific priorities, Enforcement officials said that they seek to maintain a constant investigative presence across all areas of potential securities violations (for example, insider trading abuses) and that this "cover the waterfront" approach is designed to prosecute and possibly deter securities law offenders. An Enforcement official said that while the division has not established minimum quotas for different types of investigations and enforcement actions, it will intervene if any one type threatens to overwhelm the division's operations. Based on internal analysis of enforcement action data, Enforcement officials determined that if the division's pursuit of any type of securities enforcement action exceeded 40 percent of total enforcement actions, an unacceptable commitment of division resources would result. While Enforcement has established planning processes for determining overall priorities, the division has used a largely decentralized approach for reviewing and approving individual new investigations, which may have limited the division's operational effectiveness, according to senior SEC and Enforcement officials. Under this traditional approach, associate directors in either SEC's home office or its 11 regional offices approved the opening of MUIs after staff came across a potential violation of federal securities law. While Enforcement's senior leadership in the home office reviewed proposals for formal investigations and received weekly reports on MUIs and new investigations that had been approved in each office (and reviewed summaries of all investigations on a quarterly basis), it did not have formal approval responsibility for such new MUIs and investigations. 
According to Enforcement officials, staff in each office generally decided to open MUIs and investigations based on considerations such as the likelihood that they would be able to find and prove a violation of federal securities laws, the potential amount of investor loss, the gravity of the misconduct, and the potential message the case would deliver to the industry and public. Typically, the staff attorney who opened the MUI was responsible for conducting the investigation. According to Enforcement officials, this decentralized approach was generally viewed as fostering creativity in the investigative process and providing staff with incentives to actively seek potential investigations. However, without a centralized control mechanism for reviewing and approving all new MUIs and investigations, Enforcement’s capacity to ensure the efficient use of available resources, which is one of SEC’s four strategic goals, was limited. For example, SEC’s Chairman, officials from his office and the Office of the Executive Director, and Enforcement officials said that the division has not always been able to prioritize or ensure an efficient allocation of limited investigative staff resources. Officials said that in some cases staff attorneys worked on investigations that were outside of their geographic area (for example, San Francisco staff conducting an investigation in the Atlanta region). Consequently, the officials said that the division incurred travel and other related costs that could have been minimized if a centralized process had been in place to approve all new investigations. Further, one official from the Chairman’s office said that without a formal quality check by senior Enforcement officials, in some cases MUIs and investigations had been opened and allowed to linger for years with little likelihood of resulting in enforcement actions. In March 2007, Enforcement began using a new, more centralized approach to review and approve investigations. 
Under the new approach, two deputy directors, who report directly to the Director of Enforcement, are to review and approve all newly opened MUIs and investigations, assessing resource allocation considerations and determining whether the MUI should be pursued. One deputy director is to review MUIs opened in the division's home office and another deputy director, based in New York, is to review MUIs opened in regional offices. In addition to the MUI review, after an investigation is open for 6 months, staff will be required to prepare a memorandum with information on evidence gathered to date, whether an enforcement action is likely, resources, and estimated time frames for review by their deputy director. According to Enforcement officials, the goal of this new approach is to provide early assessments of whether an investigation ought to be pursued further and resources reallocated. The deputy directors are also expected to use this review to determine if the investigation is being conducted in a timely manner, if it should be reprioritized based on Enforcement's current caseload, or if it should be closed. While these are positive developments, Enforcement has not yet established comprehensive written policies specifying how the new approach will be carried out or the criteria that will be used to assess new MUIs and ongoing investigations. According to our and Office of Management and Budget (OMB) standards, documentation is one type of control activity that helps ensure that management's directives, such as these new procedures, are carried out. In spring 2007, Enforcement developed and distributed divisionwide a one-page planning document that, among other items, identified the new centralized approach for reviewing and approving MUIs and investigations. 
However, without the establishment of agreed-upon and written procedures for carrying out the new approach and relevant assessment criteria, the division may face challenges in consistently communicating and explaining the new approach to all current and new staff. Moreover, the Commission's ability to oversee how effectively Enforcement is implementing the new approach and generally managing its operations may be limited. For example, the lack of a transparent and documented standard could limit the Commission's capacity to identify inconsistencies in the implementation of the new approach, determine whether any such inconsistencies have affected Enforcement's operations, and take corrective action as warranted. Enforcement officials have consistently stated that the division's current information system for tracking investigations and enforcement actions—CATS—is severely limited and virtually unusable as a management tool. In particular, the officials have said that access to CATS is limited and the system does not allow division management to generate summary reports, which could be used to help manage operations on an ongoing basis. Currently, the only summary reports CATS readily produces for management review are lists of all open MUIs, investigations, and enforcement actions by general violation types, such as violations involving broker-dealers or investment advisers. CATS does not allow its users to create timely reports on more specific topics, such as ongoing investigations involving hedge funds, which do not exist as classification fields in the system. As a result of the system's limitations, several senior Enforcement management officials said that they maintain their own manual lists of certain types of investigations (such as those for hedge funds) to assist in managing division activities. 
Further, to obtain customized reports and statistics on Enforcement operations, division officials said that they must submit requests to SEC’s Office of Information Technology (OIT) and then wait for OIT staff to create and run a computer program to respond to the request. Enforcement officials said that OIT staff generally are responsive and work very hard to address these requests; however, given their heavy workload, one Enforcement official said that it generally takes 1-2 days to receive the information, and more complex requests can take as long as a week. Further, Enforcement officials said that obtaining technical support for CATS can be difficult because the system is proprietary and the company that created it is no longer in business. According to Enforcement officials, CATS’s deficiencies result from the fact that the system was hastily designed in preparation for expected year 2000 technical challenges. Having recognized CATS’s limitations, SEC and Enforcement officials are developing a new investigation information management system, called the Hub, which is scheduled to be in use divisionwide by the end of fiscal year 2007. According to Enforcement officials, division officials and staff in SEC’s Boston office developed a prototype of the Hub in 2004 because of their dissatisfaction with CATS. Subsequently, Enforcement, in coordination with OIT, developed an enhanced version of the Hub, which was then tested among home and regional office staff in late 2006 and early 2007. Enforcement officials said that the Hub is an interim system that will continue to interface with the CATS database until the second phase of the Hub fully replaces CATS, which is expected to occur in fiscal year 2009. Although the Hub is an interim system, Enforcement officials said that it will significantly enhance the division’s capacity to manage the investigative process. 
In particular, the officials said that the Hub will facilitate the creation of a variety of management reports on the division's investigative activities, including detailed reports on ongoing investigations by certain types (for example, reports on the number of hedge fund investigations). The Hub will also provide more detailed information on the status of investigations so management can better track their progress and timeliness. Further, the officials said that the Hub is designed to be (1) generally accessible to all division staff, although highly sensitive investigative information will be restricted on a need-to-know basis; (2) user-friendly, primarily employing drop-down menus for data entry; (3) searchable so that staff can identify relevant information associated with an investigative matter; and (4) flexible, because new data fields can be added. We reviewed prototype screens for the Hub and found that they were consistent with the descriptions of Enforcement officials, and staff we contacted generally made favorable comments about the system. However, due to significant planned changes to Enforcement's traditional approach for recording investigative data, there is a risk that data may not be entered into the Hub on a timely and consistent basis, as required by federal internal control standards. Enforcement has traditionally required support personnel or case management specialists (rather than attorneys) to enter investigative data into CATS because of the limited access to the system and its lack of user friendliness. However, once the Hub is implemented in late 2007, Enforcement officials said that they plan to require division attorneys to enter relevant data into the system for all investigations opened after that date. Further, Enforcement officials said that attorneys will be responsible for entering relevant data into the Hub for ongoing investigations that are being actively pursued but were initiated prior to the system's implementation. 
Enforcement officials regard the entry of such data as critical; otherwise, management reports generated by the Hub would only include information on investigations begun after the system's scheduled implementation in late 2007. One Enforcement official said that the decision to require attorneys to enter data into the Hub was based on the view that such attorneys have first-hand knowledge of ongoing investigations and thus would be able to streamline the process. However, another Enforcement official said that requiring attorneys to maintain timely, accurate, and consistent investigative data in the Hub would require a cultural change on the attorneys' part because they have become accustomed to relying on case management specialists to perform this task. Another Enforcement official questioned whether division attorneys would enter investigative data into the Hub on a timely and consistent basis because they may view doing so as another administrative requirement diverting them from their primary investigative responsibilities. While Enforcement's plans to require attorneys rather than case management specialists to enter data into the Hub may be appropriate, the division plans only a limited number of actions to ensure that data entered into the system are timely, consistent, and reliable. For example, Enforcement plans to train attorneys on the Hub as it is implemented and is developing a system user manual. However, Enforcement is not developing written guidance identifying data entry into the Hub as a priority for division attorneys and specifying how and when such data entry is to be done. Moreover, Enforcement has not yet established a written process that would allow division officials to independently review and determine the extent to which data entry for the Hub is performed on a timely, consistent, and reliable basis in accordance with federal internal control standards. 
Without doing so, the usefulness of management reports generated by the Hub may be limited, and the system's potential to significantly enhance Enforcement's capacity to better manage the investigative process may not be fully realized. Enforcement may leave open for years many investigations that are not being actively pursued, with potentially negative consequences for individuals and companies no longer under review. According to CATS data, about two-thirds of Enforcement's nearly 3,700 open investigations as of the end of 2006 were started 2 or more years before, one-third at least 5 years before, and 13 percent at least 10 years before. According to an Enforcement official, technical limitations in CATS make it difficult to readily determine how many of these investigations resulted in enforcement actions and how many did not. Nevertheless, other data suggest that the number of aged investigations that did not result in an enforcement action may be substantial. For example, Enforcement officials at one SEC regional office said that as of March 2007, nearly 300 of 841 open investigations (about 35 percent) were more than 2 years old, had not resulted in an enforcement action, and were no longer being actively pursued. Enforcement officials cited several reasons for division attorneys not always closing investigations promptly. In particular, the officials said that Enforcement attorneys may view pursuing potential securities violations as the division's highest priority and lack sufficient time, administrative support, and incentives to comply with established administrative procedures for closing investigations. For example, Enforcement requires attorneys to complete closing memoranda for each investigation that is to be closed. These memoranda must identify why the investigation was opened, describe the work performed, and detail the reasons for recommending that the investigation be closed without an action. 
Staff must also prepare draft termination letters, which inform individuals or companies that they are no longer under review. A closing memorandum is also required for investigations with associated enforcement actions. In these cases, the staff attorney must account for all ordered relief before the investigation is closed. One regional Enforcement official estimated that it could take as long as a month for a staff attorney to complete this process and submit the closing package to the home office, although senior division officials noted that attorneys typically would not spend all their time doing so. Once closing packages are received by the home office, Enforcement’s Office of Chief Counsel must then approve the closing of the investigation, at which point final termination letters are sent to affected individuals and companies. Enforcement officials in SEC’s home office said that a lack of resources in their office also contributed to delays in closing investigations. They noted that only one person in the division was assigned to processing closing packages for investigations. Consequently, the officials said there was a backlog of investigations for which the closing package had been completed but not reviewed. As of March 1, 2007, the backlog consisted of 464 investigations, according to an Enforcement official. However, Enforcement officials told us that in May 2007 they began eliminating the backlog of investigations with completed—but unreviewed—closing packages and had almost eliminated the backlog by mid-June 2007. The division recently added one staff person to work on administering closing procedures in the home office, and Enforcement officials have set a goal of processing new closing documentation within 2 weeks of receipt. Also in May 2007, Enforcement implemented revised procedures for sending termination letters for investigations that will not result in an enforcement action. 
Under the procedures, Enforcement will send the letters to individuals and companies at the start of the closing process rather than at the end. This particular effort will be emphasized on Enforcement’s intranet—EnforceNet. Enforcement officials said they changed this procedure out of concerns about fairness to those under investigation and to reduce any negative impact an open investigation may have on them. For example, a company may bar an individual from performing certain duties until a pending SEC investigation is resolved. Staff are generally encouraged to close investigations if they know they will not be bringing any enforcement actions, even if all of their investigative steps have not yet been completed. While the above steps are a positive development, they do not address the potentially large backlog of investigations that are not likely to result in enforcement actions and for which closing packages have not been completed. As a result, the subjects of many aged and inactive investigations may continue to suffer adverse consequences until closing actions are completed. We recognize that reviewing and resolving this potentially large backlog of investigations and enforcement actions likely would impose resource challenges for Enforcement. Nevertheless, the failure to address this issue—potentially through expedited administrative closing procedures for particularly aged investigations—would limit Enforcement’s capacity to manage its operations and ensure the fair treatment of individuals and companies under its review. According to available SEC data, the distribution of funds to harmed investors under the Fair Fund program remains limited after 5 years of operation. Enforcement officials, as well as consultants involved in Fair Fund plans, have cited a variety of reasons for the slow distribution, including challenges in identifying harmed investors, the complexity of certain Fair Funds, and the need to resolve tax and other issues. 
However, the largely decentralized approach that Enforcement and SEC have used in managing the Fair Fund program may also have contributed to distribution delays. SEC has announced plans to create a central Fair Funds office, but it is too early to assess this proposal because the office's staffing, roles and responsibilities, and procedures have not yet been determined. Further, Enforcement does not yet collect key data necessary to effectively monitor the Fair Fund program (such as data on fund administrative expenses for ongoing plans) because an information system designed to capture such data is not expected to be implemented until 2008. In the meantime, Enforcement has not ensured that reports intended to provide expense data for completed Fair Fund plans contain consistent information or are analyzed. Until Enforcement clearly defines the Fair Fund office's oversight roles and responsibilities and officials establish procedures to consistently collect and analyze additional data, the division will not be in an optimal position to help ensure the effective management of the Fair Fund program. As of June 2007, Enforcement officials said that they were tracking 115 Fair Funds created since the program's inception in 2002—up from the 75 identified in our 2005 report—largely because funds were created as part of a series of enforcement actions against mutual fund companies. The Fair Fund plans vary considerably in size and complexity, ranging from plans for small broker-dealers with relatively few customers to large corporate cases, according to SEC data. The smallest Fair Fund plan established (measured by the amount of funds ordered returned to investors) was $29,300 for alleged fraud at a hedge fund; the largest was $800 million for alleged securities fraud at American International Group, Inc. (AIG). 
Table 2 shows the 10 largest Fair Funds ordered through June 2007; 7 are court-created plans, and 3 have been established through SEC administrative proceedings. SEC monitors all Fair Fund plans regardless of their source. According to SEC data, from 2002 to 2007, federal courts and SEC administrative proceedings ordered individuals and entities subject to SEC enforcement actions to pay a total of $8.4 billion into Fair Fund plans, an increase of about 75 percent from the $4.8 billion in total Fair Funds we identified in our 2005 report. As of June 2007, $1.8 billion of the $8.4 billion (about 21 percent) had been distributed to harmed investors, according to SEC data. As shown in table 3, the amount distributed from court-overseen plans exceeded that distributed from SEC-overseen plans. According to Enforcement officials, the funds were distributed more slowly from SEC-overseen plans largely because much of the money ordered through SEC proceedings involves mutual fund market timing matters, which, as discussed later, are among the most complex Fair Fund plans. According to Enforcement officials and consultants who work on Fair Funds, a key reason for the slow distribution of Fair Funds has been the difficulty of identifying harmed investors in certain cases. Unlike typical securities class action lawsuits, Fair Funds may not rely on a claims-based process in which injured parties identify themselves by filing claims with trustees or other administrators. For example, in Fair Fund cases involving mutual fund market timing abuses, which account for many funds ordered into Fair Fund plans, Enforcement attorneys and plan administrators have assumed responsibility for identifying harmed investors. This step was taken because with the large number of affected investors and the nature of market timing violations, many such investors may not even have been aware that their accounts experienced losses. 
One Fair Fund plan consultant said that many harmed investors already had redeemed their shares in the affected mutual fund companies in prior years. Tracking down such former customers can be challenging because they may have changed their addresses several times, the consultant said. Several consultants and Enforcement officials also said that tracking down customers can be hard because securities brokers, through whom individuals may purchase mutual funds, may maintain customer account information rather than the mutual fund company itself. As a result, a Fair Fund administrator might need to contact and obtain the cooperation of relevant broker-dealers to obtain customer account information and make related distributions. The complexity of some cases can also impede the timely distribution of Fair Funds. For example, in mutual fund market timing cases, sophisticated analysis might be required to first identify trades that benefited from improper activity and then to calculate profits earned from those transactions and associated losses to investors, which may be spread across many such customers. According to a Fair Fund plan consultant and Enforcement officials, another significant challenge to the Fair Fund distribution involves retirement plans and the Employee Retirement Income Security Act of 1974 (ERISA), the federal law setting minimum standards for pension plans in private industry. Retirement plans hold assets on behalf of their beneficiaries, and it is not unusual for those assets to be invested with entities that become subject to Fair Fund enforcement actions. Thus, ERISA-covered retirement plans will be entitled to Fair Fund proceeds by virtue of such investments. But depending on circumstances, a Fair Fund distribution consultant may need to make determinations on a variety of complex issues before funds can be distributed, such as determining when such distributions become plan assets under ERISA. 
One Fair Fund consultant told us he spent a year waiting for Department of Labor clarification of relevant ERISA issues. Finally, determining the tax treatment of funds may also slow the distribution process. According to Fair Fund consultants, tax information must accompany Fair Fund distributions to investors so that recipients have some idea of how to treat their payments for tax purposes. Consultants and Enforcement officials told us that determining appropriate tax treatment has been time-consuming as they had no precedents upon which to draw. Depending on circumstances, an investor’s recovery of disgorged profits can constitute ordinary income or a capital gain—which can be taxed at different rates—or not represent taxable income at all. SEC ultimately hired a consulting firm to handle tax issues. One Fair Fund consultant told us that obtaining tax guidance from the Internal Revenue Service delayed the plan’s distribution by about 1 year. In addition to the factors discussed above, Enforcement’s largely decentralized approach to managing the process may also have contributed to delays in the distribution of Fair Funds. Currently, Enforcement staff attorneys in either SEC’s home office or 1 of its 11 regional offices who are pursuing individual enforcement cases take a lead role in the Fair Funds process, overseeing much of the work necessary to establish and maintain a fund. This includes supervising cases directly, overseeing consultants who design or administer distribution plans, and advising or petitioning courts presiding over Fair Fund plans. Enforcement officials said that the approach made sense from a Fair Fund administration standpoint because division attorneys have substantial knowledge of the regulated entity involved and the relevant enforcement action. Enforcement officials also said that senior officials in the home office have always played an important role in the oversight of the program. 
Their responsibilities have included providing guidance on selecting consultants, leading information-sharing and problem-solving efforts (for example, leading regular conference calls among fund consultants, parties involved in Fair Fund enforcement actions, their legal counsel, and SEC staff in Enforcement and elsewhere), and reviewing proposed Fair Fund distribution plans and recommending modifications as necessary. Outside consultants hired to design and implement Fair Fund plans told us that Enforcement staff attorneys assigned to their cases were dedicated and responsive and that the agency appears to be making a good faith effort to implement and oversee the Fair Funds provision. However, they also said that Enforcement’s delegated approach has resulted in delays, higher costs, and unnecessary repetition of effort. With different Enforcement staff handling different Fair Fund cases, the consultants said that Enforcement forgoes the opportunity to build institutional expertise and efficiencies. For example, one consultant said that Enforcement’s delegated management of the Fair Fund program has resulted in inefficiencies in key administrative aspects of the program, such as the development of standardized means of communicating with investors (for example, form letters) and the mechanics for distributing funds to them. Consequently, the consultant said that the Fair Fund program incurs a substantial amount of unnecessary administrative costs. Further, the consultants generally agreed that it would make sense for SEC to consider centralizing at least some aspects of the administration of Fair Fund plans to improve the efficiency of the distribution process. While Enforcement officials have cited benefits associated with the current management of the program, both SEC and division officials also acknowledged that it has created challenges. 
An official within the Chairman’s office said that the slow distribution of funds to harmed investors is of significant concern to the agency and that the lack of a centralized management approach has limited the development of standardized policies and controls necessary to facilitate disbursements. Further, Enforcement officials said that while Fair Fund work is important, it can divert investigative attorneys from pursuing other cases. The officials said that the Fair Fund workload on any particular case varies over time, but during peak periods it can consume about 50 to 75 percent of a staff attorney’s time. At one SEC regional office, Enforcement officials said that administering the Fair Fund program has resulted in a significant commitment of attorneys’ time, especially because the office lost almost 25 percent of its investigative staff due to attrition in the past year or so. In response to concerns about the slow distribution of Fair Fund proceeds to harmed investors, SEC’s Chairman took two actions in 2007. First, he established an internal agency committee to examine the program’s operation. The committee—which includes representatives from Enforcement, General Counsel, the Office of the Secretary, and the Office of the Executive Director—is assessing lessons learned in program implementation, the agency’s selection of consultants to administer the plans, and SEC’s policies and procedures for managing the program. An Enforcement official said that the committee is expected to complete its analysis by September 30, 2007. Second, in March 2007, the Chairman announced plans to create a centralized Fair Fund office. The Chairman stated that the purpose of the new office is to develop consistent fund distribution policies and dedicate full-time trained staff to ensure the prompt return of funds to harmed investors. 
While creating a central office within SEC could facilitate the distribution of Fair Funds, it was not yet possible to assess the planned office's potential impact at the time of our review. For example, SEC had not announced which SEC unit the office would report to, although one official said that the office probably would be located within Enforcement. Further, SEC had not staffed the new Fair Fund office and had not established the roles and responsibilities of the new office or written relevant policies and procedures. For example, SEC had not determined the extent to which the new office might assume complete responsibility for managing at least some Fair Fund plans, although it is expected that the office will continue to provide support to division attorneys who currently manage such plans. Until such issues are resolved, the new office's potential to distribute Fair Fund proceeds to harmed investors more quickly will not be realized. Enforcement does not collect key data, as we recommended in 2005, to aid in division oversight of the Fair Fund program. In particular, Enforcement does not systematically collect data on administrative expenses for all ongoing Fair Fund plans. These expenses range from the fees that Fair Fund administrators and consultants charge to the costs of identifying harmed investors and sending checks to them. Approximately two-thirds of individual Fair Funds pay for administrative costs from fund proceeds, so that the greater the administrative expenses, the less money is available for distribution to harmed investors, according to our analysis of SEC Fair Fund information. However, without data on administrative expenses charged, Enforcement cannot judge the reasonableness of such fees and take actions as necessary to minimize them. Enforcement officials generally attributed SEC's inability to implement our 2005 recommendation to changes in priorities for the development of the agency's information systems. 
After we issued our 2005 report, Enforcement officials said they began working to modify the CATS system so that it could better track Fair Fund administrative expenses and other data. However, SEC ultimately decided to accelerate the development of a new financial management system for the division, called Phoenix. SEC and Enforcement officials said that the agency implemented the first phase of Phoenix in February 2007. The first phase includes limited information relevant to the Fair Fund program (the amount of money ordered in penalties and disgorgement and the amount paid to the agency), but it does not include data on such items as fund administrative expenses. Enforcement officials told us that a second phase of the Phoenix system will contain additional features for more complete management and monitoring of Fair Fund activity, consistent with our 2005 recommendation. According to Enforcement officials, Phoenix II has been funded and is expected to be in place in 2008. Until Phoenix II is implemented and tested, Enforcement officials will continue to lack information necessary for effective Fair Fund management and oversight. We also note that in the meantime, Enforcement has not leveraged reports that could enhance the division’s understanding of Fair Fund expenses, including administrative expenses. SEC rules generally require that final accounting reports be prepared when SEC-overseen Fair Funds are fully distributed and officially closed. We reviewed four such reports and found that three of them were inconsistent in data reported and did not include comprehensive accounting information. For example, two of the accounting reports did not include complete data on the expenses incurred to administer the Fair Fund plan. Further, senior Enforcement officials said that the division could improve its analysis of information contained in the reports. 
As a result, Enforcement cannot evaluate the reasonableness of administrative expenses for individual Fair Fund plans or potentially gain a broader understanding of the reasonableness of such expenses among a variety of plans. Enforcement has established a variety of processes to coordinate its investigative and law enforcement activities with other SEC offices. Further, Enforcement has established processes to coordinate its investigative activities with other law enforcement agencies, including Justice. However, Enforcement and SEC have not yet implemented our 2005 recommendation that they document referrals of potential criminal activity to other agencies, although plans to do so have been established as part of the division’s new investigation management information system (the Hub). Until Enforcement completes this process, its capacity to effectively manage the referral process is limited. Enforcement officials said that they hold a variety of meetings periodically to coordinate investigative and other activities within SEC. As discussed previously, senior Enforcement officials said they meet regularly with the SEC Chairman and commissioners to establish investigative priorities. According to the Director of Enforcement, she meets weekly with the heads of other SEC divisions, and other senior division officials said that they meet periodically with their counterparts in other agency units. Enforcement officials cited their coordination with other SEC units on investigations of the backdating of stock options as an example of the agency’s successful collaborative efforts. One Enforcement official said that division staff worked closely with the Office of Economic Analysis to analyze financial data and trends related to options backdating, which allowed them to identify patterns used to target companies for further investigation. 
This official said that Enforcement also collaborated with the Office of the Chief Accountant, the Division of Corporation Finance, and the Office of the General Counsel throughout this effort. Enforcement officials also said that coordinating their activities with OCIE is particularly important and that the division places a high value on referrals it receives from OCIE regarding potentially illegal conduct. Enforcement officials said that because OCIE staff regularly examine broker-dealers, investment advisers, and other registered entities, they have a broad perspective on compliance with securities laws and regulations. Enforcement officials in SEC's Philadelphia regional office estimated that about 30 to 35 percent of the enforcement actions the Philadelphia office initiates are based on referrals from OCIE staff. They cited one notable recent insider trading case—involving broker-dealer Friedman, Billings, Ramsey & Co., Inc., which was among the first cases of its kind since the 1980s—as stemming from a referral from an OCIE examination. However, other Enforcement officials said that historically they have had some concerns about limitations in information contained in OCIE referrals. These concerns centered on how clearly OCIE identifies potentially improper conduct in its referrals and how much evidence it provides in support of such matters. As a result, in November 2006, OCIE and Enforcement instituted a process that would provide a more formal review of the nature and quality of OCIE referrals. According to OCIE and Enforcement officials, the new procedures expand and formalize a preexisting committee process for reviewing OCIE referrals to Enforcement and communicating the ultimate outcome of those referrals to OCIE. 
The officials said the revised procedures were instituted to (1) help identify the types of OCIE referrals that were likely (or not) to result in enforcement actions and (2) provide better information to OCIE on the ultimate disposition of its referrals. Enforcement officials noted that the division receives many more referrals from OCIE than from any other SEC division or office; therefore, developing a formal committee and tracking process for other internal referrals has not been viewed as warranted. SEC also receives referrals from self-regulatory organizations (SROs)—such as what is now the Financial Industry Regulatory Authority (FINRA)—often involving allegations of insider trading; these referrals are received by Enforcement's Office of Market Surveillance (OMS). In a forthcoming report, we assess OMS's and Enforcement's processes for reviewing and prioritizing these SRO referrals. Enforcement officials also said that division staff have established processes to coordinate their investigative activities and working relationships with other law enforcement and regulatory agencies. For example, Enforcement officials in SEC's regional offices said they have established effective working relationships with U.S. attorney offices to prosecute alleged criminal violations of the securities laws. In our 2005 report, we discussed how Enforcement worked with Justice and state attorneys general to prosecute investment advisers that allegedly violated criminal statutes related to market timing and late trading. In some cases, Enforcement details investigative attorneys to Justice to assist in the criminal prosecution of alleged securities law violators. Other outside organizations with which SEC and Enforcement coordinate investigative activities include the Federal Bureau of Investigation, federal banking regulators, the Commodity Futures Trading Commission, state securities regulators, and local police. 
Enforcement also participates in interagency investigative task forces, such as the Corporate Fraud Task Force, the Bank Fraud Enforcement Working Group, and the Securities and Commodities Fraud Working Group. Additionally, in March 2007, Enforcement held its annual conference at SEC’s Washington headquarters on securities law enforcement, which federal and state regulators and law enforcement personnel attended. Topics covered included coordination of SEC investigations with criminal law enforcement agencies and advice on trying a securities case. SEC also conducted sessions on market manipulation, insider trading, financial fraud, stock options backdating, and executive compensation. In September 2007, Enforcement will join other SEC units in hosting the Commission’s second Seniors Summit, at which SEC, other regulators, and law enforcement agencies will discuss how to work together to address the growing problem of fraud targeting the nation’s senior citizens. Although Enforcement officials say they are planning to do so, they have not yet fully implemented our 2005 recommendation to document Enforcement’s referrals of potential criminal matters—and the reasons for making them—to other law enforcement agencies. As discussed in that report, SEC has established a policy under which Enforcement attorneys may make referrals on an informal basis to Justice and other agencies with authority to prosecute criminal violations. That is, Enforcement attorneys may alert other agencies to potential criminal activity through phone calls or meetings, and such referrals need not be formally approved by the division or the Commission. We noted that such an informal referral process may have benefits, such as fostering effective working relationships between SEC and other agencies, but also found that Enforcement did not require staff to document such referrals. Appropriate documentation of decision-making is an important management tool. 
Without such documentation, Enforcement and the Commission cannot readily determine whether staff make appropriate and prompt referrals. Also, the division does not have an institutional record of the types of cases that have been referred over the years. However, Enforcement officials told us that the forthcoming Hub system will include data fields that indicate when informal referrals of potential criminal activity have been made. In recent years, SEC’s Enforcement division and investigative attorneys have initiated a variety of high-profile enforcement actions that resulted in record fines and other civil penalties for alleged serious securities violations and contributed to criminal convictions for the most egregious offenses. While Enforcement has demonstrated considerable success in carrying out its law enforcement mission, some significant limitations in the division’s management processes and information systems have hampered its capacity to operate at maximum effectiveness and use limited resources efficiently. One key reason for these limitations appears to have been Enforcement’s management approach, which emphasized a broad delegation of key functions with limited centralized management review and oversight, particularly in the approval and review of new investigations and the administration of the Fair Fund program. Delegation of authority is an important management principle that can foster creativity at the local level and, in the case of Enforcement, likely had some benefits for the investigative process and the administration of the Fair Fund program. However, without well-defined management processes to exercise some control over delegated functions, inefficient program implementation and resource allocation can also occur. Officials from Enforcement and the Offices of the Chairman and Executive Director have recognized limitations in the division’s operations and taken important steps to establish more centralized oversight procedures. 
In particular, they have centralized the review and approval of new investigations, moved forward to upgrade or replace information systems key to division operations and management, and announced the creation of a new Fair Fund office. However, as described below, these plans require additional actions to fully address identified limitations and maximize the division’s operational effectiveness. Enforcement has not developed written procedures and criteria for reviewing and approving new investigations. Establishing such guidance would help focus the review of investigations and reinforce the consistency of reviews, as intended by the centralization of this function, and assist in communicating the new policies to all current and new staff. Further, developing written procedures and criteria would establish a transparent and agreed-upon standard for the review and approval of new investigations and thereby facilitate the Commission’s ability to oversee and evaluate the division’s operations and resource allocation. Enforcement has not developed written controls to help ensure the timely and consistent entry of investigative data in the Hub information system, which could increase the risk of misleading or inaccurate management reports being generated by the system. Without written guidance and the establishment of independent and regular reviews of the accuracy of Hub data by division officials, Enforcement is not well positioned to help ensure that it is receiving reliable program information. Further, the lack of guidance and controls may limit the new system’s capacity to better manage the investigation process. Enforcement’s potentially large backlog of investigations for which closing memoranda and other required administrative procedures have not been completed requires division management’s attention. We recognize that clearing this potentially large backlog could pose challenges to Enforcement given the resource commitment that would be required to do so. 
Nevertheless, leaving such investigations open indefinitely continues to hamper management's ability to effectively oversee its ongoing portfolio of cases. Moreover, it has potentially negative consequences for individuals and companies that are no longer under investigation. SEC has not yet staffed or defined the roles and responsibilities of the new office that is being established to administer the Fair Fund program. Therefore, it is not possible to determine the extent to which the office may better facilitate the distribution of funds to investors harmed by securities frauds and other violations. While Enforcement awaits the development and implementation of a new information system that would collect comprehensive information on Fair Fund expenses for ongoing plans (for example, administrative expenses), the division has not taken other steps that would, in the meantime, allow it to develop a better perspective on the reasonableness of such expenses. That is, Enforcement has not ensured the consistency of information contained in reports on completed Fair Fund plans or sufficiently analyzed such reports, compromising its capacity to monitor the program. Given SEC and Enforcement's critical law enforcement mission, it is important that senior officials ensure that weaknesses in their planned improvements be addressed and implementation monitored. Without a full resolution of existing limitations, a significant opportunity to further enhance the division's effectiveness may be missed. 
To strengthen Enforcement’s management processes and systems and help ensure compliance with securities laws, we recommend that the Chairman of the Securities and Exchange Commission direct the Division of Enforcement and other agency offices, such as the Office of Information Technology or Office of the Executive Director, as appropriate, to take the following four actions: establish written procedures and criteria on which to base the review and approval of new investigations; establish written procedures that reinforce the importance of attorneys entering investigative data into the Hub, provide guidance on how to do so in a timely and consistent way, and establish a control process by which other division officials can independently assess the reliability of investigative data maintained in the system; consider developing expedited administrative and review procedures for closing investigations that have not resulted in enforcement actions and are no longer being actively pursued; and establish and implement a comprehensive plan for improving the management of the Fair Fund program, to include (1) staffing the new central Fair Fund office, defining its roles and responsibilities, and establishing relevant written procedures and (2) ensuring the consistency of and analyzing final accounting reports on completed Fair Fund plans. We provided a draft of this report to the Chairman of SEC for comment, and he and the Director of the Division of Enforcement provided written comments that are reprinted in appendix III. In its written comments, SEC agreed with our conclusions and stated it would implement all of our recommendations. Moreover, SEC officials noted that the agency has since established that the new Fair Fund office—referred to as the Office of Distributions, Collections and Financial Management—will be located within the Division of Enforcement. 
SEC officials said that a senior officer and two assistant directors will lead the operations of the office and the agency is developing the office’s responsibilities. SEC also provided technical comments, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will provide copies to the Chairman of the Senate Committee on Finance; the Chairman and Ranking Member of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Member of the House Committee on Financial Services; and other interested committees. We are also sending a copy of this report to the Chairman of the Securities and Exchange Commission. We will make copies available to others upon request. In addition, the report will be available at no charge on our Web site at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Key contributors are acknowledged in appendix IV. To address our first objective—evaluating the Securities and Exchange Commission (SEC) Division of Enforcement’s (Enforcement) internal processes and information systems for planning, tracking, and closing investigations and planned changes to these processes and systems—we reviewed relevant SEC and Enforcement documentation and data, including the agency’s strategic plan, annual performance reports, performance measurement data, investigation and enforcement action data from the Case Activity Tracking System (CATS), and Enforcement personnel data. 
We also reviewed guidance on Enforcement's intranet—EnforceNet—to determine internal procedures for conducting and managing the investigation process and obtained documentation and attended demonstrations for the first phase of Enforcement's new planned successor system for CATS (the Hub) and for the base model system for the Hub (M&M). We also reviewed prior GAO reports on SEC and Enforcement processes and information technology systems, as well as federal internal control standards. Further, we interviewed the SEC Chairman, two commissioners, officials from SEC's Offices of the Executive Director and General Counsel, and Enforcement officials in Washington and the agency's New York, Boston, and Philadelphia regional offices. To address our second objective—evaluating the implementation of SEC's Fair Fund responsibilities—we reviewed a 2005 GAO report that discussed SEC's Fair Fund process, as well as relevant legislation. We also obtained and analyzed summary Fair Fund statistics and documentation and data on individual funds, as provided by Enforcement. However, Enforcement did not provide data on 24 Fair Fund plans that were identified in our 2005 report. Among the reasons Enforcement officials cited for the omissions were that some of the 24 funds had been fully distributed and thus were not included in an information system established in 2006 that was designed to track only ongoing plans. However, these 24 Fair Fund plans are generally smaller (accounting for about $118 million or 1 percent of total Fair Funds), and their exclusion does not change our overall conclusion that distributions have been limited. In addition to Fair Fund data, we reviewed SEC guidance on Fair Funds, including rules on distribution plans, tax treatment, selection of consultants, and distribution procedures. We also reviewed Fair Fund guidance from the U.S. Department of Labor. 
In addition to discussing the Fair Fund program with relevant SEC and Enforcement officials, we interviewed six consultants hired to design and implement Fair Fund plans, attorneys, consumer advocates, an academic expert, and a representative of a retirement plan service provider trade group. To address the third objective—evaluating Enforcement's efforts to coordinate investigative activities with other SEC divisions and federal and state law enforcement agencies—we reviewed previous GAO reports on mutual fund trading abuses and Enforcement's coordination efforts with law enforcement. We also reviewed relevant SEC documentation, including internal referral policies, and guidance regarding coordination between Enforcement and outside law enforcement authorities. We also attended SEC's annual securities coordination conference held in Washington in March 2007, whose attendees were primarily federal and state regulators and law enforcement personnel. Further, we discussed Enforcement's coordination efforts with relevant SEC and division officials. We conducted our work in Washington, D.C.; Boston, Massachusetts; Philadelphia, Pennsylvania; and New York, New York, between November 2006 and July 2007 in accordance with generally accepted government auditing standards. We collected data from the Securities and Exchange Commission (SEC) on the Division of Enforcement's (Enforcement) investigative caseload and other personnel information (see tables 4-6 below). As shown in table 4, the ratio of ongoing Enforcement investigations to staff attorneys increased substantially from about five investigations per attorney in 2002 to eight per attorney in 2006, according to SEC data. However, these SEC data should be interpreted with caution and may significantly overestimate the number of investigations per Enforcement attorney. 
The reported number of investigations includes all open investigations at the end of each year, even investigations that have been open for many years. As discussed in this report, Enforcement has not promptly closed many investigations that have not resulted in enforcement actions and are likely no longer being actively pursued. Accordingly, we requested that SEC provide data on the number of ongoing investigations in Enforcement that as of year-end 2006 had been initiated within the previous 2 years. When pending investigations that were more than 2 years old are excluded, the investigation-to-staff-attorney ratio drops to 2.54. While this ratio may provide a more accurate assessment of Enforcement attorneys' active workloads, individual investigations more than 2 years old could continue to be actively pursued while some individual investigations less than 2 years old may no longer be actively pursued. Enforcement officials estimated that staff attorneys generally work on 3 to 5 investigations at a time, including administering individual Fair Fund plans. Table 5 shows that the ratio of Enforcement investigative attorneys to paralegals, who provide support to the investigative process, generally declined from 2003 to 2006. Our review of SEC data indicates that the number of Enforcement paralegals increased substantially from 2003 to 2005 (from 58 to 98, or 69 percent) and then declined slightly to 94 in 2006 (just over 4 percent). While the number of Enforcement staff and supervisory attorneys also increased from 2003 to 2005 (from 596 to 740, or 24 percent), the rate of increase was not nearly as high as for paralegals. In addition, the number of Enforcement investigative attorneys declined from 740 in 2005 to 684 in 2006, or 8 percent. The relatively slower pace of attorney hiring from 2003 to 2005 and the relatively higher rate of attrition in 2006 help explain why the ratio of attorneys to paralegals has declined in recent years. 
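The staffing percentages cited above follow from simple arithmetic on the counts in tables 4 and 5. As an illustrative check only—using no data beyond the attorney and paralegal counts reported in this appendix—a short script reproduces the rounded figures:

```python
# Illustrative check of the staffing percentages reported in this appendix,
# using only the attorney and paralegal counts cited in the text.

def pct_change(old: int, new: int) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Paralegals: 58 (2003) -> 98 (2005) -> 94 (2006)
print(round(pct_change(58, 98)))    # 69  (increase, 2003-2005)
print(round(pct_change(98, 94)))    # -4  (decline, 2005-2006)

# Investigative attorneys: 596 (2003) -> 740 (2005) -> 684 (2006)
print(round(pct_change(596, 740)))  # 24  (increase, 2003-2005)
print(round(pct_change(740, 684)))  # -8  (decline, 2005-2006)
```

Because paralegal headcount grew roughly 69 percent while attorney headcount grew only 24 percent (and then fell), the attorney-to-paralegal ratio necessarily declined over the period, as table 5 shows.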
Other SEC data that we reviewed also indicated a decline in the ratio of investigative attorneys to various types of administrative support staff, such as research specialists, during this period. Table 6 shows that the ratio of investigative staff attorneys to supervisory attorneys has remained relatively constant. Supervisory attorneys are branch chiefs and assistant directors and do not include attorneys at the associate director level and above. In addition to the contact named above, Wesley M. Phillips, Assistant Director; Allison Abrams; Christopher Forys; Marc Molino; Carl Ramirez; Linda Rego; Barbara Roesmann; and Christopher Schmitt made significant contributions to this report.
The Securities and Exchange Commission's (SEC) ability to conduct investigations and bring enforcement actions for violations of securities laws is critical to its mission to protect investors and maintain fair and orderly markets. SEC's Division of Enforcement (Enforcement) is charged with investigating securities law violations; recommending civil enforcement actions when appropriate, either in a federal court or before an administrative law judge; and negotiating settlements on behalf of the Commission. The types of sanctions that Enforcement can seek on behalf of the Commission include monetary penalties or fines and disgorgements of the profits that individuals or companies may derive from securities violations. While SEC has only civil authority, it also works with various law enforcement agencies, including the United States Department of Justice (Justice), to bring criminal cases when appropriate. In addition, Enforcement is responsible for overseeing the Fair Fund program, which seeks to compensate investors who suffer losses resulting from fraud or other securities violations by individuals and companies. Under the Fair Fund program, SEC can combine the proceeds of monetary penalties and disgorgements into a single fund and then distribute the proceeds to harmed investors. In recent years, Enforcement has initiated high-profile actions that resulted in record civil fines against companies and senior officers and in some cases contributed to criminal convictions. However, the capacity of SEC in general and Enforcement in particular to appropriately plan and effectively manage their activities and fulfill their critical law enforcement and investor protection responsibilities on an ongoing basis has been criticized in the past. 
Although SEC received a substantial increase in its appropriations as a result of the Sarbanes-Oxley Act of 2002, questions have been raised in Congress and elsewhere on the extent to which the agency is using these resources to better fulfill its mission. Moreover, we have reported that aspects of Enforcement's information systems and management procedures could limit the efficiency and effectiveness of its operations. For example, we found in 2004 that Enforcement faced challenges in developing the advanced information technology necessary to facilitate the investigative process. In addition, we reported in 2005 that the distribution of funds to harmed investors under the Fair Fund program was limited and that Enforcement had not developed adequate systems and data to fulfill its oversight responsibilities. Because of congressional interest in ensuring that SEC effectively manages its resources and helps ensure compliance with securities laws and regulations, Congress requested that we review key Enforcement management processes and systems and follow up on our previous work where appropriate. Accordingly, this report evaluates Enforcement's (1) internal processes and information systems for planning, tracking, and closing investigations and planned changes to these processes and systems; (2) implementation of SEC's Fair Fund program responsibilities; and (3) efforts to coordinate investigative activities with other SEC divisions and federal and state law enforcement agencies. Enforcement's processes and systems for planning, tracking, and closing investigations have had some significant limitations that have hampered the division's capacity to effectively manage its operations and allocate limited resources. While Enforcement and SEC officials are aware of these deficiencies and have recently begun addressing them, additional actions are necessary to help ensure that the planned improvements fully address limitations in the division's operations. 
In March 2007, Enforcement said it would centrally review and approve all new investigations of potential securities law violations by individuals or companies. Under Enforcement's previous, largely decentralized approach, the division was not always able to ensure the efficient allocation of resources or maintain quality control in the investigative process. While the new centralized approach was designed to help address these issues, Enforcement has not yet established written procedures and criteria for reviewing and approving new investigations. Without such procedures and criteria, Enforcement may face challenges in consistently communicating the new approach to existing and new staff. The lack of written procedures and criteria could also limit the Commission's ability to evaluate the implementation of the new approach and help ensure that the division is managing its operations and resources efficiently. Recognizing that the division's current information system for tracking investigations and enforcement actions--Case Activity Tracking System (CATS)--is severely limited as a management tool, Enforcement plans to start using a new system (the Hub) by late 2007. The deficiencies of CATS include its inability to produce detailed reports on investigations of certain types or the status of such investigations. While the Hub is designed to address many of CATS's deficiencies, the way that the system is being implemented may not address all existing limitations. More specifically, Enforcement has not established written controls to help ensure that staff enter investigative data in the Hub in a timely and consistent manner. Without such controls, management reports generated by the Hub may have limited usefulness, and the system's capacity to assist Enforcement in better managing ongoing investigations will not be fully realized. 
In May 2007, Enforcement implemented procedures to help ensure the prompt closure of investigations that are no longer being pursued and thereby better ensure the fair treatment of individuals and companies under review, but these procedures do not fully address the entire backlog of these investigations. One regional Enforcement official said that as of March 2007, nearly 300 of the office's 840 open investigations were 2 or more years old, were no longer being pursued, and had no pending enforcement actions. Enforcement officials said that the failure to close such investigations promptly could have negative consequences for individuals and companies no longer suspected of having committed securities violations. They attributed the failure to close many investigations to several factors, such as time-consuming administrative requirements for attorneys to prepare detailed investigation closing memorandums that then must be routed to senior division officials for review and approval. To address these issues, Enforcement plans to inform individuals and companies more promptly that they are no longer under review and to expedite the review and closure of the existing backlog of investigations for which administrative tasks have been completed (as of March 2007, there were 464 such investigations). However, Enforcement's plans do not include clearing the potentially large backlog of investigations for which such administrative tasks have not been completed, a delay that could negatively affect individuals and companies no longer actively under review.
The debt problems of many of the world’s heavily indebted lowest income countries continue to be a challenge for the international community. Most of these countries’ debt is owed to official creditors consisting of other governments (bilateral) and international financial institutions (multilateral). Despite debt relief efforts undertaken largely by bilateral and commercial creditors since the 1980s, the overall debt burden of the poor countries has increased. The debt burdens are of concern for two reasons: they may hamper economic development in debtor countries and they involve the lenders and debtors in a time-consuming pattern of rescheduling debt, providing new loans, and supplying donor assistance. To address the growing debt burden, in September 1996 governments around the world agreed to the Heavily Indebted Poor Countries (HIPC) Debt Initiative developed by the World Bank and the International Monetary Fund (IMF). The initiative was intended to build on existing debt relief efforts and bring together all of a country’s creditors to provide debt relief in conjunction with policy reforms to allow countries to exit from the rescheduling process. Establishment of the HIPC initiative involved resolution of differences among creditors concerning the need for expanded debt relief. Despite repeated efforts to relieve the debt burden of developing countries, the total amount of money owed to external creditors by the 40 countries classified by the World Bank and the IMF as heavily indebted poor countries increased from an average of $122 billion for 1983-85 to $221 billion for 1993-95 (nominal value, in 1997 dollars). (See app. I for a list of countries.) Figure 1.1 shows the composition of this debt among three categories of medium- and long-term debt, as well as short-term debt. 
Although total external debt increased substantially between these periods, the amount of medium- and long-term debt owed to private financial institutions (commercial creditors) decreased. Because these countries are seen as high credit risks, they have had limited access to private sector financing. For 1993-95, 73 percent of total external debt was medium- and long-term debt owed to official creditors, with the majority of that amount owed to bilateral creditors. The remaining 27 percent was medium- and long-term debt owed to commercial creditors and short-term debt such as trade financing. Of the total debt, 45 percent was owed to governments (bilateral creditors) and 28 percent was owed to international financial institutions (multilateral creditors). According to the U.S. Treasury, as of August 1998, 31 of the 40 countries had outstanding debt of approximately $6.8 billion to the United States. (See app. II for the amount owed by each country.) By the mid-1990s, much of the debt owed by the 40 countries was not being paid. According to the World Bank, heavily indebted poor countries made roughly 50 percent of their scheduled debt payments during 1994. We estimated that, during 1993-95, HIPC countries paid about 41 percent of their debt service owed. Although the largest share of the debt during the later period was owed to bilateral creditors, a majority of the debt service paid was to multilateral institutions, due to these institutions’ requirement that countries fully service their debt before receiving new lending. Significantly, the share of long- and medium-term debt service paid to multilateral creditors increased from 29 percent to 52 percent of the total, while the share paid to commercial creditors decreased from 44 percent to 22 percent. (See fig. 1.2.) Addressing the debt burdens of very poor countries, in the context of the broad range of development needs they face, constitutes a substantial challenge. 
Thirty-two of the 40 countries classified by the World Bank and the IMF as heavily indebted poor countries are in sub-Saharan Africa. Eighty-three percent of these countries are classified by the United Nations as being in its lowest category of human development, based on life expectancy, literacy, and per capita national income. Most receive substantial amounts of development assistance from governments, multilateral organizations, and nongovernmental organizations (NGO). We estimated that in 1994 foreign assistance represented about 16 percent of national income, using a weighted average, for 36 of these countries for which data are available. Some of the 40 countries, moreover, have recently emerged from—or continue to be engaged in—conflict or civil unrest. Although countries may incur external debt as part of their development strategy, development experts, including officials from the World Bank, the United Nations, and NGOs, have cited several reasons why the debt burdens of some poor countries are a concern. Some development experts believe that debt levels above a certain threshold amount relative to a country’s economic capacity may, in and of themselves, limit economic growth. This has been termed the “debt overhang effect.” This effect reflects the view that if a country has substantial debt obligations, the debt will discourage current investment in the debtor country, due to a concern that future income may be highly taxed to pay debt. Other experts question whether debt overhang constitutes a serious obstacle to investment in HIPC countries, in light of the additional impediments to investment that these countries face, such as weak financial institutions and inadequate physical infrastructure. Nonetheless, many experts agree that high debt payments constitute a drain on a country’s budget, potentially lowering the amount of money available for health and education spending and, for many countries, requiring further loans or grants. 
For the poorest countries, this can mean an increasing percentage of new aid will go to service existing debt rather than to aid in development. Finally, rescheduling and financing debt payments have been time-consuming for both creditors and debtors. For example, according to Department of State data, potential recipients of HIPC debt relief have concluded about 100 debt negotiations with the Paris Club over the last 10 years. Debt relief efforts since the 1980s have been undertaken primarily by bilateral and commercial creditors. However, these prior efforts have not resulted in a substantial reduction in the overall debt owed by poor countries. Some efforts aimed at poor countries have actually increased debt levels by, for example, converting interest payment arrears into new debt. Other mechanisms have left the debt of poor countries largely unaffected, notably the Baker and Brady plans of the 1980s. These plans focused on resolving the commercial debt problems of middle-income countries by essentially providing funds for countries to buy back part of their commercial bank debt. Two instruments have been used to reduce the commercial bank debt of some heavily indebted poor countries. Sixteen countries have received $11.8 billion of debt reduction since 1989, although about one third of this reduction has been for one country, Côte d’Ivoire. These instruments are the Debt Reduction Facility of the International Development Association (IDA), the part of the World Bank that lends to poor countries on highly concessional terms, and, more recently, officially supported debt and debt service reduction programs (Brady operations). According to World Bank data, through the Debt Reduction Facility, 16 countries had retired about $4.2 billion of principal and interest arrears owed to commercial banks, as of December 1997. 
In May 1997, Côte d’Ivoire also received debt reduction through the second mechanism, when Côte d’Ivoire reached an agreement with commercial creditors that resulted in debt reduction of $4.1 billion. The restructuring agreement helped Côte d’Ivoire to clear unpaid interest owed to commercial creditors and ensure that commercial creditors would provide relief at least comparable to that offered by official creditors. Bilateral creditors have forgiven some debt and renegotiated debt payments by lowering interest rates or extending due dates. Some bilateral creditors have individually forgiven debt owed by poor countries, but these amounts have not been large relative to the total bilateral debt owed. For example, between 1990 and 1997, the United States forgave $2.3 billion, or 37 percent, of the $6.1 billion of debt we estimate was owed by the 40 HIPC countries as of the end of fiscal year 1989. According to an Organization for Economic Cooperation and Development report, since 1989 France has forgiven over $10 billion in official development assistance (ODA) debt owed by countries in sub-Saharan Africa. According to the German government, Germany has forgiven or pledged to forgive about $5 billion in ODA debt owed by poor countries. More often, bilateral creditors have worked together to offer debt relief to poor countries by rescheduling debt payments on concessional terms or reducing debt through the Paris Club. To qualify for Paris Club relief, countries must be in imminent default and reach an agreement with the IMF on a reform program. The Paris Club conditions its debt relief on countries’ implementation of economic and structural reforms under IMF-supported lending programs, such as the ESAF. Disbursement of relief is then conditioned on satisfactory implementation of the reform program, generally lasting 3 years. Since 1988, the Paris Club has treated debt owed by poor countries on increasingly concessional terms. 
In many cases these efforts did not significantly reduce debt but instead mainly focused on helping countries meet debt payments within the short term by altering payment due dates or interest rates, rather than on forgiving debt. Some debtors sought repeated rescheduling. In 1988, the Paris Club became the first group of creditors to offer countries the option of reducing the amount of debt. Under the most recent terms of the Paris Club adopted in 1994, called “Naples terms,” countries could receive up to a 67-percent reduction in eligible debt under a stock-of-debt operation. Naples terms broadened the range of eligible debt, elaborated procedures for reducing a country’s debt, allowed for a reduction in the amount of debt owed, and were intended to allow the countries to stop rescheduling debt in the future. Multilateral creditors generally have not rescheduled or reduced debt owed them because of their belief that forgiving or reducing debt would diminish assurances of repayment on new lending. Multilateral development banks were also concerned that forgiving debt would hurt their credit ratings. Instead, multilateral creditors have relied on increased concessional lending and relief from bilateral creditors to enable countries to continue servicing their multilateral debt. Since the 1990s, there has been growing recognition that some poor countries were having increasing difficulty servicing their multilateral debt. For example, during a Paris Club restructuring of Uganda’s debt in the mid-1990s, some creditors concluded that debt relief from bilateral creditors would not sufficiently ease the country’s debt burden because most of Uganda’s debt was owed to multilateral creditors. Moreover, creditors and others were concerned that a greater percentage of new lending was being used to service existing debt rather than for development purposes. 
These concerns contributed to the industrialized nations’ call for a new approach to address the debt of heavily indebted poor countries, including that owed to multilateral creditors. The HIPC initiative is the first coordinated effort to include all creditors, most notably the multilaterals, in addressing the debt problems of heavily indebted poor countries. Participating creditors include bilateral governments; the major multilateral creditors such as the World Bank and the IMF; and over 20 other multilateral development institutions, including the African Development Bank, the Inter-American Development Bank, and the International Fund for Agricultural Development. (See app. III.) According to the World Bank, over the past 2 years the Boards of the World Bank and the IMF have met about 30 times each, and about 25 multilateral development banks have been meeting every 6 months under the chairmanship of the World Bank to coordinate the implementation of the HIPC initiative. In 1996, the World Bank and the IMF made a preliminary determination regarding which of the 40 countries might eventually receive relief based on the HIPC initiative’s specific criteria concerning income, indebtedness, and reform, and identified 20 countries as potential recipients. As of August 1998, the World Bank and the IMF estimated that the creditors would provide debt relief through the initiative to 20 countries, worth about $8.2 billion in 1996 present value terms. Specific eligibility decisions have been made for eight countries, with six countries deemed eligible for relief under the HIPC initiative. One country—Uganda—has completed the process. The HIPC initiative builds on prior debt relief efforts, most notably those of the Paris Club. The HIPC initiative’s goal is to bring countries’ debts to levels that are considered sustainable, meaning the countries can make debt payments without incurring loan arrears or requiring debt rescheduling. 
The basic HIPC framework establishes eligibility criteria based on a country’s per capita income, indebtedness, and track record of reform. As shown in figure 1.3, implementation of the initiative involves two stages. Each stage can last 3 years and can be shortened in some cases. Eligibility for HIPC debt relief is assessed at the end of stage one, following the successful completion of World Bank- and IMF-supported programs. At this point (termed the decision point), the Boards of Executive Directors of the World Bank and the IMF determine whether (1) existing debt relief mechanisms are sufficient to bring a country’s debt to a point considered sustainable or (2) the country requires additional debt relief. The determination of whether debt is sustainable is based mainly on a World Bank and IMF assessment of whether the projected ratio of a country’s debt (in present value terms) to the value of its exports will be greater than a target value that is set within the range of 200-250 percent. Lowering the target level increases the amount of debt relief required to reach the target. For example, lowering the ratio of a country’s debt to its exports from 300 percent to 200 percent requires more debt relief than lowering it from 300 percent to 250 percent. The target level is based on factors affecting the vulnerability of the country’s economy, such as the percentage of government revenue required for debt service and whether export earnings are generated by a few commodities. Under certain conditions, for countries with very open economies and strong efforts to generate fiscal revenues, the target may be based on the ratio of debt to government revenue. This fiscal indicator can allow debt-to-export targets below 200 percent. If the Boards determine that existing debt relief mechanisms are insufficient to make debt levels sustainable and other principal creditors agree, the country enters the second stage of the HIPC initiative. 
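The arithmetic behind the target ratio described above can be sketched as follows. The figures are hypothetical and are not drawn from the report; they simply illustrate why lowering the target from 250 percent to 200 percent increases the amount of relief required:

```python
# Illustrative sketch of the HIPC debt-sustainability arithmetic.
# All figures are hypothetical (billions of dollars, present value terms).

def required_relief(pv_debt, exports, target_ratio):
    """Return the reduction in present-value debt needed to bring the
    debt-to-export ratio down to target_ratio (0 if already at or below it)."""
    target_debt = exports * target_ratio
    return max(pv_debt - target_debt, 0.0)

# A country with $3.0 billion in PV debt and $1.0 billion in annual
# exports has a debt-to-export ratio of 300 percent.
pv_debt, exports = 3.0, 1.0

relief_at_250 = required_relief(pv_debt, exports, 2.50)  # 0.5 billion
relief_at_200 = required_relief(pv_debt, exports, 2.00)  # 1.0 billion
```

As the sketch shows, moving the target from the top to the bottom of the 200-250 percent range doubles the relief required for this hypothetical country, which is why the choice of target was contested among creditors.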
During this stage, the country receives some debt relief from bilateral and commercial creditors and financial support from multilateral institutions. Paris Club creditors have agreed to provide relief up to 80 percent of debt service during the second stage. Multilateral creditors may also provide relief as part of their total commitment under the HIPC initiative during this second stage. The country must agree to continue implementing economic reform programs supported by the IMF and the World Bank and social reforms agreed to with the World Bank. If countries are judged to have met the requirements of these programs, they receive the remaining relief at the end of this stage, called the completion point. Official creditors have agreed to share the costs of HIPC relief by providing equal percentage reductions of debt owed them after the full use of existing debt relief mechanisms, including those offered by the Paris Club. Paris Club creditors have said they will limit relief to up to 80 percent of a country’s eligible debt. In exceptional cases, they may negotiate expanded terms. Commercial creditors are expected to provide relief comparable to bilateral creditors. Creditors will each decide how they will provide their share of debt relief to specific countries and which debt will be eligible for relief. Creditors may choose to provide relief through various means, such as rescheduling debt payments at lower interest rates, making debt service payments for countries as they come due, converting loans into grants, reducing debt, and/or lending new funds on concessional terms to be used to make debt service payments. The international financial institutions have said that even under the HIPC initiative they will not forgive debt outright because to do so may endanger their preferred creditor status. Instead, they will use other means. (The HIPC framework is described in more detail in app. IV.) 
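The equal-percentage burden-sharing rule can be illustrated with a small sketch. The creditor claim amounts below are hypothetical, not figures from the report; the point is only that every creditor forgives the same percentage of what it is owed:

```python
# Illustrative sketch of HIPC burden sharing (hypothetical claims, in
# millions of dollars, after the full use of existing relief mechanisms).

def share_relief(claims, total_relief):
    """Split total_relief across creditors in proportion to their claims,
    i.e., an equal percentage reduction of the debt owed to each creditor."""
    pct = total_relief / sum(claims.values())
    return {creditor: claim * pct for creditor, claim in claims.items()}

claims = {"bilateral": 900.0, "multilateral": 560.0, "commercial": 540.0}

# Total claims are 2,000, so 500 in relief means each creditor forgives
# the same 25 percent of its own claim:
relief = share_relief(claims, 500.0)
# bilateral 225.0, multilateral 140.0, commercial 135.0
```

The same percentage applies to every creditor, but each creditor remains free to deliver its share through different instruments, as the paragraph above notes (rescheduling, grants, paying debt service as it comes due, and so on).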
Establishing a comprehensive framework for debt relief required resolving fundamental differences among creditors. For example, prior to 1995, both the World Bank and the IMF maintained that extraordinary debt relief mechanisms, including debt relief by multilateral creditors, were not necessary except for a handful of countries. Some creditors were concerned about the cost of providing debt relief and about the issue of “moral hazard”—that the prospect of debt relief would discourage countries from undertaking needed reforms and maintaining or strengthening responsible borrowing policies. In June 1995, the leaders of the Group of Seven countries called for the IMF and the World Bank to develop a comprehensive approach to assist heavily indebted poor countries with multilateral debt burdens. Shortly thereafter, a World Bank task force report called for a facility to pay multilateral debt service for a select group of countries. The World Bank and the IMF prepared subsequent analyses, and NGOs worked to influence the terms of the evolving framework. Key issues being negotiated during the design process of the HIPC initiative included how unsustainable debt burdens would be determined (with implications for eligibility and relief amounts), the type and length of reforms, whether debt stocks would be reduced, and how creditors would share in providing debt relief. The resulting September 1996 framework reflects creditors’ compromise views. However, the use of a range of values of the primary debt sustainability indicator, as well as the announced intention to implement the framework flexibly, left many key decisions to be made during implementation of the initiative. And some aspects of design, most notably how the shares of debt relief would be divided among creditors, had not yet been decided. The introduction of the HIPC initiative has prompted suggestions for alternative approaches to address the debt burdens of poor countries. 
Alternatives include fairly straightforward modifications to the HIPC initiative, such as increasing levels of relief, expanding eligibility, and accelerating implementation. Some suggestions call for more fundamental modifications of the HIPC framework and even question the basic structure of the HIPC initiative. Our report does not address the viability of different alternatives or compare them to the HIPC initiative. According to creditors, debtors, and NGOs, negotiating the design of the HIPC initiative has been a very challenging process, and there is a reluctance to significantly modify the HIPC framework. The Chairman of the Subcommittee on International Economic Policy, Export and Trade Promotion, Senate Committee on Foreign Relations, asked us to review the HIPC initiative. Specifically, we focused our review on (1) the implementation of the HIPC initiative and (2) the initiative’s potential to achieve its stated goal. This goal is to reduce select poor countries’ debt to sustainable levels; that is, to allow certain poor countries to pay their international debts on time and without further rescheduling. To describe the implementation of the HIPC initiative, we met with and obtained information from government officials of the United States, HIPC recipient countries, and other creditor countries; and officials from multilateral organizations and NGOs. We met with officials at the Department of State, the U.S. Agency for International Development, the Department of the Treasury, the World Bank, and the IMF. As an agency of the United States, we have no direct authority to review the operations of multilateral institutions. However, we obtained access to World Bank and IMF officials and information through the staffs of the U.S. members of their Boards of Executive Directors. 
We also obtained information from and interviewed officials of other creditor organizations, such as the Paris Club secretariat, the African Development Bank, and the Inter-American Development Bank. To obtain the views of other creditor nations on the implementation of the HIPC initiative, we met with officials from France, Germany, and the United Kingdom, including their representatives to the World Bank and the IMF and officials from their finance ministries, development ministries, and other government organizations. We met with and obtained data on debt and development from representatives of the Organization for Economic Cooperation and Development; and U.N. organizations, including the U.N. Development Program, the U.N. Conference on Trade and Development (UNCTAD), and the U.N. Children’s Fund. We also met with and obtained information from academic experts and NGOs, including Oxfam, the European Network on Debt and Development, Debt Relief International, Jubilee 2000, the Center of Concern, the Catholic Fund for Overseas Development, and the Heritage Foundation. To obtain information from recipient countries about the implementation of the HIPC initiative, we interviewed officials in Burkina Faso, Côte d’Ivoire, and Uganda. We selected recipient countries likely to represent a range of experiences under the HIPC initiative. Within the recipient countries we visited, we discussed concerns about the HIPC initiative with officials of relevant government bodies (for example, the prime minister’s office and the ministries of finance, trade, and planning), World Bank and IMF field staff, U.S. embassy and aid officials, local representatives of other donor countries and the European Union, business representatives, and local academics. To assess the initiative’s potential to achieve its stated goal, we met with officials from the U.S. government, other creditor governments, recipient governments, multilateral organizations, and nongovernmental organizations. 
We examined analytical papers and studies of debt issues from the World Bank and the IMF. Based on information from these studies as well as other sources, we conducted analyses of the HIPC initiative’s economic underpinnings and issues that arose during implementation. Within the recipient countries we visited (Burkina Faso, Côte d’Ivoire, and Uganda), we discussed concerns about the HIPC initiative with officials of relevant national and local government bodies (for example, the prime minister’s office and the ministries of finance, trade, and agriculture), World Bank and IMF field staff, U.S. embassy and aid officials, local representatives of other donor countries and the European Union, nongovernmental organizations, business representatives, and local academics. We performed our review from July 1997 to August 1998 in accordance with generally accepted government auditing standards. The Department of the Treasury and the Department of State commented that the report should provide greater context concerning the extent of prior debt relief efforts, particularly the efforts of bilateral creditors, both through the Paris Club process and unilaterally. We have expanded the report’s discussion of the debt relief efforts of bilateral creditors. The implementation of the HIPC initiative has involved significant negotiation among the major creditors on issues such as the eligibility of a country, the amount of relief to be provided, and the way in which relief is to be shared among creditors. As of August 1998, the Boards of the World Bank and the IMF had determined that six countries are eligible for assistance under the HIPC initiative and have agreed upon the amount and timing of relief for these countries. For five of these six countries, the Boards agreed to provide relief at the upper end of what the negotiated framework allows. 
Bilateral and multilateral creditors have agreed to share the debt relief by providing an equal percentage reduction of the debt owed them (after the full use of existing debt relief mechanisms) and to individually determine how they will provide the relief. The total amount of relief to be provided depends on creditors’ decisions as they implement the HIPC initiative, such as the number of countries deemed eligible, as well as debtors’ actions to establish the necessary track record of reform. Since implementation began, creditors have made some modifications to the HIPC framework that have expanded eligibility and contributed to increased estimates of relief. The amount of HIPC debt relief could increase further if, for example, countries that were not included in previous estimates become eligible. Conversely, if countries included in the estimates do not undertake required reforms and thus do not receive relief under the HIPC initiative, the amount of relief provided could decrease. As of August 1998, the Boards of the World Bank and the IMF had determined that six countries (Bolivia, Burkina Faso, Côte d’Ivoire, Guyana, Mozambique, and Uganda) were eligible for assistance under the HIPC initiative and had agreed upon the amount and timing of debt relief for these countries. (See table 2.1.) One country—Uganda—has completed the process. Projected relief for the six countries represents about $3 billion, or about 36 percent of the total projected HIPC debt relief of $8.2 billion (in 1996 present value terms), as of August 1998. For two additional countries—Benin and Senegal—the Boards of the World Bank and the IMF determined that debt relief from bilateral creditors on Naples terms would be sufficient to bring their debt to sustainable levels. Thus, they were not deemed eligible for relief through the HIPC initiative. 
Despite considerable debate on the amount and timing of debt relief to be provided, the World Bank and IMF Boards, in conjunction with principal creditors, have generally implemented the HIPC initiative to provide debt relief at the upper bounds of the negotiated framework. According to HIPC initiative documents, this is in response to the vulnerabilities facing recipient countries. This is specifically evident in lower end debt-to-export targets, which increase the amount of relief provided. In recognition of countries’ track records of reforms, the Boards have also generally shortened the second stage of the HIPC initiative, which provides debt relief sooner and can increase relief amounts in some cases. For five of the six countries deemed eligible to date, the expected debt relief amount is at or near the upper levels agreed to under the HIPC framework (Bolivia has a debt-to-export target of 225 percent). Nonetheless, in several cases, the cost of providing debt relief under the HIPC initiative has been a factor in determining the amount of debt relief to be provided. For five of the six countries for which target debt-to-export levels have been set, the target debt-to-export ratios are near or below the lower end of the 200-250 percent range under the HIPC framework. Two countries have been deemed eligible with debt-to-export ratios substantially below 200 percent (Côte d’Ivoire and Guyana), based on fiscal criteria that compare debt to government revenue rather than exports. The highest debt-to-export ratio set to date is 225 percent for Bolivia. Some creditor countries have stated that the target should normally be at or near the bottom of the range; others have maintained that the full range should be used. The United States supports a target of 200 percent or lower. 
According to HIPC documents, for the countries deemed eligible to date, the lower ratios reflect concern about countries’ significant economic vulnerabilities, such as dependence on a small number of exports and the resulting potential for volatility in export earnings. World Bank and IMF staff expect country-specific targets to be clustered more toward the bottom half of the 200-250 percent range. In the case of Burkina Faso, some countries argued for a lower ratio because of the uncertainty of the economic projections, particularly of future export prices. Other countries supported a higher range, noting that worker remittances were large in Burkina Faso and provided a cushion against possible risks. For some countries deemed eligible, such as Bolivia and Uganda, the potential cost of debt relief appears to have influenced their target ratios. In the case of Bolivia, while countries ultimately agreed to a target debt-to-export ratio of 225 percent, several supported a target of 200 percent while others supported a target in, or possibly above, the upper end of the 215-235 percent range. The target agreed to—225 percent—reflects, in part, concern about staying within the Paris Club’s limit on the amount of debt the Paris Club will reduce as well as the decision to limit the cost that Bolivia’s largest multilateral creditor, the Inter-American Development Bank, might incur in providing HIPC debt relief. The target set for Bolivia also reflects that the country is one of the least vulnerable of the potential HIPC recipients. The target debt-to-export ratio of 202 percent set for Uganda reflects, in part, the decision to stay within the Paris Club’s limit and within the terms of the burden-sharing arrangement multilateral and bilateral creditors had agreed to. As previously mentioned, implementation of the HIPC initiative involves two stages, each of which generally lasts 3 years. 
The Boards have shortened the length of the second stage for five of the six countries for which completion dates have been set (Bolivia, Burkina Faso, Guyana, Mozambique, and Uganda); four of these countries (Bolivia, Guyana, Mozambique, and Uganda) were given about a 1-year period. The actual length of the second stage could be longer if, for example, countries do not satisfactorily complete the required reforms. Countries' views have differed significantly regarding the appropriate length of the track record of reform required for gaining HIPC debt relief. Some countries—such as the United States and Germany—have generally stressed that a longer time frame is important for ensuring a country's commitment to critical reforms but have agreed to shortened second stages in some cases. Others, such as the United Kingdom, have stated that an overall reform period of 6 years is generally too long; they have supported efforts to give some recipients credit for their track records by reducing the length of the second stage. According to HIPC documents, the five countries with shortened periods are among the strongest performers of the potential HIPC debt relief recipients, and the shortened period reflects these countries' past track records of good policy performance, including completion of successive ESAF and World Bank programs, and receipt of Naples terms from the Paris Club. Nonetheless, members of the Boards have debated the length of the second stage for these countries. There was considerable discussion about whether Uganda should have a second stage at all. According to HIPC documents, the remaining countries will likely require a 3-year second stage, the period between the decision point and the completion point. A shorter period between the decision point and the completion point means that countries will receive final HIPC debt relief sooner and may get more relief under certain conditions. 
In the case of Guyana, the decision to set the completion point in 1998 rather than the year 2000 (which would have been 3 years between the two stages) resulted in a projected increase in HIPC assistance of about 68 percent, or $103 million in present value terms. Significant negotiations have occurred on the question of how creditors will share the amount of debt relief to be provided through the HIPC initiative. When the HIPC initiative was announced in September 1996, creditors had not agreed on their shares of HIPC assistance, but Paris Club creditors had agreed to reduce up to 80 percent of the remaining eligible debt. The World Bank and the IMF had proposed an approach under which bilateral creditors would give debt relief up to 90 percent of eligible debt to a country first, with multilateral creditors providing the remainder required for the country to reach debt sustainability. Multilateral creditors sought to limit the type and amount of debt reduction they would provide because they were concerned that it would endanger their financial integrity and preferred creditor status. Bilateral creditors rejected the approach proposed by the multilaterals. The Paris Club creditors committed to provide relief up to 80 percent of the eligible debt countries owed them, and stated that, in exceptional cases, they may negotiate terms that expand the amount of relief they are to provide. They stated that multilateral creditors should contribute simultaneously with the bilateral creditors and provide a greater share of the total HIPC relief because (1) bilateral creditors had already provided debt relief and (2) servicing multilateral debt was a key part of poor countries’ debt burdens. Furthermore, according to a U.S. government official’s summary of the Paris Club’s position, the preferred creditor status is essentially a political judgment; it does not imply that the multilateral creditors should not provide debt relief. 
Nonetheless, one of the tenets of the HIPC initiative is to ensure the preferred creditor status of multilateral creditors. After much negotiation, in July 1997 creditors endorsed a broad burden-sharing arrangement, termed the “proportional approach,” under which bilateral and multilateral creditors would provide debt relief together and provide equal percentage reductions of debt owed them after the full application of existing debt relief mechanisms, including Naples terms. Using the proportional burden-sharing approach, World Bank and IMF staff determine how much debt relief the bilateral and multilateral creditors, as a group, are to provide a particular country. Within the Paris Club, bilateral creditors determine how much of this amount they will individually provide. The World Bank and the IMF determine the share of the debt relief each multilateral creditor is to provide, based on the share of debt owed to them by the recipient. Applying the proportional burden-sharing approach continues to involve negotiation among the creditors when they determine the specific relief amount for each recipient. Although creditors agreed to provide the same percentage of debt reduction, the dollar amounts of this relief will vary by creditor because creditors are owed different amounts of debt. For example, in the case of Burkina Faso, bilateral and multilateral creditors agreed to provide debt relief valued at about 14 percent of what they are owed. For bilateral creditors, this amounted to about $21 million in debt relief. For multilateral creditors, the same percentage reduction amounted to about $94 million. In some cases, poor countries’ debt levels are so high that the burden-sharing terms agreed to under the HIPC framework will not provide enough relief to reach the target debt-to-export ratio. This was, for example, the case for Mozambique. 
To reach the 200-percent target debt-to-export ratio, under the terms of the HIPC initiative burden-sharing approach, bilateral creditors were to provide $916 million in debt relief to Mozambique. To provide this relief, bilateral creditors would have to exceed the cap they had agreed to—that they would provide relief equivalent to up to 80 percent of eligible debt. The 80-percent reduction of debt would have provided only $553 million in debt relief. To address the $363 million shortfall, bilateral creditors agreed to provide exceptional amounts of relief beyond those terms, which has been termed “deep relief.” However, even after Paris Club creditors agreed to extend their terms and provide relief equivalent to 86 percent of eligible debt, a financing gap of $116 million remained. Individual bilateral creditors and donors as well as the World Bank and the IMF subsequently agreed to use various mechanisms, such as increasing the amount of debt relief or contributing funds, to finance the remaining gap. The agreement for Mozambique entailed significant negotiations among creditors because of the amount of relief needed to bring the debt-to-export ratio to the target level of 200 percent—Mozambique alone represents approximately half of the debt relief promised thus far. Creditors each determine how they will provide their share of the relief. Creditors may choose to provide relief through several means, such as rescheduling debt payments at lower interest rates, buying back the debt, making debt service payments as they come due, converting loans into grants, reducing the debt, and/or lending new funds on concessional terms to make debt service payments. 
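The burden-sharing arithmetic in the Burkina Faso and Mozambique examples above can be sketched in a few lines. The creditor exposure figures for Burkina Faso are hypothetical values back-derived from the roughly 14-percent reduction and the $21 million and $94 million relief amounts cited; the Mozambique figures come directly from the text, and the $247 million result is simply the implied portion of the shortfall closed by the Paris Club's extension of terms from 80 to 86 percent.

```python
# Proportional burden sharing: every creditor group reduces the debt owed
# it by the same percentage, so dollar amounts differ with exposure.
# Exposures below are hypothetical, back-derived from the ~14 percent
# reduction and the $21M / $94M relief amounts cited for Burkina Faso.
def proportional_relief(exposures_musd, reduction):
    """Relief from each creditor group under a common reduction rate."""
    return {creditor: owed * reduction
            for creditor, owed in exposures_musd.items()}

relief = proportional_relief({"bilateral": 150.0, "multilateral": 671.0}, 0.14)
print(round(relief["bilateral"]), round(relief["multilateral"]))  # 21 94

# Mozambique "deep relief" arithmetic (millions of USD, present value),
# using the figures cited in the text.
needed_bilateral = 916.0   # bilateral relief required to reach the 200% target
at_80_percent_cap = 553.0  # relief available within the 80% Paris Club cap
shortfall = needed_bilateral - at_80_percent_cap
print(shortfall)           # 363.0

remaining_gap = 116.0      # gap left even after terms were extended to 86%
closed_by_extension = shortfall - remaining_gap
print(closed_by_extension) # 247.0, implied by the extension of terms
```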
A creditor’s decision about how it will provide debt relief to a particular recipient may be influenced by many factors, such as the amount of outstanding debt, the impact of providing debt relief on the creditor’s future budgets, the financial policies governing the creditor institution, and the needs of the recipient country. Multilateral creditors have said that they will not forgive debt outright; rather, they intend to provide debt relief in ways that maintain their preferred creditor status. According to the World Bank and the IMF, most of the multilateral development banks have obtained the institutional approval to participate in the HIPC initiative and defined the means they will use to provide relief, such as buying back debt or paying debt service through the HIPC Trust Fund or similar self-administered trust funds, rescheduling current payments or arrears on concessional terms, and refinancing on grant terms. The World Bank’s participation in the HIPC initiative is to be funded solely from the Bank’s own resources. Debt relief provided by the World Bank under the HIPC initiative is taking place primarily through contributions to the HIPC Trust Fund from IBRD income. The Trust Fund provides relief on debt owed to IDA, either through buying back some of its concessional debt or providing an unconditional commitment to pay debt service owed to IDA as it becomes due. Some of this relief may be advanced during the second stage when the World Bank could provide part of its lending program in the form of IDA grants instead of IDA credits, which are funded through general IDA resources. The IBRD has contributed about $750 million from its income to the HIPC Trust Fund to buy back or repay debt owed to IDA. The executive directors have recommended the approval of another transfer of $100 million from IBRD income to the Trust Fund. The HIPC Trust Fund has been specifically set up to keep the IDA and IBRD aspects of the World Bank’s operation at arm’s length. 
The HIPC Trust Fund also receives contributions from other participating multilateral development banks and bilateral creditors that are to be used primarily to help other multilateral development banks, such as the African Development Bank, to finance their share of HIPC debt relief. The multilateral development banks have stressed that the means used to provide debt relief through the Trust Fund should accommodate constraints specific to these institutions, such as policies against debt restructuring or forgiveness. As of August 10, 1998, 16 governments had made pledges or contributions to the HIPC Trust Fund totaling about $204 million. Also, nine countries proposed additional contributions totaling $92 million to relieve multilateral debt through reallocation of their excess resources in the World Bank’s Interest Subsidy Fund, which was set up in 1975 with donor contributions to subsidize the interest rates on IBRD loans to the poorest IBRD borrowers. (See app. IV for a list of contributors to the HIPC Trust Fund.) The IMF is participating in the HIPC initiative through special ESAF grants at the completion point that are deposited into an escrow account to meet debt service payments owed to the IMF under a predetermined schedule. The IMF is funding its contribution through its own trust fund financed from bilateral (member) contributions and the ESAF reserve account. To finance these grants, several countries have contributed or made investments for the benefit of the ESAF-HIPC Trust totaling approximately $46.5 million, as of June 1998. In May 1998, the IMF transferred about $54.5 million to the ESAF-HIPC Trust for fiscal year 1998 and expects to make a similar payment on a quarterly basis to the ESAF-HIPC Trust for fiscal year 1999. The IMF Board has authorized the transfer of up to an additional $332.5 million from the ESAF Trust Reserve Account to meet the IMF’s commitments under the HIPC initiative. 
Although all creditors will forgo future revenue to provide debt relief, bilateral creditors use different methods to budget for the cost of debt relief. Some creditors, including the United States, adjust for the probability of debtor countries not fully repaying their debt in the budgeting process. The United States has a complex methodology of estimating the market value of outstanding debt owed to it. For U.S. budgetary purposes, the cost of debt relief reflects the difference between the estimated market value of the loan before reduction compared to the value afterwards. Other creditors value the loan at face value at the time of initial approval. Thus, providing debt relief means they must budget for the face value of the debt when the debt is relieved. Estimates of the amount of relief to be given to countries under the HIPC initiative will continue to be influenced by decisions creditors make as the HIPC initiative is implemented as well as actions taken by debtor countries to establish the necessary track records of reform. The $8.2 billion estimate (in 1996 present value terms) depends on decisions, such as how many recipient countries participate in the HIPC initiative, what debt-to-export targets are established, what amount of time participants have to establish their qualifications for HIPC debt relief, and countries’ economic conditions. Therefore, the actual amount of relief to be provided under the HIPC initiative may be higher or lower than estimated. For example, the actual relief provided could be lower if fewer countries participate than anticipated or if target debt-to-export ratios are set higher than the 200 percent assumed in the projections. On the other hand, slower than projected growth in a country’s exports could substantially increase the amount of relief provided by the HIPC initiative. 
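The contrast between the two budgeting approaches described above can be illustrated with a hypothetical $100 million loan; the loan amount and the 40-percent recovery rate below are assumed figures for illustration, not figures from the report.

```python
# Hypothetical comparison of two ways a bilateral creditor might budget
# the cost of fully forgiving a $100M loan. All numbers are illustrative.
face_value = 100.0        # millions of USD
expected_recovery = 0.40  # assumed market estimate of eventual repayment

# Risk-adjusted approach (the U.S. style): cost is the change in the
# loan's estimated market value when the debt is forgiven.
market_value_before = face_value * expected_recovery
market_value_after = 0.0  # loan fully forgiven
risk_adjusted_cost = market_value_before - market_value_after
print(risk_adjusted_cost)  # 40.0

# Face-value approach: the full face value is budgeted when relieved.
face_value_cost = face_value
print(face_value_cost)     # 100.0
```

Under the risk-adjusted approach, much of the budgetary cost was recognized when the loan was judged unlikely to be fully repaid, so forgiveness itself scores at well below face value.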
World Bank and IMF staff estimated that weaker export growth (2 percentage points lower annually for each country from 1995 onward) could increase the amount of relief provided under the HIPC initiative by about $1 billion in 1996 present value terms. The amount of relief provided also could increase if more countries become eligible. For example, since implementation of the HIPC initiative began, the Boards have agreed to modifications to the HIPC framework that have increased the number of countries eligible to receive relief and may raise the amount of relief for other countries. The changes contributed to increases in the projected amount of relief from $5.6 billion in June 1996 to $8.2 billion in August 1998 in 1996 present value terms. According to HIPC documents, these changes reflect the desire of some countries to make the plan more inclusive, concerns about the quality of available data on worker remittances, and updated information. A World Bank official said modifications to the HIPC framework respond directly to concerns of debtor countries and help to mitigate the countries’ vulnerabilities. Some countries have expressed concern about increased costs and cautioned that eligibility decisions should not be made before the financial implications of the agreed-to modifications are assessed. For example, expanding the eligibility criteria to specifically take into account government spending (fiscal criteria) allowed countries such as Côte d’Ivoire and Guyana to become eligible for the HIPC initiative and raised projected HIPC relief by about $600 million in present value terms. Advocates for the expansion of the eligibility criteria were concerned that certain countries with very open economies, and thus relatively low debt-to-export ratios, were improperly characterized as having a sustainable debt burden under the HIPC initiative. 
According to a World Bank official, the fiscal criteria reflect the Boards’ desire to maintain the original framework while allowing some flexibility in addressing the debt problems of very open economies. Moreover, the fiscal criteria may increase further the number of countries eligible for assistance. Additionally, changes to country-specific analyses and an increase in the potential assistance offered by the HIPC initiative for post-conflict countries, particularly the Democratic Republic of Congo (formerly Zaire), contributed about $1 billion to increased estimates of HIPC debt relief. According to HIPC documents, the increase for the Democratic Republic of Congo is based primarily on new projections that include slower growth in the volume of mineral exports and lower world prices as well as increased debt due to a buildup of late interest and arrears. The World Bank and the IMF caution that any estimates for post-conflict countries are subject to significant change. Changes made in 1997 in the way the amount of exports is calculated also have increased the projected amount of HIPC relief. The first change involved agreement that exports would be calculated using an average of 3 years of data, rather than 1 year as assumed in the first cost estimates of the HIPC framework. According to HIPC documents, this change was a compromise between the desire to obtain a recent actual measure for a country’s export capacity and the desire to smooth out export fluctuations by providing a longer-term base. While this change in methodology may seem like a small refinement, it increased the total estimated amount of HIPC debt relief by about $1 billion, according to HIPC documents. A second change involved the evaluation of worker remittances. These were originally intended to be added to exports but are no longer included due to limited data quality and availability. 
When worker remittances are not included, the estimated amount of export earnings available for servicing the debt is lowered. Thus, the exclusion of worker remittances increases a country’s debt-to-export ratio. The resulting higher ratio allowed at least one country—Burkina Faso—to become eligible and increased projected relief by about $130 million, according to the World Bank and the IMF. The amount of HIPC debt relief provided will also be influenced by the actions taken by debtor countries to establish the necessary track record of reform. If countries included in the estimates do not undertake required reforms and thus do not receive HIPC relief, the amount of relief could decrease. On the other hand, the total projected amount of debt relief could increase if countries that were not included in the estimates—such as Liberia, Somalia, and Sudan—establish the necessary track records of reform and become eligible for relief through the HIPC initiative. At the end of 1996, in present value terms, Sudan had $15.6 billion of outstanding debt—one of the highest debt levels among potential HIPC recipients. According to preliminary estimates from the World Bank and the IMF, if Sudan qualifies for HIPC debt relief, reducing its debt-to-export ratio to 200 percent would require about $4.5 billion in HIPC debt relief. Providing debt relief to Sudan through the HIPC initiative could, thus, significantly increase the cost of the initiative relative to current estimates. Undertaking the steps necessary to qualify some countries—such as Liberia, Somalia, and Sudan—for the HIPC initiative will involve significant efforts and resources because they have not established the necessary 3-year track record of reform. Further, some of these countries have significant unpaid debt, including debt owed to official creditors. For example, clearing unpaid debt for Sudan, which had arrears of about $6 billion as of year-end 1996, will involve significant financial resources. 
The IMF reported that Sudan made scheduled payments to the IMF in 1997 and has begun to reduce its arrears to the IMF but has increased its unpaid debt owed to other external creditors. In agreeing to the HIPC initiative, the IMF and World Bank Boards established a broad framework for debt relief but left many of the specifics, including the extent of that relief and how it would be carried out, to be determined during implementation. Different creditor country perspectives on matters ranging from burden sharing to required reform track records have required extensive negotiation during the implementation phase. For the first six countries to qualify for relief, the Boards have approved relief amounts at the upper end of the agreed framework, with shortened reform periods for five of the countries. The extent to which those decisions establish a precedent for future relief remains to be seen, especially with respect to reform periods, since the early qualifiers have relatively strong reform records. The costs of debt relief have influenced design and implementation decisions. The amount of relief that will be provided through the HIPC initiative is not yet known; it depends on eligibility, timing, and relief amount decisions still to be made. The World Bank commented that creditors put substantial effort into the design and implementation of the HIPC initiative and that an expanded discussion of how the initiative is being financed would be useful. We agree and have expanded our discussion of these issues, including how the HIPC Trust Fund is being financed, and included a listing of the multilateral institutions providing relief through the HIPC initiative (see app. III). The HIPC initiative will provide benefits to recipient countries; however, many will remain vulnerable to future debt problems, even with sound economic policies. 
In conjunction with existing debt relief mechanisms, such as relief from Paris Club creditors, the HIPC initiative will reduce countries' debts by varying amounts, some substantially. Reductions in the amount of recipient country resources that are used for debt service will also vary and are difficult to determine due to prior arrears and the use of donor resources in some cases to help make debt payments. The limited evidence for the particular debt targets in the HIPC initiative suggests that reducing debt-to-export ratios to near 200 percent is not likely to provide countries with a "cushion" to protect against adverse economic events. Strong export growth and substantial donor assistance are important to the HIPC initiative's projections of sustainable debt burdens. For some countries, those export growth projections may turn out to be overly optimistic. If export earnings are lower than expected, financial support from bilateral and multilateral donors is assumed to increase. This assumption has been questioned given the budgetary pressures of major donor countries. The HIPC initiative has also focused attention on the limited capacity of some countries to manage their debt. Improvements in debt management are considered necessary for them to avoid future debt problems. The HIPC initiative will reduce the total amount of debt owed, in present value terms, by varying amounts for the first six recipient countries. This is because their initial debt and export levels vary widely. The present value of debt relief for these countries due to the HIPC initiative ranges from a low of 6 percent of debt for Côte d'Ivoire to a high of 57 percent for Mozambique, with an average reduction of 22 percent for the first six participants. (See table 3.1.) 
Our analysis indicates that the amount of projected reduction in countries' debt burdens since 1995 attributable to the HIPC initiative relief—as measured by the debt-to-export ratio—and the amount attributable to other factors vary greatly across the first six HIPC qualifiers. For example, we estimated that, for Uganda, 77 percent of the reduction in this ratio between 1995 and April 1998 (Uganda's completion point under the HIPC initiative) is due to export growth, with 18 percent attributable to the HIPC initiative debt relief and 5 percent attributable to a combination of other debt relief and net borrowing. In contrast, for Côte d'Ivoire, 58 percent of the reduction can be attributed to other debt relief and net borrowing combined, 39 percent to export growth, and 3 percent to the HIPC initiative. (See app. V for information on the determinants of debt reduction for the first six countries.) The HIPC initiative is expected to reduce debt service obligations by varying amounts for the countries for which preliminary projections are available, although the reduction in how much countries will actually pay to service their debt is difficult to determine. Table 3.1 shows estimated reductions in debt service owed by HIPC recipients, based on HIPC documents and our analysis. The effective reduction from the HIPC initiative on the debt service actually paid by participants is hard to gauge for several reasons. First, some countries experienced substantial arrears in servicing their debt prior to receiving debt relief. The IMF has estimated how Mozambique's debt service payments will be affected by debt relief. The scheduled annual debt service payments are expected to be dramatically reduced by the combination of the HIPC initiative and existing Paris Club (Naples terms) debt relief. 
The HIPC initiative itself will reduce scheduled debt service payments in 2000-03 by 42 percent (from $170.5 million to $98.7 million per year) from the obligations remaining after the Paris Club relief. (See table 3.2.) However, Mozambique was paying only about 30 percent of its scheduled annual debt service in 1995-98 ($113.2 million of $375.3 million). Thus, the projection is that the scheduled debt service payment of $98.7 million in 2000-03 will only be about 13 percent less than the annual debt service of $113.2 million Mozambique was actually paying in 1995-98, prior to relief. A second complexity in assessing the effect of debt relief on a country’s finances is that a substantial portion of the debt service paid by these countries is financed through donor and creditor resources. Thus, it is very difficult to determine how a reduction in debt service owed—or even debt service paid—by HIPC recipients will affect the net flow of resources to a recipient country, and we were generally unable to make that determination. This is most clearly evident in Uganda’s recent experience. In 1995, Uganda established a Multilateral Debt Facility through which bilateral donors, primarily Denmark, the Netherlands, Norway, and Sweden, channeled resources directly to pay Uganda’s multilateral debt service. Payments to this facility averaged $45 million per year in 1996 and 1997. In contrast, HIPC initiative debt relief is reducing Uganda’s annual debt service burden by an average of around $30 million per year. (Because the relief is being “front loaded” at Uganda’s request, debt service in the first 5 years will be reduced by about $39 million annually and by about $20 million annually in subsequent years.) According to Ugandan officials, they would need to continue to receive $15 million per year in assistance from these bilateral donors, in addition to other aid flows that were being received, to be in as strong a position after the HIPC initiative relief as before. 
This assumes that the Facility would continue to be funded at the same level. Ugandan officials told us that they hoped some of the future assistance would be channeled into social sector aid, such as education, although an IMF official noted that these funds were approved by donor governments for debt relief and shifting them into other types of aid may not be straightforward. Countries receiving debt relief through the HIPC initiative will need to maintain strong economic performance and, in most cases, continue to receive large amounts of donor assistance in order to service their debt. The limited analytical evidence that exists for the debt targets used in the HIPC initiative suggests that countries with debt-to-export ratios near the bottom of the 200-250 percent range may still have unsustainable debt burdens. The HIPC initiative’s projections assume that, after completing the HIPC initiative, countries will maintain sustainable debt levels in part through strong export growth. In addition, for most countries, substantial donor assistance is expected to continue, including balance-of-payments support. Finally, the HIPC initiative analysis assumes that, if adverse economic events do occur, such as a significant decrease in the price of a key commodity, the countries’ needs for financing will be met with increased donor assistance. There is no strong analytical evidence supporting the decision concerning the HIPC initiative’s target range of the debt-to-export ratio. The World Bank and the IMF have provided limited support for the conclusion that debt at the 200-percent debt-to-export ratio was sustainable, and no analysis to support ratios in the upper end of the target range (250 percent). 
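The Mozambique and Uganda debt-service figures cited above can be checked with simple arithmetic; the only assumption added here is the 10-year horizon used to average Uganda's front-loaded relief, which the report does not state.

```python
# Mozambique (annual amounts in millions of USD, from the text).
scheduled_after_paris_club = 170.5
scheduled_after_hipc = 98.7
hipc_cut = 1 - scheduled_after_hipc / scheduled_after_paris_club
print(round(hipc_cut * 100))       # 42 percent cut in scheduled payments

actually_paid_1995_98 = 113.2
scheduled_1995_98 = 375.3
print(round(100 * actually_paid_1995_98 / scheduled_1995_98))  # 30 percent paid

effective_cut = 1 - scheduled_after_hipc / actually_paid_1995_98
print(round(effective_cut * 100))  # 13 percent below what was actually paid

# Uganda front-loading: about $39M/yr for 5 years, then about $20M/yr.
# Averaging over an assumed 10-year horizon (an assumption made here):
uganda_avg = (39 * 5 + 20 * 5) / 10
print(uganda_avg)                  # 29.5, i.e. "around $30 million per year"
```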
World Bank reports have suggested that debt-to-export ratios above 200 percent indicate potential debt problems in poor countries, and a 1996 World Bank document noted that a debt-to-export threshold of 200 percent indicates that at this level a country is likely to have difficulty servicing its debt. World Bank and IMF officials cite two internal World Bank studies as support for their stated debt-to-export ratios. We believe these studies have limited relevance for determining the HIPC initiative's target ratios for two reasons. First, their analysis was based primarily on middle-income countries and, second, they examined debt levels at which countries began to experience debt servicing problems, not when they might emerge from such problems. One study, done in 1990, analyzed 1980-87 data on 111 countries to determine at what level of debt relative to exports countries began to experience problems servicing their debt. The study found that countries that did not experience debt servicing problems generally had debt-to-export ratios below 200 percent. However, 30 percent of the countries that did experience debt service problems had debt-to-export ratios below 200 percent throughout the period. The second study, from 1996, examined approaches for predicting when countries would have problems with debt service. It concluded, based on examining Mexico's ability to service debt at the height of its 1984-89 debt crisis, that a debt-to-export ratio above 198 percent could yield debt servicing problems. Debt-to-export ratios at or slightly above 200 percent are in the upper part of the range the World Bank uses to classify countries as moderately indebted. Our analysis shows that a number of countries classified as moderately indebted have subsequently experienced debt servicing problems. 
Of the 11 countries classified by the World Bank in 1991 as moderately indebted low-income countries, four (Benin, Central African Republic, Mali, and Togo) have subsequently had their debt rescheduled through the Paris Club. In addition to the concerns over the particular levels of the debt sustainability indicators under the HIPC initiative, there is a concern that the indicators are narrow. A particular concern is that the focus on export-based indicators does not directly consider the overall economic capacity of a country or the particular level of demand for government expenditures. For example, critics have argued that the export-based indicators do not reflect the extent to which governments' social spending needs vary across potential recipient countries. A related concern is that since export revenues generally accrue to the private sector, they are not necessarily indicative of resources available to these governments. A recent study commissioned by the IMF to evaluate its ESAF programs cited the above concerns in suggesting that, in general, a more appropriate measure of a country's debt burden would be the ratio of debt to its overall national income. The addition of fiscal, or government spending, criteria for determining debt sustainability has done little to satisfy critics. To qualify for debt relief under the HIPC initiative's fiscal criteria, a country must meet three conditions: its present value of debt-to-government revenue ratio must exceed 280 percent, its exports-to-GDP ratio must exceed 40 percent, and its government revenues-to-GDP ratio must exceed 20 percent. These conditions are likely to be met by just a few countries. The World Bank and the IMF have not provided any economic justification for these particular levels. They have stated that the 280 percent debt-to-government revenue ratio is somewhat arbitrary. 
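The three-part fiscal test described above can be expressed as a simple eligibility check; the thresholds are the ones stated in the text, while the sample country's GDP, revenue, export, and debt figures are hypothetical.

```python
# Eligibility check for the HIPC fiscal criteria: all three thresholds
# must be exceeded. Thresholds are from the HIPC framework as described;
# the sample country's figures below are hypothetical.
def meets_fiscal_criteria(pv_debt, govt_revenue, exports, gdp):
    return (pv_debt / govt_revenue > 2.80   # PV of debt > 280% of revenue
            and exports / gdp > 0.40        # exports > 40% of GDP
            and govt_revenue / gdp > 0.20)  # revenue > 20% of GDP

# Hypothetical country: GDP 10,000; revenue 2,200 (22% of GDP);
# exports 4,500 (45% of GDP); PV of debt 6,600 (300% of revenue).
print(meets_fiscal_criteria(6600, 2200, 4500, 10000))  # True

# The same country with revenue at only 15% of GDP fails the third test,
# even though its debt-to-revenue ratio is then even higher.
print(meets_fiscal_criteria(6600, 1500, 4500, 10000))  # False
```

The second case illustrates why the criteria admit only a few countries: a very high debt-to-revenue ratio alone is not enough if the economy is not sufficiently open or the government does not collect enough revenue.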
The HIPC initiative documents note that if this ratio were set much lower than 280 percent, the overall cost of the HIPC initiative would rise substantially. Some debt experts have questioned the statement in HIPC documents that debt relief under the initiative will reduce countries’ debts to a point that will significantly diminish any debt overhang effect. Whether debt overhang constitutes a serious obstacle to investment in HIPC countries has been debated during the HIPC initiative’s implementation, with some officials and analysts doubting its significance and others continuing to cite reduction of debt overhang as a primary benefit of the HIPC initiative. Several analysts maintain that high debt levels do deter investment in HIPC countries, but some also question whether the levels of debt reduction under the initiative will significantly reduce that effect. Some experts have observed that the way debt burdens are measured under the HIPC initiative—in present value terms—may not correspond to investors’ perceptions about how high a country’s debt burden is. Although present value is a useful way of comparing different debt burdens when the degree of concessionality of the debt varies widely, investors are more likely to look at debt in nominal terms, according to one debt expert. He noted that, beyond the creditors, the concept of present value is not widely understood. Since the present value of concessional debt is generally lower than its nominal value, countries will generally be left with debts that are higher in nominal than in present value terms. The economic projections in HIPC initiative analyses generally assume a steady growth in export revenues for HIPC countries. This assumption is an important element in the initiative’s expectation that HIPC recipients will have a sustainable debt burden. As exports grow, the indicators of indebtedness steadily improve, for a given level of debt. 
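The distinction drawn above between the present value and the nominal value of concessional debt can be illustrated with a small sketch. The loan terms, discount rate, and dollar amounts below are hypothetical, chosen only to show why a concessional loan’s present value is generally lower than its face value.

```python
# Illustrative sketch (hypothetical figures): why the present value of a
# concessional loan is lower than its nominal (face) value.
# Assume a $100 million loan repaid in 10 equal annual principal
# installments, charging a concessional 0.75% interest rate on the
# outstanding balance, with the debt-service stream discounted at an
# assumed 7% market rate.

def present_value(principal, concessional_rate, market_rate, years):
    """Discount the loan's annual debt-service payments at the market rate."""
    pv = 0.0
    outstanding = principal
    installment = principal / years
    for t in range(1, years + 1):
        payment = installment + outstanding * concessional_rate
        pv += payment / (1 + market_rate) ** t
        outstanding -= installment
    return pv

nominal = 100.0  # $ millions (face value)
pv = present_value(nominal, concessional_rate=0.0075, market_rate=0.07, years=10)
print(f"Nominal value: ${nominal:.1f}M, present value: ${pv:.1f}M")
```

With these assumed terms, the discounted debt-service stream comes to roughly $73 million on a $100 million face value, so a debt stock can look markedly smaller in present value terms than in the nominal terms investors may focus on.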
Exports have grown for most of the HIPC recipient countries in recent years. However, projections in HIPC documents assume significantly greater export growth in the years ahead. The first six countries deemed eligible for debt relief under the HIPC initiative had annual average growth rates in exports of 4.5 percent between 1985 and 1995. (See table 3.3.) HIPC documents project that in years after they receive relief under the initiative, these same countries will achieve an average annual growth in exports of 7.8 percent, a 75-percent increase over the previous period. Most of the countries that have been approved for relief under the HIPC initiative are dependent on a few primary commodities for a majority of their export earnings. (See table 3.4.) For this reason, their export earnings are considered to be particularly vulnerable to adverse economic events. For example, a significant fall in the price or output of a country’s primary export could bring the debt ratios to levels that once again exceed the HIPC initiative’s target levels for debt sustainability. In the case of Uganda, approximately 66 percent of its export earnings in 1995 derived from one commodity, coffee, whose world price was near a 10-year high. According to HIPC documents, a 20-percent drop in the international price of coffee would raise Uganda’s debt-to-export ratio by 30-40 percentage points. (Figures 3.1 and 3.2 illustrate the historical volatility of the world prices of coffee beans and also of cocoa beans, which are the main export commodity of Côte d’Ivoire.) Moreover, Uganda’s recent experience illustrates the sensitivity of export earnings to variation in the amount produced. World Bank and IMF officials have cited increases in Uganda’s export earnings (1995/96 and 1996/97) as evidence that the HIPC initiative’s assumptions of countries’ increased exports are reasonable when countries undertake necessary reforms. 
However, Uganda’s most recent export data (1997/98) underlines concerns about the volatility of exports, with Uganda’s exports projected to decline about 23 percent. Poor weather conditions in 1997/98 and the resulting decline in coffee exports are cited in HIPC documents as the reason for projected increases in Uganda’s debt-to-export ratio in 1997/98 through 1999/2000. Similarly, a single commodity, cotton, accounts for 46 percent of Mali’s exports, and its top three commodities account for 76 percent. According to HIPC documents, any one of three events would put Mali’s debt ratios at unsustainable levels through the projection period (2017). These events include (1) a drought similar to that experienced by Mali in 1972-75 and 1983-85, (2) a 15-percent decline in gold prices, or (3) a 20-percent decline in cotton prices. In the case of Burkina Faso, an important element in its projected increase in exports is a steady 10-percent growth in gold exports. However, a recent sharp decline in the price of gold has substantially reduced investment in this sector and, according to a World Bank official, created considerable doubt regarding the likelihood that Burkina Faso will achieve the projected increases in gold production. A key element in the HIPC initiative’s projection of debt sustainability is that countries receiving debt relief will continue to get substantial foreign aid well into the future. This expected assistance includes not just aid to support development projects within countries but also concessional financing, including balance-of-payments support. For example, macroeconomic projections done by the World Bank and IMF staff at Uganda’s April 1998 completion point show that, with its HIPC initiative debt relief of $347 million in present value terms, Uganda will continue to require donor assistance to meet its external debt and balance-of-payments needs until 2006. 
The inclusion of balance-of-payments support by donors within the HIPC initiative complicates the definition of debt sustainability and the establishment of a proper target level, since any level of debt could be considered “sustainable,” given a sufficient amount of donor support. According to World Bank officials, if a HIPC recipient country that is adhering to agreed-on reforms experiences circumstances that result in debt servicing problems, increased donor flows to that country will be forthcoming. A World Bank official cited commitments by some governments, for example, to provide the assistance Uganda needs in order to meet debt servicing obligations after the HIPC initiative relief, provided reforms continue. Future donor flows to potential HIPC recipients depend, of course, on many factors. However, the assumption that donor support of HIPC recipients will continue at current levels and will, under adverse conditions, increase has been questioned, given that net concessional flows from governments and multilateral institutions to poor countries have declined since 1990. Moreover, the World Bank observed in 1998 that the future prospects for official concessional financing worldwide are bleak due to fiscal pressures in Europe and Japan, the largest donor by volume, and to continued public concern over spending on foreign aid in the United States. Additionally, officials from the U.S. Treasury, other governments, and NGOs have raised questions about whether governments will simultaneously provide debt relief, increased concessional financing, and substantial contributions to replenish the international financial institutions, particularly in light of their own budget constraints. Officials we spoke to from other governments, including France and Germany, noted that creditors are likely to continue financially supporting countries, but the amounts are uncertain due to costs and fiscal pressures. 
The HIPC initiative has focused international attention on the limited debt management capacity of many poor countries. This limitation is a potential hindrance to their ability to emerge from their debt problems and avoid future unsustainable debt levels. Many HIPC participants and debt experts have noted that assistance with debt management has been a significant benefit of the initiative, although some have expressed frustration that the pace of improvement is slow. HIPC countries vary greatly in the quality of their capabilities for tracking and managing debt. Few HIPC countries have the capacity to analyze debt in a broader economic context, according to developing country experts, which limits their ability to participate fully in the analysis of their debt relief requirements under the initiative and to avoid future debt problems. Even for countries with basic debt management systems in place, analyzing how debt and debt reduction can affect their overall macroeconomic situations poses a major challenge. According to the World Bank and the IMF, in recent years almost every country classified as a HIPC has received a substantial amount of technical assistance intended to improve its ability to manage debt. Most of this assistance has been concentrated on information management—on improving accounting systems for recording and tracking financial obligations. These efforts have resulted in significant improvements in many HIPC countries. They have been largely organized by UNCTAD and the Commonwealth Secretariat, both of which have developed and installed debt management software and provided extensive training. In addition, these countries have received significant support from several bilateral donors. However, countries that are candidates for debt relief under the HIPC initiative vary greatly in the degree to which they have in place the technical and governance requirements for effective debt management. 
Two early qualifiers for the initiative, Uganda and Bolivia, stand out as countries that have relatively well-developed capabilities for tracking and managing debt. Uganda, for example, has been using the UNCTAD debt data management software since 1985 and operating it independently since 1993. Developing this capability was a major challenge for the country, according to government officials, due in part to destruction from 2 decades of civil war that included the burning of the Treasury building. In addition to increasing technical capacity, Uganda moved on the constitutional front, in 1994 giving its parliament all powers to contract new debt. Similarly, according to government officials, Burkina Faso established a centralized committee that would have to approve any new government borrowing. The capacity of some HIPC countries to accurately track their financial obligations is still weak, however. Many African countries, especially, lack the capacity to maintain accurate loan records and track the timing and amount of debt servicing obligations. This can result in situations where various agencies within a government engage in external borrowing with no central control over, or even complete knowledge of, total debt amounts, according to officials from countries we visited. For some HIPC countries, initial examination of debt data has revealed inconsistencies, according to an official from UNCTAD. In some countries, the division of institutional responsibilities for debt management among different agencies and the inability to retain skilled staff have created problems. Due in part to concern about countries’ ability to manage their debt, under IMF-supported programs, ceilings are to be negotiated on countries’ new borrowing on nonconcessional terms. According to an IMF official, for heavily indebted poor countries, the ceiling is generally understood to be a low amount. 
Following receipt of debt relief through the HIPC initiative, Uganda has agreed to limit its nonconcessional borrowing to $10 million annually for the next several years. Burkina Faso and Côte d’Ivoire are two HIPC countries that are in the earlier stages of receiving assistance with debt management. At the beginning of the HIPC initiative process, the government of Burkina Faso was unable to project the effect of new debt on future debt service. The government asked a private consultant for assistance in understanding and qualifying for the HIPC initiative. Now, with the support of the Swiss government, Burkina Faso is in the process of implementing the UNCTAD debt management system. However, according to donor and recipient officials, improvements in debt management in Burkina Faso are moving very slowly. The country needs to develop up-to-date and accurate debt data, install computer equipment and software to replace the manual records currently being used, and train staff to operate the financial management system before the country can move much further. In Côte d’Ivoire, manual records are generally still used due to budgetary constraints, and the government has yet to receive significant outside assistance on debt management. UNCTAD recently completed a needs assessment, and expectations are that Côte d’Ivoire will have the UNCTAD system by January 1999. In addition to computer hardware and software, training and other related technical support are needed. Even with a system of basic debt data management in place, analyzing how debt and debt reduction can affect a country’s overall macroeconomic situation poses a challenge most HIPC participants cannot meet, according to officials from the United Nations and recipient governments. This is due both to a lack of accessible modeling techniques and to limited technical expertise. 
World Bank and IMF staff have developed very complex and nonuniform spreadsheets to conduct debt sustainability analyses for countries potentially receiving debt relief under the HIPC initiative. World Bank and IMF officials acknowledged early in the HIPC initiative process that the absence of a uniform, documented standard for simulating debt reduction exercises would make it difficult for countries to participate fully in analyzing their debt situations. The World Bank set as a priority the development of such a model to be made available to interested countries. According to World Bank and IMF documents, this software was to be designed to be easily linked to the debt data management software put in place by UNCTAD and the Commonwealth Secretariat. However, as of August 1998, this software was not generally available for countries’ use. According to officials at UNCTAD, restructuring and downsizing at the World Bank have resulted in the loss of expertise needed to complete the software and make it available. According to a Bank official, versions of the software are being tested in some countries. Other efforts to assist countries in developing the capacity to independently formulate their own debt strategy and debt sustainability analysis include the program undertaken by Debt Relief International, an NGO based in London, with funding from the governments of Austria, Denmark, Sweden, and Switzerland. The program also intends to help governments maximize their ownership and leadership of debt reduction and to demonstrate to the donor and creditor community a high level of debt management. The program is thus driven by recipient countries’ requests for it. According to Debt Relief International, it has received requests for assistance from 16 heavily indebted poor countries. 
The capacity and will to closely monitor future borrowing after completing the HIPC initiative are critical to avoiding further debt problems, according to debt experts and some recipient country officials. Despite receiving relief through the HIPC initiative and implementing strong economic policies, many recipient countries will remain vulnerable to future debt problems. Although the HIPC initiative has focused attention on the debt problems of poor countries and is substantially reducing the debt burdens of some, an expectation that all recipient countries that follow sound economic policies will avoid future debt problems is unrealistic. Projections that debt burdens are sustainable for HIPC recipients assume that economic conditions for these countries remain favorable and donors remain committed to assisting these countries in meeting their development goals and debt obligations. These assumptions may prove to be optimistic given the cyclical nature of many of these countries’ major exports and recent declines in donor assistance. Furthermore, the expectation that recipient countries will effectively track existing debt and ensure that new debt is affordable may also prove optimistic in some cases. The organizations commenting on our report emphasized that recipients of debt relief through the HIPC initiative are and will remain vulnerable to economic difficulties. The IMF stated in particular that the ESAF program, as well as other donor support, could be used to help recipients facing economic shocks. Our analysis points out, however, that these countries will generally depend on support from the IMF and other donors to service their debt and cover other external financing needs, even under the HIPC initiative’s assumption of favorable economic conditions. Their overall economic vulnerabilities suggest that some are likely to need increased levels of such external financing, even after debt relief. 
The World Bank stated that the HIPC initiative recognizes the vulnerability of recipient countries, and this is reflected in the initiative’s choice of debt relief targets near 200 percent. The World Bank also stated that the report’s conclusion that many countries remain vulnerable to debt problems could be viewed as an implicit recommendation for increasing relief amounts. Our assessment that countries remain vulnerable to future debt problems is based on our analysis of the relief targets used in the HIPC initiative, the high concentration of these countries’ exports, and the reliance of these countries on donor flows for continued debt support, separate from their development needs. While our analysis concludes that countries remain vulnerable to future debt problems, we are not recommending greater relief. We recognize that debt relief under the initiative will benefit participants, but conclude that some recipient countries may once again experience debt problems. This assessment highlights the limitations of the initiative and should prove useful in future discussion among those responsible for policy decisions in this area. In addition, the World Bank said that the initiative’s export projections derive from estimates made by the World Bank, the IMF, and the recipient country and are higher than the historical average due to the positive effect of sustained policy reform. Our conclusion that the initiative’s export projections are optimistic is based on analysis of countries’ historical export growth rates and the concentration of these countries’ exports. The cyclical nature of the prices of some of the primary export commodities of HIPC recipient countries is not accounted for in the underlying analyses of the initiative. Although sustained policy reform could improve these countries’ export prospects, changes in commodity prices and outputs can be outside the influence of individual countries. 
Our report provides an example of this: poor weather conditions resulted in declining coffee exports and increases in Uganda’s debt-to-export ratio in 1997 and 1998.
Pursuant to a congressional request, GAO reviewed the: (1) implementation of the Heavily Indebted Poor Countries (HIPC) Debt Initiative; and (2) initiative's potential to achieve its stated goal of bringing poor countries' debts to sustainable levels. GAO noted that: (1) the HIPC initiative will help reduce participating poor countries' debt burdens, in some cases, substantially; however, many will remain vulnerable to future debt problems even with sound economic policies; (2) the implementation of the HIPC initiative reflects compromise among the major official creditors on issues such as countries' eligibility and the total amount of debt relief to be provided; (3) in recognition of countries' economic vulnerabilities, creditors have generally agreed on relief amounts that are at or close to the upper bounds of what the negotiated framework allows; (4) nonetheless, in order to avoid further debt problems, countries receiving debt relief through the HIPC initiative are assumed to maintain strong economic performance and continue to receive large amounts of donor assistance; (5) in most cases this assistance includes balance-of-payments support; (6) the HIPC initiative projections assume that countries will maintain sustainable debt levels in part through strong export growth; (7) these export growth assumptions may be optimistic for some countries; and (8) since many HIPC recipients rely upon a few commodities for their export earnings, they are particularly vulnerable to economic events such as a decline in the price or output of a primary export.
In fiscal year 2007, the federal government obligated nearly $27 billion for basic research, with DOD obligations accounting for $1.5 billion of that total. As shown in figure 1, more than half of DOD basic research funding was provided to schools in the form of research grants and contracts. (In figure 1, “Other” includes Federally Funded Research and Development Centers and foreign performers.) OMB Circular A-21, Cost Principles for Educational Institutions, establishes principles on how schools charge costs to federally funded research. Circular A-21 requires all costs for reimbursement to be allowable, allocable, and reasonable, and provides that the federal government bear its fair share of total costs, determined in accordance with generally accepted accounting principles, except where restricted or prohibited by law. While the federal government and schools share certain research goals, there is debate on what constitutes the federal government’s fair share of research costs. The federal government reimburses both direct and indirect costs associated with federally funded research. Direct costs can be specifically identified with individual research projects and are relatively easy to define and measure. They include, for example, the researcher’s salary, subawards, equipment, and travel. Indirect costs represent a school’s general support expenses and cannot be specifically identified with individual research projects or institutional activities. They include, for example, building utilities, administrative staff salaries, and library operations (see fig. 2). As shown in figure 2, indirect costs are divided into two main components, facilities costs and administrative costs. Facilities costs include operations and maintenance expenses, building use or depreciation costs, equipment use or depreciation costs, and library expenses. 
Administrative costs include general administration expenses, such as the costs associated with executive functions like financial management; departmental administration expenses, including clerical staff and supplies for academic departments; sponsored projects administration expenses, that is the costs associated with the office responsible for administering projects and awards funded by external sources; and student administration and services expenses, such as the administration of the student health clinic. Circular A-21 outlines the process for establishing an indirect cost rate for schools performing federally funded research. The indirect cost rate is the mechanism for determining the proportion of indirect costs that may be charged to federally funded research awards. The rate is established based on a historical fiscal year of cost data from a school, and is applied to individual research awards. The indirect cost rate is applied to a modified set of direct costs referred to as “modified total direct costs” (MTDC) (see figs. 3 and 4). MTDC includes the salaries and wages of those conducting the research, fringe benefits (e.g., pensions), materials and supplies, travel, and the first $25,000 of each subaward. MTDC excludes costs such as equipment costs, capital expenditures, tuition remission, equipment or space rental costs, and the portion of each subaward in excess of $25,000 (see fig. 4). The indirect cost rate is developed as follows: Each subcomponent of the facilities component of the indirect cost rate (e.g., building use, depreciation, operations and maintenance) is divided by MTDC and added together to derive the facilities component of the rate. Similarly, each subcomponent of the administrative component of the indirect cost rate (e.g., general administration, sponsored administration) is divided by MTDC and then added together to derive the administrative component of the rate. 
Then, the facilities component and the administrative component are added together to equal the indirect cost rate. A school’s indirect cost rate is negotiated between the school and the federal government. A school can establish three types of indirect cost rates (see table 1). Circular A-21 assigns rate-setting responsibility to either HHS or DOD, as the cognizant rate-setting agency. The Division of Cost Allocation (DCA) handles this responsibility within HHS and the Indirect Cost Branch within the Office of Naval Research (ONR) does so for DOD. Currently, HHS, with 50 rate negotiators in four field offices and headquarters, is the cognizant rate-setting agency for more than 1,000 schools, while DOD, with four negotiators and a director in one location, is responsible for 44 schools. As shown in figure 5, a school establishes its indirect cost rate by submitting a proposal to its cognizant rate-setting agency using a base year that represents a historical fiscal year of costs. HHS reviews the proposal while DOD generally sends the proposal to the Defense Contract Audit Agency (DCAA) to be audited. After the proposal has been reviewed or audited, the cognizant rate-setting agency and the school negotiate, and come to agreement on the rate. The rate is then documented in a formal indirect cost rate agreement. Across the federal government, there are limitations, or caps, placed on the reimbursement of indirect costs. Two related to DOD-funded research at schools are known as the administrative cap and the DOD basic research cap. In 1991, Circular A-21 incorporated an administrative cap limiting the administrative costs for which a school may be reimbursed to 26 percent of the MTDC for research awards. This cap is applied during the rate-setting process. This limitation only applies to higher education institutions, as stated in OMB Circular No. A-21. 
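The rate derivation described above, including the 26 percent administrative cap, can be sketched as follows. The subcomponent names track Circular A-21’s categories, but all dollar amounts are invented for illustration.

```python
# Hypothetical illustration of the Circular A-21 indirect cost rate
# derivation described in the text. All dollar figures are invented.

mtdc = 10_000_000  # modified total direct costs for the base year ($)

# Facilities subcomponents (illustrative amounts).
facilities = {
    "building_depreciation": 1_200_000,
    "equipment_depreciation": 400_000,
    "operations_and_maintenance": 2_100_000,
    "library": 300_000,
}

# Administrative subcomponents (illustrative amounts).
administrative = {
    "general_administration": 1_100_000,
    "departmental_administration": 1_500_000,
    "sponsored_projects_administration": 350_000,
    "student_administration": 50_000,
}

# Each subcomponent is divided by MTDC, and the quotients are summed
# to derive each component of the rate.
facilities_rate = sum(cost / mtdc for cost in facilities.values())
admin_rate = sum(cost / mtdc for cost in administrative.values())

# The 26% administrative cap is applied during rate-setting.
capped_admin_rate = min(admin_rate, 0.26)

# The two components are added to equal the indirect cost rate.
indirect_cost_rate = facilities_rate + capped_admin_rate
print(f"Facilities: {facilities_rate:.1%}, administrative (capped): "
      f"{capped_admin_rate:.1%}, total rate: {indirect_cost_rate:.1%}")
```

In this hypothetical case the school’s administrative costs come to 30 percent of MTDC, so the cap binds and the negotiated rate reflects only 26 percentage points of administrative costs.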
Despite the circular’s administrative cap, DOD regulations, which implement a statutory mandate, provide that for DOD contracts, versus grants or cooperative agreements, schools have the option to negotiate a separate rate that is not subject to the administrative cap. The Department of Defense Appropriations Act, 2008 incorporated a cap limiting the indirect costs for which a research performer (including a school) may be reimbursed to 35 percent of a DOD basic research award’s total costs. This cap, which applies to all nonfederal research performers, that is, schools, nonprofits, and private sector companies performing on contracts, grants, and cooperative agreements, was also included in the fiscal year 2009 and fiscal year 2010 defense appropriations acts. In contrast to the administrative cap, which is applied to the negotiated rate applicable to all of a school’s federal research awards, the DOD basic research cap is applied to individual DOD basic research awards. While the school can monitor whether charges have exceeded the 35 percent limitation as it performs the research, the school (or other performer) ultimately must ensure that government reimbursement does not exceed the cap once all costs are known. The DOD basic research cap is calculated with different cost bases than either the school’s indirect cost rate or the administrative cap. As previously discussed, a school’s indirect cost rate is applied to a modified set of total direct costs to arrive at the dollar amount of indirect costs applicable to a specific research award. In contrast, the 35 percent cap on DOD basic research awards limits indirect costs as a proportion of total costs. Total costs include all allowable direct and indirect costs on an individual award. Because of the different cost basis, the 35 percent DOD basic research cap does not mean that an institution with an indirect cost rate higher than 35 percent will be limited by the cap. 
The threshold for a school to be limited by the 35 percent DOD basic research cap is a negotiated indirect cost rate of 53.8 percent. This is demonstrated through the following example: if a school receives a DOD basic research award for $100,000, the maximum amount of the award that may be reimbursed as indirect costs is $35,000, leaving $65,000 in direct costs; dividing $35,000 by $65,000 yields an indirect cost rate of approximately 53.8 percent. This threshold holds true in situations where no direct costs are excluded from total direct costs. In this circumstance, total direct costs are equivalent to modified total direct costs. This allows us to use the formula for an indirect cost rate to calculate the threshold (see fig. 7 below). If a school has an indirect cost rate below this 53.8 percent threshold, it will not be affected by the DOD basic research cap on indirect costs at 35 percent of total award costs. If a school has an indirect cost rate above this threshold, it may not be reimbursed for all its indirect costs, depending on each award’s costs. We identified multiple types of variation in indirect cost rates for schools performing DOD basic research, driven by several different factors (see appendix III for detailed information on the factors and variations we identified). Across all schools, wide variation was identified in proposed rates, negotiated rates, and in the difference between the proposed and negotiated rates at schools receiving DOD research funding in fiscal year 2007. The difference between the proposed and negotiated rates was significantly larger for schools that negotiate with HHS than for those that negotiate with DOD. Differing policies and procedures employed by the two cognizant rate-setting agencies, including, for example, different approaches and differing use of rate types, may explain some of this variation. 
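The 53.8 percent break-even arithmetic described above can be checked directly. The sketch below assumes, as the report’s example does, that no direct costs are excluded from the direct cost base, so total direct costs equal MTDC.

```python
# Check of the 53.8% threshold for the 35% DOD basic research cap,
# assuming (as in the report's example) total direct costs equal MTDC.

award_total = 100_000          # total award costs ($)
cap = 0.35                     # indirect costs limited to 35% of total costs

max_indirect = cap * award_total          # $35,000 at most in indirect costs
direct = award_total - max_indirect       # $65,000 remains as direct costs

# The indirect cost rate is indirect costs divided by the direct cost
# base, so the break-even negotiated rate is:
threshold_rate = max_indirect / direct
print(f"Break-even indirect cost rate: {threshold_rate:.1%}")
```

Equivalently, the threshold is cap / (1 - cap) = 0.35 / 0.65, or about 53.8 percent, which is why only schools with negotiated rates above that level can be constrained by the 35 percent cap.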
Another source of variation was that schools eligible for a rate increase of 1.3 percent to account for the cost of utilities, known as the utility cost adjustment, both proposed and negotiated higher rates than those not receiving the adjustment. The increase for the costs of utilities is received by a fixed list of schools that are listed in OMB Circular A-21. OMB has not reexamined the list of those receiving the adjustment since 1998 and DOD and HHS officials responsible for rate-setting were unclear on what the process should be for receiving and approving applications for use of the utility cost adjustment. The proposed and negotiated indirect cost rates at schools performing DOD research varied widely from one school to another. Figure 8 summarizes the distribution of schools with various levels of proposed and negotiated rates. For example, whereas about 14 percent of schools proposed a rate of less than 45 percent, about 17 percent of schools proposed a rate of 60 percent or higher. Similarly, while about 24 percent of schools negotiated rates that were less than 45 percent, about 7 percent of schools negotiated a rate of 60 percent or higher. Variation between what was proposed and what was negotiated can also be seen in this figure. For instance, while about 17 percent of schools proposed a rate of 60 percent or higher, only about 7 percent of schools negotiated a rate that high. The difference between a school’s proposed and negotiated rates varied significantly based on the cognizant rate-setting agency with which a school negotiated. Specifically, the average difference between proposed and negotiated fiscal year 2007 rates for schools with HHS’s Division of Cost Allocation as their cognizant rate-setting agency was about 4.5 percentage points. In contrast, the average difference between proposed and negotiated rates for schools with DOD’s Office of Naval Research as their cognizant rate-setting agency was less than 1 percentage point (see fig. 
9 below). Schools’ explanations for the difference between their proposed and negotiated rates varied based on the schools’ cognizant rate-setting agencies. For example, we estimate that about 60 percent of schools with HHS as their cognizant rate-setting agency identified negotiation that was not clearly tied to specific aspects of their rate proposal as a part of the explanation for the negotiated rate reduction. For instance, one school we surveyed stated that HHS officials told them that 2 percentage points would be the most that their rate could increase over the previous negotiated rate. The National Director of HHS’s Division of Cost Allocation confirmed that a limitation on the increase in negotiated rates of 2 percentage points had been the practice in one of the DCA field offices. Once this matter was brought to his attention, the National Director ordered the practice discontinued, both to ensure consistent treatment across field offices and because it was not supported by policy or regulation. For schools with DOD as their cognizant rate-setting agency, none of the 15 sampled schools identified negotiation that was not clearly tied to specific aspects of their rate proposal as a part of their explanation of the difference between their proposed and negotiated rates. When DOD was the cognizant rate-setting agency, school officials generally indicated that the difference between the proposed and negotiated rates resulted from disagreements with DOD over specific costs or methodologies used in their rate proposal. For example, some schools said that DOD officials made changes to their proposed classification and allocation of space to research. In addition to the findings from the survey, interviews with school officials revealed that schools perceived that the reasons for rate reductions varied depending on which agency the school negotiated with. 
For example, in discussions with a group of senior-level university research administrators, HHS negotiations were described as arbitrary, whereas administrators negotiating with DOD stated that they clearly understood why reductions were being negotiated. However, HHS officials stated that their negotiations are not arbitrary. Prior to the negotiation, HHS provides schools written documentation of its position that was developed based on its review of facts included in the school’s indirect cost rate proposal. The differences between the proposed and negotiated rates based on cognizant rate-setting agencies, and schools’ perceptions of the reasons for the rate reductions, may be related to differences between the processes employed by the two rate-setting agencies. For example, the two cognizant rate-setting agencies articulate differing rate-setting goals in executing their responsibilities under Circular A-21. OMB Circular A-21 states that the cognizant rate-setting agencies are responsible for negotiating and approving indirect cost rates for schools on behalf of all federal agencies. DOD policy on rate-setting for indirect costs states that DOD is to implement relevant regulations (including OMB Circular A-21) in a manner that “ensure uniform and consistent treatment of indirect cost issues at all DOD cognizant institutions,” and DOD officials have stated that a broader goal is to ensure that DOD is able to obtain high-quality research by reimbursing all allowable, allocable, and reasonable costs. In contrast, HHS’s approach, as identified in its rate-setting mission, includes two components: being “fair, reasonable and equitable when communicating and negotiating with the grantee community” and having “a fiduciary responsibility to protect the public funds.” HHS and DOD use different processes for evaluating a school’s rate proposal. 
DOD officials told us that DOD generally performs audits of the indirect cost proposals it receives to validate the costs enumerated in them. HHS does not generally perform an audit of indirect cost proposals, but it does review the cost proposal data. HHS officials stated that the findings from the review are used as the basis for their negotiation with a school. The frequency with which the two cognizant rate-setting agencies approve predetermined and fixed with carry-forward rate types also varies. Although both cognizant agencies expressed a preference for negotiating predetermined rates, in part due to the burden associated with carry-forward adjustments, in our survey 5 out of the 15 DOD schools negotiated fixed with carry-forward rates, while no more than 1 percent of schools with HHS as their rate-setting agency did. The year on which a school based its proposal for 2007 rates also varied by the cognizant rate-setting agency, with schools negotiating with HHS using, on average, earlier base years than those negotiating with DOD. In addition, while none of the 15 DOD schools we surveyed used a base year prior to 2002 to negotiate 2007 rates, about 17 percent of HHS schools did. Further, schools that reported an early base year (2001 or earlier) negotiated rates averaging 6.5 percentage points below their proposed rates, compared to a 3.6 percentage point rate reduction for schools with more recent base years. The interaction between cognizant rate-setting agency, base year, and the degree of rate reduction may relate to the agencies’ different policies on extending rates. DOD officials stated that DOD does not allow extensions, whereas HHS policy allows for an extension of an existing rate agreement in some circumstances. 
HHS granted extensions to some of the schools included in our survey and, according to HHS officials, many of the extensions were granted with associated reductions in the rate, ranging from half a percentage point to 2.5 percentage points. This may account for some of the rate reduction observed at these schools. Our findings on different approaches used by the cognizant rate-setting agencies are similar to our findings of nearly 20 years ago. Specifically, in 1992, we reported that different approaches used by the two cognizant rate-setting agencies resulted in variation in negotiated rates. We found that DOD’s approach generally provided for full recovery of claimed allowed indirect costs, whereas HHS’s approach generally resulted in limiting the federal reimbursement of indirect costs. At the time, we reported that the average rate negotiated by DOD was about 59 percent, whereas the average rate negotiated by HHS was about 50 percent. For our fiscal year 2007 survey data, proposed rates averaged 53.4 percent for schools assigned to HHS and 51.7 percent for schools assigned to DOD. Negotiated rates averaged 49.1 percent for HHS schools and 51.6 percent for DOD schools. In both cases, the averages did not vary between the two cognizant rate-setting agencies by a statistically significant amount. However, the different approaches identified in the 1992 report are consistent with the different processes we found today. The utility cost adjustment—a 1.3 percentage point increase in the negotiated indirect cost rate—is linked to institutions with higher proposed and negotiated indirect cost rates. The utility cost adjustment was implemented in 1998 to replace a system of special utility cost studies. It was made available to 65 institutions identified in Exhibit B of OMB Circular A-21, based on whether they had submitted a special study in their most recent indirect cost rate proposal. 
Schools on the list receive this adjustment in addition to the utilities portion of indirect costs that a school negotiates based on its proposal. In fiscal year 2007, the average negotiated rate for schools that reported receiving the utility cost adjustment was 54.7 percent and the average for those reporting not receiving the utility cost adjustment was 47.6 percent. Although OMB Circular A-21 states that, beginning in July 2002, federal agencies must reevaluate periodically the eligibility of institutions to receive the utility cost adjustment, no changes have been made to the list since the utility cost adjustment was implemented in 1998. Also, OMB Circular A-21 states that federal agencies may receive applications for use of the utility cost adjustment from schools not on the list. An OMB official stated that OMB considers the list of utility cost adjustment recipients to be final for the time being, and the eligibility list has remained unchanged since 1998. The official also told us that OMB has not been asked to reassess the utility cost adjustment by federal agencies. DOD and HHS officials responsible for rate-setting reported that schools have requested to be added to the eligibility list; however, these officials also stated they were unclear on what the process should be for receiving and approving applications for use of the utility cost adjustment. The limitation on government reimbursement of administrative costs affects most schools. Based on our survey results, about 83 percent of schools had fiscal year 2007 administrative costs above the administrative cap, with a reported average administrative rate component of 31 percent. The cap was established in 1991 with the intent of limiting federal reimbursement for schools’ indirect costs. 
When the cap was originally proposed in 1986, it was set at 26 percent for the administrative portion of indirect costs because that was the 5-year average administrative cost reimbursement rate for all major universities. OMB has not formally reexamined this cap since its implementation in 1991. In survey responses and interviews, school and association officials reported that growing administrative costs were associated with modern research and with complying with federal regulations. Some government officials also attributed the potential increase to federal regulations, particularly those enacted since September 11, 2001. The administrative cap limits reimbursement of indirect costs in different ways from the DOD basic research cap. For example, whereas the administrative cap is applied to a school’s negotiated indirect cost rate and limits reimbursement of administrative costs on all federal awards to the school, the DOD basic research cap is applied at the close of an award and limits reimbursement only on DOD-funded basic research awards. We estimate the DOD basic research cap may affect some awards at about 22 percent of schools, based on schools’ negotiated indirect cost rates for fiscal year 2008. It is difficult to pinpoint the extent to which the DOD basic research cap limits indirect cost reimbursement at a school, in part because it operates differently from the guidance for rate-setting and reimbursement familiar to schools, as outlined in OMB Circular A-21. For example, unlike the administrative cap, the DOD basic research cap’s impact cannot be determined up front on an institution-wide basis because its limitation on indirect costs depends on the types of costs included in each individual award. In addition, the cap’s impact cannot be fully determined until total costs for an award are known, making it difficult for schools to know up front whether their reimbursement will be limited for a given award. 
These differences between the DOD basic research cap and the rate-setting and reimbursement structure familiar to schools under Circular A-21 may contribute to confusion reported by schools about how the cap is applied to awards. The administrative cap limits reimbursement at most schools. An estimated 83 percent of schools reported administrative costs that were higher than the 26 percent administrative cap. During the rate-setting process, schools generally provide cost information in their proposal that identifies the administrative component of their indirect cost rate based on their total administrative costs, regardless of the 26 percent cap. These rate components averaged 31 percent in fiscal year 2007, which represents an average 5 percentage point difference between these proposed administrative rate components and the cap at 26 percent. The fact that about 83 percent of schools had administrative costs in fiscal year 2007 higher than the administrative cap indicates that the cap controls government costs through limiting reimbursement. The cap was enacted in 1991 to stop abuses related to indirect cost reimbursement at schools. In addition, the federal government acknowledged that indirect costs were rising rapidly, and characterized the situation as problematic and therefore in 1986 proposed what it considered to be a reasonable ceiling on all administrative costs for that year. To determine the ceiling, OMB used the 5-year average administrative rate component for all major universities. OMB first proposed establishing the cap at 26 percent and subsequently reducing the reimbursement rate to 20 percent after a year. This further reduction was not included in the final revision of the circular, but reflects the initial goal of controlling government costs even below the average reimbursement rate for administrative costs. 
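A minimal numeric sketch of how the administrative cap operates follows. The 26-point cap and the 31-point average administrative component are from this report; the 24-point facilities component is a hypothetical value chosen for illustration:

```python
# Illustrative only: the administrative cap limits the administrative
# component of a school's indirect cost rate to 26 percentage points,
# regardless of the school's actual administrative costs.

ADMIN_CAP = 26.0  # percentage points, per OMB Circular A-21

def reimbursable_rate(admin_component: float, other_components: float) -> float:
    """Indirect cost rate (in points) after applying the administrative cap.

    admin_component: the school's actual administrative rate component
    other_components: remaining components (e.g., facilities) -- hypothetical here
    """
    return min(admin_component, ADMIN_CAP) + other_components

# Report average: a 31-point administrative component exceeds the cap by
# 5 points, which the school must absorb from other funding sources.
print(reimbursable_rate(31.0, 24.0))     # 50.0 (not 55.0)
print(31.0 - min(31.0, ADMIN_CAP))       # 5.0 points unreimbursed
```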
An OMB official stated the agency believes that over time the administrative cap has forced schools to be more efficient with their administrative effort and more disciplined in spending. OMB has not reopened the administrative cap issue since its implementation because it has not seen evidence that this is a priority issue. According to school and association officials we spoke with, administrative costs have been rising over time. They attribute these changes, in part, to increased federal regulations, such as those related to national security standards, human subjects and animal care, and reporting and audit requirements, which they say have had large impacts on their indirect costs. However, school and association officials were unable to provide an estimate of the increased costs associated with federal regulations. In response to these regulations, schools we surveyed report taking a number of actions that have raised administrative costs. These include the following: hiring new staff to, for instance, report data on grants and subrecipients; opening new offices to monitor compliance with federal regulations; implementing new information technology systems; developing processes for improving security and safety; and training staff on new systems and compliance efforts. In order to respond to the government’s research needs with respect to complex research topics, such as nanotechnology, some schools we surveyed report making investments in research capabilities, which could require hiring personnel to manage programs in new or upgraded facilities. Schools claim these indirect costs may not be fully reimbursed because of the administrative cap. Since the implementation of the administrative cap, some schools tell us they have had to identify additional sources of funding to conduct their research. 
When asked how they bridge the gap between actual administrative costs and the federal government’s reimbursement for administrative costs, school officials offered examples including the use of funds from university endowments and investments, as well as student tuition. Some government officials have also observed increasing administrative costs over time. For example, HHS officials who review indirect cost rate proposals told us they have seen a trend of increasing administrative costs reflected in schools’ rate proposals. These officials attributed the increased administrative costs to federal regulations, such as post-9/11 regulations related to security standards and foreign students, that have led schools to spend resources on security clearances, student visas, and other screening efforts. Despite reported increasing administrative costs and related under-reimbursement, some DOD officials who are responsible for awarding basic research told us that schools continue to compete for federal research awards and produce the research that meets the government’s needs. The administrative cap and the DOD basic research cap limit government reimbursement of indirect costs in distinct ways, as shown in table 2 below. Whereas the administrative cap limits reimbursement of administrative costs specifically, the DOD basic research cap limits reimbursement of all indirect costs for an applicable award. Additionally, the design of each cap differs in terms of when the cap is calculated, what entities and awards the cap applies to, and what cost base is used to calculate the cap’s impact. The differences in the way the caps limit reimbursement also mean that each cap’s impact differs and cannot be directly compared with the other. Approximately 22 percent of schools had a fiscal year 2008 indirect cost rate high enough for awards to be potentially limited by the 35 percent cap on indirect costs of DOD basic research awards. 
While the legislative intent for including this cap in certain defense appropriations acts is sparse, House Report 110-279 for the Department of Defense Appropriations Bill, 2008 indicated that overhead costs had grown to unwarranted levels, and the House Committee on Appropriations recommended that DOD limit the percentage of overhead costs that would be reimbursed for basic research awards. Because the cap could have affected only some of the fiscal year 2008 awards at about 22 percent of schools, the scope of its effect might have been limited. It is difficult to determine the extent to which the DOD basic research cap will affect schools in part because its key features differentiate it from the rate-setting and reimbursement structure outlined in OMB Circular A-21, the guidance familiar to schools. The DOD basic research cap limits reimbursement of indirect costs as a proportion of total award costs, instead of MTDC, which affects how its impact is determined in two important ways. First, the final impact on reimbursement cannot be determined until total costs are tallied. Second, the cap’s limitation depends on the types of costs included in each individual award, and therefore its impact cannot be determined up front on a schoolwide basis. Moreover, because the cap uses a base of total costs instead of MTDC, the cap is not structured in the same way as an indirect cost rate. Consequently, the cap at 35 percent of total award costs does not require schools to negotiate an indirect cost rate below 35 percent. However, in our survey we found that some schools mistakenly perceived they would be affected by the cap if their negotiated indirect cost rate was above 35 percent. For example, two schools stated that the 35 percent rate cap was lower than their indirect cost rates of approximately 50 percent, and therefore they believed the cap would decrease their reimbursement. 
In another example, a school with a 45 percent indirect cost rate stated the cap at 35 percent of indirect costs would result in a 10 percent reduction in recovery of indirect costs on each of their awards. These schools misunderstood how using a base of total costs instead of MTDC makes the cap different than their indirect cost rates. In fact, none of the schools in the examples stated above would be affected by the cap, because their negotiated indirect cost rates were below 53.8 percent, a mathematically determined threshold below which no school is affected by the cap. The 53.8 percent indirect cost rate at which awards may begin to be affected by the DOD basic research cap is a minimum and does not mean that all awards above this rate will be affected by the cap. Multiple schools may have the same indirect cost rate above this threshold, but each school may experience different effects from the cap, depending on the proportion of direct costs that are excluded from the MTDC base for each individual award. For example, we looked at a DOD award for each of two schools in our survey that had indirect cost rates of 57 percent in fiscal year 2007. One of the awards was for research on laser technology at a large private school. The other award was for research on prostate cancer at a smaller private school. With an indirect cost rate of 57 percent, if more than about 6 percent of the total direct costs are excluded from MTDC, reimbursement on an award is not limited by the DOD basic research cap. Only one of the two sample awards would have been limited by the DOD basic research cap because of the level of exclusions for the award. The affected award—research on prostate cancer at the smaller school—had no total direct costs excluded from its MTDC, making the proportion of indirect costs to total costs above 35 percent. 
The award for laser technology research at the larger school had more costs (9 percent) excluded from its MTDC, and therefore the proportion of indirect costs to total costs was below 35 percent. The higher the percentage of costs excluded from an award’s MTDC, the less likely the award would be affected by the DOD basic research cap. Whether the DOD basic research cap limits reimbursement depends on the level of an award’s exclusions; therefore, it is difficult for a school to pinpoint and predict the effects of this cap on a schoolwide basis. DOD identified three methods it uses to oversee indirect cost reimbursement for research grants awarded to schools: the annual single audit, the award closeout process, and agency audits, performed by DCAA or by cognizant agencies for audit. However, we identified weaknesses in DOD’s use of each of these methods. DOD relies primarily on the single audit, but some schools we reviewed were not individually audited as a part of the single audit. The second method identified by DOD, the closeout process, is conducted by DOD administrative grants or contracting officers using various processes. However, DOD officials told us they do not mathematically verify whether the correct indirect cost rate and dollar amount were charged at grant closeout. The third method, audits by DCAA or by cognizant agencies for audit, covered only a limited number of the schools in fiscal year 2008, and cognizant agencies for audit had inconsistent approaches to auditing the awards of other agencies, with only HHS conducting a limited number of audits on DOD awards. At least one of the three methods was used in fiscal year 2008 at 25 of the 32 schools we reviewed. However, 4 schools were not covered by any of the three methods, indicating a gap in coverage (see table 3). 
In our discussions with cognizant agencies for audit, we learned that HHS has increased the audits of research awards to schools in recent years, which have led to some significant findings of improper billings of indirect costs. DOD reports that its primary method for overseeing compliance with indirect cost reimbursement on research grants is the annual single audit under the Single Audit Act, as amended. The Single Audit Act adopted a single audit concept to help meet the needs of federal agencies for grantee oversight and accountability as well as grantees’ needs for single, uniformly structured audits. The act was intended to promote sound financial management with respect to federal awards administered by nonfederal entities. An audit performed in accordance with the Single Audit Act is directed at operations of an entire entity. While the auditors must conduct audit procedures that address particular compliance requirements as they apply to specific federal programs identified by the auditors as high risk, the Single Audit Act does not require that auditors test all federal programs administered by an entity for compliance with all related requirements. For a large and complex organization such as a state government or a university system, the auditors examine selected federal programs administered by the entity based on guidance in OMB’s Circular No. A-133 Compliance Supplement. OMB’s Compliance Supplement identifies the compliance requirements relevant to audits that are applicable to the major programs, including research and development, and provides suggested audit procedures for testing compliance with those requirements, including sampling. The Compliance Supplement includes a section on testing for allowable costs, including allowable indirect costs, and the auditor may sample a certain number of transactions (claims for reimbursement) from federal awards. 
Table 4 below shows the indirect cost audit objectives provided by the Compliance Supplement for use by auditors for federal awards selected for audit. For single audit reporting purposes, the parameters of the reporting can differ from one school or school system to another. As a result, in some cases individual educational institutions are considered separate entities and audited separately, while in other cases they may be audited as a part of a university system, or even as a part of an entire state government, which includes numerous institutions or agencies within the reporting entity. Our findings reflected the fact that the single audit may not be sufficient to provide assurance that an entity is in compliance with requirements for indirect costs charged to research and development grants. In 2008, no research and development awards or transactions were selected for review at 7 of the 32 schools we reviewed. For 6 of these schools, no research and development awards or transactions were selected because the schools were part of larger reporting entities, and the auditors told us that research and development programs from other schools within the larger reporting entity were selected for audit in 2008. Each of the 6 schools was either a campus of a major public university whose entity was defined at the universitywide level, or a school included under the umbrella of a state whose entity was defined at the statewide level. However, in terms of receiving federal awards, each school functioned independently by, for example, negotiating an indirect cost rate agreement with the federal government that was only applicable to that school. Further, based on data from the National Science Foundation, each of these schools received more than $97 million in federal research and development funding in 2007, an amount much greater than the $500,000 threshold over which an entity is required to receive a single audit. 
For example, the University of California (UC) was defined as one entity subject to the single audit, although each of its 10 campuses and the Office of the President separately negotiate an indirect cost rate agreement. In 2008, no research and development awards or transactions were sampled for any of the 4 UC campuses included in our 32 schools, including UC campuses at Berkeley, Los Angeles, Santa Barbara, and San Diego, because the University of California is defined as one entity. Similarly, in 2008 the University of Virginia was defined as a part of the Commonwealth of Virginia for the purposes of the single audit, although the university negotiates its own indirect cost rate agreement with the federal government. Because the university falls under the umbrella of the state as an entity, in 2008 no research and development awards or transactions were selected from the university for the single audit. Table 5 breaks down how many of the 32 schools that we reviewed had awards or transactions selected as a part of the single audit in 2008, and how many did not. Furthermore, as shown in figure 10, based on data provided by independent public accounting firms, few awards were sampled as a part of the annual single audit in 2008, and even fewer DOD awards were included in the sample. Specifically, for the 22 of the 32 schools where research and development awards were sampled as a part of the 2008 single audit, the average number of total awards sampled was 21 awards, and the average number of DOD awards sampled was 5. Also, the percentage of total federal award dollars sampled for the annual single audit ranged from 0.5 to 36 percent, and the DOD award dollars sampled represented between 0.1 and 65 percent of all federal award dollars. DOD also reported that it uses the award closeout procedure—part of the postaward administration process conducted on an award after its period of performance has ended—for overseeing indirect cost reimbursement to schools. 
According to DOD, for grants to schools, postaward administration responsibilities are generally delegated to administrative grants or contracting officers at DOD’s ONR. As a part of award closeout, an ONR administrative grants or contracting officer reviews the costs incurred under a given award to determine whether all costs are allowable, allocable, and reasonable. According to DOD officials, an administrative grants or contracting officer may also request an evaluation of a final voucher be conducted. To facilitate a financial review, schools fill out a final financial status report within 90 days of the end of the period of performance, which is to be reviewed and approved by the administrative grants or contracting officer. The Standard Form 425, or the “Federal Financial Report”, is used for the final financial status report and includes a section for recording indirect costs charged by the school on the award. DOD officials at ONR asserted that they use the award closeout procedure as a method for overseeing indirect cost reimbursements. However, DOD administrative grants and contracting officers told us that they do not regularly use the closeout procedure to determine whether or not the dollar amounts of the indirect costs charged were correct. For example, DOD administrative grants and contracting officers at two ONR regional locations and two other DOD service locations that provide contract administration services for school awards informed us they did not regularly check indirect costs to determine if they were accurately charged, even though we found that they can use information related to indirect costs provided by awardees in the indirect expense section of the Federal Financial Report to do so. OMB officials told us that, at a minimum, awards officers are to use information in the Federal Financial Report at award closeout to determine whether or not the correct indirect costs were charged to the government. 
However, OMB’s instructions for completing the Indirect Expense section of the Federal Financial Report say to complete that section only if required by the awarding agency and in accordance with agency instructions. According to DOD officials, this section may or may not be required by DOD awarding agencies, and the DOD administrative grants or contracting officers we spoke with do not currently use information in the Indirect Expense section to mathematically calculate whether indirect costs were accurately charged. While ONR administrative grants and contracting officers stated they do not use information entered in the Federal Financial Report to verify that indirect costs were charged accurately on a particular award, ONR does use information in the form’s Indirect Expense section, and other information in the form, to determine whether or not a school charged indirect costs at or below the DOD basic research cap of 35 percent of total costs. For example, at the two regional ONR offices we visited, the administrative grants and contracting officers informed us that ONR developed a form that incorporates data from the Federal Financial Report for use in determining whether or not an awardee receiving DOD basic research funds was below the cap of 35 percent of total costs. If the percentage is over the 35 percent cap, then the awardee is responsible for paying back to the awarding DOD agency the amount of the reimbursement over the limit. The third method DOD identified to oversee reimbursement of indirect costs on its research grants was audits conducted by DCAA or by cognizant agencies for audit. DCAA and HHS, in its role as a cognizant agency for audit, conducted audits of DOD awards at some of the 32 schools in fiscal year 2008, but DCAA’s coverage of the schools was limited and the practices of the various cognizant agencies for audit differed. DCAA performs audits of DOD awards, generally on contracts but also on grants. 
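The check performed at the regional ONR offices can be sketched as follows. This is a hypothetical illustration of the arithmetic only, not ONR's actual form; the 35 percent cap is from this report, and the dollar figures are invented:

```python
# Hypothetical sketch of a closeout check against the DOD basic research cap:
# was indirect reimbursement within 35 percent of total award costs, and if
# not, how much must the awardee repay? All dollar figures are illustrative.

CAP = 0.35

def over_cap_refund(total_costs: float, indirect_charged: float) -> float:
    """Amount charged above the DOD basic research cap (0 if compliant).

    Rounded to cents, since the figures come from a financial report.
    """
    allowed = CAP * total_costs
    return round(max(0.0, indirect_charged - allowed), 2)

# An award reporting $200,000 total costs may include at most $70,000 in
# indirect costs; $75,000 charged would require a $5,000 repayment.
print(over_cap_refund(200_000, 75_000))  # 5000.0
print(over_cap_refund(200_000, 60_000))  # 0.0
```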
DCAA’s audit types include pre-award audits (such as an audit of a proposal), postaward evaluations or audits (such as an evaluation of a final voucher, or an incurred cost audit of an institution), and system audits (e.g., an audit of an institution’s billing system or accounting system). For audits related to reimbursement of indirect costs, for example, a DOD administrative grants or contracting officer may request that DCAA conduct an audit of a school’s final incurred cost submission or an evaluation of a final voucher on an individual award. In fiscal year 2008, DCAA performed a total of 88 audits or evaluations at 10 of the 32 schools that accounted for more than half of fiscal year 2007 DOD basic research funding. Approximately one-third of the audits and evaluations performed were conducted at 1 of the 10 schools (see table 6 below). We asked the DCAA Chief in the Policy Programs Division, whose areas of responsibility include audits of schools, to identify which types of audit services would include a check on indirect costs. Based on the official’s descriptions of the audit services, indirect costs were a main purpose of about two-thirds of the audits; in about a third, indirect costs were not the subject of the audit but were tested as a part of the basis for the audit opinion; and 3 of the 88 audits were types that typically do not test indirect costs. Of note, 38 of the 88 audits were of the type conducted prior to an award, such as an audit of a proposal. In some cases, an audit that takes place prior to an award may not oversee compliance with reimbursement of indirect costs since the costs have not yet been incurred. In addition to DCAA audits, DOD officials noted that DOD awards to schools may be audited by other agencies, in particular when another agency is designated as the cognizant agency for audit for that school.
Award recipients expending more than $50 million in federal funding are assigned a cognizant agency for audit in accordance with OMB Circular No. A-133. Generally, the cognizant agency for audit is the federal agency that provides the predominant amount of direct funding to a recipient. Some of the responsibilities of the cognizant agency for audit include coordinating audits by federal agencies of the school, performing quality control reviews of audits by nonfederal auditors, and coordinating a management decision for audit findings that affect federal programs of more than one agency. We determined that there were four cognizant agencies for audit (including DOD) for the top 32 schools we reviewed, and these agencies differ in their practices for addressing other agencies’ awards. Of the four cognizant agencies for audit, Education and NSF told us they have not audited the awards of other agencies. HHS has a reimbursable audit program administered out of its Office of Audit Services. The Office of Audit Services establishes memorandums of understanding with other agencies, including DOD, stating that HHS will perform audits on a reimbursement basis. Through the program, HHS retains its right, as the cognizant agency for audit, to perform audits of DOD awards at DOD’s request. In addition to conducting audits for the three schools in our review for which it is cognizant, DOD reported that it, like HHS, establishes memorandums of understanding with other agencies to conduct audits of their awards. Circular A-133 requires, to the extent practical, cognizant agencies for audit to coordinate audits or reviews made by or for federal agencies in addition to the single audit. Consistent with their stated practices, of the three cognizant agencies for audit besides DOD included in our review, only HHS conducted audits of DOD awards in fiscal year 2008. Specifically, HHS performed 12 audits of DOD awards at 6 of the 32 schools we reviewed.
These audits were generally closeout or incurred cost audits on individual awards. Table 7 identifies the breakdown of cognizant agencies for audit for the 32 schools and the audits conducted by each of these cognizant agencies for audit. HHS officials told us that in 2003, a lack of confidence in the single audit and a significant increase in the amount of money awarded to schools led the agency to seek other ways to oversee the reimbursement of research costs. As a result, they identified five areas where additional audits were necessary, including audits of administrative and clerical salaries, an area associated with indirect costs. HHS officials also told us that there have been significant findings in the area of administrative and clerical salaries. For example, the HHS Office of the Inspector General (OIG) conducted a review of administrative and clerical costs at Duke University and found the school claimed an estimated $1.7 million in unallowable charges by improperly billing indirect costs as direct costs. The HHS OIG found that these unallowable claims occurred because the school had not established adequate controls to ensure consistent compliance with the federal requirements applicable to charges for administrative and clerical costs. HHS officials also told us they are currently undertaking a review of administrative and clerical costs at another school as a follow-on to significant problems found during a closeout audit of one of the school’s awards.

Inconsistencies in the rate-setting and reimbursement processes lead to perceived and actual differences in the treatment of schools performing DOD basic research. The difference between the proposed and negotiated indirect cost rate varied based on whether a school negotiated with DOD or HHS, leading schools to perceive unequal treatment, even though the negotiated rates themselves did not differ.
Even though this is only a perceived difference, there are actual differences in how schools may be defined for rate-setting purposes versus oversight of cost reimbursement. Schools are treated differently in terms of the oversight of their indirect cost reimbursements because of flexibility in how the definition of a nonfederal entity is applied; in some cases, a school that expends $500,000 or more in federal funds may not be audited as a separate entity but instead is included as part of a larger entity. DOD does not effectively use the other methods to oversee indirect costs when a school is not separately audited. As a result, DOD lacks assurance that it is reimbursing indirect costs appropriately. In addition, guidance for the indirect cost rate-setting and reimbursement processes contains provisions that have been in place for a long time but are overdue for review and updating. Because utility cost adjustment eligibility is based on information that is 12 years old and therefore does not necessarily reflect schools’ current costs, the OMB guidance runs the risk of inadvertently providing benefits to some schools and not others. Similarly, because the rate at which administrative cost reimbursement is limited has not been reviewed since it was implemented approximately 20 years ago, and administrative costs may have changed over time, it is unclear whether the current limitation achieves the desired balance between controlling government costs and ensuring the government bears its fair share of research costs.

To address different processes for negotiating rates by the two cognizant rate-setting agencies for higher education institutions, we recommend that the Director of OMB:

Identify methods to ensure that the rate-setting process is applied consistently at all schools, regardless of which agency has rate cognizance.
This would include identifying ways to ensure that differences in cognizant rate-setting agencies’ approaches, goals, policies, and practices do not lead to unintended differences in schools’ rate reductions for indirect costs.

To ensure that indirect cost reimbursement practices are consistent with the current state of indirect research costs at schools providing federal basic research, we recommend that the Director of OMB:

Clarify the roles and responsibilities of federal agencies (including DOD, HHS, and OMB) in accepting applications and reevaluating the eligibility of schools to receive the utility cost adjustment.

Reexamine and determine whether reimbursing administrative costs at a maximum rate of 26 percent achieves the appropriate level of cost control and the government’s objective that the federal government bears its fair share of total costs.

To improve DOD’s ability to oversee reimbursement of allowable indirect costs to schools, we recommend that the Secretary of Defense direct that the Under Secretary of Defense for Acquisition, Technology and Logistics:

Establish a process for administrative grants/contracting officers to verify at grant closeout whether a school has requested reimbursement at the accurate indirect cost rate and dollar amount, including calculating whether the dollar amount reflects the appropriate application of rates for that award.

Assess the current level of audit coverage for monitoring DOD indirect cost reimbursement for schools and determine what level is sufficient and whether to expand the use of closeout audits and other audits to oversee compliance.

Develop a policy for oversight of indirect costs that includes the use of alternative oversight information (1) for those schools not individually audited under the single audit, and (2) for those schools where the audit coverage of research and development awards is not sufficient for oversight of indirect costs.
We provided a draft of this report to DOD, HHS, OMB, Education, and NSF for review and comment. In written comments, DOD generally agreed with all three recommendations. Specifically, DOD concurred with two of the recommendations and partially concurred with the third. DOD cited short-term actions it planned to take to address some of the recommendations. For example, to address our recommendation that DOD establish a process for verifying whether a school has requested reimbursement at the accurate indirect cost rate and amount, DOD stated it would require university recipients of research grants to complete the field for indirect expenses on the final submission of the Federal Financial Report and have postaward administrators conduct the recommended verification on a sample of awards each year, using a risk-based assessment. While DOD concurred with our recommendation that DOD assess the current level of audit coverage and determine what level is sufficient, DOD did not identify any new actions it would take to do so, relying instead on continued efforts. Given our findings that there was limited audit coverage by DCAA and cognizant audit agencies of DOD grants, the intent of our recommendation was for DOD to identify additional actions. In response to our recommendation that DOD develop a policy for oversight of indirect costs that includes the use of alternative oversight information in certain circumstances, DOD partially concurred, indicating it would look to identify alternative approaches, to the extent DOD identifies insufficiencies in its oversight. In this recommendation and the others, DOD emphasized that its preferred approach is to seek improvements so that it can continue to rely on the single audit to oversee compliance with indirect cost reimbursement for grants. However, while the single audit is a valuable tool, it may not always be the right tool for DOD to ensure compliance with indirect cost reimbursement for research grants.
In addition to written comments, DOD provided technical comments, which we incorporated as appropriate. The department’s written comments are included in their entirety in appendix II. OMB provided oral comments, indicating they generally agreed with the recommendations, and technical comments, which we incorporated into the report as appropriate. Education also provided technical comments, which were incorporated into the report as appropriate. HHS and NSF had no comment. We are sending copies of this report to the Secretary of Defense, the Secretary of Health and Human Services, the Director of the Office of Management and Budget, the Secretary of Education, the Director of the National Science Foundation, and other interested parties. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff making key contributions to this report were Penny Berrier Augustine, Sharron Candon, Pamela Davidson, Morgan Delaney Ramaker, Anne-Marie Fennell, Art James, Janet McKelvey, Ruben Montes de Oca, Amy Moran Lowe, Susan Neill, Kenneth Patton, Angela Pleasants, Scott Purdy, Mark Ramage, Sylvia Schatz, and Suzanne Sterling. The objectives of this study were to examine the following issues related to higher education institutions performing basic research for the Department of Defense (DOD): (1) the variation in proposed and negotiated indirect cost rates and factors that may contribute to variations; (2) how and to what extent the administrative cap and the DOD basic research cap limit reimbursement of indirect costs; and (3) the methods DOD uses for overseeing compliance with indirect cost reimbursement for grants and the extent to which each method was used. 
To identify the proposed and negotiated indirect cost rates of schools performing basic research for DOD and factors that may contribute to variation in the rates, we collected and analyzed information from a probability sample of schools that performed basic research for DOD in fiscal year 2007, according to DOD-provided data. Detailed information on the survey is available below. We also interviewed government officials at DOD and other government agencies on regulations and policies relating to the reimbursement of indirect costs for research. This included cognizant rate-setting officials in DOD’s Office of Naval Research and the Department of Health and Human Services’ (HHS) Division of Cost Allocation; DOD officials in offices awarding and overseeing basic research, such as the Air Force Office of Scientific Research; the Army Research Office; the Office of the Director, Defense Research and Engineering; the Defense Threat Reduction Agency; and the Defense Advanced Research Projects Agency; and regulatory officials overseeing indirect cost policy in the Office of Management and Budget (OMB). Through interviews with these officials, we obtained views and documentation on the indirect cost process and on factors that they believed might contribute to variation in rates. In addition, we spoke with representatives of the academic community to obtain information on their perspectives on the indirect cost rate-setting process and to better understand the information that would be available through our survey. These representatives included university faculty and research administrators, as well as associations representing the research community, including the Association of American Universities (AAU), the Council on Government Relations (COGR), the Association of Public and Land-grant Universities, and the Federal Demonstration Partnership. 
Through these interviews, we obtained perspectives on factors that they believed might contribute to variation in rates and information on the impacts of federal regulation on the research community. Finally, we reviewed reports and documentation pertaining to indirect cost regulation, policies, and processes, most notably OMB Circular A-21, governing school indirect cost reimbursement, but also including cognizant rate-setting agency process documentation, past GAO reports, and reports on indirect costs by other organizations, such as the RAND Corporation and Arthur Andersen. To determine how and to what extent the administrative cap and the DOD basic research cap limit reimbursement of indirect costs at higher education institutions performing DOD research, we collected and analyzed information from a probability sample of schools that performed DOD basic research in fiscal year 2007, according to DOD-provided data (see detailed survey information below). We also interviewed government officials at OMB and at the cognizant rate-setting agencies on regulations and policies relating to the application of caps to indirect cost rates. Through interviews with these officials, we obtained views on how the caps were developed and their perspectives on how the selected caps may affect research institutions. In addition, we spoke with representatives of the academic community to obtain information on their perspectives on the selected caps and to better understand the information that would be available through our survey. These representatives included university faculty and research administrators, as well as associations representing the research community. Through these interviews, we obtained information on the impacts of the selected caps on the research community, as well as on what data would be available to evaluate these impacts.
Finally, we reviewed documentation pertaining to selected indirect cost caps, including OMB Circular A-21, the Department of Defense Appropriations Act, 2008, and legislative and regulatory histories related to the caps. To determine the methods DOD uses to oversee indirect cost reimbursement on grants, we interviewed DOD officials at the Office of the Director, Defense Research and Engineering, and the Office of Naval Research (ONR). To identify the extent to which each of the three methods identified by DOD was used, we focused on the 32 schools representing more than half of DOD’s fiscal year 2007 basic research obligations, based on DOD data. We obtained information about the extent to which these methods are used from DOD, independent public accounting firms conducting the annual single audit, higher education institutions, cognizant agencies for audit, and previous reports by GAO and others. We also interviewed officials at DOD, OMB, HHS, the National Science Foundation, and the Department of Education, as well as independent public accounting firm representatives and higher education representatives. To determine the extent to which the first method, the single audit, was used, we collected and analyzed data provided by independent public accounting firms, state auditors, and DCAA on their sampling of research and development awards for the fiscal year 2008 single audit. To determine the extent to which the second method, the grant closeout process, was used by DOD, we reviewed DOD documentation and examples related to the various processes to close out a DOD basic research grant. For example, we reviewed examples of financial information on a standard form that tracks the status of the grant, including direct and indirect costs. We discussed with DOD award officers how the grant closeout processes, procedures, and forms check indirect costs to determine whether these costs were accurately charged during the award.
In addition, we interviewed OMB officials to determine the purpose of the Indirect Expense section of the form developed by OMB, the Federal Financial Report (SF-425). To determine the extent to which the third method, audits by DCAA and cognizant agencies for audit, was used, we identified the agencies with audit cognizance for the 32 schools we examined. We requested and obtained information from the four agencies with cognizance for the 32 schools—NSF, Education, HHS, and DOD. In addition, we collected information on the number and type of audits of DOD awards conducted by DCAA and by HHS at schools in fiscal year 2008 related to indirect costs. Through interviews with officials from the four agencies, we obtained information and documentation related to their audit cognizance programs. We also learned about school audit programs at one of the cognizant agencies for audit and heard its views on why it no longer relies on the single audit to oversee the reimbursement of indirect costs. To address our first and second objectives, we designed and conducted a mail-based survey of a sample of schools in the U.S. that received basic research awards from DOD that were active in fiscal year 2007. The study population consisted of all U.S.-based schools receiving more than $100,000 in DOD funds to do basic research in fiscal year 2007. We developed our sample frame from DOD-provided award-level data. After excluding schools with $100,000 or less in DOD basic research dollars in fiscal year 2007, there were a total of 343 schools in our population. From this population, we selected a random sample of 178 schools, stratified by total award dollars. The first stratum consisted of the 32 institutions with the highest amount of DOD basic research award dollars in fiscal year 2007, accounting for over 50 percent of the DOD basic research dollars for that year. All of these 32 schools were selected in our sample.
From the remaining 311 schools in the second stratum, we selected a random sample of 146 for our study. The population, sample, and survey disposition by stratum are displayed in the following table. Several of the sample schools were determined to be “out-of-scope” for the purposes of this study. In particular, we determined that 13 of the institutions selected in stratum 2 either did not receive basic research awards in fiscal year 2007 (e.g., the award was actually to a different institution) or the research institution was not a university (e.g., the award was to a university-affiliated nonprofit not subject to higher education indirect cost rules). These schools were dropped from the analysis. Of the 178 selected institutions, we obtained usable responses from 144 for an overall response rate of about 87 percent. The survey data were collected from July 2009 to October 2009. The survey was designed to collect information about a school’s indirect cost rates and demographic information, as well as the university administrator’s opinions on factors influencing the rates. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. We express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Unless otherwise noted, in this report estimates of indirect cost rates based on our survey have 95 percent confidence intervals within +/- 2 percentage points of the estimate itself. Estimates of the percentage of schools with particular characteristics have 95 percent confidence intervals within +/- 7 percentage points of the estimate itself. Estimates of totals based on this sample are presented along with their corresponding 95 percent confidence interval in the report.
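The stratified design described above, a certainty stratum of the 32 largest DOD basic research recipients plus a simple random sample from the remainder, can be sketched as follows. The function name and seed are illustrative assumptions:

```python
import random

def draw_stratified_sample(certainty_stratum, second_stratum, n_random, seed=None):
    """Take every school in the certainty stratum plus a simple random
    sample (without replacement) of n_random schools from the second
    stratum, mirroring the two-stratum design described above."""
    rng = random.Random(seed)
    return list(certainty_stratum) + rng.sample(list(second_stratum), n_random)

# 343 in-scope schools: all 32 certainty selections plus 146 schools
# sampled at random from the remaining 311 yields the 178-school sample.
sample = draw_stratified_sample(range(32), range(32, 343), 146, seed=0)
```

Because the first stratum is selected with certainty, only the second stratum contributes sampling error to population estimates.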
At each school, we also selected a nongeneralizable sample of one DOD research award with over $10,000 of activity in fiscal year 2007 and requested award and reimbursement data pertaining to that year of the award. Where appropriate, we followed up on unclear answers by calling the school-identified key contact at the sampled institutions for clarification. Each school in our sample can have multiple proposed and negotiated indirect cost rates, for a number of categories of rate agreements (e.g., on-campus organized research, off-campus organized research, instruction, or other sponsored activities). Unless otherwise specified in this report, estimates relating to proposed or negotiated rates are limited to the rate that is associated with on-campus organized research. We focused on this rate because it represents a comprehensive measure of indirect costs for research to be reimbursed by the federal government, including both facilities and administrative costs. In our questionnaire, we asked the university administrator to provide the proposed indirect cost rate applicable to fiscal year 2007, along with the elements of that proposed rate (the portions for facilities, administration, and carry-forward). Throughout our report, estimates related to the proposed fiscal year 2007 indirect cost rates are not based on the proposed rate provided by the school. Instead, we calculated each school’s proposed rate based on the information in the survey responses. Specifically, the GAO version of the proposed rate is the sum of the fiscal year 2007 proposed facilities component of the indirect cost rate, the carry-forward, and the lesser of the proposed administrative component or 26 percent. We used this as the proposed rate for our analysis to ensure proposed rates were comparable across schools. In addition to sampling error, the practical difficulties of conducting any survey may introduce nonsampling errors, such as nonresponse bias or measurement errors.
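The GAO version of the proposed rate described above is a simple formula over the rate components; a minimal sketch follows, with an illustrative function name and example components that are assumptions rather than survey data:

```python
def gao_proposed_rate(facilities, administration, carry_forward, admin_cap=26.0):
    """GAO's comparable proposed rate: the proposed facilities component
    plus the carry-forward plus the lesser of the proposed administrative
    component or the 26 percent cap. All values are percentage points."""
    return facilities + carry_forward + min(administration, admin_cap)

# A school proposing a 30.0 facilities component, a 28.5 administrative
# component, and a 1.0 carry-forward would be counted at 30.0 + 1.0 + 26.0,
# since the administrative component is capped.
rate = gao_proposed_rate(30.0, 28.5, 1.0)
```

Capping the administrative component in the calculation is what makes proposed rates comparable across schools regardless of how much administration each school proposed.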
We took steps in developing the questionnaire, collecting the data, and analyzing the data to minimize such errors. To gain an initial understanding of the information reasonably available from schools about their research costs, as well as their thoughts on factors that may influence a school’s indirect cost rate, we reviewed regulatory documents pertaining to indirect cost rates, particularly OMB Circular A-21, and interviewed officials from government agencies, including rate-setting officials from HHS’s Division of Cost Allocation and DOD’s Office of Naval Research. We also spoke with university research administrators and representatives of associations representing the interests of research institutions and reviewed past GAO reports and analysis by other organizations to identify possible influencing factors. We pretested our survey instrument with six schools to determine whether the questions were clear, whether the questionnaire placed an undue burden on schools, and what sources of information schools would use to answer the questions. The information learned through the pretests was used to refine the survey. To ensure the best possible response rate, we contacted individuals from each sampled school via telephone to identify the person most qualified to answer our survey questions and to confirm the mail, phone, and e-mail information for that individual. After revising the survey to incorporate pretest comments and identifying the best survey contact, we mailed the survey to sampled schools and followed up with nonrespondents by e-mail and telephone to encourage their responses, and we followed up with survey respondents to clarify unclear or unlikely responses. We conducted this performance audit from October 2009 to September 2010 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Analysis of average negotiated rates across a 10-year period indicates that these rates have remained generally stable over time. The average negotiated rate across schools increased by less than a percentage point in total over the 10-year period, from 48.4 percent in 1999 to 49.3 percent in 2008. Schools perceive different HHS Division of Cost Allocation (DCA) field offices as using differing and inconsistent practices for negotiating rates. For example, university officials GAO interviewed described negotiations with the New York office as less arbitrary, whereas they described “unwritten rules” to limit reimbursement at the Dallas office. While we did not examine the particular practices at DCA field offices, our survey results indicate there was not a significant difference in negotiated rate reductions across these offices. Specifically, average rate reductions negotiated by each of DCA’s four field offices are all relatively close to one another, and HHS described processes that it uses to ensure consistency across the field offices. Variation in schools’ proposed and negotiated rates based on the DCA field office with which a school negotiates may be driven more by geographic factors than by the specific actions of the offices. 
Schools proposed and negotiated significantly higher rates with the New York field office (with an average proposed rate of 60.6 percent and an average negotiated rate of 55.9 percent) than the rates proposed to and negotiated by the other DCA field offices (average proposed rates in the other three regions all fall between 51 and 52 percent, and average negotiated rates in the other three regions all fall between 46 and 48 percent). This result is consistent with similar findings for average rates based on the school’s geographic region, regardless of the cognizant rate-setting agency. Specifically, schools in the Northeastern region negotiated an average rate of 56.4 percent, with all other regions falling near 48 percent. According to the Bureau of Labor Statistics, the cost of goods and services, as measured by items such as housing, health care, fuel, and utilities, varies by region, and costs in the Northeast region are generally higher than in other regions. On average, schools receiving the largest proportion of DOD basic research money proposed and negotiated higher rates than those schools with a smaller DOD basic research volume. For fiscal year 2007, more than half of DOD’s basic research funding was obligated to just 32 schools, all of which were included in GAO’s survey sample. These schools proposed an average rate of 57.3 percent and negotiated an average rate of 53.0 percent. For the remaining schools in the population, the average proposed rate was 52.5 percent and the average negotiated rate was 48.8 percent. These differences may be related more generally to the size or research-intensive nature of the schools. Another factor that may be related to the amount of funding received is whether a school negotiated a separate rate for DOD contracts that allows for reimbursement of an administrative component without the standard 26 percent cap. Schools negotiating such a rate, on average, proposed higher indirect cost rates (i.e.,
for awards other than DOD contracts) than those schools that did not—57.5 percent versus 52.6 percent. A similar relationship held true for negotiated rates, with schools negotiating a separate DOD contract rate receiving an indirect cost rate of 53.0 percent, compared to 48.9 percent for schools not negotiating a separate rate. One of the main reasons schools provided for not negotiating a separate DOD contract rate was that they did not have enough research volume in DOD contracts to make it worthwhile. Consequently, this difference in estimated rates may be driven by higher overall rates at schools doing more research for DOD, which are more likely to find value in a separate DOD contract rate. Based on survey responses, there was generally not much variation between the negotiated rate and the rate applied to a specific DOD basic research award. As part of GAO’s survey, data were collected on a nongeneralizable selection of one DOD award from each of the schools surveyed. Most schools reported that the rate applied to the selected award was their negotiated fiscal year 2007 rate. However, about one-third of schools reported that in 2007 a reimbursement rate other than their negotiated fiscal year 2007 rate was applied to the award on which GAO requested information. When there was a variation, the primary reason schools identified for the difference was the OMB guidance that requires a school to use the rates negotiated at the time of an award throughout the life of that award, known as the “fixed for the life of the award” provision. For example, one such school reported a fiscal year 2007 negotiated rate of 55.5 percent, but indicated that the selected award was reimbursed at a rate of 54.0 percent, representing the school’s negotiated rate in fiscal year 2003, when the award was initiated.
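The “fixed for the life of the award” provision described above amounts to a lookup on the year the award was initiated. A minimal sketch follows; the function name is an assumption, and the rate table uses the two values from the report’s example:

```python
def reimbursement_rate(negotiated_rates, award_start_year):
    """Under OMB's fixed-for-the-life-of-the-award provision, the rate
    negotiated when the award was initiated applies for the award's
    entire life, regardless of later renegotiations."""
    return negotiated_rates[award_start_year]

# The school in the report's example negotiated 54.0 percent in FY2003
# and 55.5 percent in FY2007; an award initiated in 2003 is still
# reimbursed at the 54.0 percent rate in 2007.
rates = {2003: 54.0, 2007: 55.5}
applied = reimbursement_rate(rates, 2003)
```

This is why a school’s current negotiated rate and the rate actually applied to an older award can legitimately differ.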
Tables 9 and 10 contain a list of the factors GAO analyzed to identify which ones may contribute to variations in proposed indirect cost rates, negotiated indirect cost rates, and the difference between proposed and negotiated indirect cost rates. In addition to those discussed in the body of the report, we found statistically significant differences in relation to the following factors:
Geographic region: Northeast region schools both proposed and negotiated higher rates than other regions.
HHS DCA field office: Schools negotiating with the DCA Northeastern field office both proposed and negotiated higher rates than other DCA regions.
Stratum: The 32 schools with the largest DOD research volume both proposed and negotiated higher rates than the schools with smaller research volume.
Type of institution: Private schools both proposed and negotiated higher rates than public schools.
Negotiation of separate indirect cost rate for DOD contracts: Schools negotiating a separate rate for DOD contracts both proposed and negotiated higher rates for non-DOD-contract awards than schools that did not negotiate such a rate.
Proposal cost type: The difference between the proposed and negotiated rate was higher at schools using actual base year costs in their proposal than at schools using projected costs.
The following data represent the responses of schools that reported using the standard form for their indirect cost rate proposal in fiscal year 2007. These schools receive more than $10 million in annual federal grants. In this report we produce population estimates for the schools within the United States that use the standard format, which we estimate would correspond to about 263 schools. All estimates based on this survey are subject to sampling error. The 95 percent confidence interval for this estimate is from 244 to 283 schools.
Percentage estimates presented in this appendix have 95 percent confidence intervals within +/- 7 percentage points of the estimate itself, unless otherwise noted. [Survey tables omitted: responses for standard-form schools are tabulated by HHS DCA field office—Dallas (Central), New York (Northeastern), San Francisco (Western), and Washington, DC (Mid-Atlantic); by reason the applied rate differed from the negotiated rate—cost not allocable or allowable, cost associated with cost sharing, and the fixed for the life of the award provision; and by responses to the question "What F&A cost rates for organized research did your institution have for FY1999 to FY2008 for federal grants and contracts?"]
Some schools submit their indirect cost rate proposals to the federal government using a simplified method instead of the standard form. Each of these schools’ total direct cost of work is no more than $10 million, and they are therefore considered small enough to use a simplified method of proposing a rate. Our survey sample included schools that may have used either the standard form or the simplified method for their rate proposal for fiscal year 2007. The following data represent the responses of schools that reported using the simplified method in fiscal year 2007. Because this survey was not designed to produce reliable estimates for simplified method schools, the 95 percent confidence intervals are wider than for other survey estimates and are noted along with each table. [Survey tables omitted: responses for simplified-method schools are tabulated by HHS DCA field office—Dallas (Central), New York (Northeastern), San Francisco (Western), and Washington, DC (Mid-Atlantic).]
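The confidence intervals reported for these survey estimates follow the usual form of a point estimate plus or minus a margin of error. The sketch below is a generic normal-approximation calculation, not GAO's actual estimation code; the standard error of 10 schools is a hypothetical value chosen to be roughly consistent with the reported interval of 244 to 283 schools around the 263-school estimate.

```python
# Minimal sketch (not GAO's estimation code): a normal-approximation
# 95 percent confidence interval for a survey estimate. The standard
# error of 10 schools is hypothetical, chosen to be roughly consistent
# with the reported interval of 244 to 283 schools.

Z_95 = 1.96  # critical value for a 95 percent confidence level

def confidence_interval(estimate, standard_error, z=Z_95):
    """Return the (lower, upper) bounds of the interval estimate +/- z*SE."""
    margin = z * standard_error
    return estimate - margin, estimate + margin

lower, upper = confidence_interval(estimate=263, standard_error=10)
print(f"Point estimate: 263 schools; 95% CI: about {lower:.0f} to {upper:.0f}")
```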
In fiscal year 2007, the majority of the Department of Defense's (DOD) basic research obligations were provided to higher education institutions. DOD reimburses these institutions for both direct and indirect costs for research. Two federal agencies, DOD and the Department of Health and Human Services (HHS), negotiate indirect cost rates used to reimburse higher education institutions for indirect costs on federally funded research awards, including DOD awards. GAO was asked to examine the following issues related to higher education institutions performing basic research for DOD: (1) the variation in proposed and negotiated indirect cost rates and factors that may contribute to variations; (2) how and to what extent the administrative cap and the DOD basic research cap limit reimbursement of indirect costs; and (3) the methods DOD uses for overseeing compliance with indirect cost reimbursement for grants. GAO surveyed a generalizable sample of higher education institutions performing basic research for DOD; reviewed agency guidance and policies; and interviewed officials from federal agencies, independent public accounting firms, and higher education institutions. GAO identified wide variation in indirect cost rates at schools receiving DOD funding in fiscal year 2007, which may be related to a number of factors. For example, the average difference between a school's proposed and its negotiated rate was much larger for schools with HHS as the cognizant rate-setting agency than for those with DOD, in part due to the agencies' differing approaches to negotiation. GAO also found that schools receiving a 1.3 percent add-on to their rate to assist with the cost of utilities both proposed and negotiated higher rates than those without the adjustment. Contrary to guidance to periodically review school eligibility, the fixed list of schools eligible to receive this add-on has not been revisited since established in 1998. 
The cap on the administrative portion of the indirect cost rate limited fiscal year 2007 reimbursement for about 83 percent of schools. The cap was established nearly 20 years ago with the intent of limiting federal reimbursement for schools' administrative costs, and OMB has not reexamined it since its implementation. GAO estimates the DOD basic research cap might have limited fiscal year 2008 reimbursement for some awards at about 22 percent of schools, but the limitation depends on the types of costs included in each individual award and is difficult to determine on a schoolwide basis before total costs for each award are tallied. GAO identified weaknesses in the three methods DOD reports using to oversee whether indirect costs for research grants are reimbursed appropriately: the single audit, the closeout process, and audits by DOD's Defense Contract Audit Agency or by cognizant agencies for audit. At least one of the three methods was used at most of the schools GAO reviewed, but four schools were not covered by any of the methods, indicating a gap in coverage. In discussions with cognizant agencies for audit, GAO learned that recent HHS audits of research awards to schools have led to some significant findings of improper billings of indirect costs. Inconsistencies in rate-setting and reimbursement processes lead to perceived and actual differences in the treatment of schools. Moreover, because of the weaknesses in its oversight methods, DOD lacks assurance that it is reimbursing indirect costs appropriately. GAO is making recommendations to address consistency in rate-setting and to improve oversight of indirect cost reimbursement. The agencies generally agreed with these recommendations.
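The mechanics of the administrative cap can be sketched as follows. This is a simplified illustration: the 26 percent cap on the administrative component is from the report, but the example facilities and administrative component values are hypothetical.

```python
# Simplified illustration of how the administrative cap limits a school's
# indirect cost rate: the facilities component is reimbursed as proposed,
# but the administrative component is capped at 26 percentage points.
# The example component values below are hypothetical.

ADMIN_CAP = 26.0  # cap on the administrative component, in percentage points

def capped_rate(facilities_component, admin_component, cap=ADMIN_CAP):
    """Total reimbursable rate with the administrative component capped."""
    return facilities_component + min(admin_component, cap)

# A school whose actual administrative costs exceed the cap is limited:
print(capped_rate(facilities_component=28.0, admin_component=30.0))
# A school whose administrative component is below the cap is unaffected:
print(capped_rate(facilities_component=28.0, admin_component=24.0))
```

A school in the first case absorbs the administrative costs above the cap, which is why the cap limited reimbursement for the roughly 83 percent of schools whose administrative costs exceeded 26 percentage points.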
The Tongass, one of 154 national forests managed by the Forest Service, is located in southeast Alaska and is the largest national forest in the country (see fig. 1). Given its size, the Tongass, within the Forest Service’s Alaska Region, is divided into 10 ranger districts. The Tongass is approximately 16.7 million acres, about 10 million acres of which are forested. Of the forested acres, the Forest Service classifies approximately 5.5 million acres as being “productive forest.” Like other national forests, the Tongass is managed for multiple uses, of which timber harvest is one. Timber harvest on national forests is generally carried out under timber sales conducted by the Forest Service. To conduct a timber sale, the Forest Service identifies a sale area, conducts the required environmental analyses, appraises the timber, and solicits bids from buyers interested in purchasing the timber. The Forest Service then prepares the timber sale contract and marks the sale boundary and the trees to be cut or left. The purchaser is responsible for cutting and removing the timber, with the Forest Service monitoring the harvest operations. The Forest Service expends funds to prepare, manage, and oversee timber sales and to conduct required environmental analyses. It also receives revenues for the timber it sells. The Forest Service reported an average of $12.5 million annually in timber-related expenditures for the Tongass from fiscal years 2005 to 2014. During that period, it reported receiving an average of $1.1 million in revenues associated with timber harvested from the Tongass. The National Forest Management Act requires the Forest Service to develop forest plans to govern management activities such as timber harvesting. For timber harvest activities, forest plans typically identify areas where timber harvest is permitted to occur and set a limit on the amount of timber that may be harvested from the forest. 
The Forest Service is required by the act to update forest plans at least every 15 years and may amend a plan more frequently to adapt to new information or changing conditions. Under the current Tongass forest plan, as amended in 2008, the Forest Service authorized up to 267 million board feet to be harvested annually from the Tongass. The 2008 plan generally prohibits timber harvest in roadless areas and in certain environmentally sensitive areas, such as near streams and beaches. Forest plans are subject to the National Environmental Policy Act, under which the agency evaluates the likely environmental effects of its actions using an environmental assessment or, if the actions likely would significantly affect the environment, a more detailed environmental impact statement (EIS). The Forest Service began offering timber sales in the Tongass in the early 1900s. Timber harvest increased substantially in the 1950s, according to Forest Service statistics, as construction of pulp mills in Ketchikan and Sitka generated higher demand for Tongass timber (see fig. 2). Timber harvest peaked at an annual average of approximately 494 million board feet in the 1970s. Harvest has since declined, to an annual average of approximately 46 million board feet for 2000 through 2009 and to approximately 33 million board feet for 2010 through 2014. Timber industry employment has also declined, from approximately 2,500 in 1982 to 249 in 2014, according to Forest Service documents. A number of laws and regulations have reduced the number of acres where timber harvest is allowed on national forests, both nationwide and in the Tongass. 
Specifically, according to statistics provided to us by Forest Service officials, of the approximately 5.5 million acres of productive forest in the Tongass, approximately 2.4 million acres are not available for harvest because of statutory provisions, such as wilderness designations, and another 1.8 million acres are not available for harvest because of other factors, such as USDA adopting the roadless rule. From the early 1900s through 2014, approximately 462,000 acres of timber were harvested in the Tongass, according to Forest Service officials, a figure representing approximately 8 percent of the productive forest originally found in the Tongass. Larger trees, which are important for wildlife habitat and biodiversity, have been harvested at a higher rate; the Forest Service has reported that 20 percent of Tongass acres containing the largest classes of trees have been harvested. Many of the areas in southeast Alaska with the largest classes of trees, however, are located on lands not managed by the Forest Service, such as lands owned by Alaska Native corporations or the State of Alaska. Across all land ownerships, the Forest Service reported that 32 percent of the acres in southeast Alaska with the largest trees had been harvested. In 2010, USDA announced its intent to transition the Tongass timber program to one predominantly based on young growth. The Secretary of Agriculture subsequently said that the transition would allow for more ecologically, socially, and economically sustainable forest management. In November 2015, the Forest Service released for public comment a draft EIS that analyzed five alternatives for undertaking the transition to young-growth harvest in the Tongass. The Forest Service expects to issue a final EIS describing the agency’s final decision regarding how it will implement the planned transition in December 2016. Figure 3 shows a timeline of events associated with the planned transition to young growth. 
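The acreage figures above can be checked with simple arithmetic. This is a back-of-the-envelope sketch using only the numbers reported in the text, not Forest Service analysis code.

```python
# Back-of-the-envelope check (not Forest Service code) of the Tongass
# acreage figures reported in the text, all in acres.

productive_forest = 5_500_000
unavailable_statutory = 2_400_000   # e.g., wilderness designations
unavailable_other = 1_800_000       # e.g., adoption of the roadless rule
harvested_since_1900s = 462_000

available = productive_forest - unavailable_statutory - unavailable_other
harvested_share = harvested_since_1900s / productive_forest

print(f"Productive acres not otherwise restricted: {available:,}")
print(f"Share of productive forest harvested: {harvested_share:.1%}")
```

The harvested share works out to roughly 8 percent, matching the figure cited by Forest Service officials.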
The draft EIS concluded that a substantial reduction in old-growth harvest relative to what the Forest Service allowed under the 2008 forest plan (e.g., by transitioning to young-growth harvest) would enhance the Forest Service’s old-growth conservation strategy for the Tongass over the long term. In reaching this conclusion, the draft EIS noted that while many wildlife species in the Tongass are associated with more than one habitat type, most inhabit old-growth forests or prey on species that inhabit old-growth forests, and that certain areas of old-growth forest that are particularly important to many wildlife species had been heavily harvested. It also recognized that recent legislation had removed from the Tongass certain old-growth reserves that had been designated as part of the agency’s old-growth conservation strategy. The five alternatives described different time frames for making the transition (see app. II). In developing the alternatives, the Forest Service established 46 million board feet as the projected annual timber sale quantity—the estimated quantity of timber that the agency expects to sell each year during the first 15 years of the transition. The Forest Service considered different mixes of old- and young-growth harvest over a 100-year period, with the proportion of old-growth harvest decreasing over time until it reached the agency’s target of 5 million board feet. In the draft EIS, the Forest Service evaluated the five alternatives on a number of factors, including the time the agency projected it would take to reduce the annual old-growth harvest to 5 million board feet, and identified its “preferred alternative,” which the agency projected would allow it to make the transition within 16 years after adopting the forest plan amendment (see table 1).
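One way to picture the harvest mix described above is a simple phase-down schedule. The sketch below is purely illustrative under an assumed linear decline; it is not the mix of harvests the Forest Service actually projected in the draft EIS. Only the 46 million board foot annual sale quantity, the 5 million board foot old-growth target, and the 16-year time frame come from the text.

```python
# Purely illustrative phase-down schedule (NOT the draft EIS projections):
# a linear decline in old-growth harvest from the full 46 million board
# feet down to the 5 million board foot target over a 16-year transition,
# with young growth making up the remainder of the annual sale quantity.

ANNUAL_SALE_MMBF = 46.0   # projected annual timber sale quantity (from text)
OLD_GROWTH_FLOOR = 5.0    # old-growth harvest target (from text)
TRANSITION_YEARS = 16     # preferred alternative's projected time frame

for year in range(0, TRANSITION_YEARS + 1, 4):
    fraction_done = year / TRANSITION_YEARS
    old_growth = ANNUAL_SALE_MMBF - fraction_done * (ANNUAL_SALE_MMBF - OLD_GROWTH_FLOOR)
    young_growth = ANNUAL_SALE_MMBF - old_growth
    print(f"Year {year:2d}: old growth {old_growth:4.1f} MMBF, "
          f"young growth {young_growth:4.1f} MMBF")
```

Under this assumed linear schedule, young-growth harvest would have to grow from nothing to about 41 million board feet per year by year 16, which conveys why the agency's young-growth supply and the industry's processing capacity are central to the transition's feasibility.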
To achieve the young-growth harvest levels projected in the preferred alternative, the Forest Service stated that it would allow some harvest in areas where it is not allowed under the 2008 forest plan, such as certain areas near streams and beaches. According to Forest Service officials, these areas were often among the first to undergo old-growth harvest in the 20th century and contain some of the most mature young-growth stands in the Tongass. Without access to these areas, Forest Service officials told us, it will be difficult for the agency to achieve the young-growth harvest levels associated with the preferred alternative. As a result, Forest Service officials said, allowing limited harvest in these areas is needed for the agency to increase its harvest of young-growth timber in the early years of the transition sufficiently to reduce the harvest of old-growth timber. Timber harvest in the Tongass also affects other economic sectors in southeast Alaska that depend on natural resources—including fishing and tourism, which, as noted, represent approximately 25 percent of employment in the region. For example, salmon, which spawn in streams in the Tongass, are key species for the commercial fishing industry, and timber harvest can alter water flow and sediment runoff, both of which can affect salmon. Timber harvest may also diminish the scenic and natural values that attract some visitors to the region, potentially affecting the tourism industry. Conversely, roads that are constructed as part of timber sales may provide easier access to hunting and berry-picking sites in the Tongass. In addition, numerous small communities are located in or adjacent to the Tongass. The Forest Service, in its draft EIS, recognized that its management decisions affect those communities and also that some communities may be disproportionately affected by these decisions.
The USDA Investment Strategy in Support of Rural Communities in Southeast Alaska 2011-2013 identified four federal agencies with diverse missions—the Forest Service, Farm Service Agency, and Rural Development within USDA and the Economic Development Administration within the Department of Commerce—involved in actions to help support the timber industry and other economic sectors as part of the planned transition to young-growth harvest. The Forest Service manages 154 national forests and 20 national grasslands for multiple uses, including timber, recreation, and watershed management and to sustain the health, diversity, and productivity of these lands to meet the needs of present and future generations. The Farm Service Agency administers a variety of programs benefitting farmers and ranchers, including farm commodity programs, farm loans, and conservation programs. Rural Development administers financial programs to support public facilities and services such as water and sewer systems, housing, health clinics, and emergency service facilities. It also provides grants, loans, and loan guarantees to farmers, ranchers, and rural small businesses to assist in developing renewable energy systems and improving energy efficiency. The Economic Development Administration fosters regional economic development efforts by, for example, offering grants to support development in economically distressed areas. The Forest Service has initiated some steps to assess whether its planned transition to young-growth harvest in the Tongass is likely to support a viable timber industry in southeast Alaska—one of the key goals laid out in the Secretary of Agriculture’s 2013 memorandum discussing the transition. The Forest Service has estimated the volume of young-growth timber available for harvest over the next 100 years and has also identified a number of factors that may affect the viability of a young-growth timber industry in southeast Alaska. 
Forest Service officials told us the agency has also begun an effort to compare the potential market prices for young-growth timber or products to the cost to harvest, transport, and process the timber. One key factor in the viability of the timber industry in southeast Alaska is the volume of timber—both young growth and old growth—available to be harvested. To support its planned transition to young-growth harvest, the Forest Service identified the number of acres of young-growth forest suitable for timber production in the Tongass—251,000 acres—and used a model that projects forest growth to estimate the volume of timber those acres will contain over the next 100 years. Using this information, the Forest Service in November 2015 published its draft EIS that evaluated five alternatives for amending the forest plan for the Tongass to facilitate the transition to young-growth harvest. In its draft EIS, the Forest Service reported taking a number of steps to refine its data on the amount of young-growth timber available for harvest in the Tongass. For example, it reported updating its young-growth timber inventory, including removing from agency databases those lands previously managed by the Forest Service that have been conveyed to other parties. It also reported contracting with a consultant to develop the model used to project future growth and timber yields from young-growth timber stands in the Tongass. The Forest Service also recognized that a number of factors could reduce the harvest of young-growth timber below the volume the agency estimated to be available and took steps to account for this potential reduction—referred to as “falldown”—in its estimates of young-growth availability. Agency data on young-growth volume used in the draft EIS include some timber that will not be economically feasible to harvest or that is located in areas where harvest will not be allowed.
For example, a Forest Service official told us that some young-growth areas consist of small or isolated areas where the volume of timber is insufficient to warrant the cost of harvesting it. In addition, timber harvest is not allowed in proximity to fish-bearing streams, and some young-growth areas may contain fish-bearing streams that were not previously identified by the agency. The official explained that factors such as these are likely to reduce the volume of young-growth that will be harvested but are often not discovered until the agency begins to prepare a timber sale in a particular area. In developing the alternatives for the draft EIS, the Forest Service reduced its estimate of the volume of young-growth timber available to be harvested to account for such falldown. The Forest Service also identified factors—such as the agency’s cost of preparing timber sales and potential delays because of appeals and lawsuits—that could affect its ability to sell the volume of timber it projected in the draft EIS. The Tongass Advisory Committee—a group convened by the Secretary of Agriculture under the Federal Advisory Committee Act—also recognized the uncertainty surrounding the volume of timber that will be able to be harvested, and recommended in December 2015 that the Forest Service support a stakeholder group that would monitor progress in achieving the timber harvest levels proposed in the draft EIS. In January 2016, Forest Service officials told us they agreed that monitoring would be important to help the agency and its stakeholders understand the extent to which the agency was meeting its projected harvest levels, but had not decided on how they would do so. The officials said that they expected the final forest plan amendment to describe the agency’s planned monitoring activities. 
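The falldown adjustment described above amounts to discounting a modeled volume estimate by factors for stands that turn out to be uneconomical or off-limits. The sketch below uses entirely hypothetical numbers, since the report does not give the Forest Service's actual falldown percentages; only the categories of falldown come from the text.

```python
# Hypothetical sketch of a "falldown" adjustment: reduce the modeled
# young-growth volume by factors accounting for uneconomical stands and
# areas where harvest turns out not to be allowed. All numeric values
# here are invented for illustration; the report does not give the
# Forest Service's actual factors.

modeled_volume_mmbf = 100.0        # hypothetical modeled young-growth volume
falldown_factors = {
    "small or isolated stands": 0.05,      # too little volume to justify cost
    "unmapped fish-bearing streams": 0.03, # buffers discovered at sale prep
}

usable_volume = modeled_volume_mmbf * (1 - sum(falldown_factors.values()))
print(f"Volume after falldown: {usable_volume:.1f} MMBF "
      f"({sum(falldown_factors.values()):.0%} reduction)")
```

As the text notes, many of these reductions are discovered only when the agency prepares a specific timber sale, which is why monitoring actual harvest against projections matters.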
Officials also told us that the Forest Service intends to continue refining its young-growth timber data, noting, for example, that in July 2015 the agency signed a cost-share agreement with the State of Alaska to survey additional young-growth areas. In addition to the supply of timber available, the viability of a young-growth timber industry in southeast Alaska is affected by the demand for young-growth wood, which in turn is affected by the value (i.e., market price) of the wood products made from it; the value of these products depends in part on the cost of producing them. Young growth has different wood characteristics, such as appearance, than old growth, which can affect its value. According to the draft EIS, southeast Alaska is one of the few places in western North America that produces wood from slow-grown, large trees (i.e., old growth). Wood from such trees may have more attractive grain characteristics and be used for higher-value products—such as musical instruments or certain types of window frames and doors—where appearance is important. In contrast, the draft EIS reported that wood from young-growth trees from the Tongass is more likely to be used for lower-value products, such as dimension lumber (i.e., lumber used for structural framing), where appearance is not as important. With regard to production costs, the Forest Service has identified several challenges facing the timber industry in southeast Alaska—including higher labor and energy costs and the industry’s distance from markets in the contiguous United States—that raise its costs compared to other timber-producing areas of North America. On the other hand, southeast Alaska is closer to Asia—historically a significant market for timber from southeast Alaska—than these other timber-producing areas, which Forest Service officials told us could result in lower relative costs to ship timber from the Tongass to Asian markets.
Forest Service officials told us they recognized these factors, and that both the agency and the industry are exploring the types of products that can be produced in an economically viable manner from Tongass young growth. Young-growth timber harvested from the Tongass can be either shipped unprocessed out of the region or processed into lumber or other products in southeast Alaska. In either case, timber and products from the Tongass compete in broad economic markets and are likely to face challenges competing in those markets, according to the Forest Service’s draft EIS. For example:
Young-growth logs for export. Exporting sawlogs (i.e., unprocessed logs) is likely to be a major component of the southeast Alaska timber industry during the transition, according to the draft EIS. The draft EIS reported that most timber harvested in southeast Alaska, including from the Tongass and from lands owned by Alaska Native corporations and the State of Alaska, is exported as sawlogs to Asia. The transition to young-growth timber may affect this market (e.g., by increasing the proportion of lower-value timber harvested), but the draft EIS indicates that the agency expects that timber purchasers are likely to continue to rely heavily on exporting sawlogs overseas. However, the Forest Service also recognized that the ability of purchasers to export sawlogs harvested from the Tongass is limited under current Forest Service policy to 50 percent of timber volume sold.
Young-growth lumber. The Forest Service, in its draft EIS, concluded that demand for lumber (as opposed to unprocessed logs) produced in southeast Alaska was relatively low. The existing export market for lumber produced in southeast Alaska is primarily for higher-graded lumber made from old-growth trees, while the major use for young-growth lumber processed in southeast Alaska is likely to be for dimension lumber (i.e., lumber used for structural framing), for which demand may be lower, according to the Forest Service. In its draft EIS, the Forest Service assumed that Asian purchasers would not be willing to substitute dimension lumber produced from young-growth trees for the higher-graded lumber they had previously been purchasing. Dimension lumber produced in southeast Alaska could also be used within southeast Alaska or shipped to the contiguous United States. However, Forest Service officials and stakeholders told us that these markets are already served by relatively large, efficient mills located in the Pacific Northwest and that because production costs are higher in southeast Alaska, it will be challenging for dimension lumber from the Tongass to compete with lumber from existing suppliers. In addition, the Forest Service has reported that existing southeast Alaskan mills have limited capacity to process young growth and will likely have to invest in new milling equipment if they are to significantly expand their production of lumber produced from young growth. Forest Service officials and industry representatives also told us the industry is unlikely to invest the needed funds without more certainty about the amount of timber that will be offered for sale and harvested.
Young-growth utility logs. Another potential use for Tongass young growth noted in the draft EIS is as “utility logs”—that is, logs of insufficient quality to use for dimension lumber but suitable to be made into chips or used as biofuel. Increasing the use of biofuels in southeast Alaska could increase demand for utility logs from the Tongass and contribute to the viability of the timber industry in the region, according to the draft EIS.
Doing so, however, would require investment in new infrastructure to produce and use these products. Forest Service officials told us that such investment is likely to be difficult because of both the uncertainty of demand in the region and the availability of large quantities of biofuel produced by facilities in the Pacific Northwest. Consistent with these statements, the Forest Service reported in a document developed to support the draft EIS that it found no evidence of market demand for utility logs from the Tongass. The viability of the timber industry depends on the relationship between the market price of the final product (whole logs, dimension lumber, biomass, or other products) and the cost of producing it, including the cost to harvest, transport, and process it. In preparing the draft EIS, the Forest Service analyzed information regarding the economics of the Tongass timber industry. In 2015, the Forest Service also initiated a separate study of the costs of producing products from young-growth wood and the resulting value. Forest Service officials told us they initiated the study partly in response to the May 2015 draft recommendations from the Tongass Advisory Committee and said they expect to finalize the scope and time frames for the study in spring 2016 and to receive initial results in 2017. The Forest Service scientists leading the study told us the agency plans to harvest young-growth timber from randomly selected sites within the Tongass and process the timber in several mills in southeast Alaska and the Pacific Northwest. They said the agency intends to evaluate both the mills’ efficiency in processing the young-growth wood and the strength and appearance of the resulting products and to obtain information related to the processing costs and value of the products. Forest Service officials said the study’s results may help the agency assess the economic viability of a Tongass young-growth timber industry.
Even with these steps, however, in its November 2015 draft EIS the Forest Service stated that there is a high degree of uncertainty surrounding its goal of preserving a viable timber industry. USDA and the Forest Service identified various actions they and other federal agencies would take to support the timber industry and other economic sectors during the transition to young-growth harvest in the Tongass, and the agencies have taken steps to implement some of these actions. These actions, which are identified in three documents issued by USDA and the Forest Service since 2010, focus on four economic sectors in southeast Alaska: timber, fishing and aquaculture, tourism and recreation, and renewable energy. However, the agencies have not implemented other actions they said they would take, because of other priorities or consideration of other approaches, according to agency officials. USDA and the Forest Service have taken steps to implement some of the actions they stated they would take to support the timber industry in southeast Alaska during the young-growth transition. For example:
The USDA Investment Strategy in Support of Rural Communities in Southeast Alaska 2011-2013 stated the Forest Service would improve its Tongass timber planning processes by simplifying small timber sales to assist small-mill owners. Forest Service officials told us the agency has met with small-mill owners to discuss ways to address the mill owners’ needs. As a result of this outreach, the Forest Service lengthened the duration of some timber sale contracts for small sales; according to Forest Service officials, small sale contracts typically last from 1 to 3 years, but the agency lengthened the duration to 4 to 6 years for 8 of the approximately 60 small sales in the Tongass in fiscal years 2014 and 2015. This action provided small-mill owners with flexibility to harvest at more-advantageous times, according to Forest Service officials.
The 2013 Secretary’s Memorandum 1044-009: Addressing Sustainable Forestry in Southeast Alaska stated that USDA would continue to work with Congress to exempt a limited amount of young growth in the Tongass from the general prohibition on harvesting a stand until it reaches its maximum growth rate. The memorandum said providing this flexibility is essential for developing economically viable young-growth projects within the time frame of the transition. In 2014, Congress approved additional flexibility, which gave the Secretary of Agriculture authority to allow the harvest of these young-growth trees in areas that are available for commercial timber harvest.
The 2013 Leader’s Intent: Forest Stewardship and Young Growth Management on the Tongass National Forest document, signed by officials from the Forest Service’s Alaska Region and the Tongass, stated the Forest Service would expand collaborative projects and partnerships with local communities, businesses, and nonprofit groups to support job creation through sustainable forest management. In 2015 the Forest Service entered into a partnership with the Native and Rural Student Center, which provides leadership training and academic support to Native Alaskan college students on University of Alaska campuses, and the Hoonah Indian Association, a tribal government in southeast Alaska. Forest Service officials told us that under this partnership, a local work crew is being developed to gain forestry skills and complete projects such as tree thinning in the Tongass. The officials said the first projects under this partnership are expected to be completed in 2016 or 2017.
Documents on the transition issued by USDA and the Forest Service stated that the Forest Service would support the transition by studying young-growth supply; the cost of harvesting, transporting, and processing young-growth timber; and the value of the resulting products. As discussed previously, the agency has taken steps to study these issues.
The agencies have not implemented other actions they said they would take because of other priorities or consideration of other approaches. For example: The Investment Strategy stated that the Forest Service would promote and facilitate the use of young-growth timber in southeast Alaska by using young-growth wood for cabins and other recreational structures, and that the Forest Service would request an additional $1 million in funding to construct cabins made from young-growth timber in high-visibility campgrounds. However, Forest Service officials told us that the agency did not request funding because of other spending priorities, and that no cabins have been built since the Investment Strategy was published in 2011. A few conservation organization stakeholders we interviewed told us that the Forest Service’s limited progress in using young-growth timber in its own facilities hinders the agency’s ability to achieve its goal of demonstrating the economic viability of producing young-growth products in southeast Alaska. Forest Service officials told us that other approaches, such as demonstrating the demand for dimensional lumber, might be a better option than constructing cabins for showing the economic viability of young-growth products. Forest Service officials told us the agency is collaborating with the National Forest Foundation to work with a local conservation group to demonstrate uses for young-growth timber, including the construction in 2012 of a private home built primarily from young-growth timber. The 2013 Secretary’s Memorandum asked the Forest Service to work with Rural Development to develop a plan by December 31, 2013, for providing financial assistance to help the timber industry retool to handle young-growth timber. As of December 2015, the agencies had not developed such a plan because they had been focusing on other priorities related to the transition, such as completing the draft EIS, according to Forest Service officials. 
Forest Service officials told us in January 2016 that they were developing a request for proposal for an outside party to conduct an assessment of the industry’s retooling needs and estimated that results from the assessment might be available in 9 to 12 months. They also said that the study the agency initiated in 2015 on the economic viability of the young-growth timber industry would provide information to inform retooling options. Rural Development officials told us the agency could provide loans to help the industry retool. The agencies have taken steps to implement some of the actions they stated they would take to support fishing and aquaculture in southeast Alaska. For example: USDA’s Investment Strategy stated the agencies would strengthen the aquaculture industry in southeast Alaska by providing support to entrepreneurs in the industry. Rural Development officials reported that in fiscal years 2012 and 2013 the agency guaranteed four loans, totaling about $1.4 million, that supported fishing and aquaculture development in the region. Similarly, the Economic Development Administration reported awarding approximately $1.4 million in grants in fiscal years 2013 and 2014 to support fishing and aquaculture in southeast Alaska, most of which was awarded to the Hydaburg Cooperative Association, a tribe in southeast Alaska, for the renovation of a cold-storage facility to develop a specialty seafood processing plant. The Investment Strategy also stated the agencies would identify and promote ways to include aquaculture development among traditional USDA agriculture programs. Farm Service Agency officials told us the agency used an existing farm loan program to provide five loans since 2011 to parties entering the shellfish industry. These loans totaled about $160,000 and were used to fund operational and capital expenses, according to these officials. 
The Investment Strategy also stated the agencies would take steps to restore degraded salmon streams in an effort to increase salmon productivity. Forest Service officials estimated, based on budget documents, that the agency’s annual funding for watershed restoration in the Tongass averaged approximately $1.1 million for fiscal years 2011 through 2015. Restoration projects included replacing and resizing road culverts to improve fish passage and placing woody debris into streams to improve fish habitat. In contrast, the Forest Service did not implement a proposed increase in funding for fishing and aquaculture because of other priorities. The Investment Strategy stated that the Forest Service proposed tripling the annual funding for watershed restoration (i.e., actions intended to improve fish habitat in streams and thereby support the health of fish populations) in the Tongass to $4.6 million annually. As noted, however, Forest Service officials estimated that agency funding for such activities averaged approximately $1.1 million for fiscal years 2011 through 2015. A Forest Service fisheries official told us that it has been difficult to increase funding for watershed restoration in Alaska because watershed conditions in Alaska are generally better than elsewhere and the region is therefore a lower priority for the agency. The agencies have implemented some of the actions they stated they would take to support tourism and recreation in southeast Alaska. For example: The Investment Strategy stated that the Forest Service would increase guided access to public land. Since 2012, the Forest Service has increased the amount of commercial outfitting and guiding services it allowed in the Mendenhall Glacier Recreation Area, near Juneau, to meet increased demand for guided services and access to this site. 
This change has increased visitation to the Mendenhall Glacier by an estimated 15,000 visitors annually and, from 2012 through 2015, generated an additional $5 million in revenues for tour companies, according to a contractor hired by the Forest Service. The Investment Strategy also stated that USDA agencies would take steps to develop recreation infrastructure. Forest Service officials told us the agency conducted trail improvement projects in 2015 on the Juneau, Petersburg, and Craig Ranger Districts. In contrast, the Forest Service did not request an increase in funding for agency projects supporting tourism and recreation as proposed in USDA’s Investment Strategy. Specifically, the strategy identified $1.9 million in planned expenditures for fiscal years 2012 and 2013 and recommended $8.4 million in additional funding for those 2 years. Forest Service officials told us, however, that they did not request additional funding for the Tongass and that the budget for the agency’s Alaska Region declined during this time. They estimated that the region’s budget for tourism and recreation decreased from $8.8 million in fiscal year 2010 to $6.7 million in fiscal year 2013—a decline of about 24 percent. The officials estimated that the budget for fiscal year 2014 was $7.1 million, which was an increase of about 4 percent over the previous year’s level but lower than the 2010 funding level of $8.8 million. The selected tourism and recreation industry representatives we interviewed expressed concern about reduced funding, as they did not think the Forest Service would be able to maintain the current inventory of cabins, trails, and other recreation facilities. Forest Service officials told us the agency has focused on maintaining existing facilities rather than constructing new ones but determined in 2014 that it would close up to 10 of the 143 cabins in the Tongass given budget reductions. 
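The budget changes cited by Forest Service officials can be verified with simple percent-change arithmetic. The sketch below uses the officials' dollar estimates from the text; the function name is ours, not the agency's:

```python
def percent_change(old, new):
    """Percent change from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

# Alaska Region tourism and recreation budget estimates, in millions of dollars.
fy2010 = 8.8
fy2013 = 6.7

# (6.7 - 8.8) / 8.8 * 100 is roughly -23.9, i.e., a decline of about 24 percent.
print(percent_change(fy2010, fy2013))
```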
The agencies have taken steps to implement some of the actions they identified to support renewable energy development in southeast Alaska during the transition. For example: USDA’s Investment Strategy stated that the Forest Service would provide technical assistance related to the planning and installation of biomass energy systems. The Forest Service reported providing such assistance from 2011 through 2015 to at least 19 localities, businesses, tribal entities, and individuals. Assistance included identifying potential biomass projects in communities, evaluating the design and economic viability of projects, answering questions about biomass technology use, and identifying funding sources for projects. Forest Service officials highlighted a project at the Ketchikan International Airport as an example of the agency’s efforts. The Forest Service provided technical assistance and a $143,000 grant to convert the airport terminal to a biomass heating system. The project was scheduled to be completed in 2016, according to a Forest Service official. Similarly, the agency reported providing various types of assistance—including public presentations and education, fuel assessments, and design reviews of plans—to support the development of a biomass system for community facilities in Haines. The Investment Strategy also stated the USDA agencies would work to develop demand for biomass energy. Agencies have taken steps to do so. For example, Rural Development officials said that in fiscal years 2012 through 2014 the agency provided at least three grants, totaling about $1.2 million, to support renewable energy development in southeast Alaska. In the Investment Strategy, USDA said the Forest Service would approach the Southeast Conference organization about sponsoring the development of a biomass energy plan for the region. 
The Forest Service has worked with the Southeast Conference to assess the potential for increasing the use of biomass energy in southeast Alaska and, in September 2015, published the Community Biomass Handbook, which offers instructions on designing and planning biomass projects as well as information on where biomass systems are being used in the region. The agency’s partnership with the Southeast Conference resulted in about 30 feasibility studies funded predominantly by the Forest Service and approximately 10 biomass systems in southeast Alaska, according to Forest Service officials. Also in the Investment Strategy, USDA said the Forest Service would, where feasible, substitute woody biomass for diesel fuel to meet the energy needs of southeast Alaska. The agency has taken some initial steps to do so. For example, officials told us that the agency was converting its facility in Sitka from diesel fuel to biomass energy, a project they expect the agency to complete in summer 2016. The Forest Service had previously converted a visitor center in Ketchikan to a wood-fueled heating system, although the building is no longer using this system, which the agency reported was too large for the facility and had high operating costs. The agencies, however, no longer plan to implement some actions they previously identified, according to agency officials. For example, the Investment Strategy stated that, to help “kick start” the biomass energy industry in southeast Alaska, the Farm Service Agency would encourage the use of a nationwide program that provides financial incentives to the biomass industry. A Farm Service Agency official in southeast Alaska, however, told us the nationwide program is not being used in the region because funding is limited and national program officials had decided to target existing biomass industry businesses rather than new ones, and there were no such businesses in southeast Alaska. 
Representatives we interviewed from the 30 selected Forest Service stakeholder organizations identified a variety of options they said would improve the agency’s management of the Tongass timber program. These stakeholders also expressed strong differences of opinion regarding the overall direction of the Tongass timber program. Options stakeholders identified for improving the Forest Service’s management of the Tongass timber program included: Improving predictability of timber available for sale. The majority of the seven timber industry stakeholders we interviewed told us the Forest Service does not offer a predictable amount of timber for sale from year to year. These stakeholders emphasized the importance of predictability for the timber industry to be able to make decisions about how to retool to accommodate young-growth trees—which they said is important given potential changes to the industry as a result of the planned transition. Options for improving predictability identified by these timber industry stakeholders ranged from offering timber sales under longer-term contracts—as a means of providing greater certainty over the quantity of timber they will be allowed to harvest in future years—to transferring significant acreage from the Tongass to the State of Alaska, an entity some timber industry stakeholders viewed as offering a more predictable timber supply than the Forest Service. On the other hand, one of the conservation organization stakeholders we interviewed said that the Forest Service could improve the predictability of supply by reducing the volume of timber it offers for sale and offering timber for sale in locations where there will be less environmental impact, steps the stakeholder said could reduce opposition to proposed timber sales and increase the likelihood of sales being implemented in a timely manner. 
In an effort to improve the predictability of its timber supply, the Forest Service is participating in the collaborative “all lands, all hands” effort with other southeast Alaska landowners to explore ways of achieving greater economic efficiency by sharing infrastructure and jointly planning projects. As part of this effort, Forest Service officials told us they have coordinated with the Alaska Division of Forestry on the timing of timber sales to try to ensure a more predictable and even flow of timber offered to the timber industry. Alaska Division of Forestry officials told us that this effort has been helpful but that continued work will be needed to improve collaboration among landowners on issues such as sharing costs for maintaining roads and other infrastructure. Increasing focus on small timber operators. Some of the 30 stakeholders we interviewed said that the Forest Service could do more to support the small operators that also play a role in local economies throughout the Tongass by harvesting small amounts of old-growth timber. These stakeholders suggested the Forest Service take steps such as offering smaller sales and making other changes— such as allowing small operators greater use of roads constructed in conjunction with larger sales—to make it easier for smaller operators to access timber. As previously discussed, Forest Service officials told us they had taken several steps to assist smaller operators, including lengthening the duration of some small timber sales. Officials told us that for two timber sales in 2012 and 2013, they kept several roads open for approximately 2 years after the sales were completed to allow access to remaining timber by smaller operators. Improving Forest Service collaboration. 
Some of the stakeholders we interviewed also said the Forest Service needed to collaborate more with the industries and communities affected by the transition—for example, by involving community leaders earlier in the decision-making process and better considering the effects of management decisions on specific locations—if the young-growth transition is to be successful. Similarly, the Tongass Advisory Committee emphasized the need for the Forest Service to become more flexible and responsive to timber industry and community interests for the transition to be successful. To help achieve that goal, the committee said Forest Service leadership needed to provide clear and consistent direction to agency staff, and the agency needed to increase the use of collaborative processes in its management decisions. Forest Service officials identified various approaches the agency uses to collaborate with the industries and communities affected by the transition. For example, they said that the agency has participated in the Tongass Collaborative Stewardship Group, a region-wide forum for communities and landowners to work together to align Forest Service projects with local and regional priorities. The Forest Service has also participated in a number of smaller collaborative groups relating to specific geographic areas in the Tongass, including the communities of Hoonah, Kake, and Sitka, and the Staney Creek watershed on Prince of Wales Island. One such group, the Hoonah Native Forest Partnership, includes the Forest Service, nonfederal landowners in the area, and other entities, such as the Hoonah Indian Association. The partnership formed in 2015 and is still in the early stages of planning and identifying specific work, according to a Forest Service official. 
The partnership is taking a watershed planning approach intended to balance economic, social, and ecological outcomes and consider both timber harvest and other important resources, such as salmon and deer, that rely on forests. In discussing their views on possible options for improving the Forest Service’s management of the Tongass timber program, stakeholders we interviewed also expressed strong differences of opinion regarding the overall direction of the program. Stakeholders expressed differing opinions on such diverse topics as the volume of timber that should be harvested, the locations where harvest should be allowed, and the proportion of harvest that should be young growth. For example, regarding harvest locations, some of the stakeholders we interviewed were concerned that the Forest Service is considering harvesting timber in environmentally sensitive areas such as near streams and beaches, which provide important wildlife habitat. In contrast, the majority of timber industry stakeholders and a few local government stakeholders we interviewed told us that the Forest Service already placed too much emphasis on minimizing the environmental effects of timber harvest and that the agency did not need to take additional steps to consider the environmental effects of the transition. Regarding the proportion of harvest that should be young growth, the majority of the timber industry stakeholders we interviewed stated that the harvest should continue to consist of old-growth trees in order to be economically viable for the timber industry, while other stakeholders stated that old-growth harvest should end entirely or be reduced to a small amount. We provided a draft of this report for review and comment to the Departments of Agriculture and Commerce. 
The Forest Service, responding on behalf of the Department of Agriculture, generally agreed with our findings and described actions it is taking in an effort to support various economic sectors in southeast Alaska (see app. III). The Economic Development Administration, responding on behalf of the Department of Commerce, stated in an email sent April 11, 2016, that it had no comments on our draft report. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Agriculture and Commerce, the Chief of the Forest Service, the Administrator of the Farm Service Agency, the Under Secretary for Rural Development, the Chief Operating Officer of the Economic Development Administration, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or members of your staff have questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. In conducting our work, we interviewed representatives from a nonprobability stratified sample of Forest Service stakeholder organizations. Table 2 lists the 30 stakeholder organizations whose representatives we interviewed. We selected stakeholders to provide a range of perspectives on the Forest Service’s management of the Tongass National Forest timber program. Because this is a nonprobability sample, the views of the stakeholders interviewed are not generalizable to all potential stakeholders, but they provide illustrative examples. In November 2015, the Forest Service released for public comment a draft environmental impact statement that analyzed five alternatives for undertaking the transition from old-growth harvest to young-growth harvest in the Tongass National Forest. 
Table 3 summarizes these alternatives, which described different time frames for making the transition and projected various numbers of acres from which timber would be harvested. In addition to the contact named above, Steve Gaty (Assistant Director), Greg Campbell, Jonathan Dent, Patricia Farrell Donahue, Holly Hobbs, Richard P. Johnson, Ben Nelson, Timothy M. Persons, and Anne Stevens made key contributions to this report.
The Tongass National Forest, managed by the Forest Service within USDA, is located in southeast Alaska and is the nation's largest national forest. Since the early 20th century, the Tongass has had a timber program based on harvesting old-growth trees, which are generally more than 150 years old. In 2010, USDA announced its intent to transition the Tongass timber program to primarily harvest young growth, in part to help conserve remaining old-growth forest while maintaining a viable timber industry. As part of the planned transition, the Forest Service and other federal agencies identified actions they would take to support several economic sectors in southeast Alaska. This report describes (1) steps the Forest Service has taken to assess whether its planned transition will meet the agency's goal regarding a viable timber industry in southeast Alaska, (2) the status of actions the Forest Service and other federal agencies stated they would take to support the timber industry and other economic sectors during the transition, and (3) options suggested by agency stakeholders for improving the Forest Service's management of the Tongass timber program. GAO reviewed laws and agency documents related to the Tongass and interviewed federal agency officials and representatives from a nongeneralizable sample of 30 stakeholder organizations—including tribal, state, and local governments and industry and conservation entities—selected to provide a range of perspectives. The Forest Service generally agreed with GAO's findings. The Forest Service has initiated some steps to assess whether its planned transition to young-growth harvest on the Tongass National Forest will support a viable timber industry in southeast Alaska—a goal the Department of Agriculture (USDA) established as part of the transition. For example, the Forest Service reported refining the data it uses to estimate the amount of young-growth timber to be available for harvest over the next 100 years. 
Forest Service officials stated the agency also began a study in 2015, partly in response to a recommendation that year from a USDA-convened advisory committee, to compare potential market prices for young-growth timber or products to the cost to harvest and process the timber, information that may help the agency assess the economic viability of a young-growth industry in the region. The agency expects the initial results from the study to be available in 2017. USDA and the Forest Service identified various actions they and other federal agencies would take to support four economic sectors—timber, fishing and aquaculture, tourism and recreation, and renewable energy—during the transition to young-growth harvest on the Tongass, and the agencies have taken steps to implement some of these actions. For example, USDA stated that the Forest Service would improve its planning processes to assist the owners of small timber mills in the Tongass. According to Forest Service officials and documents, the agency has lengthened the duration of some timber sales to provide small timber mills some flexibility on when to harvest in the Tongass. However, the agencies have not implemented other actions identified. For example, the Forest Service has not implemented proposed funding increases for improving fish habitat and tourism facilities in the Tongass because of other spending priorities, according to Forest Service officials. Representatives from the 30 stakeholder organizations GAO interviewed identified options they said would improve the agency's management of the Tongass timber program. These options include improving the predictability of timber available for sale and increasing the agency's focus on small timber mills and other timber-related businesses. Forest Service officials said they have taken some steps to address these options. 
For example, the majority of the timber industry stakeholders GAO interviewed emphasized the importance of the Forest Service offering a predictable amount of timber for sale from year to year for the timber industry to be able to make decisions about how to retool to accommodate smaller-diameter trees—which they said is important given potential changes to the industry with the planned transition to harvest young-growth trees. In an effort to improve predictability, the Forest Service has coordinated with the Alaska Division of Forestry on the timing of timber sales to try to ensure a more predictable and even flow of timber. However, stakeholders also expressed divergent opinions regarding the overall direction of the Tongass timber program, including the volume and location of timber to be harvested.
PBGC was established by ERISA to pay benefits to participants in private defined benefit plans in the event that the plan sponsor could not. PBGC is financed through insurance premiums paid by sponsors and by investment returns on held assets. Sponsors are responsible for making legally required contributions to pension trust funds that are intended to be sufficient to fund the promised benefits over time. Plan assets are then invested on behalf of participating employees. The precise calculations for determining annual minimum funding requirements are set forth in ERISA, and compliance is monitored by the IRS. These requirements are designed to provide reasonable assurance that a plan’s assets will be sufficient to fund the accrued benefits owed to participants when they retire. In 2005, we found that, among the 100 largest defined benefit pension plans (including many underfunded plans), an average of 62.5 percent received no cash contributions in a given year from 1995 to 2002. These plans were able to meet minimum funding requirements through the use of accounting credits. Compliance with the minimum funding requirements is recorded through the plan’s funding standard account (FSA). The FSA tracks events that affect the financial health of a plan during that plan year: credits, which reflect improvements to the plan’s assets, such as contributions, amortized experience gains, and interest; and charges, which reflect an increase in the plan’s financial requirements, such as the plan’s normal cost and amortized charges such as the initial actuarial liability, experience losses, and increases in a plan’s benefit formula. If FSA credits exceed charges in a given plan year, the plan’s FSA registers a net increase. Compliance with the minimum funding standard requires that the FSA balance at the end of the year is non-negative. 
An existing credit balance accrues interest and may be drawn upon to help satisfy minimum funding requirements for future plan years, and may offset the need for future cash contributions. In 2006, Congress enacted the Pension Protection Act (PPA) to address some identified deficiencies in funding requirements. Because these changes in funding requirements are applicable to plan years beginning after 2007, we are unable to comment on what impact, if any, the act would have had on our case studies. For defined benefit plans, the accrued benefit is the amount that the plan participants would receive as a life annuity beginning at the normal retirement age, as defined by the plan. It is determined by a plan’s benefit formula and is recalculated annually as participants complete an additional year of service. ERISA requires sponsors to provide information to the federal government regarding plan benefits, summary financial information, and funding information. Sponsors must estimate their liabilities each year to determine whether their plans are fully funded or underfunded, with the assumption that the sponsor will continue to maintain the plan in its current form for the foreseeable future. If a sponsor needs to terminate a plan, it must conduct a valuation of plan assets and liabilities to determine whether it is fully funded. The valuation assumes that no further benefits will accrue and no further contributions will be made. If this valuation finds the plan to be underfunded, the sponsor must meet certain criteria set by ERISA in order to file a claim with PBGC. If the sponsor meets these criteria, PBGC becomes trustee of the plan and will assume responsibility for paying guaranteed benefits to plan participants. However, payments to an individual beneficiary are subject to a maximum annual dollar limit, as set in accordance with ERISA. For 2009, this limit is $54,000. 
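The funding standard account mechanics described above can be sketched as a simple year-over-year roll-forward. This is an illustrative simplification under stated assumptions (the function names and the numbers are hypothetical); actual ERISA calculations involve amortization bases, interest timing conventions, and actuarial assumptions omitted here:

```python
def fsa_year_end(prior_credit_balance, interest_rate, contributions,
                 other_credits, charges):
    """Simplified funding standard account (FSA) roll-forward for one plan year.

    Illustrative only: real ERISA minimum funding rules are far more detailed.
    """
    # An existing credit balance accrues interest from year to year.
    balance = prior_credit_balance * (1 + interest_rate)
    # Credits (contributions, amortized gains) increase the balance;
    # charges (normal cost, amortized losses) decrease it.
    balance += contributions + other_credits - charges
    return balance


def meets_minimum_funding(balance):
    # Compliance requires a non-negative end-of-year FSA balance.
    return balance >= 0


# A plan with a large prior credit balance can satisfy the minimum funding
# standard without any cash contribution: 50 * 1.07 - 40 = 13.5 >= 0.
end_balance = fsa_year_end(prior_credit_balance=50.0, interest_rate=0.07,
                           contributions=0.0, other_credits=0.0, charges=40.0)
print(meets_minimum_funding(end_balance))
```

The example mirrors the 2005 finding quoted above: a plan can remain in compliance for years while receiving no cash contributions, so long as accumulated accounting credits (plus interest) cover each year's charges.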
Executives at 10 companies received approximately $350 million in pay and other benefits in the years leading up to the termination of their companies’ underfunded pension plans. We identified the salaries, bonuses, and benefits provided to small groups of high-ranking executives at these companies. Executives at some companies received salaries in excess of $10 million in the years leading up to bankruptcy. We also found that some executives at these companies received millions of dollars combined in other financial benefits such as income tax reimbursements, retention bonuses, severance packages, split-dollar life insurance, and supplemental retirement plans. Along with financial compensation received, some executives were provided other benefits such as apartments, personal trips on company airplanes and helicopters, club memberships, legal fee reimbursement, and automobiles. We did not attempt to determine whether these benefits were customary. Further, for each of the 10 selected companies, at least one executive we reviewed sat on the company’s Board of Directors. We did not find any illegal activity related to executive compensation on the part of either the 10 companies or the 40 executives under review. Table 1 shows details of our 10 case study companies where we reviewed salaries, bonuses, and benefits paid to executives. Some companies sponsored multiple plans which were terminated; for these companies the details of their pension plans are presented in aggregate. Further detail on 4 of these cases follows the table. Case 1: In September 2002, this airline hired a new CEO, who also served as President and Chairman of the Board, to help lead the company through potentially difficult times ahead. Three months later, the company declared bankruptcy and did not emerge until February 2006. During bankruptcy, the company terminated four pension plans from December 2004 to June 2005 that according to PBGC were underfunded by a total of $7.8 billion. 
This airline missed or waived nearly $1 billion in required minimum contributions. During this CEO’s tenure, through the termination of the company’s four pension plans, the CEO and two other executives received a total of $55.5 million in salary, benefits, and other compensation. See table 2 below for the components of compensation received by these three executives. The new CEO received a $3 million signing bonus, and the company established a supplemental retirement trust fund worth $4.5 million in his name “in consideration of retirement benefits foregone as a result of resignation from his former employer.” Upon the airline’s reorganization, the COO also received $2.6 million to set up an irrevocable trust in his name, along with a separate $100,000 in 401(k)-related payments. All officers were given unlimited free travel on the airline or its subsidiaries, along with a complimentary membership in the company’s VIP travel club. The company also reimbursed its executives for any income taxes which they might have owed on this free travel. In addition to these travel benefits, according to the company’s Officer Benefits Statement, executives would be reimbursed for the cost of “social and business club memberships…where there is a benefit to be realized to the company” and offered payments for financial advisory services. PBGC’s takeover of these plans in such an underfunded state had significant consequences for some of the company’s pilots, who lost large portions of their pensions due to PBGC’s statutorily mandated benefit caps. We spoke to several retirees, including one pilot who lost two-thirds of his monthly pension payments when his pension plan was turned over to PBGC. Prior to the pension termination, he had made the decision to retire 2 years early at age 58 to spend more time at home after 35 years of routine flights to and from Southeast Asia. 
He told us that he made this decision after careful consideration of numerous retirement benefit estimates he had received over the years from the airline. Within 2 years of his retirement, PBGC had taken control of his plan and his benefit payments had been reduced to a third of what he had been promised at retirement. Case 2: This airline went through four CEOs and two bankruptcies during its struggle to survive in the face of considerable financial losses beginning in 2000. During the course of its two bankruptcies (the first in August 2002 and the second in September 2004), the airline turned over four pension plans to PBGC. It had missed or waived $206.8 million in required minimum contributions, with a total of $2.8 billion in underfunding reported by PBGC. The plan covering the company’s pilots was terminated during the first bankruptcy. The remaining three plans, which covered mechanics, flight attendants, and others, were terminated during the company’s second bankruptcy. From 1998 to 2005, four CEOs received over $120 million in total compensation. See table 3 below for the components of compensation provided to these four executives. For example, one executive received $16 million in stock and an additional $16 million in related income tax reimbursements during his tenure as CEO and Chairman of the Board. He also received a lump-sum payment in excess of $14 million for his pension plan holdings at the time of his resignation. His successor, who previously served as COO, received $1.2 million in incentive awards, as well as nearly $17 million in stock when he was promoted to CEO, a position which he held for 3 years. At the time of his resignation, he received over $15 million in supplemental executive retirement benefits. 
Another executive, whose tenure as CEO lasted 25 months, was provided a severance package that included triple his annual salary and bonuses plus over $1 million in payments related to a supplemental defined contribution retirement plan. These executives also received millions of dollars in other benefits. For example, the airline paid the four CEOs a combined total of nearly $500,000 for living and relocation expenses during their tenures. And, as in case 1, the executives were provided unlimited free transportation on the airline and were reimbursed for any income taxes incurred on such travel. In addition, they received a lifetime benefit of membership in the airline’s frequent flyer program, which grants unlimited free first-class travel upgrades. They further received hundreds of thousands of dollars’ worth of benefits for split-dollar life insurance, and reimbursements for financial planning services and automobile expenses. Case 3: In December 1995, this electronics company’s newly hired CEO and Chairman of the Board stated it was “clear to [him] that the company could achieve meaningful growth.” He announced a broad restructuring plan to improve the company’s profitability, which eventually resulted in the termination of nearly 3,000 employees. However, the company experienced a net loss for the next 6 years before declaring bankruptcy in October 2001. When PBGC took over the company’s pension plan in July 2002, it was underfunded by $318 million, though the company had not missed or waived any required contributions. See table 4 below for components of compensation provided to three executives. In the 5 years leading up to its bankruptcy, the company paid three of its executives salaries totaling over $4 million and granted them bonuses equal to more than half that amount. Further, these executives received $6 million in stock awards, along with nearly 1 million stock options. 
When the Chief Financial Officer (CFO) resigned 10 months before the company declared bankruptcy, her severance package included 2 years’ salary and bonuses, continuation of medical benefits, and $600,000 for her holdings in the company’s supplemental pension plan. Soon after, the CEO received a $1.4 million retention bonus because, according to the company’s bankruptcy filings, “[the company] did not want him to have his attention distracted from the core problem of keeping the company moving along the track and wanted him not to be worrying about time spent or uncertainty for him and his family personally.” These executives also were reimbursed for expenses such as foreign taxes, rent, utilities, shipping, health club memberships, and legal advice to help protect their interests during bankruptcy. One executive received a $380,000 loan to help with the purchase of a new home near the company’s headquarters. This loan and related interest were to be forgiven completely over the course of 4 years if the executive remained employed by the company. In addition, some executives were provided with company car usage, along with coverage for all car-related business expenses. The company not only provided these benefits for its executives, but also provided additional benefits for families of executives. One executive received thousands of dollars for his wife’s continuing education, as well as assistance to help her purchase a new car, and reimbursements for some of his wife’s and daughter’s travel. Case 4: This insurance company and its parent company were owned and run largely by a single family whose members served on the boards of directors of both companies, as well as in senior officer positions such as CEO and COO. The insurance company was taken over by its state regulators in May 2001, and its pension plan, underfunded by $108 million, was terminated and turned over to PBGC in February 2002. 
The parent company declared bankruptcy in June 2001, soon after the takeover of the insurance company, leaving behind a second, smaller pension plan, underfunded by $13 million, which PBGC took over in January 2004. The companies had missed or waived a combined $29.2 million in required minimum contributions to these two plans. In the 5 years leading up to the companies’ failure, five executives received a total of nearly $70 million in salary, bonuses, and benefits. See table 5 below for components of compensation provided to these five executives. Of that total, the five executives received nearly $60 million in salaries and bonuses, including nearly $20 million for the CEO and a comparable amount for his brother, who served as the COO. According to the companies’ compensation committees, executive compensation was meant to “establish a relationship between compensation and the attainment of corporate objectives.” However, in 1 year, an executive failed to meet his target objectives, yet received a $1.5 million bonus “given continued confidence that he will achieve superior results in the future.” In addition to salaries and bonuses, the executives also received over $10 million in other benefits. The CEO received split-dollar life insurance benefits in some years in excess of $1 million. Combined, these five executives received nearly 12 million stock options. The CEO and COO together received free personal travel on corporate aircraft valued at over $200,000, as reported by the companies. We reviewed documents provided by the companies and found extensive personal use of corporate aircraft: a Boeing 727-100 airplane and a Sikorsky helicopter. One executive described the plane as “nicely appointed” with multiple rooms. From 1997 to 1999, we found personal trips by the CEO, COO, and their families to China, Spain, Greece, Miami, Hawaii, Puerto Rico, and Mexico. In addition, the CEO took over 80 trips to his home in Quogue, N.Y. 
using the company’s helicopter; more than 20 of these trips were solely for the transport of his wife and children. Further, the CEO and COO used both of the company’s aircraft during a family trip to Europe. Figure 1 below shows the same make and model of the helicopter used by company executives. In June 2002, the insurance company’s Board of Directors was sued for breach of fiduciary duties. In the complaint against them, “excessive compensation and preferential transfers” were cited as factors in the company’s failure. A separate lawsuit was filed in 2003 against the company’s COO, alleging that he had improperly received millions of dollars in compensation. These suits were both settled for a total of $85 million in May 2005. All but one of the executives described in this case study are currently receiving monthly pension payments from PBGC in amounts ranging between $4,400 and $8,200. During our review of case studies, we noted that PBGC has little to no oversight authority regarding executive compensation. Companies are not required to report specific executive compensation details in financial disclosures to PBGC prior to plan termination. Further, while many plan terminations occur during bankruptcy, PBGC’s ability to recover payments made to executives is limited by bankruptcy law. Companies are required to disclose certain financial information to PBGC, such as financial statements and projections, prior to terminating a defined benefit pension plan. The PBGC officials we spoke with indicated that these disclosures do not normally separate executive compensation from general companywide salary and benefits information. They further indicated that PBGC has no authority to demand that adjustments be made to salaries, although general adjustments to large categories of discretionary spending (which may include overall salaries) may be considered during evaluations of the company’s ability to maintain a pension plan. 
During bankruptcy, PBGC has little power to recover amounts paid as compensation to executives. Any executive compensation paid during the course of a bankruptcy must be approved by the court, and once approved, cannot be recovered. Like any other creditor, PBGC can object to specific executive compensation plans, but the final decision regarding such plans is made by the bankruptcy judge, who is responsible for determining whether the plan is justified by the facts and circumstances of the situation. For executive compensation paid within the 2 years prior to a company’s bankruptcy petition, creditors have extremely limited ability to recover some amounts, but generally only in situations where extreme mismanagement of funds has occurred. In addition, since PBGC is normally considered a general unsecured creditor during bankruptcy procedures, the agency has low priority for the payment of any such recoveries. In written comments on a draft of this report, PBGC agreed with our assessment that ERISA does not provide it with any oversight power regarding payments to company executives prior to a company’s bankruptcy. Further, PBGC agreed that it has limited power over executive compensation during bankruptcy, as such payments are under the purview of the bankruptcy court. We have reprinted PBGC’s written comments in appendix II. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. We will then send copies of this report to interested congressional committees and the Acting Director of the Pension Benefit Guaranty Corporation. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. For further information about this report, please contact Gregory D. Kutz at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix III. To determine pay and other compensation received by executives in the years preceding their company’s termination of an underfunded defined benefit pension plan, we first obtained a database from the Pension Benefit Guaranty Corporation (PBGC) identifying companies that had terminated underfunded pension plans from 1999 to 2008. This database contained information on 1,246 terminated pension plans. We focused on identifying companies that (1) were publicly traded prior to the termination of their pension plan(s), (2) had significant unfunded pension liabilities on a termination basis ($100 million or more), (3) had a high unfunded liability per plan participant ratio ($10,000 or more per participant), and (4) had more than 5,000 plan participants. Of the 1,246 underfunded plans terminated from 1999 to 2008, we selected 10 companies from approximately 30 companies that sponsored plans which met our criteria. The 10 case study companies we selected were not meant to be representative of all companies whose plans have been taken over by PBGC in the past decade. We requested and reviewed documents provided by selected companies related to the compensation of their executives. We reviewed Securities and Exchange Commission (SEC) filings and PBGC documents disclosing plan underfunding at the time of termination and missed contributions. We also interviewed PBGC and company officials, as well as plan participants. In addition, we requested documentation from all selected executives under review, including tax returns. We attempted to interview all selected executives, but some could not be reached or declined our interview request. Findings related to executive pay and benefits were limited by the availability of public documents and information voluntarily provided by companies, executives, and other entities (e.g., professional sports teams, golf clubs). 
Because some companies and executives did not provide information, declined to be interviewed, and/or did not consent to granting GAO access to copies of their tax returns from the Internal Revenue Service, we were not able to document all details concerning pay and benefits received beyond the details available in public documents or otherwise voluntarily provided. Thus, the executive compensation information in this report represents what we were able to determine and may be understated. All values listed for restricted stock awards were determined by reviewing SEC filings, which list the value of stock awards as of the date of the award, and therefore may not represent their ultimate exercised value. We did not attempt to determine the market value of stock options at the time of their award, nor their exercised value. We limited our review to the time period beginning 5 years prior to a company’s first pension plan termination and ending with the company’s final pension plan termination. Since some companies terminated multiple pension plans over a period of time, for those companies our review may cover more than 5 years. Due to the high turnover of executives at these companies, information regarding the total compensation of the executives discussed in this report may not have been available for the entire time period under review. We did not conduct an exhaustive review of each company’s executive pay and benefit practices; we reviewed information related to executive pay and compensation in the years preceding the termination of a defined benefit pension plan. To assess the reliability of PBGC data related to the termination of defined benefit pension plans from 1999 through 2008, we (1) interviewed PBGC officials familiar with the data related to terminated pension plans and (2) matched the data provided by PBGC for the pension plans in each case study against records available on the agency’s Web site to verify that the data we were provided were exported correctly. 
We found the data to be sufficiently reliable to identify case studies for further investigation. We conducted our audit and investigative work from January 2009 through September 2009. We conducted our audit work in accordance with U.S. generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We performed our investigative work in accordance with standards prescribed by the Council of the Inspectors General on Integrity and Efficiency. In addition to the contact named above, the individuals who made major contributions to this report were Christopher Backley, Gary Bianchi, Robert Graves, Rebecca Guerrero, Matthew Harris, Ken Hill, Leslie Kirsch, Flavio Martinez, Vicki McClure, Jonathan Meyer, Sandra Moore, George Ogilvie, and Barry Shillito.
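The four case-selection criteria described in this appendix amount to a simple filter over PBGC's terminated-plan data. The sketch below is illustrative only, not PBGC's actual database schema; the field names (publicly_traded, unfunded_liability, participants) are hypothetical, and the sample records are invented for demonstration.

```python
# A minimal sketch of the four screening criteria, assuming hypothetical
# field names; this is not PBGC's actual data structure.

def meets_criteria(plan):
    """Return True if a terminated plan's sponsor meets all four criteria."""
    return (
        plan["publicly_traded"]                         # (1) publicly traded sponsor
        and plan["unfunded_liability"] >= 100_000_000   # (2) $100 million or more unfunded
        and plan["unfunded_liability"] / plan["participants"] >= 10_000  # (3) $10,000+ per participant
        and plan["participants"] > 5_000                # (4) more than 5,000 participants
    )

# Invented sample records for illustration.
plans = [
    {"publicly_traded": True,  "unfunded_liability": 7_800_000_000, "participants": 90_000},
    {"publicly_traded": True,  "unfunded_liability": 50_000_000,    "participants": 6_000},
    {"publicly_traded": False, "unfunded_liability": 318_000_000,   "participants": 12_000},
]
candidates = [p for p in plans if meets_criteria(p)]  # only the first record qualifies
```

Applied to the 1,246 terminated plans, criteria of this kind narrowed the field to roughly 30 companies, from which the 10 case studies were selected.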
When sponsors terminate underfunded plans during bankruptcy, it can deplete resources of the Pension Benefit Guaranty Corporation (PBGC), which protects the pensions of almost 44 million American workers and retirees who participate in over 29,000 defined benefit pension plans. In 2009, PBGC reported an estimated deficit of over $30 billion. GAO was asked to determine what pay and other compensation executives received in the years preceding their company's termination of an underfunded defined benefit pension plan. To identify case study examples, GAO analyzed a listing of the 1,246 underfunded plans that were terminated from 1999 to 2008 and selected public companies with large unfunded liabilities, large unfunded liabilities per participant, and a large number of plan participants. GAO reviewed documents provided by companies and executives, and interviewed PBGC and company officials. GAO also reviewed Securities and Exchange Commission (SEC) filings and PBGC documents disclosing plan underfunding at the time of termination and missed contributions. Executive compensation figures may be understated because some company executives could not be located, did not respond to document requests, declined interviews, or did not give GAO access to their tax records. GAO found that 40 executives for 10 companies received approximately $350 million in pay and other compensation in the years leading up to the termination of their companies' underfunded pension plans. GAO identified salaries, bonuses, and benefits provided to small groups of high-ranking executives at these companies during the 5 years leading up to the termination of their pension plans. For example, beyond the tens of millions in base salaries received, GAO found that executives also received millions of dollars in stock awards, income tax reimbursements, retention bonuses, severance packages, and supplemental executive-only retirement plans. 
In some cases, plan participants had their benefits reduced due to the underfunding of the plan when it was terminated. For example, a retired pilot saw his monthly pension payment reduced by two-thirds. The reduction in benefits occurred because federal law caps the benefits PBGC can guarantee when it takes over an underfunded pension plan. In addition, PBGC has no oversight power with regard to executive compensation prior to a company's bankruptcy. During bankruptcy, executive compensation must be approved by the bankruptcy court, and after this approval PBGC has extremely limited ability to recover those payments to executives. GAO did not find any illegal activity with respect to executive compensation on the part of either the 10 companies or the 40 executives under review.
Identity thieves can obtain a legitimate taxpayer’s name and Social Security number (SSN) in a variety of ways. They can obtain identity information by gaining access to computer systems or paper files at one of the many entities that use names and SSNs in their records (e.g., employers, schools, or financial institutions). Thieves can trick the taxpayer into revealing such information, or they can steal it from the taxpayer. Armed with the stolen identity, the thief can then file a fraudulent tax return seeking a refund. The thief typically files a return claiming a refund early in the filing season, before the legitimate taxpayer files. If IRS determines the name and SSN on the tax return appear valid (IRS checks all returns to see if filers’ names and SSNs match before issuing refunds) and the return passes through IRS’s other filters, IRS will issue the refund to the thief. IRS often becomes aware of a problem after the legitimate taxpayer files a return. At that time, IRS discovers that two returns have been filed using the same name and SSN, as shown in figure 1. The legitimate taxpayer’s refund is delayed while IRS spends time determining who is legitimate. As we have previously reported, IRS has taken multiple steps to detect, resolve, and prevent identity theft-based refund fraud. In 2012, IRS developed new filtering processes to detect identity theft based on the characteristics of incoming tax returns, without relying on a duplicate filing or self-identification by filers. Identity theft indicators—also known as account flags—are a key tool used to detect and resolve identity theft. Identity theft indicators speed resolution by making a taxpayer’s identity theft problems visible to all IRS personnel with account access. In some cases, IRS uses its identity theft indicators to screen tax returns filed in the names of known identity theft victims. 
If a return fails the screening, it is subject to additional IRS manual review, including contacting employers to verify that the income reported on the tax return was legitimate. IRS uses the Identity Protection Personal Identification Number (IP PIN)—a single-use identification number sent to victims of identity theft who have validated their identities with IRS—to prevent refund fraud. When screening returns for possible identity theft, IRS excludes returns with an IP PIN, which helps avoid the possibility of a “false positive” and a delayed tax refund. If a taxpayer was issued an IP PIN and does not use it when filing electronically, IRS rejects the electronically filed return and prompts the taxpayer to file on paper. Taxpayers who file on paper without an IP PIN, or with an incorrect one, experience processing delays while IRS verifies their identity. As of June 30, 2012, IRS reported providing more than 251,500 IP PINs to taxpayers; of those taxpayers, 150,506 filed using an IP PIN. Of the filers that used an IP PIN, 8.6 percent (12,936) used an invalid IP PIN. IRS officials told us their review of a sample of these cases found that the majority of the invalid IP PINs were due to transposition or keying errors. Details on other IRS actions can be found in our previous reports. Other steps taken in 2012 include temporarily reallocating hundreds of staff from other business units to resolve duplicate filing cases and issue refunds to legitimate taxpayers. Officials in IRS’s accounts management function told us that in October 2012 there were more than 1,700 staff working to resolve identity theft cases. Also, in April 2012, IRS began the Law Enforcement Assistance Pilot Program in Florida to help state and local law enforcement agencies obtain tax return data vital to local identity theft investigations. 
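The IP PIN filing checks described above reduce to a small decision rule: a return with a valid IP PIN is processed and excluded from identity theft screening, an electronic return with a missing or wrong IP PIN is rejected, and a paper return with a missing or wrong IP PIN is delayed for identity verification. The sketch below is a hypothetical illustration of that logic, not IRS code; the function and outcome names are invented for clarity.

```python
# A minimal sketch of the IP PIN filing checks described in the text.
# Function and outcome names are hypothetical, not IRS system identifiers.

def ipin_outcome(issued_ipin, filed_ipin, electronic):
    """Return the processing outcome for a taxpayer who was issued an IP PIN."""
    if filed_ipin == issued_ipin:
        return "process"   # valid IP PIN: return bypasses identity theft screening
    if electronic:
        return "reject"    # e-filed return rejected; taxpayer prompted to file on paper
    return "delay"         # paper return held while IRS verifies the taxpayer's identity

# The 2012 figures above: 12,936 of 150,506 IP PIN filers used an invalid
# IP PIN, about 8.6 percent, mostly due to transposition or keying errors.
invalid_rate = 12_936 / 150_506
```

For example, under this sketch an e-filer who omits an issued IP PIN gets "reject", while a paper filer who transposes two digits gets "delay".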
The pilot allows taxpayers to give their permission for IRS to provide state and local law enforcement with the returns submitted using their SSN in certain cases. IRS expanded the pilot to eight additional states in October 2012. As of September 2012, 49 state and local agencies participated in the pilot. We did not independently assess IRS’s 2012 efforts. The full extent and nature of identity theft-based refund fraud is not known, but IRS data indicate that it is a large and growing problem. The data show that in the first 9 months of 2012, the number of known tax- related identity theft incidents has already more than doubled over 2011 (see table 1). Understanding the extent and nature of identity theft-related refund fraud is important to crafting a response to it. Program officials said that one of the challenges they face in combating this type of fraud is its changing nature. The officials said that when they discover and shut down one vulnerability, thieves often change tactics. The hidden nature of the crime means it is not reasonable to expect perfect knowledge about cases and who is committing the crime. However, the better IRS managers’ understanding of the problem, the better they can respond and the better Congress can oversee IRS’s efforts. IRS officials described several areas where the extent and nature of identity theft is unknown. Total number and cost of fraudulent returns. IRS does not know the full extent of the occurrence of identity theft. Officials said that they count the refund fraud cases that IRS identifies but that they do not estimate the number of identity theft cases that go undetected. IRS officials explained that “we don’t know what we don’t know,” because if a fraudulent return goes through IRS’s identity theft models and other programs, they are unable to tell if they failed to detect the fraudulent return. 
Officials explained that it is very difficult to detect a fraudulent return when an identity thief uses a correct SSN and has enough identifying information to make the return “look” like it came from the legitimate tax filer. The tax return appears to be legitimate because it has been filed with a name and SSN that match. Detecting identity theft can also be challenging because some legitimate filers mistakenly file duplicate returns. For example, IRS officials told us that in some cases, taxpayers intending to amend their return are confused and file a second Form 1040. In such a case, IRS has to investigate whether the duplicate filing is due to taxpayer confusion or identity theft. IRS captures data on the amount of money it recovers from all types of fraudulent returns, but it does not distinguish whether the type of fraud was identity theft or some other type of fraud. In some cases, external entities, such as banks or other agencies, may notify IRS of potential refund fraud, including suspected identity theft-based refund fraud. IRS reported it had received leads from 116 banks and other external entities on more than 193,000 accounts between January 1 and September 30, 2012, for all types of refund fraud. IRS reported that banks and other external entities returned almost $754 million during this period. These cases are ones where fraudulent returns passed through IRS processes and refunds were issued. W&I officials told us they analyze data from such cases to identify characteristics of the fraudulent returns to improve their screening for identity theft and other types of refund fraud. The officials told us that the procedure for banks to notify IRS of suspected refund fraud is not new, but more financial institutions have now begun doing so. Identity of the thieves. Unless IRS pursues a criminal investigation, IRS generally does not know the real identity of the thieves. 
An investigation is necessary because the only identity information IRS has on the fraudulent tax return is that of the identity theft victim, not the thief. Officials responsible for processing returns said that they do not have the sort of information that would be needed to even begin such an investigation. CI has substantially increased efforts to criminally investigate identity theft cases in fiscal year 2012; however, as with other forms of fraud, CI focuses its investigative resources on the most serious cases. The number of identity theft investigations opened and time spent investigating identity theft cases have increased from fiscal year 2010 to fiscal year 2012, as shown in table 2. Although identity theft is one of CI’s investigative priorities, the number of investigations initiated is substantially less than the number of identity theft incidents confirmed by IRS in 2012. CI officials told us that while other IRS functions share leads with CI, not all of these leads meet CI’s criteria for developing a case for prosecution. CI officials told us they generally focus their investigative resources on the most egregious and significant identity theft cases, as measured by volume and refund amounts. Whether a fraudulent return is an individual attempt or part of a broader scheme. W&I and CI officials told us the two units work closely to utilize the information they obtain from identity theft cases. They use this information to improve their measures to identify new identity theft-based refund attempts and to identify especially significant or egregious cases to consider for possible criminal investigations. When either W&I’s analysis of identity theft cases or CI investigations lead to the identification of new schemes, that information is reported to the Return Integrity and Correspondence Services unit so it can strengthen its identity theft filters. 
Identifying new schemes or significant cases, such as one identity thief using numerous taxpayer identities, depends on analysts noticing patterns or other indications that a few cases may be part of a larger scheme. As a result, some schemes or cases involving multiple taxpayers may go undetected. Characteristics of known identity theft returns. IRS officials told us that the agency does not systematically track characteristics of known identity theft returns, including the type of return preparation (e.g., paid preparer or software), whether the return is filed electronically or on paper, or how the individual claimed a refund (e.g., check, direct deposit, or debit card). They added that the form in which a refund is claimed would be particularly hard to track using the current processes. Officials noted that they can identify the Internet protocol address of computers used to electronically file returns, which is helpful in detecting potential identity theft. While much remains unknown about identity theft, IRS has taken steps to collect program data on its identity theft detection and resolution efforts. IRS developed the internal Refund Fraud and Identity Theft Global Report (Global Report) in July 2012 to consolidate and track existing information about identity theft incidents from multiple sources within IRS. IRS officials said that the information in the report is not new, but that they saw the need for consistency in identity theft-related information drawn from several data sources. According to PGLD officials, the report is used to provide IRS senior management and the Identity Theft Advisory Council with identity theft metrics and to provide a standard source of information for responding to data requests from external entities. Officials also stated that because the Global Report is new, they are working to improve its quality. 
In a selective review of the Global Report, we found that it had many of the attributes we have previously found to be useful for program monitoring. For example, the report covers key program activities and generally provides names, definitions, and data sources. However, we also found some areas where additional information or clarification of information currently in the report could make it more useful, as explained in table 3. The Global Report is a useful step towards providing IRS management and other entities with up-to-date, consistent information about identity theft-based refund fraud and IRS efforts to address it. However, it could be improved with the inclusion of additional information about data limitations, definitions, data sources, and the frequency of data updates in some areas. With such additional information, IRS management or other entities that use the report would have a clearer picture of not only what is known about identity theft-based refund fraud, but the strengths and limitations of the available information. The quality of the report will also be enhanced by the institution of process controls to help ensure consistency in how the data in the report are compiled, verified, and validated. Identity theft-related tax fraud is a terrible problem for the victims and a growing problem for tax administration. For this reason, legislators, government officials, and the public want to know about IRS efforts to address identity theft. The nature of identity theft-related tax fraud means that it will be very difficult, if not impossible, to develop a complete picture of the extent and nature of the problem, which in turn makes it difficult to assess IRS’s progress in combating it. While not a direct attack on the problem itself, IRS’s new Global Report could be a useful management tool. 
It recognizes that IRS is devoting significant resources to the identity theft problem and that consolidated, more consistent program information could assist in management oversight and decision making. In reading the report, we identified some improvements that could help users better understand the information provided. To improve the identity theft information available to IRS management and Congress, we recommend that the Acting Commissioner of Internal Revenue update the Refund Fraud and Identity Theft Global Report to: provide definitions, data sources, and the frequency of data updates for data elements where such information is missing; document procedures used to compile and validate data; and describe limitations of the data presented. IRS officials provided oral comments in response to our draft findings, conclusions, and recommendations. The Director of PGLD stated that she agreed with our recommendations and plans to implement them to improve the information provided in the Global Report. The Director of CI, Refund Crimes, said that the discussion of CI’s investigation of identity theft could be interpreted to suggest that CI is expected to work every case of identity theft. We revised language in the report to emphasize that, like other forms of fraud, CI focuses its identity theft-related refund fraud investigations on the most serious cases. Chairman Platts, Ranking Member Towns, and Members of the Subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For further information on this testimony, please contact James R. White at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
In addition to the individual named above, David Lewis, Assistant Director; Shannon Finnegan, Analyst-in-Charge; Michele Fejfar; Sarah McGrath; Donna Miller; Amy Radovich; and Sabrina Streagle made key contributions to this report. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Identity theft is a growing and evolving problem that imposes a financial and emotional toll on its victims. As of September 30, 2012, IRS had identified almost 642,000 incidents of identity theft that impacted tax administration in 2012 alone, a large increase over prior years. A taxpayer may have his or her tax refund delayed if an identity thief files a fraudulent tax return seeking a refund using a legitimate taxpayer's identity information. GAO was asked to describe identity theft issues at IRS and limits to what is known about the extent of identity theft. GAO updated its analysis on identity theft with current data on identity theft cases and interviewed IRS officials. GAO also reviewed past GAO reports to identify key attributes of successful performance measures and compared information provided by the Global Report against those attributes. Understanding the extent and nature of identity theft-related refund fraud is important to crafting a response to it, but Internal Revenue Service (IRS) managers recognize that they do not have a complete picture. Program officials said that one of the challenges they face in combating this type of fraud is its changing nature and how it is concealed. While perfect knowledge about cases and who is committing the crime will never be attained, the better IRS understands the problem, the better it can respond and the better Congress can oversee IRS's efforts. IRS officials described several areas where the extent and nature of identity theft is unknown. Total number and cost of fraudulent returns. IRS does not know the full extent of the occurrence of identity theft. Officials said that they count the refund fraud cases that IRS identifies but that they do not estimate the number of identity theft cases that go undetected. Identity of the thieves. Unless IRS pursues a criminal investigation, IRS generally does not know the real identity of the thieves. Whether a fraudulent return is an individual attempt or part of a broader scheme. 
Identifying new schemes or significant cases, such as one thief using numerous taxpayer identities, depends on analysts noticing patterns or other indications that a few cases may be part of a larger scheme. As a result, some schemes or cases involving multiple taxpayers may go undetected. Characteristics of known identity theft returns. IRS officials told us that the agency does not systematically track characteristics of known identity theft returns, including the type of return preparation (e.g., paid preparer or software), whether the return is filed electronically or on paper, or how the individual claimed a refund (e.g., check, direct deposit, or debit card). While much remains unknown about identity theft, IRS has taken steps to organize what it knows in a newly developed Refund Fraud and Identity Theft Global Report (Global Report). The Global Report consolidates and tracks information about identity theft incidents and IRS detection and resolution efforts from multiple sources within IRS. The report provides information to IRS senior management and a standard source of information for responding to data requests from external entities. GAO's selected review of the Global Report against key attributes of successful performance measures found that it had many of the attributes useful for program monitoring, but also had some areas where additional information or clarification would make the report more helpful. Updating the Global Report to provide information on definitions, data sources, and limitations, such as the unknown number of undetected fraudulent returns, could help ensure users have a more complete picture of the data and its strengths and limitations. The quality of the report will also be enhanced by the institution of process controls to help ensure consistency in how the data in the report are compiled, verified, and validated. 
To improve information available to IRS management and Congress, GAO recommends that IRS update the Global Report to provide definitions and data sources, where such information is missing; document procedures used to compile and validate the data; and describe limitations of the data presented. IRS officials agreed with our recommendations. Based on their comment, we revised language in the report to clarify that, like other forms of fraud, IRS conducts criminal investigations only in the most serious identity theft-related refund fraud cases.
The Postal Service and its predecessors have delivered mail to and from other countries since the 1840s. International mail to and from the United States is regulated by both U.S. postal laws and international agreements. The 1970 Act authorizes the Postal Service, with consent of the President, to negotiate and conclude postal treaties or conventions and to establish the rates of postage or other charges on mail matter conveyed between the United States and other countries. On the basis of these provisions, the Postal Service participates in the Universal Postal Union (UPU). Unlike domestic rate changes, the Postal Service’s rate changes for international postal services are not reviewed by the Postal Rate Commission (PRC), and the delivery of outbound international mail is not covered by the Private Express Statutes (PES). Like its foreign counterparts, the Postal Service collects and retains revenues on outbound international mail, and UPU members compensate one another for in-country delivery of foreign-origin mail. The 1970 Act has been interpreted by Postal Service officials as requiring total international mail revenues from both outbound and inbound international services to cover all of the attributable costs associated with international mail. For fiscal year 1994, the Postal Service reported that it received $1.6 billion from international mail services. It handled 1.1 billion pieces of outbound international mail and delivered 727 million pieces of inbound international mail. International mail accounted for about 3 percent of the total postal revenues of $50 billion, about 1 percent of the total postal volume of 177 billion pieces, and contributed 2.5 percent to the total postal overhead costs of $17 billion in fiscal year 1994. Until June 1995, the Postal Service’s international mail policies were developed by various functional offices at postal headquarters. 
For example, the Postal Service’s operations support office was responsible for setting policies and procedures for the Service’s 28 international exchange offices, and its marketing office was responsible for developing international marketing strategies. An international postal relations office was charged with coordinating these interdepartmental efforts within the Postal Service, and it continues to coordinate efforts with (1) international organizations and other countries, (2) private delivery services, and (3) other federal agencies. In June 1995, a new International Business Unit was established that, according to Postal Service officials, brought these and other functions together in a larger unit responsible for international mail policies. To accomplish the three objectives of our review, we (1) analyzed international mail revenue, volume, and cost data; (2) reviewed postal manuals and handbooks on international mail; (3) examined UPU documents and reports and other international agreements; and (4) toured the largest air facility (at John F. Kennedy Airport in New York) and the largest surface facility (the bulk-mail center in Jersey City) that process international mail. We also reviewed relevant provisions of the 1970 Act and various regulations and court decisions regarding the Postal Service’s participation in international mail delivery. In addition, we analyzed the Postal Service’s international marketing data, product and pricing information, and performance statistics. We also reviewed articles published in trade periodicals about the Postal Service’s international competitors and reviewed related documents on international mail prepared by PRC and the Departments of Commerce, Justice, State, and Transportation. Finally, we interviewed Postal Service officials responsible for international mail and representatives of ACCA. 
We conducted our work in Washington, D.C., from June 1994 to September 1995 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Postal Service, the Postal Rate Commission, and the International Committee of the Air Courier Conference of America. Their comments are discussed on pages 27 to 31. A UPU international agreement, the Universal Postal Convention, sets the basic principles and guidelines for the exchange of letter post mail and, to a lesser extent, for express mail. Other UPU agreements and guidelines apply to parcel post and financial services, such as postal money orders. UPU, one of the oldest intergovernmental organizations, was founded in 1874 by postal administrations of 22 nations to create a “single postal territory.” In 1995, 189 countries were UPU members. The “supreme” body of UPU is its “Congress.” Composed of representatives of all member countries, the UPU Congress meets every 5 years to reevaluate and revise the “Acts” of the union: the Constitution, the General Regulations, the Universal Postal Convention, the Postal Parcels Agreement, the Money Orders Agreement, the Giro Agreement, the Cash-on-Delivery Agreement, the Final Protocols of the Conventions and the Agreements, and the Detailed Regulations of the Convention and the Agreements. Countries that signed the Universal Postal Constitution agree to accept mail from other countries and to deliver the international mail to its final destination. Member countries also are obligated to move each other’s mail through their territories and to exchange international mail where direct transportation or political relations do not exist. For example, although the U.S. government does not have diplomatic relations with the Cuban government, an American resident can send a letter to a Cuban resident. Letter mail to Cuba transits through Canada and Mexico. 
The Universal Postal Convention defines (1) general guidelines on international postal service and (2) regulations on the operations of letter post mail. These include the rates (called “terminal dues”) that countries pay each other as compensation for processing and delivering inbound mail, the methods of calculating and collecting terminal dues, the maximum and minimum weights, the size limits of letter post mail, and the conditions of acceptance. (See app. I for further information on terminal dues.) The Postal Service and other UPU members may also enter into (1) bilateral agreements to exchange express mail and (2) multilateral agreements to exchange all categories of international mail under conditions more favorable than could be negotiated at a global level. To provide worldwide mail service, the Postal Service uses the domestic mail system as an integral part of the UPU international distribution network. Within the United States, inbound and outbound international mail is processed through the Postal Service’s national transportation, sorting, and delivery network. Under Postal Service procedures, outbound international mail is to be collected, separated from domestic mail, and transported to one of the Postal Service’s 28 international exchange offices. Outbound international airmail is to be forwarded to the assigned exchange office regardless of overseas destination. Outbound international surface mail, on the other hand, is to be transported to one of the Postal Service’s three international gateway offices (the New Jersey International Bulk Mail Center and the Miami and Oakland Processing and Distribution Centers) depending upon the country or overseas region. From the international exchange office, the mail is to be transported to the receiving foreign postal administration for processing and delivery. Inbound international mail enters the United States through one or more international exchange offices. 
Each inbound shipment is to be verified and recorded by postal officials and then submitted to U.S. Customs Service officers for inspection. The mail is then “commingled” with domestic mail and transported to mail processing plants where it is sorted and delivered through the network of post offices. Through its agreements with other countries, the Postal Service provides an array of international postal services. Until the mid-1970s, the Postal Service provided two basic international postal services: airmail and surface mail. Although these two services remain the principal sources of the Postal Service’s international mail revenue, the Service has over the years added several new postal services. Around 1980, it added Express Mail International Service (EMS) and International Surface Airlift (ISAL), which it continues to offer today. In 1986, the Postal Service added International Priority Airmail (IPA). Revenues from these three services totaled $184.1 million in fiscal year 1994, or about 14 percent of the $1.3 billion in international mail revenues from delivery services. Airmail accounted for 80 percent of the outbound international mail pieces and, at $883.9 million, 69 percent of the total revenues. International surface mail accounted for about 20 percent of the total international mail pieces and, at $221.9 million, about 17 percent of the international mail revenues from delivery services. (See fig. 1.) As with domestic mail, the Postal Service offers various optional services for international mail, such as insurance coverage, registered mail, and return receipt. (More detailed descriptions of these services are presented in app. II.) The Postal Service publishes its international rates by service, weight, and destination country or group of countries. For most services, the rates for Canada and Mexico are established separately from and are lower than those for other countries. 
For example, the rates for a 1-ounce letter to Canada and Mexico are 52 cents and 46 cents, respectively; the 1-ounce letter mail rate is $1.00 for all other foreign countries. The Postal Service has also been involved in various UPU-sponsored efforts to improve worldwide mail service. Examples of these activities include (1) providing Postal Service managers to assist postal administrations of developing countries that have requested technical assistance and (2) participating in studies to examine delivery service problems. According to Service officials, the Postal Service is losing some business to private carriers and foreign postal administrations. The Postal Service’s market research data show that total U.S. outbound international mail revenues for all carriers, including the Service, grew 12 percent annually from 1987 to 1992. In contrast, the Postal Service’s revenue annual growth rate during the same period was only 6 percent. In 1987, the Postal Service had 41 percent of the total U.S. international mail revenues; by 1992, it was down to 32 percent. Postal Service officials said that they expect that the Service’s share of international mail market revenues will drop as low as 30 percent by the end of this year unless steps are taken to reverse the trend. (See fig. 2.) The Postal Service officially had monopoly protection under PES for all outbound nonexpedited international letter mail until 1986. However, when the Postal Service issued regulations suspending the restrictions for “extremely urgent letters” in 1979, private U.S. carriers used that suspension to expand a practice called “international remailing.” Through remailing, U.S. mailers bypassed the Postal Service, using private carriers to deposit U.S. 
outbound nonexpedited international letter mail directly into foreign postal systems either (1) for return to the United States and delivery by the Postal Service at rates below domestic postage (known as “ABA remail”); (2) for delivery within the destination country at rates below U.S. international postage (known as “ABB remail”); or (3) for distribution and delivery to other countries, also at rates below U.S. international postage (known as “ABC remail”). In response, the Postal Service, in 1985, proposed to modify its 1979 regulations to clarify that the suspension did not allow the practice of ABA or ABC international remailing. U.S. mailers’ comments were overwhelmingly negative toward the proposal and positive toward continuing to allow ABB and ABC remail. As a result, the Postal Service withdrew its proposal and in 1986 issued regulations formally suspending the operation of PES for international mail destined for delivery in other countries. (See app. I for more information on international remailing.) In 1994, EMS accounted for 1/2 of 1 percent of the Postal Service’s total international pieces and 8 percent of its total revenues from international postal services. In terms of market share, in 1992, the latest year for which the Service compiled market share data, the Postal Service handled 4.3 million EMS mail pieces. This volume accounted for $81.7 million, or 4 percent, of the Postal Service’s estimate of $2 billion total U.S. international express services revenues for all carriers. Express services of all carriers, including the Postal Service, accounted for about 57 percent of the total $3.5 billion international mail market in 1992. The leaders in the international express market are the Federal Express Corporation and DHL Airways, Inc., which together accounted for over one-half of the total U.S. outbound international revenues. 
Other competitors include United Parcel Service (UPS), Emery Worldwide, Airborne Express Company, KLM Royal Dutch Airlines, and TNT Express Worldwide. (See fig. 3.) Service factors contributed to the Postal Service’s inability to gain a larger market share. According to the Postal Service, it did not provide certain value-added services offered by its competitors, such as automated tracking and tracing. Furthermore, the Postal Service had not matched the competitors’ reliability and speed of service, partly because it does not have end-to-end control of its delivery systems. According to the Postal Service, it is required to use scheduled U.S. commercial air flights to transport its mail overseas. A combination of treaty arrangements and national postal monopolies compels the Postal Service to rely, for the most part, on foreign postal administrations for in-country mail delivery. In contrast, some private carriers, with their own aircraft and ground transportation, have better control over schedules. For example, according to Federal Express officials, Federal Express has experienced growth in its international express market because it adjusted its flight schedules for faster express service. The Postal Service’s core international business is the nonexpedited delivery of letter post mail (letters, small packets, printed matter, and publishers’ periodicals). In 1994, this mail accounted for 99 percent of the Postal Service’s total international mail piece volume and 77 percent of its total international revenues from postal services. In 1992, the latest year for which the Postal Service developed market share data, the Service reported it handled over 1.2 billion pieces of letter post mail. This volume accounted for $930 million, or 75 percent, of the Postal Service’s estimate of $1.3 billion total U.S. international letter post mail service revenues for all carriers. 
Competitors in this market include remailers, such as KLM and TNT, and foreign postal administrations, such as the British, Dutch, Danish, and Canadian post offices. The British post office, through its subsidiary Royal Mail, and the Dutch post office, through an international mail joint venture called Interpost, are both, according to the Postmaster General, “aggressively” seeking business in the United States. Both postal administrations offer price discounts for nonexpedited letter post services to some high-volume U.S. customers. The Danish post office maintains offices in the United States and collects U.S. customers’ international mail for shipment through its international network. Canada Post has also contracted with major U.S. mailers, such as L.L. Bean, Inc., to enter northbound letter post mail directly into Canada. Although the Postal Service has maintained a large share of the letter post market over the past decade, it has lost some market share, reportedly because of unreliable delivery service, lack of value-added services, and substantial rate increases. In 1985, the New Postal Policy Council, an association of major users of the Postal Service, complained about the “erratic and unreliable” nature of the international service provided by the Service, which the Council said was “inferior to that provided by private companies.” These mailers were among those who successfully lobbied the Postal Service to suspend PES protection, as previously mentioned, for outbound nonexpedited international letter mail. Postal Service officials believed that the Service lost market share because it did not provide the value-added services that its competitors offered, such as warehousing, inventory, and customs clearance. The Postal Service required customers to sort and bag their bulk mailings by country of destination and to transport the mailings to an international airport to qualify for the best prices. 
In contrast, private companies such as TNT were willing to pick up unstamped business mail at the customer’s location, do some sorting, and transport the mail to the appropriate place overseas. Using its overseas facilities, TNT would then sort, stamp, and give the mail to the local postal authority for delivery to ultimate destinations. Postal Service officials also attributed the market share loss to the need to price according to “inequitable” terminal dues systems. Postal Service officials said that its international treaties necessitated substantial international rate increases in the 1980s that hurt its competitive position. For example, the Postal Service increased its international postage rates in 1981 an average of 39 percent for all of its services. Postal Service officials said this increase was necessary largely because the UPU Congress increased the terminal dues by 267 percent during its 1979 meeting. Furthermore, under the 1970 Act, the Postal Service must set its rates for all classes of services to cover all direct and indirect costs attributable to each class of mail plus that portion of other Service costs reasonably assignable to each class. However, it does not unilaterally control other postal administrations’ payments to the Postal Service for delivering foreign mail to U.S. destinations. According to the Postal Service, it is not fully reimbursed for delivering inbound international letter mail because these terminal dues are set below the Service’s unit cost for inbound delivery. To cover the shortfall between actual inbound mail delivery costs and the related terminal dues reimbursements, the Postal Service charged its outbound customers rates sufficient to cover the processing costs for both inbound and outbound mail. In response to the greater competition, the Postal Service expanded its ISAL service to more countries and introduced its International Priority Airmail service in 1986. 
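The cross-subsidy described above, in which outbound rates are set high enough to cover both outbound processing costs and the terminal-dues shortfall on inbound delivery, can be sketched in a few lines. This is an illustrative model only; the function name and all figures are hypothetical and are not drawn from Postal Service cost data.

```python
# Illustrative model of the rate-setting arithmetic described above.
# Outbound rates must recover outbound processing costs plus the inbound
# shortfall (inbound unit delivery cost minus terminal dues received per
# piece). All names and numbers are hypothetical.

def breakeven_outbound_rate(outbound_cost, outbound_pieces,
                            inbound_unit_cost, terminal_dues_per_piece,
                            inbound_pieces):
    """Per-piece outbound rate that covers outbound costs plus the
    terminal-dues shortfall on inbound delivery."""
    shortfall_per_piece = max(0.0, inbound_unit_cost - terminal_dues_per_piece)
    total_required = outbound_cost + shortfall_per_piece * inbound_pieces
    return total_required / outbound_pieces

# Hypothetical example: $100 of outbound cost spread over 200 outbound
# pieces, with a 5-cent shortfall on each of 100 inbound pieces.
rate = breakeven_outbound_rate(100.0, 200, 0.30, 0.25, 100)
```

Under these invented numbers, the outbound rate must absorb the full inbound shortfall, which is why terminal dues set below delivery cost translate directly into higher outbound postage.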
It also began new efforts with foreign postal administrations to improve delivery times. Beginning in 1988, the Postal Service changed its pricing policy to reflect the new market environment. According to the Postal Service, these changes reversed a downward volume trend for the Postal Service’s international air letter mail. For example, through 1985, the basic international airmail letter rate had been set at twice the basic domestic letter rate. In 1988, however, when the domestic rate increased to 25 cents, the international airmail letter rate was set at 45 cents instead of 50 cents. Although the decline in the international outbound volume began to reverse itself, the Postal Service continued to lose overall market share. The Postal Service believes that it needs to be a competitor in the international mail market for two reasons. First, the revenue generated from its international mail services helps to cover institutional costs, thereby helping to restrain the growth in postal rates overall. Second, the Postal Service’s presence in the market provides its customers with a broader range of choices when selecting among the providers of international mail services. Toward that end, in 1995, the Postal Service announced plans to compete “aggressively” for international mail delivery. A senior Postal Service official said that the Service expects to be a “leading provider of efficient, high value, reliable and secure, full-service international communication and package delivery services” to “meet the needs of U.S. citizens and businesses on a worldwide basis.” The Postal Service plans to (1) introduce new services and value-added services, (2) improve the service quality of its letter mail service, and (3) pursue a market-sensitive pricing strategy that includes flexible volume discounts and customized mailing solutions. The Postal Service sees new service offerings as a critical part of its strategy to become a “global leader” in the international mail market. 
One new service is the WORLDPOST Priority Letter (WPL) service, introduced in March 1995. WPL is an expedited airmail letter service being pilot-tested for deliveries from 7 U.S. cities to Canada and 13 Western European and Pacific Rim countries. WPL is designed to be faster than the Postal Service’s regular airmail and cheaper than its international express mail. A Postal Service official said that WPL is geared to mail-order companies, colleges, travel agencies, and manufacturers “with a need to move correspondence reliably and at low cost.” Another new service is the International Package Consignment Service (IPCS). IPCS is a bulk-mailing service, with discounts that increase with each larger volume increment, targeted to large U.S. mail-order companies sending merchandise to other countries. The Postal Service began providing IPCS for delivery to Japan in December 1994. To qualify for this service, a mailer must agree to send at least 25,000 packages over a 12-month period. IPCS base rates are lower than single-piece international rates for both airmail and express services. For example, the Postal Service’s individual parcel post air rate to Japan for a 5-pound package was $37.44 in July 1995. At that time, however, the base rate was $20.14 for a 5-pound package to Japan using standard air service under IPCS. According to Postal Service officials, IPCS base rates are lower than single-piece international rates because of fundamental cost differences in the acceptance, handling, and transportation of IPCS parcels. As shown in table 1, the base rates may be reduced through up to four additive discounts (for a maximum discount of almost 21 percent) depending on how many cumulative packages the customer mails to Japan through IPCS during a 12-month period. Postal Service officials believe that IPCS discounts are consistent with discounts offered for domestic bulk “drop-shipped” parcels. 
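The additive discount structure described above can be shown with a short sketch. The tier percentages below are invented stand-ins (the actual table 1 values are not reproduced here); they are chosen only to illustrate how four additive, non-compounded discounts can total about 21 percent off a base rate such as the $20.14 charge for a 5-pound package to Japan.

```python
# Sketch of additive (not compounded) volume discounts, as with IPCS.
# The discount tiers used below are hypothetical, not the actual
# table 1 values.

def discounted_rate(base_rate, earned_discounts):
    """Sum the earned discounts, then apply the total to the base rate."""
    total_discount = sum(earned_discounts)
    return round(base_rate * (1 - total_discount), 2)

# Hypothetical mailer qualifying for all four tiers (5% + 5% + 5% + 6%),
# about a 21 percent total discount on the $20.14 base rate.
price = discounted_rate(20.14, [0.05, 0.05, 0.05, 0.06])
```

Because the discounts are additive rather than compounded, the earned tiers simply sum before being applied, so the maximum discount equals the sum of the tiers rather than falling slightly below it.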
These domestic discounts have been available to mailers for many years and reflect the Postal Service’s lower cost when larger volume mailings are handled as a single shipment and/or the mailer absorbs part of the transportation costs (drop-ships). The Postal Service also is pilot-testing a value-added service to complement IPCS. The service, called Customs Pre-Advisory Service (CPAS), allows customers in Japan ordering from U.S. mail-order companies to prepay their customs duties when they order U.S. merchandise using their credit cards. In January 1994, the Postal Service, in cooperation with postal administrations in 20 other countries, implemented an external system to measure on-time letter mail delivery between the United States and major industrialized countries. The system, administered by Price Waterhouse, measures letter mail delivery times from deposit to delivery. The Postal Service has not publicly released any of the results. The Postal Service said it is also working with several foreign postal administrations to improve mail service. For example, the Service Upgrading Task Force, created in 1994 and consisting of representatives from the United States, Canada, and eight major European countries, is tasked with identifying problems and implementing solutions to improve delivery times between countries represented on the task force. In 1993, the Postal Service implemented a customized program for international mail service known as International Customized Mail (ICM). ICM is targeted at mailers who can tender large volume mailings. Under ICM, the Postal Service would negotiate individual rates with certain large volume mailers—those capable of offering at least 1 million pounds of international mail or paying at least $2 million in international postage and of providing all of their ICM mail to the Service at one location. 
According to Postal Service officials, ICM service is not a volume discount mechanism derived from any existing single piece rates. Rather, ICM is a customized service wherein the price is based on the specific costs of providing a combination of specific services requested by the customer. Postal Service officials added that this pricing approach is commonly used by private sector service providers and is recognized as a valid method of responding to the needs of large volume commercial mailers. Although a federal district court ruled in 1994 that the newly established ICM service unreasonably discriminated among mail users and could not be implemented, in September 1995, the U.S. Court of Appeals for the Third Circuit reversed the district court’s ruling and upheld the authority of the Postal Service to implement ICM service with its proposed volume discounts. Also, in an effort to increase its EMS business, the Postal Service, in July 1995, began offering “country-specific” rates for Express Mail destined for Canada, Mexico, the United Kingdom, China, and Japan. Rates to these countries, available to all international mailers, are lower than the rates to all other countries. According to Postal Service officials, the Service is passing along the cost savings of serving these high volume destinations to customers through lower rates for these countries. The Postal Service’s efforts to compete for increased shares of the international mail markets have raised issues for both the Service and its competitors. These issues evolve from the 1970 Act, various regulations, court orders that interpret and implement that act, and related federal laws and regulations. Specifically, the Postal Service is attempting to overcome what it considers to be statutory and regulatory barriers that limit its ability to compete for international mail business. 
However, competitors contend that the Postal Service has a competitive advantage from its unique role in setting international mail rates with limited independent review and serving as a government agent in conducting negotiations and making agreements with other postal administrations. According to Postal Service and Department of Transportation officials, the Postal Service is generally required to use U.S. flag carriers to deliver its international mail overseas and to pay them in accordance with rates set by the Department of Transportation. In contrast, the Postal Service can set the rates it pays air carriers for domestic mail delivery. Postal Service officials said that restrictions on international air routes limit the Service’s ability to minimize transportation costs and impair its ability to provide fast international service. In June 1995, the Department of Transportation drafted a bill that would give the Postal Service authority to negotiate directly with U.S. airlines for the carriage of international mail. The Postal Service, responding to a request by the Office of Management and Budget (OMB) for comments, said that it opposed the draft bill because the legislation did not address the U.S. flag carrier preference. According to Postal Service officials, the transfer of rate-making authority, without elimination of the U.S. flag carrier preference, could complicate the transportation contracting process, foster a distortion of contract prices, and create an “informal cartel” of dominant U.S. carriers. In July 1995, OMB recirculated the draft bill, along with the Postal Service’s remarks, for general comments but subsequently shelved the draft legislation to allow Postal Service and Department of Transportation officials to negotiate a mutually agreeable compromise. As of January 1996, the parties were still in discussions regarding the details of the draft bill. 
While the Postal Service maintains that legal and regulatory constraints hinder its ability to compete effectively in the international mail market, private carriers argue that the Service enjoys an unfair competitive advantage because of its quasi-governmental status. The Postal Service’s competitors have alleged for several years that rates on some of the Service’s international postal services are so low that they do not cover service costs. The 1970 Act requires the Postal Service to recover its direct and indirect costs for each class of mail service, and competitors argue that the Service’s international pricing practices are inconsistent with the act. To ensure that the Postal Service does not engage in illegal pricing practices, ACCA has recommended that Congress consider giving the Postal Rate Commission (PRC) authority to recommend the Service’s international postage rates. The Postal Service, PRC, and the courts have all agreed that section 407(a) of the 1970 Act permits the Service to set international rates without approval from PRC. PRC maintains that “postal rates and services between the United States and foreign nations have a foreign affairs dimension, which requires presidential review; they are thus not purely regulatory questions.” Although PRC does not have jurisdiction over international rates, the Postal Service is required to submit forecasts of international mail revenues and costs as part of its formal request to PRC for domestic rate changes to demonstrate that each class of mail bears its direct and indirect costs attributable to that class of mail. Before 1994, the Postal Service provided detailed information on how it projected these estimates and responded to questions about them in public hearings. However, in requesting rate changes in 1994 (R94-1), the Postal Service ceased this practice and provided only aggregate volume, revenue, and cost figures. 
Prior to the hearings, Federal Express filed interrogatories seeking information on the costs and revenues associated with international mail. In its response to the request, the Postal Service provided some information but said that the supporting, detailed information requested was irrelevant and outside the scope of the proceeding, was extremely burdensome to produce, and contained certain confidential and commercially sensitive information. The Postal Service further argued that the then-current detailed information provided on domestic mail estimates clearly showed any international costs that were incorrectly attributed to domestic mail. In August 1994, PRC ruled that supporting data were needed for the Postal Service’s financial forecast of international mail to verify that (1) the forecasts of international mail costs and revenues were accurate and reliable and (2) the attributable costs of international and domestic mail were correctly separated. PRC rejected the Postal Service’s argument that international mail estimates could be verified by reviewing domestic mail data because, among other reasons, international mail incurs costs that are not shared by domestic mail. Furthermore, PRC concluded that the Postal Service’s allegations of commercial harm did not form a legally adequate basis for applying the trade secret privilege. Although the Postal Service agreed to provide some of the information, it would do so only if it were allowed to decide what international mail data would be considered to be confidential and thus protected from public disclosure. PRC granted the Postal Service’s request to protect the requested data from public disclosure but added a provision permitting Federal Express to challenge the Service’s designations of confidential information. The Postal Service objected to this provision and provided no supporting details to its financial forecasts of international mail. 
Federal Express believes that the Postal Service’s refusal to obey the PRC’s ruling is illegal. The Postal Service had earlier declined to provide detailed information on international mail revenues, costs, and volumes. In 1992, the former Chairman, Subcommittee on Federal Services, Post Office and Civil Service, Committee on Governmental Affairs, and the former Ranking Minority Member asked PRC to conduct a comprehensive review of international rates and related costs to determine if the Postal Service covered all appropriate costs under international rates. Members’ concerns were prompted in part by a 1992 ACCA-sponsored study in which the Postal Service’s rates for some international services were asserted to be “significantly underpriced.” Postal Service officials argued that the 1992 ACCA-sponsored study was based on flawed assumptions, limited and inaccurate data, and a misunderstanding of postal rate-making procedures. The officials added that although some Postal Service international services have lower markups than some domestic services, international services as a whole cover their direct costs and contribute to overhead costs, as required by law. Accordingly, the Postal Service declined to provide the information requested by PRC, again citing its commercial sensitivity. PRC revised its study guidelines, with the agreement of the former Chairman and the former Ranking Minority Member, to ensure that no data would be made public. The Postal Service still did not provide the data, again stating that the data were commercially sensitive. For fiscal year 1994, the Postal Service reported that international mail revenue was $1.6 billion, which covered its direct (or attributable) cost and contributed $436 million to overhead costs. On the basis of our review of the Postal Service’s cost and revenue data, we determined that international mail as a whole covered its attributable costs and contributed to overhead costs every year from 1990 to 1994. 
However, international surface mail did not recover its attributable costs in 1991 and 1992, and international surface letters and cards as well as surface parcel post did not recover their attributable costs in 1990. We also noted that both international mail’s contribution to overhead costs and cost coverage increased every year from 1992 to 1994. According to the Postmaster General, international mail services make a contribution to overhead costs comparable with or higher than other postal services that are provided in competitive environments. International mail total “cost coverage” (the markup over attributable costs) was 141 percent in fiscal year 1994. In comparison, domestic express mail service contributed $148 million to overhead costs, and it had a cost coverage of 128 percent in fiscal year 1994. Fourth-class (domestic) parcel service did not cover its attributable costs in fiscal year 1994 because the parcel volume was less than anticipated that year. The revenue was $98 million less than attributable costs. The Postal Service began negotiating cost-based terminal dues agreements in the late-1980s with countries with which it exchanges large volumes of mail. According to the Postal Service, the purpose of these agreements is to lower terminal dues losses by making the terminal dues system more consistent with operating and delivery costs. The Postal Service currently participates in two such agreements: (1) a 14-country multilateral agreement with members of the Conference of European Postal and Telecommunications Administrations (CEPT) and (2) a bilateral agreement with Canada. According to an ACCA representative, the CEPT agreement’s main purpose is to discourage remail. The ACCA representative argued that charging a higher uniform terminal dues rate means nothing if the two countries exchange the same amount of mail because “everything washes out.” (See app. I for further information on cost-based terminal dues agreements.) 
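The cost-coverage arithmetic used in the figures above can be sketched in a few lines. This is a hypothetical illustration, not the Postal Service's own methodology; the inputs are the rounded FY1994 numbers reported above ($1.6 billion revenue, $436 million contribution), so the computed coverage differs slightly from the 141 percent calculated from unrounded data:

```python
def cost_coverage(revenue, attributable_cost):
    """Markup of revenue over attributable (direct and indirect) cost, in percent."""
    return 100.0 * revenue / attributable_cost

def overhead_contribution(revenue, attributable_cost):
    """Dollars left over after attributable costs, available to cover overhead."""
    return revenue - attributable_cost

# Rounded FY1994 international mail figures from the report:
revenue = 1.6e9        # $1.6 billion in revenue
contribution = 436e6   # $436 million contributed to overhead
attributable = revenue - contribution

print(f"cost coverage: {cost_coverage(revenue, attributable):.0f}%")
# With these rounded inputs the result is about 137%; the 141% reported
# above reflects unrounded revenue and cost data.
```

The same two functions describe the express mail comparison: a 128 percent coverage means revenue exceeded attributable costs by 28 percent.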
Two federal agencies and the European Commission supported ACCA’s assertion regarding the CEPT agreement. In 1988, the Antitrust Division of the Department of Justice and the International Trade Administration of the Department of Commerce responded to a request by OMB for comments on the proposed CEPT agreement. Both expressed concerns about the agreement. Justice warned that if the CEPT terminal dues are set above costs, they may drive remailers out of the international mail market. According to Justice’s documents, “such a development would injure consumers by eliminating the competitive market for international mail and could wipe out the gains we have achieved in the past few years.” Similarly, in a letter to OMB, Commerce officials stated that the agreement would strengthen the competitive position of the national postal administrations at the expense of the private sector. The International Express Carriers’ Conference (IECC), ACCA’s international affiliate, filed a complaint before the European Commission arguing that higher terminal dues through the CEPT agreement were meant to curtail remail. Although the European Commission found that the CEPT agreement was anticompetitive and therefore inconsistent with the Treaty of Rome, it dismissed IECC’s complaint because it believed that European postal administrations corrected the situation by negotiating a new terminal dues agreement. IECC had appealed the decision to the European Court of Justice at the time of our review. Postal Service officials said that the Service does not consider its agreements with other postal administrations to be postal treaties or conventions that require presidential consent under the 1970 Act. Accordingly, the Postal Service implemented the CEPT agreement notwithstanding the Commerce and Justice Departments’ objections. Both the CEPT and Canadian agreements will expire in 1996. 
The Postal Service and postal administrations from Canada and 19 European countries were working on a successor to the current agreements at the time of our review. According to ACCA, the Postal Service has benefited from (1) its official role as a national postal administration and (2) its exclusive access to foreign postal administrations through UPU, by receiving special treatment from customs services under both United States and foreign laws. ACCA has also maintained that the Postal Service has abused its official role at the UPU Congress, at the expense of private carriers. Consequently, ACCA challenged the Postal Service’s representation at the UPU Congress saying that the Service has represented the United States without lawful authority of the President. ACCA has long contested the Postal Service’s role as the sole U.S. negotiator of the Universal Postal Convention, arguing that the Service cannot be both regulator and competitor of international courier companies. An ACCA representative said that the Postal Service has agreed, in the name of the United States, to various anticompetitive provisions at UPU Congresses. In response to a Department of State’s request for comments on the 1989 UPU Convention, OMB, Justice, and Commerce expressed concerns about the Convention. OMB considered the following two provisions of the 1989 Universal Postal Convention to be anticompetitive: (1) a provision (better known as Article 25) that permits postal administrations to refuse to handle mail brought into their country by private couriers and (2) a provision that terminal dues rates need not be directly related to costs that, according to testimony given by a DHL Airways official before the House Subcommittee on Postal Service in June 1995, allows national postal administrations to manipulate international rates. Although former President Bush signed the 1989 UPU Convention, he did so with reservations. 
President Bush said that he was concerned about “the Postal Service’s role as sole U.S. negotiator of international postal agreements while at the same time a competitor in the international mail arena” and that “several elements of the UPU Convention could affect competition in the carriage of international mail.” ACCA has also said that the Postal Service represented the United States at the UPU without legally requisite approval of the President. Because of its belief that this representation is illegal and its concerns regarding the Postal Service’s role in UPU, ACCA requested that the President appoint ACCA as a member of the 1994 UPU Congress delegation. In November 1993, the Department of State denied ACCA’s request on the basis of the Postal Service’s advice that it would be inappropriate to include private operators in an intergovernmental body whose basic purpose is to help postal administrations fulfill statutory universal service obligations on an international level. In August 1994, President Clinton officially appointed the Postal Service to represent the United States at the 1994 UPU Congress. ACCA maintains that the President delegated this authority without any procedural safeguards to ensure that the public interests of the United States were represented. Consequently, ACCA contends that this delegation of authority by the President to the Postal Service violates the due process requirements of the Constitution. The Postal Service led the U.S. delegation at the 1994 UPU Congress. Postal Service officials maintain that 39 U.S.C. 407(a) authorized the Postal Service to represent the United States at the UPU. Although President Clinton issued a letter confirming that the Postal Service had his consent to negotiate UPU agreements, he did so only to be responsive to ACCA. 
According to Postal Service officials, President Clinton’s consent had already been given through his representative, the Secretary of State, with whom the Postal Service coordinates its UPU activities. The Postal Service also defends its responsibility as the U.S. representative in the UPU on the basis of its statutory and treaty obligations. According to Postal Service officials, the status of the Postal Service as a federal entity arises from its congressionally mandated universal service obligations and its accountability to Congress for the fulfillment of those obligations—obligations that competitors do not share. Furthermore, the UPU establishes rules, standards and procedures that apply only to member postal administrations. Service officials believe that, because competitors have no duty to provide the services established by the UPU Acts, they have no special claim to participate directly in UPU decisionmaking processes relating to what are essentially postal obligations and services. To meet the competitors’ concern, Postal Service officials said that the UPU has established a private operators-UPU Contact Committee to provide for dialogue at a global level and to help determine areas of common interest where cooperation can be of benefit for both sides. Postal Service officials also said that, despite their limited access to the UPU, private operators do have direct access to foreign postal administrations. They can deposit mail with another postal administration like any other customer, and they can negotiate more favorable access arrangements based on the traffic they generate. The Postal Service is authorized by the 1970 Act to enter into agreements with other countries regarding international mail rates and delivery. On the basis of this authority, the Postal Service over many years has been a part of a universal mail service network established on an international basis. 
Since 1970, however, the Postal Service has assumed an additional role of competitor in a dynamic international mail market. It sees this role as appropriate and necessary to (1) assist in reducing the overall cost of operating the U.S. mail system, since the revenue from international mail helps pay the Service’s institutional costs, and (2) give American citizens and businesses another choice—namely, the Postal Service—among providers of international message and package delivery services. While there may be valid reasons for the Postal Service to compete aggressively with private firms, the 1970 Act does not specify what role the Service and its competitors should play in the international mail market. The competition between the Postal Service and private firms has raised policy issues that could not have been anticipated in 1970. Consequently, the 1970 Act and its legislative history provide little guidance to resolve issues involving (1) the Service’s required use of American flag carriers, (2) the appropriateness of the Service’s rates for international mail services, and (3) the Service’s participation with the Universal Postal Union and with governments and postal administrations of other countries in matters affecting the Service’s commercial interests. These are policy issues that require reexamination of many complex and interrelated provisions of the 1970 Act, which was beyond the scope of our current review. Moreover, issues surrounding the Postal Service’s role in the international mail market are very similar to issues that we have previously reported on regarding the Service’s role in domestic mail markets. Because of this similarity, and the fact that Congress is already considering proposals for reform of the Postal Service, we are making no recommendations regarding the Service’s role in the international mail market. 
We obtained written comments on a draft of this report from the heads of the Postal Service and the Air Courier Conference of America (ACCA). The Postal Rate Commission (PRC) did not provide written comments, but we discussed the draft with the Chairman and other Commission officials on November 20, 1995, and they agreed with our description of the international mail market and the associated issues. They suggested several technical and editorial changes to the draft, which we made where appropriate. In its written comments on the draft, the Postal Service said that international mail is an integral part of its statutory mission. The Postal Service believed that the full range of international services it provides helps businesses in the United States respond to developments in global commerce and that its major customers believe that they are better served by having the Postal Service in the market. The Postal Service said that the increasingly commercial dimension of its services requires reconsideration of the 1970 Act and that it has recently heard views from many parties, including leaders of foreign postal administrations, on the changes needed. The Postal Service said it believes that to meet the needs of the American public and business community, a stronger commercial capability is important domestically but, more crucially, for its international services, for which it suspended the protection afforded by the Private Express Statutes. The Postal Service said that it established the International Business Unit not only to develop new international services but also to generate new revenues to support improvements of the international universal service network. The Postal Service said that it expects efforts in the international mail market will lead to better performance and enhanced service in the domestic market as well. 
The Service responded to the criticism made by its competitors that the Postal Service and the Universal Postal Union (UPU) are “regulators” of international delivery services by saying that Congress determines the scope of the Private Express Statutes in the United States and that the scope of postal monopolies in other countries is determined by respective national governments. According to the Postal Service, the role of UPU is to connect postal administrations with universal service mandates established by national authorities in a standardized global postal delivery network. The Postal Service believes that it is inappropriate for private operators, who have no obligations under UPU agreements, to participate directly in UPU proceedings. The Postal Service added that despite the limited access to UPU, private operators have unlimited direct access to foreign postal administrations and that they would not be able to offer remail services if they did not have such access. Although we did not reach any conclusions on the need for any change in U.S. participation in UPU, we do not agree with the Postal Service’s assertion that Congress, alone, has determined the scope of the U.S. mail monopoly. Rather, the Postal Service has issued several regulations implementing the statutes which (1) define a “letter” for the purpose of administering and enforcing the statutes, (2) exempt various items from the scope of the statutes, and (3) suspend entirely the statutes for certain letters. The Postal Service said that ACCA’s position on terminal dues is without foundation. The Postal Service said that it supports country-specific cost-based terminal dues. For example, the Postal Service said that it helped develop and supported the cost-based terminal dues for bulk mail adopted by the 1994 UPU Congress. 
The Postal Service added that ACCA failed to highlight the connection between terminal dues systems that do not cover the cost of delivery and remail offered at below-cost prices because the Service believes that ACCA members have benefited from postage rates based on below-cost terminal dues. Information we obtained supports the Postal Service’s comment that it has supported cost-based terminal dues. We did not determine whether ACCA members have benefited from postal rates that do not reflect cost-based terminal dues. The Postal Service also rejected ACCA’s allegation that the Service has engaged in unfair pricing practices and that international prices should be subject to PRC review. The Postal Service said that it does not price its services below costs. It added that international mail’s contribution to the Service’s overhead costs saves money for the domestic ratepayers. Our analysis of Postal Service cost and revenue data for fiscal years 1990 to 1994 supports the Service’s comment. International services as a whole covered attributable costs and made a contribution to overhead costs during each of those 5 years. The Postal Service believes that it has provided PRC sufficient cost data to verify that international services are not subsidized by domestic services. The Postal Service said that there is no evidence of cross-subsidization of international mail products by the first-class rate, which the Service said is the second lowest in the industrialized world, or by the international air letter rate, which it said is the lowest in the industrialized world. We did not determine whether PRC had received sufficient data for setting postage rates because this determination was not within the scope of this review. However, PRC believes that it needed more data than the Postal Service provided in the most recent rate case, when new rates became effective in January 1995. The full text of the Postal Service’s comments is included as appendix III. 
ACCA commented in writing that it believes our report was impartial and well considered and represented an important first step in a review of U.S. policy toward the international delivery services sector. However, ACCA took exception to our use of the Postal Service’s market surveys to describe the international mail market. ACCA said that a market survey study, which focuses on the Postal Service’s share of the overall market revenues, is inappropriate for public policy analysis. ACCA said that the surveys tend to disregard matters such as volume of letters carried, quality of service, customer complaints, and profitability of the service. ACCA also believes that the Postal Service has made inappropriate comparisons of Service and Federal Express market shares and misstated its overall share of the international mail market. We agree that a market study based on revenues only does not provide a complete picture of how well the general public is served by the overall international mail market. However, the distribution of, and changes in, market revenue is one measure of how well the Postal Service is performing, as indicated by the choices among competitors that existing and potential customers make. Furthermore, neither the Postal Service nor ACCA provided data to measure the quality of the overall market and its value to the general public. We described the market using what we believe are the best data available and have added qualifiers to the description to clarify the limitations of the data. ACCA said that our report presents an excellent overview of Postal Service and UPU efforts that ACCA believes hinder private competition in the international mail market. 
However, ACCA believes that we understated the magnitude of the anticompetitive effort and provided what it considered to be additional points not noted in the draft report on (1) the Postal Service’s representation of the United States at UPU without approval of the President or through unconstitutional delegation of presidential authority; (2) the Postal Service’s world leadership in efforts to strengthen Article 25 of the Universal Postal Convention by expanding its application to “nonphysical remail”; (3) UPU’s promotion of special customs privileges for post offices, preferential rates for large mailers, and group refusal to deal with private carriers; and (4) the Postal Service’s commercial advantages gained through anticompetitive practices of foreign governments. A review of presidential authority to delegate and the alleged anticompetitive and other practices of UPU and foreign governments were not within the scope of this review. Rather, this report highlights what is shown, by the evidence we collected, as the key competitiveness issues confronting both the Postal Service and its international competitors. ACCA’s comments helped to amplify and emphasize its views on those issues. We did, however, revise appendix I to include ACCA’s allegations of the Postal Service’s role in expanding the scope of Article 25 to “nonphysical remail.” ACCA also said that we incorrectly reported that the Postal Service has suspended the postal monopoly for international mail, arguing that the Postal Service has no legal authority to suspend the postal monopoly. ACCA also said that the U.S. Customs Service discriminates against shipments tendered by private delivery services. A review of the Postal Service’s legal authority to suspend the postal monopoly and the U.S. Customs Service’s policy toward private delivery services were not within the scope of this review. 
Finally, ACCA disagreed with our conclusion that issues in the international market should be resolved within the context of overall postal reforms. ACCA said that when Congress drafted the Postal Reorganization Act of 1970, Congress simply failed to consider international postal policy. ACCA believes the time has come to apply the principles of the 1970 Act to international postal services. In light of both ACCA and Postal Service comments regarding the need to update the 1970 Act, we revised and expanded our conclusions regarding any future changes to the act. The full text of ACCA’s comments is included as appendix IV. We are sending copies of this report to the Senate and House Postal Oversight Appropriation Committees, the Postmaster General, the Postal Service Board of Governors, the Postal Rate Commission, the Air Courier Conference of America, and other interested parties. We will also make copies available to others upon request. Major contributors to this report are listed in appendix V. If you have any questions, please call me at (202) 512-8387.

From 1875 to 1971, the Universal Postal Union (UPU) authorized postal administrations to retain all of the charges assessed by them for outbound international mail. The postal administrations that received the international mail were not compensated for delivering the inbound letter items to their final destinations because the UPU countries operated under the presumption that a letter elicited a letter in reply and that mail traffic was therefore the same in both directions. In 1969, the UPU Congress acknowledged that the “balanced mail exchanges between countries” concept was invalid and developed terminal dues as the mechanism for compensating postal administrations for the costs of delivering inbound international mail. Terminal dues charges are established in the Universal Postal Convention. The charges are revised every 5 years by the UPU Congress. The rates are based on Special Drawing Rights (SDR). 
Postal administrations compensate one another only when an administration sends more mail (based on overall weight) than it receives. Until the 1989 UPU Congress, a uniform fee had always applied to both letter (LC) and printed matter (AO) mail. This fee represented an averaging of worldwide costs to sort and deliver light and heavier weight letter pieces. The fee was the same for a kilogram of mail consisting of one piece destined to one address or for a kilogram of a number of pieces destined to a number of different addresses. Postal Service officials believed that a terminal dues system based only on weight undercompensates countries—like the United States—that receive many LC pieces. According to Postal Service officials, it is more expensive to sort and deliver a kilogram of light-weight mail than of heavy-weight mail because the former contains more pieces. Conversely, a weight-based terminal dues system overcompensates countries that receive heavier items. Since the Postal Service sends many heavy items overseas, it considers the terminal dues rate to be too “high” for outbound mail. Postal Service officials said that they prefer rates that have both a piece and a weight charge. The current terminal dues system is a two-tier system. One rate applies to threshold (developed) countries—those that ship more than 150 metric tons or 330,000 pounds a year. For LC mail, this is 8.115 SDR per kilogram (or $5.79 per pound). For AO mail, this is 2.058 SDR per kilogram (or $1.47 per pound). A lower uniform rate (for both LC and AO mail) applies to nonthreshold (developing) countries—those that traditionally have low outbound mail volume—less than or equal to 330,000 pounds a year. This is 2.940 SDR per kilogram (or $2.10 per pound). In 1994, the Postal Service exchanged mail with 28 threshold countries and 128 nonthreshold countries. 
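The dollar-per-pound figures quoted above can be reproduced from the SDR-per-kilogram rates with a unit conversion. The SDR exchange rate of about $1.573 used below is an assumption inferred from the report's own rate pairs, not a value stated in the report:

```python
KG_PER_LB = 0.45359237   # kilograms per pound
SDR_TO_USD = 1.573       # assumed mid-1990s exchange rate implied by the report's figures

def sdr_per_kg_to_usd_per_lb(rate_sdr_per_kg):
    """Convert a terminal dues rate from SDR per kilogram to dollars per pound."""
    return rate_sdr_per_kg * KG_PER_LB * SDR_TO_USD

for label, rate in [("LC, threshold", 8.115),
                    ("AO, threshold", 2.058),
                    ("LC/AO, nonthreshold", 2.940)]:
    print(f"{label}: ${sdr_per_kg_to_usd_per_lb(rate):.2f} per pound")
# Reproduces the $5.79, $1.47, and $2.10 figures cited above.
```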
A receiving postal administration can request application of the “correction mechanism” to adjust the rates charged to threshold countries upward if it can show that its inbound mail stream has more items per kilogram than the worldwide average. This mechanism allows the Postal Service to collect higher terminal dues from some high-volume countries that send light-weight mail. The two-tier system created opportunities for domestic mailers to avoid paying domestic postage through a practice called “ABA remail.” ABA remail occurs when remailers take mail prepared for delivery in the United States (country A) to nonthreshold countries with low terminal dues (country B) and mail it back to the United States (country A) at a fraction of the domestic rate. This practice can result in a large revenue loss for the Postal Service. For example, when a U.S. resident takes a 1/2-ounce item to a nonthreshold country, such as the Dominican Republic, for mailing back to the United States, the Postal Service can receive as little as 6 cents an item in compensation instead of 32 cents postage. Postal Service officials also believe that the current system encourages foreign mailers to route U.S.-bound mail from countries with which the Postal Service has negotiated cost-based terminal dues arrangements (discussed below) through nonthreshold countries with low terminal dues (known as “ABC remail”). The Postal Service is not fully compensated for the costs of delivering these inbound remailed items. For example, when a foreign mailer in Great Britain (country A) takes a 1/2-ounce item to a nonthreshold country (country B), such as Panama, for mailing to the United States (country C), the Postal Service can receive as little as 7 cents an item instead of the 26 cents per item that it would have received under the cost-based terminal dues system (such as the CEPT agreement discussed below).
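The per-item amounts in these remail examples follow directly from the two rate structures; a minimal sketch (function names are illustrative; rates are taken from this report):

```python
OZ_PER_LB = 16

def weight_only_dues(weight_oz, usd_per_lb):
    """Compensation under a weight-only terminal dues rate
    (e.g., the nonthreshold uniform rate of $2.10 per pound)."""
    return weight_oz / OZ_PER_LB * usd_per_lb

def piece_plus_weight_dues(weight_oz, usd_per_item, usd_per_lb):
    """Compensation under a cost-based piece-plus-weight rate
    (e.g., the CEPT rate of 23 cents per item plus $1.06 per pound)."""
    return usd_per_item + weight_oz / OZ_PER_LB * usd_per_lb

half_oz = 0.5
# A 1/2-ounce item remailed through a nonthreshold country yields
# roughly $0.066 -- the "6 to 7 cents" cited above -- versus 32 cents
# domestic postage or about 26 cents under the CEPT rate.
print(weight_only_dues(half_oz, 2.10))
print(piece_plus_weight_dues(half_oz, 0.23, 1.06))
```

The gap between the weight-only and the piece-plus-weight results is the per-item revenue the Postal Service loses on each remailed letter.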
To reduce the economic incentive for these types of remailing, the Postal Service, in 1994, amended subchapter 790 of the International Mail Manual to broaden the definition of a U.S. resident. This clarification enables the Postal Service to collect domestic postage on mailings “posted in another country not only by firms organized in the United States but also by firms organized under the laws of other countries, which have substantial connection with U.S. businesses.” The amendments also authorized the collection of domestic postage for mailings “on behalf of persons who reside in countries with which the Postal Service has negotiated cost-based terminal dues arrangements, but posted in countries with which the United States has not negotiated cost-based rates.” The 1994 UPU Congress abandoned the two-tier structure and returned to a single rate for LC/AO mail. The basic rate will be 3.427 SDR per kilogram (or $2.45 per pound). The new rate was to become effective in January 1996. The UPU Congress also amended the correction mechanism. A piece rate will be levied for threshold countries if the average number of pieces per kilogram exceeds 21. Rates for threshold countries then would be adjusted to 0.14 SDR per item plus 1 SDR per kilogram (or 23 cents per item plus 72 cents per pound). For the first time, rates can also be adjusted downward—for threshold countries whose average number of items per kilogram is less than 14. The UPU Congress also adopted a bulk-mail option, which allows countries accepting bulk mail to set their own rates. Countries have the option of charging either a weight-based or an item-based rate. Bulk mail is defined as 1,500 items per dispatch or per day or 5,000 items over 2 weeks from the same customer. The basic UPU rate is 0.14 SDR per item plus 1 SDR per kilogram (or 23 cents per item plus 72 cents per pound).
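The amended correction mechanism amounts to a simple threshold rule; a hypothetical sketch that assumes, as the rate figures suggest, that both the upward and the downward adjustment move a threshold country from the basic weight rate to the item-plus-weight rate:

```python
BASE_RATE = 3.427    # 1994 single LC/AO rate, SDR per kilogram
ITEM_RATE = 0.14     # correction piece rate, SDR per item
WEIGHT_RATE = 1.0    # correction weight rate, SDR per kilogram

def terminal_dues_sdr(items, weight_kg):
    """Terminal dues owed by a threshold country under the 1994
    rules (sketch). The correction applies when the mail stream
    averages more than 21 or fewer than 14 items per kilogram;
    otherwise the basic weight-based rate applies."""
    avg_items_per_kg = items / weight_kg
    if avg_items_per_kg > 21 or avg_items_per_kg < 14:
        return ITEM_RATE * items + WEIGHT_RATE * weight_kg
    return BASE_RATE * weight_kg

# A light-weight stream (25 items/kg) pays more than the basic rate,
# a heavy stream (10 items/kg) pays less, and a stream in between
# pays the basic rate.
print(terminal_dues_sdr(25, 1.0))   # ~4.5 SDR vs. 3.427 basic
print(terminal_dues_sdr(10, 1.0))   # ~2.4 SDR
print(terminal_dues_sdr(17, 1.0))   # 3.427 SDR (no correction)
```

Note that the item-plus-weight formula exceeds the basic rate exactly when a stream is lighter than about 17 items per kilogram would imply, which is why the 21-item and 14-item triggers adjust rates upward and downward, respectively.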
Starting in 1996, that rate could be 60 percent of the domestic postage or 27 cents per item plus 89 cents per pound, whichever is lower. Charges could eventually advance to 80 percent of domestic postage or 100 percent above the basic UPU bulk rate, whichever is lower. The UPU Congress also tied compensation for ABC remail to the 80 percent of the domestic postage rate or to the UPU bulk rate of 27 cents per item plus 89 cents per pound, whichever is lower. The Postal Service currently participates in two special cost-based terminal dues agreements: (1) a 14-country multilateral agreement with members of the Conference of European Postal and Telecommunications Administrations (CEPT) and (2) a bilateral agreement with Canada. Under the CEPT agreement, postal administrations from the United States and 13 European countries charge each other 23 cents per piece and $1.06 per pound. Since the Canadian agreement set charges based on the shape of the envelopes, different reimbursement rates apply to letters, flats, packets, and parcels. Both agreements will expire in 1996. In addition to the single-piece products, which make up around 90 percent of the Postal Service’s international revenues, the Postal Service has developed 10 business “products” to address the needs of its business customers. They include an electronic service, two expedited mail services, several discounted bulk mail services, and a business reply mail service. Table II.1 describes the single-piece and business products in detail. [Table II.1, which details service features such as minimum mailing weights, service options (express, standard, or economy air), and negotiated service/price agreements, is not reproduced here.] Anne C. Kornblum, Senior Evaluator
Pursuant to a congressional request, GAO reviewed the Postal Service's (USPS) participation in the international mail market, focusing on: (1) USPS responsibility for delivering and receiving international mail; (2) the competition for international mail delivery and USPS plans to increase its competitiveness; and (3) legal or regulatory issues arising out of the competition in international mail services. GAO found that: (1) USPS and other Universal Postal Union (UPU) members provide a worldwide mail delivery network that reaches even remote locations; (2) USPS is concerned that it is losing international mail market share to private carriers whose services are more dependable, faster, and cheaper than USPS service; (3) some foreign postal services also compete in the United States for certain outbound international bulk business mail; (4) USPS has developed an aggressive strategy to regain market share that includes new services, service improvements, and market-based prices; (5) USPS officials believe that the statutory requirement that it use U.S. flag carriers at Department of Transportation-set rates limits its ability to compete for international mail; (6) competitors believe that USPS unfairly benefits from its federal status, exclusive access to foreign postal administrations, and status as the sole U.S. UPU representative; (7) competitors believe that certain USPS pricing practices violate laws and regulations and that the Postal Rate Commission should set international mail rates just as it does domestic rates; (8) although competitors believe that USPS status as the sole UPU representative is unconstitutional, USPS believes it is justified by statutory and treaty obligations that are not shared by its domestic competitors; and (9) the issues surrounding the USPS role in the international mail market are similar to those surrounding USPS competitiveness in the domestic market.
U.S. international broadcasting efforts support the three key objectives of U.S. public diplomacy, which are to engage, inform, and influence overseas audiences. As a news organization, the BBG must maintain its journalistic independence while also serving U.S. strategic interests as a member of the public diplomacy apparatus. To fulfill this latter role, the BBG seeks input from the Department of State and the larger public diplomacy community in formulating its broadcast plans and making annual decisions on the deletion and addition of language services. The Secretary of State serves as a member of the Board, further strengthening coordination efforts. Within the BBG, VOA, Radio/TV Marti, and WorldNet Television are organized as federal entities, while Radio Free Europe/Radio Liberty and Radio Free Asia operate as independent, nonprofit corporations and are funded by Board grants. Radio Free Europe/Radio Liberty, Radio Free Asia, and Radio/TV Marti function as “surrogate” broadcasters where a local free press does not exist. Congress created the International Broadcasting Bureau (IBB) in 1994 in an effort to streamline and consolidate certain broadcast operations. Figure 1 illustrates the Board’s placement in the U.S. public diplomacy hierarchy and its current organizational structure. Each U.S. broadcast entity is organized around a collection of language services that produce program content. In some instances, both VOA and a surrogate broadcaster run “overlapping” services due to the different missions pursued by VOA and the surrogates. For example, both VOA and Radio Free Europe/Radio Liberty have their own Russian language service. The BBG currently has a collection of 97 language services—with a 55 percent overlap between VOA and the surrogates broadcasting in the same language. Each broadcast entity has its own legislated mandate.
VOA’s mandate is to (1) serve as a consistently reliable and authoritative, accurate, objective, and comprehensive source of news; (2) represent America, not any single segment of American society, and therefore present a balanced and comprehensive projection of significant American thought and institutions; and (3) present the policies of the United States clearly and effectively and also present responsible discussions and opinion on these policies. In contrast, the role of the surrogate broadcasters (Radio Free Europe/Radio Liberty, Radio Free Asia, and Radio/TV Marti) is to temporarily replace the local media of countries where a free and open press does not exist. WorldNet Television and Film Service provides production and distribution support for television broadcasts developed by VOA and the Department of State. The Board’s public diplomacy mandate also includes helping to develop independent media and raising journalistic standards where possible. The Board’s new approach to broadcasting represents an ambitious attempt to reach larger audiences in key markets. To do this, it seeks creative solutions that prioritize the use of limited resources and marry the mission of U.S. international broadcasting to the needs and wants of target audiences. The Board’s new strategic plan was issued in December 2002; however, development of its new approach to broadcasting began in July 2001. The plan was developed to address declining audience share in key markets such as Russia and historically static performance in key strategic regions such as the Middle East. For example, the BBG had a 21 percent market share in Russia in the early 1990s that has declined to about 4 percent of the adult listening audience in recent years. In the Middle East, the VOA’s Arabic service has for decades reached less than 2 percent of potential listeners. The Board’s new plan outlines a strategic vision for U.S. 
international broadcasting that is designed to move the organization toward a market-based approach that will generate the large listening audiences in priority markets that the Board believes it must reach to effectively meet its mission. Early implementation of the plan has focused on markets relevant to the war on terrorism; however, the Board intends that many elements of its new approach will be applied to many of its language services over time. The Board’s vision is to create a flexible, multimedia, research-driven U.S. international broadcasting system. This system will incorporate regional networks and single-country operations to reach large audiences by programming the distinct content of VOA and the surrogate services through state-of-the-art formats and distribution channels controlled by the Board. Figure 2 provides an overview of the Board’s new strategic plan and shows the links among the Board’s mission statement, vision statement, broadcast priorities, strategic goals, and program objectives. Appendix I provides a complete list of the goals and objectives. Strategic plans play a critical role in the management of agency operations. Guidance from the Office of Management and Budget (OMB) makes clear that agency strategic plans, annual performance plans, and annual performance reports form the basis for a comprehensive and integrated approach to performance management. In the Board’s case, its performance management is augmented by an ongoing series of program reviews of individual language services conducted each year and an annual comparative review of all language services. Program reviews are in-depth assessments of performance conducted by a team of managers, audience research experts, technical staff, and language service staff.
The comparative review of language services represents an intensive 4-month review by the Board designed to evaluate the need for adding or deleting language services and strategically reallocating funds to the language services on the basis of priority and impact. This year, the Board asked eight language services to prepare individual performance plans that capture key elements of the Board’s new strategic approach to broadcasting, including the need to identify a target audience and establish specific audience goals. These performance plans will become the focus of future program reviews and form the final link in a planned performance management system that will integrate the Board’s strategic plan, performance plan, annual language service review, budget preparation process, and program reviews into a unified whole. The strategic plan forms the heart of this system since it should provide the performance goals and measures that drive the Board’s entire operations. Consistent with the plan’s theme of “marrying the mission to the market,” the Board has applied its new audience-focused broadcasting approach to recent initiatives supporting the war on terrorism. The first project under the new approach, Radio Sawa in the Middle East, was launched in March 2002 using many of the modern, market-tested broadcasting techniques and practices prescribed in the plan, in an effort to attract a larger, younger population. Follow-on program initiatives also adhere to the Board’s modern approach to broadcasting, though application is tailored to the specific circumstances of each target market. These initiatives include the Afghanistan Radio Network (ARN) and the new Radio Farda service to Iran. Estimated start-up and recurring costs for these three projects through fiscal year 2003 total about $116 million. As funds become available, there are plans to extend application of the Board’s new approach to other high-priority markets, such as Indonesia. 
In addition, the Board hopes to further expand its presence in the Middle East through the launch of a Middle East Television Network. Future initiatives are expected to require additional reallocation of funds and possible supplemental spending by Congress. The Board has tailored the use of its modern, audience-focused approach to broadcasting, taking target audiences and market circumstances into consideration when developing and implementing new program initiatives. Table 1 provides a brief description of recently implemented projects supporting the war on terrorism. The first program under the Board’s new approach, Radio Sawa in the Middle East, was launched using modern, market-tested broadcasting techniques and practices, such as the extensive use of music formats, to improve performance in this priority market and lend support to the war on terrorism by targeting youth audiences. Although music remains a large part of the programming on Radio Sawa, the proportion of news and information to music is steadily increasing, peaking at 5 hours a day during Operation Iraqi Freedom. Radio Sawa replaced the poorly performing VOA Arabic service, which had listening rates of around 2 percent of the population. The Board has survey research indicating that Radio Sawa is reaching 51 percent of its target audience and is ranked highest for news and news trustworthiness in Amman, Jordan. Despite such results, it remains unclear how many people Radio Sawa is actually reaching throughout the Middle East because audience research has been performed only in select markets and has not yet included audiences in key markets like Saudi Arabia. The Afghanistan Radio Network was launched in August 2002 to more effectively use and strengthen the impact of BBG broadcasting resources targeted to Afghanistan, a key market for the war on terrorism.
ARN utilizes broadcasting concepts outlined in the Board’s new strategic approach, such as tailoring content to the target audience and integrating programming streams across entities. Unlike Radio Sawa, ARN is designed to reach not primarily a youth audience but a broader Afghan audience. Programs are designed to be locally focused and are high in educational, news, and information content. BBG service to Afghanistan has in the past yielded some of the Board’s highest listening rates (in 1999, around 80 percent of adult male heads-of-household). Recent BBG research indicates that the Board is reaching about 45 percent of all male and female adults in the listening regions of Kabul and Mazar-e-Sharif. Radio Farda was launched to strengthen the impact of BBG broadcasting resources targeted to Iran, another key market for the war on terrorism. Based on audience research and an analysis of specific market factors in Iran, the Board tailored the plan’s elements to Radio Farda. Radio Farda uses modern broadcast techniques to attract a youth target audience. Although it uses music formats, Radio Farda also strives to provide substantial news and information. The Board claims that increases in the volume of e-mail and phone calls from the region indicate that the service is gaining popularity among the target audience in Iran. The Board is planning other program initiatives in support of the war on terrorism, and plans indicate that the Board will selectively apply its new broadcasting approach to these projects. Future initiatives include enhancements to the VOA Indonesian and Urdu services and creation of a Middle East Television Network, which represents the single largest enhancement to the Board’s operations in the coming year.
Still in the planning stages, the Middle East Television Network will be an 18- to 24-hour-a-day, seven-days-a-week, U.S.-controlled satellite TV service presenting what the Board sees as American-style news and information programs in the Arabic language to counter the lack of depth and balance in the Middle Eastern media. As television is the most important medium in the region for news and information, the Board expects to significantly increase its audience size with this initiative. Certain elements of the Board’s new plan will require substantial levels of investment. Such elements include broadcasting round-the-clock, using audience research and music formats extensively, and reaching audiences on Board-controlled AM and FM frequencies. Other elements, such as identifying target audiences and redesigning program content to appeal to those audiences, do not require such substantial capital investments. Just as Radio Sawa, ARN, and Radio Farda incorporate the Board’s new broadcasting approach to varying degrees, the Board has stated in its strategic plan that it will apply certain high-cost elements of its new approach on a case-by-case basis. It cannot afford to apply all elements broadly to all language services, and some markets do not require such changes for U.S. international broadcasting to remain competitive. Table 2 provides a cost summary of recently implemented high-priority projects. The estimated price tags for other priority initiatives, such as the Middle East Television Network and the expansion of the VOA Indonesian service, are also significant. For example, the Board estimates that it will cost about $62 million to initiate the Middle East Television Network and an additional $37 million annually for recurring operational costs. Expanding VOA Indonesian radio and TV programming is estimated to cost an additional $3.4 million.
Cost estimates for the VOA Urdu service program expansion are not yet available because the Board has not finalized its plans for this project. Some of the Board’s recent priority projects have been funded in part by reallocation of program funds under the Board’s annual language service review process. For example, the Board funded Radio Farda by reprogramming more than $5.6 million in fiscal year 2003 funds and also helped pay for Radio Sawa by reprogramming approximately $4.1 million in fiscal year 2001 funds from other language services. The Board’s new approach to broadcasting is based on the need to reach large audiences in priority markets, but its strategic plan does not include a single goal or related program objective designed to gauge progress toward increasing audience size. In addition, the plan’s seven existing strategic goals (for example, to employ modern communication techniques or to revitalize efforts to tell America’s story) are not supported by measurable program objectives that would allow the Board and others to gauge the agency’s progress in implementing its strategic goals. While the plan lacks a range of measurable program objectives, key effectiveness measures that could be incorporated in future versions of the Board’s strategic plan include audience awareness of U.S. broadcast efforts, audience perceptions of the credibility of U.S. broadcasts, and whether VOA effectively presents information about the United States and its policies to target audiences. Efforts to assess the effectiveness of the Board’s new approach to broadcasting may also be hampered by the lack of details on how the Board intends to implement each of its program objectives. Missing from the plan are specifics on implementation strategies, resource requirements, and project time frames. The Board has acknowledged that its strategic plan needs to be significantly improved, and major changes are planned for the next iteration. 
The absence of “audience size” as a strategic goal and related measurable program objectives represents one of the most significant oversights in the Board’s strategic plan. The strategic plan references the importance of reaching a large audience in priority markets as the key driver behind the Board’s new approach to broadcasting and notes that audience size is the most readily available and accurate impact measure it has. Despite the central importance of audience size to the Board’s new approach to broadcasting, the plan is silent on how these data should be incorporated as a measurable program objective or series of program objectives to gauge the Board’s effectiveness in this key area. The Board has traditionally reported audience size in its annual performance plan; however, this reporting lacks any contextual meaning since it is not tied to a program objective(s) defining the Board’s multiyear vision for what it would like to accomplish in this area. In addition, the Board’s practice of reporting audience size goals and accomplishments at the entity level in its annual performance plan obscured important performance data at the regional and language service level. We also found that the plan’s existing strategic goals are not supported by measurable program objectives. The strategic plan has 17 program objectives, any of which can be used to illustrate the lack of performance goals and expectations. For example, under the goal of employing modern communication techniques and technologies, one objective is to accelerate multimedia development and infuse more television and Internet into the mix. The Board’s plan only makes broad assertions about the need to “do more with TV where market realities demand and resources permit” and that the Board “will ensure that all entities have world-class Internet presences.” Under the goal of progressively building out the U.S. 
international broadcasting system, the Board lists the successful launch of Radio Sawa as a program objective. Again, the plan makes broad statements about the need to attract and build a significant audience in the Middle East and present news that is objective, comprehensive, fresh, and relevant. However, it does not provide details on expected performance levels. Specifically, the plan does not establish short- or long-range target audience figures for the Gaza Strip, West Bank, and 17 countries in the Middle East and Africa to which Radio Sawa will eventually broadcast. Our survey of senior program managers across all broadcast entities and discussions with other program staff and outside parties suggested a number of other effectiveness measures the Board could incorporate when developing measurable program objectives designed to support the plan’s strategic goals. These measures include audience awareness; broadcast entity credibility; and a measure of VOA’s ability to communicate a balanced and comprehensive projection of American thought, institutions, and policies so that audiences receive, understand, and retain this information. The strategic plan does not include a measure of audience awareness to answer a second key question of effectiveness: whether target audiences are even aware of U.S. international broadcasting programming available in their area. Board officials have stated that such measures would help the Board understand a key factor in audience share rates and what could be done to address audience share deficiencies. The Board could develop this measure since it already collects information on language service awareness levels in its audience research and in national surveys for internal use. The strategic plan does not include a measure of broadcaster credibility, which can identify whether target audiences believe what they hear.
Reaching a large listening or viewing audience is of little use if audiences largely discount the news and information portions of broadcasts. Our survey of senior program managers and discussions with BBG staff and outside groups all point to the possibility that U.S. broadcasters (VOA in particular) suffer from a credibility problem with foreign audiences, who may view VOA and other broadcasters as biased sources of information. InterMedia, the Board’s audience research contractor, told the Board that it is working on a credibility index for another customer that could be adapted to meet the Board’s needs and that, when segmented by language service, would reveal whether there are significant perception problems among key target audiences. However, to develop this measure, the Board would need to add several questions to its national survey instruments. Finally, the strategic plan does not include a measure of whether target audiences hear, understand, and retain information broadcast by VOA on American thought, institutions, and policies. The unique value-added component of VOA’s broadcasting mission is its focus on issues and information concerning the United States, our system of government, and the rationale behind U.S. policy decisions. Tracking and reporting these data are important to determining whether VOA is accomplishing its mission. InterMedia officials noted that developing a measure of this sort is feasible and requires developing appropriate quantitative and qualitative questions to include in the Board’s ongoing research activities. We found that each of the plan’s program objectives lacked a detailed description of implementation strategies, resource needs, and project time frames. Typically, each program objective consists of an overview of the problem followed by a general assertion that operations must be improved.
For example, the “action plan” for the accelerated use of television and the Internet is limited to the following statements: “Appropriate Television – VOA has seen significant audience impact in several key markets through television broadcasts—the Balkans, Iran, and Indonesia. We can and will do more with TV where market realities demand this and where resources permit. The first step is to cement the establishment of VOA-TV from the former Worldnet. Higher Quality Web Presence – We have seen spotty progress towards the goal of having all language services create high quality news-oriented websites. Some are outstanding. The content of others is thin and visually uninteresting. Bottom line: We will ensure that all entities have world-class Internet presences.” This level of planning raises key questions such as: What is the overall strategy for implementing the enhanced use of television and the Internet? Who will be responsible for implementing the component parts of the strategy? How much will it cost? How long will it take to implement? How will the Board manage workforce planning issues such as transitioning staff from radio-based skills to the skill set required to significantly augment the Board’s multimedia operations? How will the long-planned merger of VOA Television and WorldNet impact the Board’s strategic approach to television? How will the Middle East Television Network factor into the Board’s plans, and what are the resource, staffing, and training implications of this proposed network? Answers to such questions will provide the Board, BBG managers, OMB, and the Congress with specific information needed to manage ongoing program implementation and assess progress against meaningful short- and long-term criteria. This level of planning also will reveal any potential gaps or inconsistencies in planned implementation steps across the Board’s many program objectives.
The key strategic challenge the Board faces is how to achieve large audiences in priority markets while dealing with (1) a collection of outdated and noncompetitive language services, (2) a disparate organizational structure consisting of seven separate broadcast entities and a mix of federal agency and grantee organizations that are managed by a part-time Board of Governors, and (3) the resource challenge of broadcasting in 97 language services to more than 125 broadcast markets worldwide. The plan does address the challenge of revamping the Board’s current broadcast operations by identifying a number of solutions to the competitive challenges the Board faces. It also provides a new organizational model for U.S. international broadcasting that stresses the need to view the broadcast efforts of the separate entities as part of a “single system” under the Board’s direct control and authority. The Board has stated that it cannot sustain all its current broadcast operations and have the desired impact in high-priority markets at the same time. Despite a clear articulation of U.S. international broadcasting’s resource challenges, the Board and Congress have not been able to substantially reduce the total number of language services or the reported 55 percent overlap in VOA and surrogate language services. The Board’s strategic plan does an adequate job of identifying the market challenges for U.S. international broadcasters and potential solutions to these challenges. The task of reaching a significant audience today is a far different proposition than it was a decade ago. Priority markets have multiplied, and media environments have advanced virtually everywhere with an explosion of local radio and television outlets that compete aggressively for audience share. Broadcast and computer technologies have made quantum leaps, with satellite television and the Internet becoming preferred information modes for millions.
The Board has concluded that because many people can now pick and choose their information sources, U.S. international broadcast operations must be improved to remain competitive in a new media environment. The Board’s strategic plan includes a frank assessment of the market challenges that must be addressed to make U.S. international broadcasting more competitive. These challenges include:

Branding and positioning. Language services lack a distinctive contemporary identity and a unique reason for listeners or viewers to tune in.

Target audiences. Few language services have identified their target audience—a key first step in developing a broadcast strategy.

Formats and programs. Many language services have outmoded formats and programs with an antiquated, even Cold War, sound and style.

Delivery and placement. Three-quarters of transmitted hours have poor or fair signal quality, and affiliate broadcaster strategies have stressed quantity over quality.

Marketing and promotion. Audience awareness levels are low across the world, and audiences often do not know where to tune in or what to expect once they do.

Technology. The Board is not maximizing the use of multimedia to reach audiences, stimulate real-time interaction, and cross-promote broadcast products.

These challenges are addressed by a number of proposed solutions in the form of strategic goals and program objectives listed in the plan. With regard to the marketing challenges, 12 of the 17 program objectives are designed to directly or indirectly overcome them. 
For example, the Board’s strategic goal of employing modern communication techniques and technologies is supported by the following program objectives: accelerate multimedia development and infuse more television and Internet into the mix; adopt modern radio principles and practices including the matching of program formats to target audiences; control the distribution channels that audiences use; go local in content and presence; tailor content to audiences; and drive innovation and performance with research. Full implementation of these and other solutions to market challenges in high priority markets will depend on available resources, which in turn will be driven in part by the Board’s effectiveness in addressing its organizational and resource challenges. The plan identifies a number of internal challenges or obstacles which, if not addressed and corrected, will hamper the Board’s ability to effectively implement its new strategic approach to broadcasting. First, the Board believes that it needs to do more to consolidate and rationalize its organizational structure to better leverage existing resources and generate greater program impact in priority markets. As the strategic plan notes, “the diversity of the BBG—diverse organizations with different missions, different frameworks, and different constituencies—makes it a challenge to bring all the separate parts together in a more effective whole.” Second, the Board believes that it must clarify the respective roles and responsibilities of the Board, the IBB, and the broadcast entities to ensure that a rational management process is in place and that internal communications flow in a logical manner. The Board’s response to these internal challenges is largely contained in the two program objectives listed under the strategic goal of designing a broadcast architecture for the 21st century. The first program objective is to create a unified broadcasting system by treating the component parts of U.S. 
international broadcasting as a single system. This is an important distinction since it places the Board in the position of actively managing resources across broadcast entities to achieve common broadcast goals. A good example of this strategy in action is Radio Farda, which draws on the unique content of VOA’s Persian service and Radio Free Europe/Radio Liberty’s Persian service to create a new broadcast product for the Iranian market. Board officials acknowledge that the new single system approach will take years to implement throughout the BBG and require hands-on management by the entire Board to ensure that resources are adequately managed across entities. Also, the Board’s experience with implementing Radio Sawa suggests that it can be difficult to make disparate broadcast entities work toward a common purpose. For example, Board members and senior planners said they encountered significant difficulties attempting to work with VOA officials to launch Radio Sawa and there are now plans to constitute Radio Sawa as a separate grantee organization. While this move is understandable under the circumstances, it also contributes to the further “balkanization” of U.S. international broadcasting. The second program objective consists of realigning the BBG’s organizational structure. This objective highlights the need to reinforce the Board’s role as CEO and to reaffirm the IBB’s role as central provider of transmission and local placement services. The plan notes that by law the Board is the head of the agency with a host of nondelegable responsibilities including taking the lead role in shaping the BBG’s overall strategic direction, setting expectations and standards, and creating the context for innovation and change. As it consolidates its role as the collective CEO for U.S. international broadcasting, the Board will seek to create better and stronger linkages among entities, uniting them in a common purpose and program. 
At the same time, the Board plans to assume the role of helping the broadcasting organizations develop radio formats to package and better present the broadcasters’ content. According to the plan, this becomes a major responsibility, as professional formatting is vital to the BBG’s competitiveness and effectiveness. We found significant support among the BBG staff and outside experts we interviewed and surveyed for a select number of solutions not included in the Board’s plan. These are complex issues, however, that deserve detailed review and careful weighing of the pros and cons, and implementing them is largely beyond the Board’s control. The Board can nevertheless play a key role in identifying and endorsing creative solutions for Congress to consider if the Board’s planned solutions to organizational and leadership challenges falter or prove ineffective. A list of these options is offered for informational purposes and as a reference point for the Board, OMB, and Congress in pursuing solutions to acknowledged operating challenges. (See app. II for relevant survey responses we received from senior program managers.) Table 3 summarizes the Board’s planned actions compared with these potential alternatives. The Board has concluded that if U.S. international broadcasting is to become a vital component of U.S. foreign policy, it must focus on a clear set of broadcast priorities. Trying to do too much at the same time fractures this focus, extends the span of control beyond management capabilities, and siphons off precious resources. The Board has determined that current efforts to support its broadcast languages are “unsustainable” with current resources, given its desire to increase impact in high priority markets. Currently, the Board broadcasts in 66 languages, through 97 language services (resulting from a 55 percent overlap between VOA and surrogate language services), to more than 125 markets worldwide. 
The plan notes, “it is a daunting challenge to obtain the impact the Board desires across all its language services given what is essential to spend in high priority services.” Despite this recognition, the plan fails to answer such questions as, when is it appropriate to broadcast VOA and surrogate programming in the same language, and what level of duplication in roles and target audiences should be allowed between VOA and surrogate broadcasters. These types of questions have been raised before. For example, in our September 1996 review of options for addressing possible budget reductions at the U.S. Information Agency, we concluded that any substantial reduction in funding for U.S. international broadcasting would require major changes in the number of language services and broadcast hours. Our report noted that the BBG planned to extensively review its language services to determine their continued need and effectiveness. Our September 2000 report on U.S. international broadcasting noted that the Board concluded it was essential to revisit the issue of broadcast overlap between VOA and the surrogate services in light of evolving foreign policy, geopolitical, and budget realities in the new century. Finally, the Board considered the issue of role and target audience duplication among VOA and surrogate broadcasts in a July 2000 language service analysis, which sought to identify where broadcast services shared similar roles (that is, to supply international/regional news, local news, information on American policies and perspectives, etc.) and the same target audiences (that is, elites, mass, youth, women, and diaspora). This analysis confirmed that surrogate broadcasters, consistent with their mission, carry substantially more local content than VOA. 
Likewise, the analysis confirmed that VOA alone provides news and information on what the Board labeled the “American political perspective.” However, the Board’s analysis also revealed that a significant degree of overlap existed in other content areas (such as “political/democracy building”) and in target audiences between VOA and the surrogates. Our survey of senior program managers revealed that the majority supported significantly reducing the total number of language services and the overlap in services between VOA and the surrogate broadcasters. Eighteen of 24 respondents said that too many language services are offered, and when asked how many countries should have more than one U.S. international broadcaster providing service in the same language, 23 of 28 respondents said this should occur in only a few countries or no countries at all. Finally, when we asked respondents what impact a significant reduction in language services (for the purpose of reprogramming funds to higher priority services) would have, 18 of 28 respondents said that this would have a generally positive to highly positive impact. The BBG’s annual language service review process addresses the need to delete or add languages. The process prioritizes individual language services based on such factors as U.S. strategic interests, political freedom, and press freedom data. Such assessments have been used in an attempt to shift the focus of U.S. international broadcasting away from central and eastern Europe to allow greater emphasis on Russia and Eurasia; central and South Asia; China and east Asia; Africa; and selected countries in our hemisphere such as Colombia, Cuba, and Haiti. This system has been used to re-deploy resources within the BBG. For example, the Board has reallocated more than $9 million through the elimination or reduction of language services since its first language service review in January 2000. 
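The survey tallies cited above are simple proportions. As a rough illustration (the counts come from the survey results reported in this appendix, while the helper function and labels are ours, not GAO's tooling), the majority calculations can be sketched as:

```python
# Sketch: computing support levels from closed-ended survey tallies.
# Counts are from the survey results cited above; the helper function
# and labels are illustrative only.

def support_share(in_favor, respondents):
    """Return the share of respondents giving a particular answer."""
    return in_favor / respondents

# 18 of 24 said too many language services are offered.
too_many_services = support_share(18, 24)
# 23 of 28 said duplicate-language service should occur in few or no countries.
limit_duplication = support_share(23, 28)
# 18 of 28 said a significant reduction in services would have a positive impact.
reduction_positive = support_share(18, 28)

for label, share in [("too many services", too_many_services),
                     ("limit duplication", limit_duplication),
                     ("reduction positive", reduction_positive)]:
    print(f"{label}: {share:.0%} ({'majority' if share > 0.5 else 'minority'})")
```

Each of the three cited results clears a simple majority, which is the basis for the report's characterization that "the majority supported" these changes.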
In total, the Board has eliminated 3 language services and reduced the scope of operations of another 25 services since January 2000. In terms of the total number of language services, the Board had 91 language services when it concluded its first language service review and 97 language services at the conclusion of this year’s review. Congress has contributed to this situation by authorizing additional language services over the years. However, the Board, through its required annual language service review and strategic plan, is responsible for analyzing, recommending, and implementing a more efficient and economical scope of operations for U.S. international broadcasting. The Broadcasting Board of Governors’ strategic plan embodies, defines, and guides the Board’s new approach to U.S. international broadcasting, which aims to dramatically increase the size of listening and viewing audiences in markets of U.S. strategic interest while focusing on the war on terrorism. Early initiatives such as Radio Sawa, Radio Free Afghanistan, and Radio Farda represent the first wave of projects incorporating, to varying degrees, the market-driven techniques on which the Board’s new approach to broadcasting is based. Effective implementation of the Board’s new approach to broadcasting rests, in part, on a rigorous plan that reflects the Board’s best strategic thinking on a host of critical issues. However, the Board’s plan lacks measurable program objectives, detailed implementation strategies, resource needs, and project time frames. We identified a number of key areas that could provide a starting point for developing multiyear program objectives that focus on the Board’s actual effectiveness. These measures include audience size by language service, audience awareness, broadcaster credibility, and whether VOA effectively presents information about U.S. thought, institutions, and policies to target audiences. 
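The annual language service review described above prioritizes services using factors such as U.S. strategic interests, political freedom, and press freedom. The report does not publish the scoring mechanics, so the weights, 0–10 scales, and example services below are purely hypothetical; this is only a sketch of how such a weighted prioritization could work:

```python
# Hypothetical sketch of a weighted language service prioritization score.
# The factor names mirror the review criteria cited in the report; the
# weights, scales, and example services are invented for illustration.

WEIGHTS = {
    "us_strategic_interest": 0.5,
    "lack_of_political_freedom": 0.25,  # less freedom -> higher broadcast priority
    "lack_of_press_freedom": 0.25,      # less press freedom -> higher priority
}

def priority_score(factors):
    """Weighted sum of factor scores, each assumed to be on a 0-10 scale."""
    return sum(WEIGHTS[name] * score for name, score in factors.items())

# Two invented example services (scores are not real data):
service_a = {"us_strategic_interest": 9,
             "lack_of_political_freedom": 8,
             "lack_of_press_freedom": 7}
service_b = {"us_strategic_interest": 4,
             "lack_of_political_freedom": 3,
             "lack_of_press_freedom": 5}

ranked = sorted([("service_a", priority_score(service_a)),
                 ("service_b", priority_score(service_b))],
                key=lambda pair: pair[1], reverse=True)
print(ranked)  # service_a outranks service_b under these invented weights
```

Under any such scheme, the ranked list would then drive decisions about which services to expand, reduce, or eliminate, which is the role the annual review plays in the Board's process.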
Implementation of these and other program objectives could be tracked through a related set of performance goals and indicators in the Board’s annual performance plan. The Board has identified a number of market and internal challenges and proposed solutions to address them. If the Board falters in its efforts to correct some significant organizational challenges, a number of alternative solutions do exist. Finally, the Board needs to evaluate how many language services it can effectively carry and what level of overlap and duplication in VOA and surrogate broadcast services is appropriate. Resolving these key questions will have significant resource implications for the Board and its ability to reach large audiences in markets of priority interest to the United States. To improve overall management of U.S. international broadcast operations and maximize their impact on U.S. public diplomacy efforts, we recommend that the Chairman of the Broadcasting Board of Governors:

- revise the BBG’s 5-year strategic plan to include measurable program objectives, implementation strategies, resource requirements, and project time frames;

- include audience size, audience awareness, broadcaster credibility, and VOA mission effectiveness as measurable program objectives in the strategic plan;

- revise the BBG’s annual performance plan to include performance goals and indicators that track the Board’s progress in implementing the multiyear program objectives established in the Board’s revised strategic plan; and

- revise the Board’s strategic plan to include a clear vision of the Board’s intended scope of operations and the appropriate level of overlap and duplication between VOA and surrogate language services.

The Broadcasting Board of Governors provided written comments on a draft of this report. The Board stated that overall our report is fair and accurate, and it largely concurred with our recommendations. 
The Board noted that it intends to create a new strategic goal (that is, maximizing impact in priority areas) and recast the plan’s seven existing strategic goals as operational goals that would support the Board’s single strategic goal. These operational goals would be descriptive in nature and generally not measured directly. However, the Board intends to develop measurable multiyear program objectives and related performance indicators under its new strategic goal that will be tracked on an annual basis through the BBG’s performance plan. The Board’s response notes that possible performance indicators include audience reach, share, awareness, credibility, programming quality, mission, added-value, and delivery. Finally, the Board noted that it is currently undertaking an in-depth assessment of the utility and practicality of integrating overlapping language services and expects to include this assessment in its fiscal year 2005 budget submission. We believe these planned actions are significant and if fully implemented should materially improve the Board's performance management process and provide OMB and Congress with more meaningful data on the actual impact of Board activities. The comments provided by the Board are reprinted in appendix IV. The Board also provided technical comments which we have incorporated in the report as appropriate. To obtain comparative information on all our objectives, we conducted fieldwork in the United Kingdom and Germany. We met with foreign ministry officials in London and Berlin to discuss their approaches to public diplomacy. We also met with broadcasting officials from the British Broadcasting Corporation in London and Deutsche Welle officials in Cologne and Berlin to discuss their respective approaches to international broadcasting. 
To examine the status of the BBG’s new strategic approach, we conducted interviews with Board members and senior managers from the broadcast entities including Radio Free Europe/Radio Liberty officials in Prague. We also reviewed the Board’s new 5-year strategic plan titled “Marrying the Mission to the Market” as well as other agency documentation, including entity mission statements and budget requests. To identify how the Board plans to measure the effectiveness of its new strategic approach, we reviewed current performance management documentation, such as language service and program review documents, audience research summaries, and annual performance plans and reports. We also met with Board officials and with several private sector audience research firms to discuss a range of performance management and measurement issues. To obtain information on various challenges the Board faces in executing its new strategy, and to identify program options for overcoming key challenges, we administered a survey to 34 senior program managers across the 5 broadcast entities in existence at the time our survey was implemented. We also conducted interviews with Board members and the Undersecretary for Public Diplomacy and Public Affairs at the Department of State. We conducted our work from May 2002 through April 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to other interested members of Congress, the Chairman of the Broadcasting Board of Governors, and the Secretary of State. We will also make copies available to other parties upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix V. The Board’s strategic plan provides both a candid assessment of the challenges facing U.S. 
international broadcasting and a series of proposed solutions to address these challenges in the form of strategic goals and related program objectives. Table 4 is an overview of each challenge described in the Board’s strategic plan. Table 5 is a list of the proposed solutions the Board identified. To determine senior managers’ views of current operations, obtain information on the challenges associated with U.S. international broadcasting, and obtain information on the expected impacts of the BBG’s new “Marrying the Mission to the Market” initiative, we conducted a survey of these managers. Our survey questionnaire was administered from January 15 to March 11, 2003, to the directors, program-related managers, and regional language chiefs at the five BBG broadcasting entities in existence at the time our survey was implemented. The questionnaire was developed between September 2002 and January 2003 by social science survey specialists and other individuals who were knowledgeable about international broadcasting issues. In late October, we obtained an external expert review of the questionnaire from InterMedia, a private consulting group that conducts research into international broadcasting issues. We also obtained a series of comments and feedback from key Board planners and staff in November and December 2002. We pretested the questionnaire in December 2002 with four senior managers of BBG broadcasting entities to ensure that the questionnaire was clear, unambiguous, and unbiased. Initially, we had considered surveying a broader section of managers of BBG broadcasting entities, such as language service chiefs and managers of support services. However, after conducting the pretests, we concluded that our questions were appropriate only for directors, program-related managers, and regional language chiefs. 
In addition, we decided that it would be inappropriate to survey members of the Board of Governors because many of the questions asked about decisions and strategies for which they were directly responsible. We developed our study population of top managers, program-related managers, and regional language chiefs based on information that the BBG provided and input from BBG management. In those instances where managers had taken office during or after the time period to be evaluated in our survey (Oct. 1, 2001, through Sept. 30, 2002), we also surveyed their predecessors. In all, we sent the survey to the 34 individuals we identified as our study population and received 30 responses, resulting in an 88 percent response rate. All data from the completed surveys were double-keyed and verified during data entry. The results of the closed-ended questions in our survey are provided in appendix III. Finally, the questionnaire asks about current operations, recent changes in programming, and program options. The questionnaire’s cover letter explained that GAO, an agency of Congress, had been asked by the Chairman of the House International Relations Committee to study the activities of the Broadcasting Board of Governors (BBG); that the request was prompted by the terrorist attacks of September 11, 2001, and the question of what can be done to improve our image and audience understanding of U.S. foreign policy; that as part of this work we were surveying entity heads, senior program managers, and regional language chiefs at the broadcast entities, including Radio Free Asia (RFA); that the questionnaire should take between 30 and 45 minutes to complete, depending on the length of answers to the open-ended questions; and that responses would not be released unless compelled by law or requested by a member of Congress. 
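Two methodological details above lend themselves to small worked examples: the response rate (30 of 34 questionnaires, about 88 percent) and double keying, in which each survey is entered twice by independent operators and any disagreement is flagged for reconciliation. The sketch below illustrates both; the record IDs and field names are invented for illustration:

```python
# Sketch: survey response rate plus double-key verification.
# The 34-sent/30-returned figures come from the report; the record
# structure and field names are invented.

def response_rate(returned, sent):
    """Share of distributed questionnaires that were returned."""
    return returned / sent

def find_mismatches(first_pass, second_pass):
    """Compare two independently keyed copies of the same records and
    return (record_id, field) pairs that disagree and need manual review."""
    mismatches = []
    for record_id, fields in first_pass.items():
        for field, value in fields.items():
            if second_pass[record_id].get(field) != value:
                mismatches.append((record_id, field))
    return mismatches

print(f"{response_rate(30, 34):.0%}")  # prints "88%"

keyed_once  = {"r01": {"q1": 3, "q2": 5}, "r02": {"q1": 2, "q2": 4}}
keyed_twice = {"r01": {"q1": 3, "q2": 5}, "r02": {"q1": 2, "q2": 1}}  # typo in r02/q2
print(find_mismatches(keyed_once, keyed_twice))  # [('r02', 'q2')]
```

Only records that disagree between the two passes need to be checked against the paper questionnaire, which is what makes double keying an efficient verification step.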
Please complete this questionnaire as soon as whether any external factors impeded the ability possible and fax it to Melissa Pickworth at (202) Melissa Pickworth at (202) 512-3158. BBG’s new strategic planning initiative “Marrying the Mission to the Market” that was introduced in November and December 2002. Section 1: Assessment of Program Elements in Fiscal Year 2002 (October 1, 2001, through September 30, 2002) Mission of All U.S. International Broadcasting Language Services “To promote and sustain freedom and democracy by broadcasting accurate and objective news and information about the United States and the world to audiences overseas.” technology (e.g., digital phones). 12) Use of audience and media/format preferences. Comments, if any. (Please provide highlights of what worked well, areas needing improvement, and suggestions on how operations can be improved.) Q.2) During fiscal year 2002, what impact did the following strategic planning elements have on your language services’ ability to achieve their mission? (Please check one box in each row.) planning. planning. planning. Comments, if any. (Please provide highlights of what worked well, areas needing improvement, and suggestions on how operations can be improved.) Q.3) During fiscal year 2002, how effective or ineffective were the following performance management system elements in terms of how they helped your language services to achieve their mission? (Please check one box in each row.) 1) BBG’s Annual Service Review process. 2) Your entity’s annual service. 3) Quantity of research for your entity’s annual program reviews. 4) Quality of research for your entity’s annual program reviews. 5) Timeliness of research support for your entity’s annual program reviews . Comments, if any. (Please provide highlights of what worked well, areas needing improvement, and suggestions on how operations can be improved.) 
Q.4) During fiscal year 2002, did the following organizational structures have a positive or negative impact on your language services’ ability to achieve their mission? (Please check one box in each row.) 1) Management oversight by the board of governors. 2) Use of multiple broadcast entities (VOA and surrogates model). of the IBB and its support services role. 4) New regional network streams on one frequency). -ship (the board, entity managers, and the IBB). 6) Firewall to protect journalistic independence. 7) VOA and Radio/TV Marti’s status as federal entities. 8) RFE/RL and RFA’s status as grantees. Comments, if any. (Please provide highlights of what worked well, areas needing improvement, and suggestions on how operations can be improved.) Q.5) During fiscal year 2002, how satisfied or dissatisfied were you with the allocation of resources and organizational capacities with regards to your language services? (Please check one box in each row.) Allocation of Resources and Organizational Capacities 1) Program funding levels. 2) Staffing levels. 3) Level of staff skills and knowledge. 4) Level of staff training. equipment. 6) Ability to compete in such as the BBC. 7) Ability to compete in BBC. 8) Ability for crisis capability. 9) Managerial flexibility: resources. Comments, if any. (Please provide highlights of what worked well, areas needing improvement, and suggestions on how operations can be improved.) item) 1) A perception of U.S. international broadcasting as a propaganda tool of the United States. 2) Impact of U.S. foreign policy on foreign perceptions. 3) A generally negative image of the United States. 4) Fear of listening because of repressive regimes. 5) The jamming of U.S. governments. 6) Potential audience’s lack of technology (no SW radios, satellite dishes, etc.). Comments, if any. 
Section 3: Assessment of External Conditions in Fiscal Year 2002 Q.7) Think back over the main categories of factors and elements you were asked to address in questions 1 through 6 of this survey. The following table summarizes the categories and issues within the Number of hours of transmission, transmission strength and quality, use of affiliates, use of technology, use of audience and marketing research. BBG and IBB strategic, technology, and workforce planning. Language Service Review process, entity annual program reviews, and research for annual program reviews. Management oversight by the Board of Governors, use of multiple broadcast entities, organizational placement of the IBB, new regional network approach, intra-agency coordination/guidance/leadership, firewall issues, status of some entities. E) Resource Issues and Organizational Capacities Current program funding and staff levels, staff skills, knowledge and training, technology and equipment, ability to compete and respond to crises, managerial flexibility. Perceived credibility of U.S. international broadcasting, image of the United States, lack of free media and civil liberties, jamming of U.S. broadcasts, potential audience’s lack of technology to hear broadcasts. 7a) During fiscal year 2002, what factor made the greatest contribution to your broadcasting entity’s ability to meet its mission? (Please enter the letter corresponding to the factor from the list above.) A 13 Comments, if any. 7b) During fiscal year 2002, what factor represented the greatest impediment to your broadcasting entity’s ability to meet its mission? (Please enter the letter corresponding to the factor from the list above.) E 13 Comments, if any. Section 4: New Strategic Planning Initiatives Q.8) How familiar or unfamiliar are you with the BBG’s new strategic planning initiative, “Marrying the Mission to the Market,” which was introduced in November and December 2002? (Check one box.) 
Not familiar (Skip to Question12) No basis to judge (Skip to Question 12) Q.9) To what extent, if any, do you believe the new strategic planning initiative, “Marrying the Mission to the Market”: (Please check one box in each row.) 1) Is well structured? 2) Addresses issues of critical importance to U.S. international broadcasting? 3) Is likely to succeed in most aspects? 4) Will be embraced by middle management? 5) Will be embraced by the rank and file? Comments, if any. Q.10) In your opinion, what impact will the BBG’s new strategic planning initiative, “Marrying the Mission to the Market,” likely have on the following aspects of U.S. international broadcasting? (Please check one box in each row.) strength and quality, use of affiliates, use of technology, use of audience and marketing research. BBG and IBB strategic, planning. process, entity annual program reviews, and research for annual program reviews. Board of Governors, use of the IBB, new regional network ship, firewall issues, status of some entities. 5) Resource Issues and staff levels, staff skills, ability to compete and respond flexibility. Perceived credibility of U.S. image of the United States, lack of free media and civil liberties, jamming of U.S. audience’s lack of technology to hear broadcasts. Comments, if any. Q.11) Overall, do you think that the BBG’s new strategic planning initiative, “Marrying the Mission to the Market,” will likely have a positive or a negative impact on U.S. international broadcasting’s ability to achieve its mission? U.S International Broadcasting entities, actually reaching significant audiences? which of the following best describes the (Check one box.) number of language services offered by U.S. international broadcasting entities? (Check one box.) Broadcasting Board of Governors (BBG) should be more than one U.S. international has made shifts in resources. Do you broadcaster providing service in any believe that the BBG: particular language? (Check one box.) 
(Check all that apply.) Other (please explain in the comments) Q.17) One current model of service delivery uses the same program stream, coordinates coverage, and has common production values. To what extent do you believe this model might be applicable to the countries served by your broadcasting entity? (Check one box.) Q.15) Based on your experience in broadcasting, how would you assess the current level of funding for U.S. international broadcasting relative to its mission: (Check one box.) U.S. international broadcasting? U.S. foreign policy interests? Section 6: Program Options Q.20) In your opinion, what impact would the following program options (identified by various contacts in our review) likely have on the ability of U.S. international broadcasting to achieve its mission? (Please check one box in each row.) 1) Consolidate VOA, the IBB, and the surrogates into one broadcasting entity headed by the Board of Governors. 2) Appoint a single individual as CEO for U.S. international broadcasting, and give that individual direct reporting responsibilities to the board. 3) Significantly reduce the overall number of language services in order to reprogram funds to higher priority services. 4) Use language service audience goals tailored to local circumstances (e.g., 5 percent audience share in one market versus a 10 percent share in another market). 5) Set language service audience goals by target audience (e.g., mass versus elites, under 30 versus over 30, men versus women, etc.). 6) Eliminate VOA editorials. 7) Revamp/re-invent VOA editorials. 8) Defederalize VOA (e.g., give VOA grantee status). 9) Defederalize IBB (e.g., give IBB grantee status). 10) Establish closer strategic coordination between the BBG and the State Dept. 11) Establish closer strategic coordination between the BBG and the White House. 12) Establish closer cooperation with other international broadcasters. 
13) Establish national strategic guidance to all agencies involved in public diplomacy.
Comments, if any.

Q.21) Which organization do you work for? (VOA, IBB, RFA, etc.)

Q.22) Which of the following categories most closely matches your level within your organization? (Please check one box.)

Q.23) Please briefly describe the language services for which you are responsible:

Q.24) Other comments (Please continue on additional sheets, if necessary. Also, please feel free to attach any relevant documents you wish.)

Contact information: If you would like us to contact you directly about an issue related to this survey, please provide your name and telephone number below. Any contacts we have with you will be strictly confidential. Thanks for your assistance!

Diana Glod, (202) 512-8945. In addition to the person named above, Michael ten Kate, Melissa Pickworth, and Janey Cohen made key contributions to this report. Martin De Alteriis and Ernie Jackson also provided technical assistance.
Prompted by a desire to reverse declining audience trends and to support the war on terrorism, the Broadcasting Board of Governors (BBG), the agency responsible for U.S. international broadcasting, began developing its new strategic approach to international broadcasting in July 2001. This approach emphasizes the need to reach mass audiences by applying modern broadcast techniques and strategically allocating resources to focus on high-priority markets. GAO was asked to examine (1) whether recent program initiatives have adhered to the Board's new strategic approach to broadcasting, (2) how the approach's effectiveness will be assessed, and (3) what critical challenges the Board faces in executing its strategy and how these challenges will be addressed. Consistent with its new plan to dramatically increase U.S. international broadcasting's listening and viewing audiences in markets of U.S. strategic interest, the Broadcasting Board of Governors has launched several new projects, including Radio Sawa in the Middle East, Radio Farda in Iran, and the Afghanistan Radio Network. These projects adhere to the Board's core strategy of identifying a target audience and tailoring each broadcast product to market circumstances and audience needs. However, the Board's plan lacks measurable program objectives designed to gauge the success of its new approach to broadcasting, detailed implementation strategies, resource needs, and project time frames. A number of key effectiveness measures could provide a starting point for developing measurable program objectives and related performance goals and indicators under the Board's annual performance plan. These measures include audience size in specific markets, audience awareness, broadcaster credibility, and whether the Voice of America (VOA) effectively presents information about U.S. thought, institutions, and policies to target audiences.
The Board has identified a number of market and internal challenges--such as technological innovation and better coordination of its seven separate broadcast entities--that must be addressed to make U.S. international broadcasting more competitive. It has also developed a number of solutions to address these challenges. However, the Board has not addressed how many language services it can carry effectively (with the number rising nearly 20 percent over the past 10 years) and what level of overlap and duplication in VOA and surrogate broadcast services would be appropriate under its new approach to broadcasting. Resolving these questions will have significant resource implications for the Board and its ability to reach larger audiences in high-priority markets.
Federal law and policy have established roles and responsibilities for federal agencies to work with industry in enhancing the physical security and cybersecurity of critical government and industry infrastructures. For example, consistent with law, presidential policies stress the importance of coordination between the government and industry to protect the nation’s critical cyber infrastructure. In addition, policies establish DHS as the focal point for the security of cyberspace—including analysis, warning, information sharing, vulnerability reduction, mitigation, and recovery efforts for government and industry critical infrastructure and information systems. Federal policy also establishes critical infrastructure sectors, assigns federal agencies responsibilities over each sector (known as sector-specific agencies), and encourages industry involvement. A fundamental component of DHS’s efforts to protect and secure our nation’s infrastructure is its partnership approach, whereby it engages in partnerships among government and industry stakeholders. In 2006, DHS issued the National Infrastructure Protection Plan (NIPP), which provides the overarching approach for integrating the nation’s critical infrastructure protection and resilience activities into a single national effort. The NIPP also outlines the roles and responsibilities of DHS with regard to critical infrastructure protection and resilience and of sector-specific agencies—federal departments and agencies responsible for critical infrastructure protection and resilience activities in 16 critical infrastructure sectors—such as the dams, energy, and transportation sectors. Appendix I lists the 16 critical infrastructure sectors and their sector-specific agencies. The NIPP emphasizes the importance of collaboration, partnering, and voluntary information sharing among DHS, industry owners and operators, and state, local, and tribal governments.
The NIPP also stresses a partnership approach between the federal and state governments, and industry stakeholders for developing, implementing, and maintaining a coordinated national effort to manage the risks to critical infrastructure. Specific laws and directives have guided DHS’s role in critical infrastructure protection, including the Homeland Security Act of 2002, as amended; Homeland Security Presidential Directive/HSPD-7; Presidential Policy Directive/PPD-21, which was issued on February 12, 2013; and Executive Order 13636, which was also issued on February 12, 2013. PPD-21 directs DHS to, among other things, coordinate the overall federal effort to promote the security and resilience of the nation’s critical infrastructure. PPD-21 also recognizes that DHS, in carrying out its responsibilities under the Homeland Security Act, evaluates national capabilities, opportunities, and challenges in protecting critical infrastructure; analyzes threats to, vulnerabilities of, and potential consequences from all hazards on critical infrastructure; identifies security and resilience functions that are necessary for effective stakeholder engagement with all critical infrastructure sectors; integrates and coordinates federal cross-sector security and resilience activities; and identifies and analyzes key interdependencies among critical infrastructure sectors, among other things. Executive Order 13636 directs DHS to, among other things, develop a voluntary cybersecurity framework; promote and incentivize the adoption of cybersecurity practices; increase the volume, timeliness, and quality of cyber threat information sharing; and incorporate privacy and civil liberties protections into every initiative to secure our critical infrastructure. 
Within DHS, the National Protection and Programs Directorate (NPPD) is responsible for working with public and industry infrastructure partners and leads the coordinated national effort to mitigate risk to the nation’s infrastructure through the development and implementation of the infrastructure protection program. Using a partnership approach, NPPD works with owners and operators of the nation’s infrastructure to develop, facilitate, and sustain strategic relationships and information sharing, including the sharing of best practices. NPPD also works with government and industry partners to coordinate efforts to establish and operate various councils intended to protect infrastructure and provide infrastructure functions to strengthen incident response. Our prior work has found that DHS and its partners have taken a number of steps intended to improve the security of our critical infrastructure. However, we have also identified a number of additional steps DHS could take to further improve its partnerships aimed at protecting our critical infrastructure. Specifically, our work has identified three key factors that can affect the implementation of the partnership approach used by DHS: (1) recognizing and addressing barriers to sharing information; (2) sharing the results of DHS assessments with industry and other stakeholders; and (3) measuring and evaluating the performance of DHS’s partnership efforts. Addressing pervasive and sustained computer-based and physical attacks to systems and operations and the critical infrastructures they support depends on effective partnerships between the government and industry owners and operators of critical infrastructure. Recognizing and addressing barriers to information sharing includes, among other things, identifying barriers to sharing information with partners, understanding information requirements, and determining partners’ reasons for participating in voluntary programs. 
Identifying barriers to industry sharing information with federal partners. In a July 2010 report examining, among other things, government stakeholders’ expectations for cyber-related, public-private partnerships, we identified some barriers to industry’s sharing of cyber threat information with federal partners. Many of the government entities we contacted reported that industry partners were mostly meeting their expectations in several areas, including sharing timely and actionable cyber threat information, though the extent to which this was happening varied by sector. However, we found that federal officials also reported that improvements could be made. For example, while timely and actionable cyber threat and alert information was being received from industry partners, federal officials noted there were limits to the depth and specificity of the information provided by industry partners. Among other issues, we found that industry partners did not want to share their sensitive, proprietary information with the federal government. For example, information security companies had concerns that they could lose a competitive advantage by sharing information with the government if, in turn, this information was shared with those companies’ competitors. In addition, despite special protections and sanitization processes, we found that industry partners were unwilling to agree to all of the terms that the federal government or a government agency requires to share certain information. On the basis of our findings, we recommended, among other things, that DHS, in collaboration with industry partners, use the results of our July 2010 report to continue to focus its information-sharing efforts on the most desired services. DHS concurred with this recommendation and described steps underway to address it, including the initiation of several pilot programs intended to enable the mutual sharing of cybersecurity information at various classification levels.
Identifying barriers to the government’s sharing information with industry partners. Federal efforts to meet the information-sharing expectations of industry partners are equally important in managing effective public-private partnerships to successfully protect cyber-reliant critical assets from a multitude of threats. In July 2010, we also examined industry partners’ expectations for cyber-related, public-private partnerships and identified some barriers to the federal government’s sharing of cyber threat information with its industry partners. We reported that federal partners were not consistently meeting industry’s information-sharing expectations, including providing timely and actionable cyber threat information and alerts, according to industry partners we contacted at the time. We found that this was, in part, due to restrictions on the type of information that can be shared with industry partners. We reported that, according to federal officials, DHS’s ability to provide information is affected by restrictions that do not allow individualized treatment of one industry partner over another—making it difficult to formally share specific information with entities that are being directly affected by a cyber threat. In addition, we reported in July 2010 that because DHS has responsibility for serving as the nation’s cyber analysis and warning center, it must ensure that its warnings are accurate. DHS vulnerability assessments are conducted during site visits at individual assets and are used to identify security gaps and provide options for consideration to mitigate these identified gaps. DHS security surveys are intended to gather information on an asset’s current security posture and overall security awareness. Security surveys and vulnerability assessments are generally asset-specific and are conducted at the request of owners and operators of assets crucial to national security, public health and safety, and the economy.
We recommended, and DHS concurred, that it design and implement a mechanism for systematically assessing why owners and operators of high-priority assets decline to participate, and develop a road map, with time frames and milestones, for completing this effort. DHS stated that it had implemented a tracking system in October 2013 to capture data on the reasons for declinations by owners and operators. Although DHS reports that it has taken or begun to take action on the open recommendations discussed above, we have not verified DHS’s progress implementing all of our recommendations. We will continue to monitor DHS’s efforts to implement these recommendations. Another important factor for DHS’s implementation of its partnership approach is sharing information on the results of its security assessments and surveys with industry partners and other stakeholders. Timely sharing of assessment results at the asset level. DHS security surveys and vulnerability assessments can provide valuable insights into the strengths and weaknesses of assets and can help asset owners and operators that participate in these programs make decisions about investments to enhance security and resilience. In our May 2012 report, we found that, among other things, DHS shares the results of security surveys and vulnerability assessments with asset owners or operators. However, we also found that the usefulness of security survey and vulnerability assessment results could be enhanced by the timely delivery of these products to the owners and operators, and that the inability to deliver these products in a timely manner could undermine the relationship DHS was attempting to develop with these industry partners.
Specifically, we reported that, based on DHS data from fiscal year 2011, DHS was late meeting its (1) 30-day time frame—as required by DHS guidance—for delivering the results of its security surveys 60 percent of the time and (2) 60-day time frame—expected by DHS managers—for delivering the results of its vulnerability assessments 84 percent of the time. DHS officials acknowledged the late delivery of survey and assessment results and said they were working to improve processes and protocols. However, DHS had not established a plan with time frames and milestones for managing this effort consistent with standards for project management. We recommended, and DHS concurred, that it develop time frames and specific milestones for managing its efforts to ensure the timely delivery of the results of security surveys and vulnerability assessments to asset owners and operators. DHS stated that, among other things, it deployed a web-based information-sharing system for facility-level information in February 2013, which, according to DHS, has since resulted in a significant drop in overdue deliveries. Sharing information with critical infrastructure partners at the sector level. Critical infrastructures rely on networked computers and systems, making them susceptible to cyber-based risks. Managing such risk involves the use of cybersecurity guidance that promotes or requires actions to enhance the confidentiality, integrity, and availability of computer systems. In December 2011, we reported on cybersecurity guidance and its implementation, and we found, among other things, that DHS and the other sector-specific agencies have disseminated and promoted cybersecurity guidance among and within sectors. However, we also found that DHS and the other sector-specific agencies had not identified the key cybersecurity guidance applicable to or widely used in each of their critical infrastructure sectors.
In addition, we reported that most of the sector-specific critical infrastructure protection plans for the sectors we reviewed did not identify key guidance and standards for cybersecurity because doing so was not specifically suggested by DHS guidance. We therefore concluded that, given the plethora of guidance available, individual entities within the sectors could be challenged in identifying the guidance that is most applicable and effective in improving their security, and that improved knowledge of the available guidance could help both federal and industry partners better coordinate their efforts to protect critical cyber-reliant assets. We recommended that DHS, in collaboration with government and industry partners, determine whether it is appropriate to have cybersecurity guidance listed in sector plans. DHS concurred with our recommendation and stated that it will work with its partners to determine whether it is appropriate to have cybersecurity guidance drafted for each sector and, in addition, would explore these issues with the cross-sector community. Sharing certain information with critical infrastructure partners at the regional level. Our work has shown that over the past several years, DHS has recognized the importance of and taken actions to examine critical infrastructure asset vulnerabilities, threats, and potential consequences across regions. In a July 2013 report, we examined DHS’s management of its Regional Resiliency Assessment Program (RRAP)—a voluntary program intended to assess regional resilience of critical infrastructure by analyzing a region’s ability to adapt to changing conditions, and prepare for, withstand, and rapidly recover from disruptions—and found that DHS has been working with states to improve the process for conducting RRAP projects, including more clearly defining the scope of these projects.
We also reported that DHS shares the results of each RRAP project report with the primary stakeholders—officials representing the state where the RRAP was conducted—and that each report is generally available to certain staff, such as sector-specific agencies and protective security advisors within DHS. However, we found that DHS did not share individual RRAP reports more widely with others in similar industry lines, including other stakeholders and sector-specific agencies outside of DHS. We also reported that DHS had been working to conceptualize how it can develop a product or products using multiple sources—including RRAP reports—to more widely share resilience lessons learned with its critical infrastructure partners, including federal, state, local, and tribal officials. DHS further reported using various forums, such as regional conferences or daily protective security advisor contacts, to solicit input from critical infrastructure partners to gauge their resilience information needs. Because of DHS’s ongoing efforts, we did not make a related recommendation in the report. However, we noted that through continued outreach and engagement with its critical infrastructure partners, DHS should be better positioned to understand their needs for information about resilience practices, which would in turn help clarify the scope of work needed to develop and disseminate a meaningful resilience information-sharing product or products that are useful across sectors and assets. Sharing information with sector-specific agencies and state and local governments. Federal sector-specific agencies and state and local governments are key partners that can provide specific expertise and perspectives in federal efforts to identify and protect critical infrastructure.
In a March 2013 report, we reviewed DHS’s management of the National Critical Infrastructure Prioritization Program (NCIPP)—which identifies and prioritizes a list of nationally significant critical infrastructure each year—including how DHS worked with states and sector-specific agencies to develop the list. We reported that DHS had taken actions to improve its outreach to sector-specific agencies and states in an effort to address challenges associated with providing input on nominations and changes to the NCIPP list. For example, in 2009, DHS revised its list development process to be more transparent and provided states with additional resources and tools for developing their NCIPP nominations. Furthermore, DHS provided on-site assistance from subject matter experts to assist states with identifying infrastructure, disseminated a lessons-learned document providing examples of successful nominations to help states improve justifications, and was more proactive in engaging sector-specific agencies in ongoing dialogue on proposed criteria changes, among other efforts. However, we also found that most state officials we contacted continued to experience challenges with nominating assets to the NCIPP list using the consequence-based criteria developed by DHS. We reported that DHS officials told us that they recognized that some states were facing challenges participating in the NCIPP program and had taken additional steps to address the issue, including working to minimize major changes to the consequence-based NCIPP criteria; enhancing state participation; and working collaboratively with the State, Local, Tribal and Territorial Government Coordinating Council to develop a guide to assist states with their efforts to identify and prioritize their critical infrastructure.
Furthermore, in our January 2014 report reviewing the extent to which federal agencies coordinated with state and local governments on enhancing cybersecurity within public safety entities, we determined that DHS shared cybersecurity-related information, such as information on threats and hazards, with state and local governments through various entities. For example, we found that DHS collected, analyzed, and disseminated cyber threat and cybersecurity-related information to state and local governments through its National Cybersecurity and Communications Integration Center and through its relationship with the Multi-State Information Sharing and Analysis Center. In addition, we reported that DHS’s State, Local, Tribal, and Territorial Engagement Office’s Security Clearance Initiative facilitated the granting of security clearances to state chief information officers and chief information security officers, which allowed these personnel to receive classified information about current and recent cyber attacks and threats. For example, we reported that, according to DHS officials, they have issued secret clearances to 48 percent of state chief information officers and 84 percent of state chief information security officers. Moreover, we reported that DHS provides unclassified intelligence information to fusion centers, which then share the information on possible terrorism and other threats and issue alerts to state and local governments. For example, in March 2013, a fusion center issued a situational awareness bulletin specific to public safety entities. Although DHS reports that it has taken or begun to take action on the open recommendations discussed above, we have not verified DHS’s progress implementing all of our recommendations. We will continue to monitor DHS’s efforts to implement these recommendations.
Measuring and evaluating the performance of DHS partnerships—by, among other things, obtaining and assessing feedback, evaluating why certain improvements are made, and measuring the effectiveness of partnerships and assessments—is another important factor in DHS’s implementation of its partnership approach. Obtaining and assessing feedback from industry partners. Taking a systematic approach to gathering feedback from industry owners and operators and measuring the results of these efforts could help focus greater attention on targeting potential problems and areas needing improvement. In April 2013, we examined DHS’s Chemical Facility Anti-Terrorism Standards (CFATS) program and assessed, among other things, the extent to which DHS has communicated and worked with owners and operators to improve security. Specifically, we reported that DHS had increased its efforts to communicate and work with industry owners and operators to help them enhance security at their facilities since 2007. We found that, as part of its outreach program, DHS consulted with external stakeholders, such as private industry and state and local government officials, to discuss issues that affect the program and facility owners and operators. However, despite increasing its efforts to communicate with industry owners and operators, we also found that DHS had an opportunity to obtain systematic feedback on its outreach. We recommended that DHS explore opportunities and take action to systematically solicit and document feedback on facility outreach. DHS concurred with this recommendation and has actions underway to explore such opportunities to make CFATS-related outreach efforts more effective for all stakeholders. Evaluating why facility-level improvements are made or not made.
According to the NIPP, the use of performance measures is a critical step in the risk management process to enable DHS to objectively and quantitatively assess improvement in critical infrastructure protection and resiliency at the sector and national levels. In our May 2012 report on DHS’s efforts to conduct surveys and assessments of high-priority infrastructure assets and share the results, we found that, consistent with the NIPP, DHS has taken action to follow up with participants to gather feedback from asset owners and operators that participated in the program regarding the effect these programs have had on asset security. However, we also found that DHS could consider using this follow-up tool to capture key information that could be used to understand why certain improvements were or were not made by asset owners and operators that have received surveys and assessments. For example, the follow-up tool could ask asset representatives what factors—such as cost, vulnerability, or perception of threat—influenced the decision to implement changes, either immediately or over time, if they chose to make improvements. We concluded that obtaining this information would be valuable to understanding the obstacles asset owners or operators face when making security investments. We recommended, and DHS concurred, that it consider the feasibility of expanding the follow-up program to gather and act upon data, as appropriate, on (1) security enhancements that are ongoing and planned that are attributable to DHS security surveys and vulnerability assessments and (2) factors, such as cost and perceptions of threat, that influence asset owner and operator decisions to make, or not make, enhancements based on the results of DHS security surveys and vulnerability assessments. DHS reported that it had modified the follow-up program to capture data on whether ongoing and planned security enhancements are attributable to security surveys and vulnerability assessments. 
Furthermore, DHS stated that it had also completed additional modifications to the follow-up tools to more accurately capture all improvements to resilience as well as information on factors influencing owner and operator decisions to make or not make enhancements. Measuring the effectiveness of sector-level partnerships. Ensuring the effectiveness and reliability of communications networks is essential to national security, the economy, and public health and safety. In an April 2013 report, we found that while DHS has multiple components focused on assessing risk and sharing threat information, DHS and its sector partners do not consistently measure the outcome of efforts to improve cybersecurity at the sector level. For example, we found that DHS and its partners had not developed outcome-based performance measures related to the cyber protection of key parts of the communications infrastructure sector. We concluded that outcome-based metrics related to communications networks and critical components supporting the Internet would provide federal decision makers with additional insight into the effectiveness of partner protection efforts at the sector level. We recommended that DHS collaborate with its partners to develop outcome-oriented measures for the communications sector. DHS concurred with our recommendation and stated that it is working with industry to develop plans for mitigating risks that will determine the path forward in developing outcome-oriented performance measures for cyber protection activities related to the nation’s core and access communications networks. Measuring the effectiveness of regional-level assessments. Similarly, in our July 2013 report examining DHS’s management of its RRAP program, we found that DHS had taken action to measure efforts to enhance security and resilience among facilities that participated in these regional-level assessments, but faced challenges measuring the results associated with these projects.
Consistent with the NIPP, DHS performs periodic follow-ups among industry partners that participate in these regional assessments with the intent of measuring their efforts to make enhancements arising out of these surveys and assessments. However, we found that DHS did not measure how enhancements made by industry partners at individual assets that participated in a RRAP project contributed to the overall results of the project. DHS officials stated at the time that they faced challenges measuring performance within and across RRAP projects because of the unique characteristics of each, including geographic diversity and differences among assets within projects. However, we concluded that DHS could better position itself to gain insights into projects’ effects if it were to develop a mechanism to compare facilities that have participated in a RRAP project with those that have not, thus establishing building blocks for measuring its efforts to conduct RRAP projects. We recommended that DHS develop a mechanism to assess the extent to which individual projects influenced partners to make RRAP-related enhancements. DHS concurred with our recommendation and reported that it had actions underway to review alternatives, including possibly revising its security survey and vulnerability assessment follow-up tool, to address this recommendation. Although DHS reports that it has taken or begun to take action on the open recommendations discussed above, we have not verified DHS’s progress implementing all of our recommendations. We will continue to monitor DHS’s efforts to implement these recommendations. In closing, the federal government has taken a variety of actions that are intended to enhance critical infrastructure cybersecurity.
Improving federal capabilities—through partnerships with industry, among other things—is a step in the right direction, and effective implementation can enhance federal information security and the cybersecurity and resilience of our nation’s critical infrastructure. However, more needs to be done to accelerate the progress made in bolstering the cybersecurity posture of the nation. The administration and executive branch agencies need to fully implement the hundreds of recommendations made by GAO and agency inspectors general to address cyber challenges. Until then, the nation’s most critical federal and private sector infrastructure systems will remain at increased risk of attack from our adversaries. Chairman Carper, Ranking Member Coburn, and members of the committee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For information about this statement please contact Stephen L. Caldwell, at (202) 512-9610 or [email protected], or Gregory C. Wilshusen, at (202) 512-6244 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions to this work included Edward J. George, Jr., Assistant Director; Michael W. Gilmore, Assistant Director; Hugh Paquette, Analyst-in-Charge; Jose Cardenas; Tom Lombardi; and Erin McLaughlin. This appendix provides information on the 16 critical infrastructure (CI) sectors and the federal agencies responsible for sector security. The National Infrastructure Protection Plan (NIPP) outlines the roles and responsibilities of the Department of Homeland Security (DHS) and its partners—including other federal agencies. Within the NIPP framework, DHS is responsible for leading and coordinating the overall national effort to enhance protection via 16 critical infrastructure sectors. 
Consistent with the NIPP, Presidential Policy Directive/PPD-21 assigned responsibility for the critical infrastructure sectors to sector-specific agencies (SSAs). As an SSA, DHS has direct responsibility for leading, integrating, and coordinating efforts of sector partners to protect 10 of the 16 critical infrastructure sectors. Seven other federal agencies have sole or coordinated responsibility for the remaining 6 sectors. Table 1 lists the SSAs and their sectors.
Federal efforts to protect the nation's critical infrastructure from cyber threats have been on GAO's list of high-risk areas since 2003. Critical infrastructure comprises assets and systems, whether physical or cyber, so vital to the United States that their destruction would have a debilitating impact on, among other things, national security and the economy. Recent cyber attacks highlight such threats. DHS, as the lead federal agency, developed a partnership approach with key industries to help protect critical infrastructure. This testimony identifies key factors important to DHS implementation of the partnership approach to protect critical infrastructure. This statement is based on products GAO issued from October 2001 to March 2014. To perform this work, GAO reviewed applicable laws, regulations, and directives as well as policies and procedures for selected programs. GAO interviewed DHS officials responsible for administering these programs and assessed related data. GAO also interviewed and surveyed a range of other stakeholders, including federal officials, industry owners and operators, industry groups, and cybersecurity experts.

GAO's prior work has identified several key factors that are important for the Department of Homeland Security (DHS) to implement its partnership approach with industry to protect critical infrastructure. DHS has made some progress in implementing its partnership approach, but has also experienced challenges coordinating with industry partners that own most of the critical infrastructure.

Recognizing and Addressing Barriers to Sharing Information. Since 2003, GAO has identified information sharing as key to developing effective partnerships. In July 2010, GAO reported some barriers affecting the extent to which cyber-related security information was being shared between federal and industry partners.
For example, industry partners reported concerns that sharing sensitive, proprietary information with the federal government could compromise their competitive advantage if shared more widely. Similarly, federal partners were restricted in sharing classified information with industry officials without security clearances. GAO recommended that DHS work with industry to focus its information-sharing efforts. DHS concurred and has taken some steps to address the recommendation, including sponsoring clearances for industry.

Sharing Results of DHS Assessments with Industry. GAO has found that DHS security assessments can provide valuable insights into the strengths and weaknesses of critical assets and drive industry decisions about investments to enhance security. In a May 2012 report, GAO found that DHS was sharing the results of its assessments with industry partners, but these results were often late, which could undermine the relationship DHS was attempting to develop with these partners. GAO recommended that DHS develop time frames and milestones to ensure the timely delivery of the assessments to industry partners. DHS concurred and reported that it has efforts underway to speed the delivery of its assessments.

Measuring and Evaluating Performance of DHS Partnerships. GAO's prior work found that taking a systematic approach to gathering feedback from industry owners and operators and measuring the results of these efforts could help focus greater attention on targeting potential problems and areas needing improvement. In an April 2013 report, GAO examined DHS's chemical security program and assessed, among other things, the extent to which DHS has communicated and worked with industry owners and operators to improve security. GAO reported that DHS had increased its efforts to communicate and work with industry to help them enhance security at their facilities. However, GAO found that DHS was not obtaining systematic feedback on its outreach.
GAO recommended that DHS explore opportunities and take action to systematically solicit and document feedback on industry outreach. DHS concurred and reported that it had taken action to address the recommendation. However, the cybersecurity of critical infrastructure remains on GAO's high-risk list and more needs to be done to accelerate the progress made. DHS still needs to fully implement the many recommendations on its partnership approach (and other issues) made by GAO and inspectors general to address cyber challenges. GAO has made recommendations to DHS in prior reports to strengthen its partnership efforts. DHS generally agreed with these recommendations and reports actions or plans to address many of them. GAO will continue to monitor DHS efforts to address these recommendations.
All 50 states and the District of Columbia submitted strategic highway safety plans to FHWA before October 2007, a deadline established by SAFETEA-LU. Additionally, the 25 state strategic highway safety plans we reviewed generally contained the key elements specified in SAFETEA-LU, such as consideration of all three approaches to improving highway safety (infrastructure improvement, behavioral approaches such as education and enforcement, and emergency medical service improvements) and evidence of involvement by a broad set of stakeholders. For example:

All 25 plans included infrastructure improvement and behavioral approaches among the emphasis areas or key strategies that states identified to address their top priorities.

Twenty-two of the plans included emergency medical services improvements.

Our review of the plans indicated that 20 of 25 states consulted with at least five of the eight specified types of stakeholders, including representatives of the state agencies that administer NHTSA and FMCSA safety grants.

As a result, the new planning process helped break down the separation between engineering and behavioral program planning that existed prior to SAFETEA-LU. Highway safety officials in states we visited said the extent of cooperation between stakeholders that occurred when developing the strategic highway safety plan was a largely new development after SAFETEA-LU. FHWA officials told us that they believe this change in planning is the most important result to date of the changes in HSIP. Likewise, officials responsible for safety programs at NHTSA, FMCSA, and in the states we visited agreed that HSIP's strategic highway safety planning process facilitated more integrated safety planning than had occurred in the past.
While the state plans we reviewed indicated general compliance with SAFETEA-LU's requirements for preparing strategic highway safety plans, states do not yet have the crash data analysis systems needed to identify and select possible safety improvements as set forth in SAFETEA-LU. These systems include crash location data in a geographic format suitable for mapping and roadway characteristics data—such as lane and shoulder dimensions—for all public roads, together with software that can analyze the data. With these components, states can identify hazardous locations, develop appropriate remedies, and target resources to the greatest hazards. The requirement to obtain and analyze data for all public roads is a significant departure from past practice for many states. Before SAFETEA-LU, states generally had such information only on the roads they owned, because that information was useful for managing the maintenance and operation of their state-owned roads. However, state-owned roads account for a relatively small proportion of the public road miles in most states, averaging 20 percent nationwide. In the six states we visited, the state-owned portion of all public roads ranged from about 8 percent in Iowa to about 33 percent in Pennsylvania, and the remaining roads were locally owned. This data gap presents a challenge for states that may be costly for many to address, but the increased funding authorized for HSIP is generally available for data improvements as well as safety projects. Our review of 25 state strategic highway safety plans and six site visits indicated that, to varying degrees, states lack key components of crash data analysis systems:

All 50 states maintain data on the crashes that occur on all public roadways in the state, but in the 25 states we reviewed, the information on crash locations was typically not in a geographic format (GIS or GPS) suitable for mapping.
Safety engineers use crash location data to determine if accidents recur, or cluster, at specific sites. Among the states we visited, Iowa and California had crash data in a geographic format that allowed accidents to be located precisely on any public road in the state, but the other four states did not have such data for nonstate roads. According to our review of 25 states’ strategic highway safety plans, some states are working toward improving their crash location data by upgrading their crash reporting systems with GPS capabilities, yet it is still common for crash location data to come from handwritten crash reports that use mile-post markers, intersections, or street addresses to identify crash locations. Most of the 25 states included in our review did not have data on roadway characteristics for all publicly owned roads, especially locally owned roads. As noted, states generally maintain these data only for roads they are responsible for maintaining and operating. For example, the Pennsylvania Department of Transportation originally established, and now maintains the data for, a roadway characteristics database to support its management and operation of state-owned roads. The department still uses the database primarily for this purpose, but the data can also be used for safety analyses. Furthermore, because it is costly and time consuming to gather and maintain roadway characteristics data, states generally have not expanded their roadway characteristics databases to include locally owned roads. For example, Florida officials estimated that it would initially cost $300 million and could take 3 years to develop such a database. In addition, they noted there would be annual maintenance costs to keep the data current. Of the six states we visited, only Iowa had roadway characteristics data for all public roads. 
Most of the 25 states we reviewed have not developed software or other analytic tools to use the crash location and roadway characteristics data to perform the analysis required by SAFETEA-LU. FHWA is developing a software system, known as “Safety Analyst,” that is designed to help states use crash location and roadway characteristics data to determine their most hazardous locations, rank them, identify possible remedies, and estimate the costs of implementing the remedies. FHWA estimates that it will complete the development of this software and release it to the states later in 2008. In the meantime, some states may also be developing their own approaches. For example, Mississippi is developing its own software, which is similar to Safety Analyst. Until states have obtained the necessary data and software, they cannot conduct the kind of data analysis specified by SAFETEA-LU—namely, identifying and ranking hazardous locations on all public roads, determining appropriate remedies, and estimating project costs. This kind of analysis is also necessary to generate 5 percent reports that fully meet the requirements for these reports set forth in SAFETEA-LU, including requirements for information on remedies and costs. Many of the 5 percent reports we reviewed lack this required information. FHWA provided guidance and technical assistance to states in preparing strategic highway safety plans, and FHWA division officials participated in each state’s planning process. FHWA’s guidance included memorandums describing new HSIP program procedures and a reference guide on strategic planning. Furthermore, FHWA held training symposiums and provided technical assistance through its division offices and resource center. According to our review of 25 strategic highway safety plans and six site visits, FHWA division staffs were actively involved in the state planning efforts that resulted in states’ adoption of strategic highway safety plans and FHWA’s acceptance of these plans. 
In its guidance to states on implementing HSIP, FHWA stopped short of requiring states to gather all the data needed for the type of safety analysis specified in SAFETEA-LU. FHWA set August 31, 2009, as a deadline for states to develop the crash location data needed to map crashes on all public roads. FHWA officials told us that they believe that states will meet this deadline. However, recognizing the data limitations many states face, FHWA has not set a date for states to have the other required data on roadway characteristics for all public roads. Without roadway characteristics data, states cannot identify remedies and estimate the costs of infrastructure projects using analytic tools, such as Safety Analyst, but must instead rely on older approaches that combine data analysis with field surveys of potential improvement locations, roadway safety audits, or other information sources. In its guidance on the 5 percent report, FHWA gave states leeway in interpreting the act’s requirements and did not specify a methodology. Recognizing the states’ data limitations, FHWA advised the states to prepare their 5 percent report using available data. Consequently, states prepared widely varying 5 percent reports. For example, some reports included remedies and costs for each location while others showed remedies and costs only for certain locations or for none at all. In our review of the 2007 reports for 25 states, the number of locations reported ranged from 5 to 880, with 3 states reporting 10 or fewer locations and 6 states reporting over 100. Additionally, many reports list locations in a format that the general public may find difficult to use. For example, the public may find it hard to identify a hazardous location when it is identified in the report by the roadway mile marker, as is done in several reports we reviewed. 
We found that some states were using their 5 percent reports to help identify projects for funding, but where the format for identifying the sites was not readily accessible to the public, it was not clear whether the reports would enhance public awareness of highway safety, as intended. As previously noted, federal and state officials told us that the strategic highway safety planning process improved collaboration and safety planning, but it is too early to evaluate the results of states' efforts to carry out HSIP since SAFETEA-LU's enactment, especially the results of infrastructure projects identified through the strategic highway safety planning process. However, preliminary evidence from our review of 25 states' plans and six site visits indicates that three provisions in SAFETEA-LU may not be aligned with states' safety priorities. First, states have generally not taken advantage of HSIP's flexible funding provision, which allows them to use HSIP funding for noninfrastructure projects. Second, the rail-highway crossing set-aside may target a low-priority type of project for some states, although other states continue to emphasize this area. Third, states have just begun to implement the high-risk rural road program, but data limitations may be making it difficult for some states to allocate program funds to qualifying projects. Too little time has passed for states to select and build infrastructure projects identified in their strategic highway safety plans and, as a result, it is too soon to evaluate the results of HSIP projects funded under SAFETEA-LU's authorization. Given the October 2007 deadline for states to submit their strategic highway safety plans to FHWA, states finalized their plans relatively recently—28 states did so in 2006, and the remaining 22 states, plus the District of Columbia, did so in 2007.
Because infrastructure projects can take a year or more to select and build, and subsequent project evaluations require 3 years’ worth of crash data after the projects have been implemented, it is too soon to assess the effectiveness of projects undertaken under the new program. States made limited use of the HSIP flexible funding provision that allows them to transfer up to 10 percent of their HSIP funds to behavioral and emergency medical services projects if they have adopted a strategic highway safety plan and certified that they have met all their safety infrastructure needs. As of the end of June 2008, seven states had applied to FHWA, and been granted approval, to transfer about $13 million in HSIP funds to behavioral or emergency medical services projects (see table 1), according to FHWA data. Though none of the six states we visited has requested approval to transfer HSIP funds, officials in two of those states did express interest in doing so. However, these officials noted that their states could not meet the certification requirement because of ongoing infrastructure needs and concerns about the potential legal liability a state could incur by certifying that all its infrastructure safety needs have been met. Officials in the other states we visited agreed that certification would be difficult, but did not express interest in transferring funds because they had enough infrastructure projects to use all the available HSIP funds. At least in part because of these conditions attached to transferring funds, most HSIP funding remains focused on infrastructure. In some instances, the funding allocated between approaches may not be aligned with the emphasis areas laid out in the state strategic highway safety plan. Nevertheless, states may use NHTSA and FMCSA grants as well as transfer HSIP funds to address behavioral and emergency medical services approaches to improving highway safety. 
In contrast to HSIP funding, though, grants from related NHTSA and FMCSA programs are not formally aligned with the strategic highway safety plan developed as part of HSIP. In our interviews with federal officials at FHWA, NHTSA, and FMCSA, we found that stakeholders from those three organizations were collaborating, usually informally, but to date, the flexible funding provision in HSIP has not significantly altered the sources of federal funding states use to fund infrastructure, behavioral, and emergency medical services safety projects. Additionally, because states’ NHTSA and FMCSA grant awards are not formally aligned with states’ strategic highway safety plans, it is unclear to what extent states have aligned their total federal highway safety funding with priorities identified in their strategic highway safety plans. HSIP’s funding set-aside for rail-highway crossing improvements may target projects that are a low priority and yield low safety benefits for some states, but other states continue to emphasize rail-highway crossing improvements. Our review of 25 strategic highway safety plans showed that improving rail-highway crossings was often a low priority for states. As noted earlier, states designate their top safety priorities as emphasis areas in their strategic highway safety plans and identify their most hazardous locations in their 5 percent reports. Seventeen of 25 states had not identified rail-highway crossings as an emphasis area. In our review of the 5 percent reports submitted by these 25 states in 2007, we found that Oregon alone identified a rail-highway crossing in its 5 percent report of most hazardous locations. States’ relatively low emphasis on safety improvements at rail-highway crossings may be related to their evaluations of the effectiveness of recent improvements. 
In reviewing our 25 selected states' rail-highway crossing program annual reports for 2007, we found 21 reports that included before-and-after crash data for rail-highway crossing improvement locations. In 15 of these 21 states, almost all of the improved locations showed zero incidents both before and after the improvement. Nevertheless, West Virginia's annual crossing report noted that as long as federal funding through the set-aside program continues, the state's strategic highway safety plan will address rail-highway crossings despite low project benefits. The six states we visited varied in their views on the set-aside for rail-highway crossing improvements. Officials in two of the states said that the set-aside may be disproportionately high given the low risk rail-highway crossings pose compared with other hazardous locations. FHWA Office of Safety officials agreed that the program's funding, which accounts for approximately 17 percent of HSIP authorizations, was high based on the number of fatalities that occur at rail-highway crossings. Conversely, officials in Illinois noted that rail-highway crossings are a safety priority for the state. Additionally, Mississippi demonstrated the importance of improving crossings through its safety programs by augmenting federal set-aside funds with state funds. The SAFETEA-LU Technical Corrections Act provides states with flexibility to use rail-highway crossing set-aside funds for other types of HSIP projects if they certify that they have met all their rail-highway crossing needs. While it remains to be seen how states will respond to this amendment, they may be reluctant to certify that they have met all their needs. As noted earlier, some states have been reluctant to make use of HSIP's flexible funding provision because they may still have some infrastructure needs or may have legal concerns about the potential liabilities of such a certification.
Many states are still in the early stages of implementing the set-aside program for high-risk rural roads and have yet to obligate significant funds for projects, and data limitations may be hindering their ability to target program funds to eligible projects. SAFETEA-LU created this program because over half of highway fatalities occur on rural roads. The act authorizes $90 million per year to address hazards on rural roads defined as high risk. Projects on roadways that meet the act's definition are eligible for funding under the program. According to reports on the program to FHWA by the 25 states we selected, 23 of these states had implemented the program to some extent by the end of fiscal year 2007. Of these 23 states, 16 had already identified projects and approved, funded, or contracted for at least one infrastructure project, and 7 were still identifying potential projects, gathering data, or performing other preliminary activities. Because states remain in the early stages of implementing the program, obligations made to date are low; for example, through June 2008, program obligations for all years under SAFETEA-LU totaled $50.3 million, compared with almost $270 million authorized through that time period. Limited data on rural roads—including data on crash locations and local roadway characteristics—may be hindering the program's implementation by making it difficult for some states to identify roads that conform to the definition of high-risk rural roads in SAFETEA-LU. Officials in five of the states we visited noted that limitations in their crash location and roadway characteristics data made it difficult for them to identify qualifying roadways and appropriate remedies. Additionally, in our review of 25 state reports, we found states cited data limitations as a difficulty in implementing the program. For example, at the end of fiscal year 2007, Texas had yet to implement the program due to data limitations.
Chairman Boxer and Members of the Committee, this concludes my prepared statement. We plan to report in more detail on changes in the Highway Safety Improvement Program and may have recommendations at that time. I would be pleased to respond to any questions that you or other Members of the Committee might have. For further information on this statement, please contact Katherine A. Siggerud at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony were Rita Grieco, Assistant Director; Richard Calhoon; Elizabeth Eisenstadt; Bert Japikse; Sara Ann Moessbauer; John W. Stambaugh; and Frank Taliaferro. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
About 43,000 traffic fatalities occur annually, and another 290,000 people are seriously injured on the nation's roads. To reduce these numbers, the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) nearly doubled funding for the Federal Highway Administration's (FHWA) Highway Safety Improvement Program (HSIP), authorizing $5.1 billion for 2006 through 2009. SAFETEA-LU also added requirements for states to develop strategic highway safety plans that cover all aspects of highway safety, including infrastructure, behavioral (education and enforcement), and emergency medical services projects; develop crash data analysis systems; and publicly report on the top 5 percent of hazardous locations on all their public roads. SAFETEA-LU also set aside funds for a legacy rail-highway crossing program and a new high-risk rural road program. This testimony provides preliminary information on the implementation of HSIP since SAFETEA-LU. It is based on ongoing work that addresses (1) states' implementation of HSIP following SAFETEA-LU, (2) FHWA's guidance and assistance for states, and (3) results of HSIP to date, including for the two set-aside programs. To conduct this study, GAO visited 6 states, judgmentally selected based on highway safety attributes, analyzed plans and reports from these 6 states and 19 randomly selected states, and interviewed FHWA and state safety officials. All states submitted strategic highway safety plans and reports listing the top 5 percent of their hazardous locations, according to FHWA. The 25 state plans GAO reviewed generally cover all aspects of highway safety, but the 25 states have not fully developed the required crash data analysis systems. FHWA and state safety officials cited the collaboration that occurred among safety stakeholders in developing the plans as a positive influence on state safety planning. 
Many of the 25 states lacked key components of crash data analysis systems, including crash location data, roadway characteristics data, and software for analyzing the data. As a result, most states cannot identify and rank hazardous locations on all public roads, determine appropriate remedies, and estimate costs, as required by SAFETEA-LU, and their 5 percent reports often lack required information on remedies and costs. FHWA provided written guidance and training to assist the states, especially in preparing their strategic highway safety plans, and participated in every state's strategic safety planning process. However, FHWA has not required states to submit schedules for obtaining complete roadway characteristics data, and because states lack complete data, FHWA's guidance on the 5 percent reports did not specify a methodology. As a result, states' 5 percent reports vary widely, raising questions about how this report can be used. It is too soon to evaluate the results of HSIP as carried out under SAFETEA-LU because states need more time to identify, implement, and evaluate projects they have undertaken since adopting their strategic highway safety plans. However, preliminary evidence indicates that some HSIP provisions may not be aligned with states' safety priorities. First, most states have not taken advantage of a new spending provision that allows states to use some HSIP funds for behavioral or emergency medical services projects, partly because a certification requirement--that all state highway safety infrastructure needs have been met--may make them reluctant to do so. Second, the rail-highway crossing set-aside program does not target the top safety priorities of some states. Lastly, states are still in the early stages of implementing the high-risk rural road set-aside program, and data limitations may make it difficult for some of them to identify qualifying projects, especially for locally owned rural roads. FHWA agreed with GAO's findings.
Congress has long been concerned about the movement of government officials from DOD to private employers who do business with their former agencies and has passed laws that place limitations on the employment of former government officials. The laws include penalties for violations by the former government employee and civil or administrative penalties for the contractors who employ them. There are acknowledged benefits to employing former government officials for both DOD and defense contractors; for example, former DOD officials bring with them knowledge and skills in acquisition practices developed at DOD, which also benefits DOD when communicating with these contractor personnel. However, a major concern with post-government employment has been that the employment of senior military and civilian officials and acquisition officials by defense contractors immediately after they leave DOD could lead to conflicts of interest and affect public confidence in the government by creating the following perceptions, among others:

DOD personnel who anticipate future employment with a defense contractor might be perceived as using their position to gain favor with the contractor at the expense of the government, and

former DOD personnel who work for a defense contractor might be perceived as using their contacts with former colleagues at DOD to the benefit of the defense contractor and to the detriment of the public.

The principal restrictions concerning post-government employment for DOD and other federal employees after leaving government service are found in 18 U.S.C. § 207 (post-employment conflict of interest) and 41 U.S.C. § 423 (restrictions on former officials' acceptance of compensation from a contractor).
Importantly, the laws do not prohibit an individual from working on a contract under the responsibility of the official’s former agency, or even on a contract that was under the official’s direct responsibility, if the appropriate cooling-off periods are met or if the former officials restrict their activities to behind-the-scenes work and do not represent their new company to their former DOD employer. The laws are complex, and the brief summaries here are intended only to provide context for the issues discussed in this report. The 18 U.S.C. § 207 provision generally prohibits an individual from representing a contractor to their former agency on particular matters involving specific parties that they handled while working for the federal government, such as a specific defense contract. The law restricts representing the contractor to the official’s former agency for defined cooling-off periods that vary according to the former official’s involvement and seniority. For example: former personnel are permanently barred from representing their new employer to their former agencies on matters in which they were personally and substantially involved; for 2 years after leaving federal service, former personnel may not represent their new employer to their former agency on matters that were pending under their official responsibility during their last year of service, even if they were not directly involved in those matters; and for 1 year after leaving federal service, former senior-level officers and employees may not contact their former agency on particular government matters (such as a contract) that are pending before or of substantial interest to the former agency. The 41 U.S.C. § 423 provision applies more narrowly to the work former DOD and other government acquisition officials may do after leaving federal service. 
The law restricts former DOD acquisition officials from accepting compensation from a defense contractor during a 1-year cooling-off period. Specifically, this provision prohibits employment with a contractor if the acquisition official performed certain duties at DOD involving the contractor and a contract valued in excess of $10 million. However, the law permits former acquisition officials to accept employment from “any division or affiliate of a contractor that does not produce the same or similar products or services” that were produced under the contract. The laws establish penalties for individuals and contractors who do not comply with the restrictions. Recent high-profile cases involving former senior DOD officials’ violations of these laws, or of a related conflict-of-interest law governing the pursuit of post-government employment with contractors, have resulted in serious consequences for both the officials and their defense contractor employers. Examples are as follows: In July 2007, a retired Navy rear admiral pleaded guilty to a charge of violating 18 U.S.C. § 207. The former admiral admitted to signing a major contract proposal and cover letter on behalf of his new contractor employer and sending it to his former Navy command in San Diego within the 1-year cooling-off period. In his plea, the former officer admitted that his intent in sending the letter was to influence the Navy’s decision and obtain the contract award for his new company. The former admiral was sentenced to a year’s probation and fined $15,000. In response to the conflict of interest, the Navy also eliminated the contractor’s bid before awarding the contract. In 2006, the Boeing Company was fined $615 million and had a lease contract valued at $20 billion canceled, in part, due to the failure of Ms. Darleen Druyun, a former senior Air Force procurement officer, to comply with conflict-of-interest laws that prohibit officials from continuing to participate in work with a company while pursuing future employment with it. Specifically, while she was working for the Air Force, Ms. Druyun negotiated jobs with Boeing for her daughter, her son-in-law, and herself while Boeing was seeking a $20 billion contract to lease tanker aircraft to the Air Force. Ms. Druyun served a prison sentence for the violations, and the Boeing Company’s Chief Financial Officer pleaded guilty to aiding and abetting the fraud and was sentenced to 4 months in prison, fined $250,000, and given 200 hours of community service. About 86,000 military and civilian personnel who had left DOD service in a 6-year period since 2001 were employed in 2006 by the 52 major defense contractors, including 2,435 former DOD officials who had been senior civilian executives, generals, admirals, or acquisition officials such as program managers, deputy program managers, and contracting officers. This latter group of contractor employees, hired between 2004 and 2006, had served at DOD in positions that made them subject to post-government employment restrictions. Contractors’ employment of former DOD officials was highly concentrated—1,581 former DOD officials were employed by seven of the 52 contractors. To estimate how closely related the work assignments of former DOD officials were to their previous assignments at DOD, we examined in greater detail the job histories of a randomly selected sample of former DOD senior and acquisition officials employed by the contractors. 
While there may be proper justification for their post-government employment with a contractor, on the basis of this sample we estimate that at least 422 individuals could have been working on defense contracts directly related to their former DOD agencies, and we estimate that at least nine could have been working on the same defense contracts for which they had program oversight responsibilities or decision-making authorities while at DOD. The information we analyzed to make this estimate was not designed to identify, nor should this estimate be used to suggest, that we found any violations of the restrictions on post-government employment. Moreover, contractors provided justification for the former government employees in our sample to work on the contracts. However, the estimated number of former DOD officials who could have worked on defense contracts related to their prior agencies or prior direct responsibilities indicates why there is concern over how contractors monitor their former DOD employees. The 1,857,004 military and civilian employees who left DOD service over the 6 years since 2001 included 35,192 who had served in the type of senior or acquisition official positions that made them subject to post-government employment restrictions if they were subsequently hired by defense contractors. As shown in table 1, our analysis of the major defense contractors’ employment found that contractors employed 86,181 former DOD military and civilian personnel in 2006. This tally includes 2,435 former senior-level and acquisition officials whom one or more of the contractors hired since 2004 and employed in 2006. 
Although the number of former DOD senior-level and acquisition officials employed in 2006 varied greatly across the 52 defense contractors, as shown in table 2, post-DOD employment was highly concentrated at seven contractors—Science Applications International Corporation, Northrop Grumman Corporation, Lockheed Martin Corporation, Booz Allen Hamilton, Inc., L3 Communications Holding, Inc., General Dynamics, and Raytheon Company. These contractors accounted for about 65 percent of the former DOD senior and acquisition officials hired at the 52 companies and for over 40 percent of the value of contract awards to the 52 contractors. Employment of former DOD senior and acquisition officials at the remaining 45 contractors was much less concentrated. Specifically, in 2006, employment of the former DOD officials totaled 10 or fewer at 24 of the contractors, and 4 of these contractors did not employ any former DOD senior or acquisition officials in 2006. Appendix III presents more detail on the employment of former DOD senior and acquisition officials in 2006 for each of the 52 contractors. To obtain an understanding of the characteristics of the major defense contractors’ employment of former DOD senior and acquisition officials in relation to these officials’ prior DOD positions—i.e., military or civilian, senior-level, or acquisition-related, and DOD employer (such as Air Force or Army)—we analyzed contractor employment at the 52 companies to look for significant differences, if any, across categories related to the officials’ former DOD positions. As shown in table 3, of the total former DOD officials that the contractors employed in 2006, we found there were nearly five times as many former acquisition officials (2,021 individuals) as former senior officials (414 individuals). 
In their former DOD positions, these 2,021 acquisition officials served in key procurement-related positions—such as program manager, deputy program manager, or contracting officer—and generally had the type of critical responsibilities, relationships, and influence that characterize DOD’s business interactions with its contractors. As also shown in table 3, the contractors employed 414 former senior DOD officials in 2006. Our analysis of these senior officials’ DOD positions before their post-government employment with the contractors found that they had served in a range of high-level positions, including as generals, admirals, and civilian senior executives. As such, in their former positions, these DOD senior officials had held key positions that could influence DOD’s mission-related decision-making. We also found that contractors’ post-DOD employment was almost evenly divided between former military and civilian officials, as shown in table 4. In addition, most of the former DOD officials employed by the contractors in 2006 had previously served in positions at the Air Force and Navy, followed by those who had previously served in Army positions. To provide information about former DOD officials’ work assignments with contractors, we analyzed job histories and work assignments for a stratified random sample of former DOD officials to determine whether these individuals worked on defense contracts or programs for which they had direct responsibility at DOD or which were the responsibility of their former DOD agency, office, or command. We estimate that many former DOD officials could have been working on defense contracts under the responsibilities of their prior DOD agencies and that a few could have been working on the same defense contracts for which they had program oversight responsibilities or decision-making authorities while working at DOD. 
It is important to keep in mind, however, that post-government employment in these instances could be lawful depending on the role the employee had with the government, the role the employee had with the contractor, and the length of time between government service or work relating to the contract and the private employment. Also, contractors responding to our survey were self-reporting on a sensitive issue dealing with circumstances that could indicate potential conflicts of interest. As such, the information we sought from contractors was not designed or expected to elicit specific cases of post-government employment violations, nor did we identify any. Further, contractors provided justifications for the former DOD officials in our sample working on the defense contracts. Nevertheless, the results provide insight on the estimated magnitude of former officials’ post-government employment with major defense contractors tied to their prior agencies and direct responsibilities. In our view, the results also indicate the importance of careful monitoring to ensure that conflicts of interest do not occur. To estimate how many former DOD officials were working on assignments that were the responsibilities of their former DOD agencies or for which they had program oversight responsibilities or decision-making authorities at DOD, we drew a stratified random sample of 125 individuals from the former DOD senior and acquisition officials identified by contractors as being employed in 2006. We sent a questionnaire asking the contractor for information concerning the individual’s job history, including the circumstances of the assignment if the job history showed that they were working on assignments related to their former positions while they were at DOD. (App. IV provides a copy of the questionnaire we used.) 
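The stratified-sample estimation approach described above can be illustrated with a small sketch: each stratum's sample proportion is weighted by the stratum's population size, and the weighted totals are summed to produce a population-level estimate. The stratum definitions, sizes, and sample counts below are illustrative assumptions for demonstration only, not GAO's actual sampling frame or results.

```python
# Minimal sketch of stratified-sample extrapolation; all figures are
# hypothetical and do not reproduce GAO's actual sample design.

def stratified_estimate(strata):
    """Estimate a population total from a stratified sample.

    strata: list of dicts with keys:
      N - population size of the stratum
      n - number of individuals sampled from the stratum
      x - number in the sample with the characteristic of interest
    """
    total = 0.0
    for s in strata:
        # Weight the stratum's sample proportion by its population size.
        total += s["N"] * (s["x"] / s["n"])
    return total

# Illustrative strata (e.g., officials at the largest employers vs. the rest).
strata = [
    {"N": 1581, "n": 60, "x": 15},
    {"N": 854,  "n": 65, "x": 10},
]
print(round(stratified_estimate(strata)))
```

A real design would also compute standard errors for each stratum to support "at least" statements like those in the report; this sketch shows only the point estimate.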
Extrapolating from the sample results, we estimate that at least 422 officials could have had contractor assignments working on defense contracts that were the responsibilities of their former DOD agencies. We estimate that at least nine could have worked on contracts for which they had program oversight responsibilities or decision-making authority at DOD. The contractors reported other information about the sampled individuals that explained why these work assignments would not involve potential conflicts of interest or violations of post-government employment restrictions, including the following: The individuals’ cooling-off (i.e., restriction) periods had expired. The individuals were performing behind-the-scenes work and did not have direct contacts with their former DOD agencies about the particular defense contracts. The individuals were working on different defense projects than they had worked on while at DOD, but for the same agencies. For example, while the contractors reported that 20 former Navy officials in our sample worked on Navy contracts, the contractors also reported that none of the individuals were working on the same project they were responsible for when in the Navy. Most of the 47 major defense contractors who responded to our survey on practices related to post-government employment report using a range of techniques to ensure awareness of and employee compliance with restrictions, although we found contractors were challenged to provide accurate information identifying their former DOD officials. Notably, the contractors identified little more than half as many former DOD officials as we found by matching IRS and DOD data, suggesting the information challenge defense contractors and DOD face in monitoring former DOD officials. Moreover, what information the contractors may have on former DOD officials’ assignments on defense contracts is, for the most part, not available to DOD. 
New legislation requiring former officials to obtain ethics advisory letters and requiring DOD to keep them in a central database could provide some additional information, but it will not give DOD the kind of information needed—that is, the names of contractor employees who are former DOD officials working on a particular contract and the contractor’s assurance that these employees are in compliance with the post-government employment restrictions related to the contract. Post-government employment restrictions on former DOD officials can affect every aspect of defense contractors’ hiring practices, including when employment discussions may occur, who may be hired, and what tasks they may perform during a 1- to 2-year period after leaving DOD. Post-government employment laws do not require contractors to identify, monitor, or provide reports on former DOD employees regarding compliance with their restrictions. However, violating existing laws may result in civil and criminal penalties for aiding misconduct by former government officials and thus, according to contractors’ ethics and personnel representatives, provides an impetus for adopting a range of practices to ensure awareness and compliance. In initial interviews with some of the major defense contractors on the need for and scope of corporate compliance with post-government employment practices, ethics and personnel representatives told us about a variety of means of identifying, screening, tracking, training, and keeping personnel records for former DOD officials. To gain a better understanding of the scope of major defense contractors’ practices in these areas, we surveyed the 52 contractors on their practices. The following is a summary and analysis of information from the 47 contractors who responded. Appendix V presents detailed results from the contractor survey. 
Our survey asked contractors if they seek affirmation about a potential employee’s previous DOD or other government status prior to offering employment. As shown in table 5, most of the contractors reported that they ask potential permanent hires if they were formerly a DOD official, and a majority of contractors ask the same question of independent contractors (e.g., self-employed consultants), temporary employees, and members of the Board of Directors. Contractors were about evenly split on the use of this question on a job application and the use of a special form to capture this information from job applicants. Similarly, contractors were divided on the use of electronic or paper collection of an applicant’s information, with some contractors citing the use of both methods. Our survey asked contractors if they request that employees provide a copy of their written ethics advice letters and, if so, how long, if at all, they keep these letters on file once they hire these applicants. As shown in table 6, a majority of contractors responded that they request that permanently hired employees, temporary employees, and members of the Board of Directors provide a copy of their DOD ethics advice letters from the agencies’ ethics counselors detailing their DOD experience and providing an opinion on whether employment with a specific contractor is permitted under post-government employment restrictions. Nearly half of the contractors said they also ask for these letters from independent contractors they hire. Some contractors indicated that they were not sure whether the DOD ethics advice letters were requested from applicants who are potential job candidates. Regarding how long the DOD ethics advice letters are kept on file, the contractors reported varying practices, with many keeping them throughout the former DOD official’s employment and others keeping them for the period of restriction or for a specified time. 
Our survey asked contractors to describe what steps, if any, they take to ensure that former DOD officials working for them comply with their post-government employment restrictions. As shown in table 7, a majority of contractors cited counseling/legal review and recruitment/hiring processes as the primary methods of ensuring that former DOD employees comply. Further analysis of contractor survey responses indicates that 12 contractors track former DOD employees’ government-project-related job assignments electronically to ensure compliance, and nine indicated that such records are not kept. However, more than half of the contractors indicated that they use internal and external audits to ensure the sufficiency of their procedures for tracking assignments, including post-government assignments of former DOD officials. Our survey asked contractors about training requirements to inform employees about policies regarding post-government employment restrictions for former federal employees or to reinforce them. As shown in table 8, a majority of contractors indicated that they require training for at least some employees. Further analysis of contractor responses indicates that their training is targeted to one or more employee groups, such as senior-level managers, human resources staff, middle-level managers, or former federal government employees. The training also varies in timing and frequency: it can take place initially upon employment, with refresher training annually or every 2 years, for example. Twelve contractors reported that they mandated training for all employees; five contractors reported mandatory annual training. As noted, most major defense contractors report using a range of practices for monitoring their DOD hires to ensure compliance with restrictions, even though no laws or regulations require them to track or provide reports to that effect. 
However, the contractors’ ability to access and provide information on former DOD officials’ employment and work on specific defense contracts proved challenging. For example, contractor-provided data on the number of former DOD officials working for them were significantly lower than what we determined through our match with IRS information. Specifically, our analysis of major defense contractors’ employment of former DOD officials in 2006, based on matching contractor-supplied information with DOD personnel data, found that the contractors employed a total of 1,263 former DOD senior and acquisition officials, while our match of IRS information and DOD personnel data showed the contractors employed a total of 2,435 former DOD officials, or almost twice as many. In addition, as shown in table 9, only 15 of the 30 major defense contractors who responded to our questionnaire were able to provide ethics advice letters for at least one of the individuals in our stratified random sample. Specifically, 24 of the 30 who responded to our survey on their practices said that they asked employees for their DOD ethics advice letters as one of their practices for ensuring compliance with post-government employment restrictions, and many reported keeping these letters on file throughout the former officials’ employment. However, 10 of the contractors that reported asking for the letters did not provide any ethics advice letters in response to our questionnaire. As noted earlier in this report, contractors are not required to keep copies of these letters. In the future, however, information on DOD ethics advice letters for former DOD senior and acquisition officials could be more readily available to all DOD contractors as a result of a provision in the National Defense Authorization Act for Fiscal Year 2008 imposing new requirements on defense officials and contractors. 
Specifically, under this provision (enacted January 28, 2008), defense contractors may not knowingly compensate (i.e., employ) former DOD officials who are subject to post-government employment restrictions without first determining that the official has sought and received a written ethics advice opinion from DOD within 30 days of seeking the opinion. To implement this requirement, however, defense contractors are likely to face new information challenges in keeping records that adequately document that they did not knowingly employ a former DOD official who did not seek or receive the applicable DOD written ethics opinion. Contractors responding to our survey were generally able to provide information about DOD and contractor job histories for most of the former DOD officials in our sample. However, according to the corporate headquarters staff of several contractors—who had to collect the detailed job histories from information submitted from across their companies in order to respond to our survey—accumulating this information was challenging. According to these contractor staff, the absence of automated assignment tracking or standardized personnel information systems across their companies made it difficult for them to centrally compile the information. That is, for some contractors, the currently available information on former officials’ post-DOD work on specific pending or awarded contracts appears to be decentralized among the various business units responsible for those defense contracts. We found that the scope and quality of the job histories contractors provided to us were sufficient for our analysis of the magnitude of post-DOD work related to prior agencies and responsibilities. However, our questionnaire was not designed or expected to elicit contractor information on specific conflicts of interest or noncompliance cases, such as whether cooling-off periods were unexpired. 
As with defense contractors, no laws or regulations require DOD ethics or acquisition officials to track or monitor former DOD employees after they begin their new contractor jobs to ensure compliance with applicable post-government employment restrictions. As discussed earlier in this report, past legislative requirements to make former officials’ employment with defense contractors more transparent to DOD by having individuals or contractors report on such employment were not successful and were repealed by 1995. However, the repeal left DOD without a mechanism to obtain information about its former senior and acquisition officials who go to work for its contractors. In our view, and DOD ethics and procurement officials agree, the information currently available to DOD from providing written ethics opinions to former DOD senior and acquisition officials who request them regarding prospective employment restrictions has limited utility for monitoring compliance with post-government employment restrictions once former DOD officials go to work for defense contractors, for several reasons: while officials have been encouraged to seek ethics advisory opinions, they have not been required to obtain them, nor have contractors been required to ask for them; DOD’s record-keeping for its written ethics opinions is decentralized at the many defense ethics offices that issued them; and DOD lacks a mechanism for providing the information to contracting officers or program managers for a particular contract. 
Nonetheless, for DOD’s purposes, ethics advisory opinions may now be more readily available and centrally located because of the 2008 defense authorization act provision that requires former officials to obtain written ethics opinions on applicable post-government employment restrictions from their DOD ethics officials before accepting compensation from defense contractors for a period of 2 years after leaving DOD service. DOD also has a new record-keeping requirement to retain each request and each written opinion provided in a central database or repository for at least 5 years. While this requirement may help to increase transparency over which former officials are working for contractors and what may raise a potential conflict of interest, its utility may be limited because the information is not tied to specific contracts. Senior ethics officials in DOD’s Standards of Conduct Office and the director of Defense Procurement and Acquisition Policy and Strategic Sourcing (DPAP), for example, told us that DOD currently does not have a mechanism to link information on former officials’ post-DOD work for their new employers to specific defense contracts that are pending before or awarded by their former agencies, offices, or commands. They believed that such a mechanism would be valuable to program managers and contracting officers who need to ensure that contracted work being done in their programs is free of conflicts. They also believed that such a mechanism would be relatively cost-effective to implement. In fact, after learning of the results of our data collection efforts, these officials were concerned that current mechanisms do not give DOD a clear picture of how many former officials are working for contractors and what risks of conflicts are present. 
The public needs to be assured that decisions related to the hundreds of billions of dollars spent each year on defense contracts comply with the applicable post-government employment restrictions and are free of conflicts of interest. But monitoring whether former DOD officials who work for defense contractors are in compliance with these rules or have a conflict of interest is highly challenging. Our review illustrated aspects of this challenge, including the difficulties associated with collecting data on thousands of employees working for just 52 contractors. Our surveys would likely have been more difficult to accomplish if they had been applied to the entire spectrum of defense contractors, which includes hundreds of small companies that may not have automated or complete information on their employees. Further, requirements imposed in the past to collect information on former DOD officials working for contractors have not been effective for a variety of reasons. These include the difficulties of asking private citizens to report back to the government on their employment for extended periods of time and disparities in the way information was collected and reported. Moreover, when information was collected, its value was limited, according to DOD officials, because it could not be tied to specific programs or contracts, where it could inform those responsible for ensuring integrity at the front line of acquisitions. Despite these challenges, there may be ways to collect more accurate and useful information, for example, by asking potential contractors to certify that their employees are in compliance with post-government employment restrictions when contracts are being awarded. 
The results of our review—particularly those relating to the estimated numbers of former DOD senior and acquisition officials who could be working in areas that tie back to their work at DOD—show that examining such options is worthwhile. Former DOD officials can and do work on defense contracts related to their prior agencies or their direct responsibilities, creating a risk of actual or apparent conflicts of interest and underscoring the need to maintain public trust in the integrity of defense contracting. To provide greater transparency during the acquisition process, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to consider the relevant recent statutory changes and determine whether changes in procurement policy are needed to impose additional reporting requirements or other requirements to guard against violations of the government’s post-employment rules. For example, DOD could consider requiring defense contractors who are awarded a contract, within a set number of days after contract award, to (1) disclose to the contracting officer the names of employees who are certain former DOD officials (e.g., civilian senior executives, high-level military officers, or acquisition officials) and who worked on the response to the solicitation and (2) certify that these employees are in compliance with the applicable post-government employment restrictions. In addition, after assessing the benefits and costs associated with the certification process, DOD could consider whether and to what extent it should apply a similar mechanism throughout the term of the contract. In responding to a recent report we issued on contractor employee personal conflicts of interest, DOD tasked its Panel on Contracting Integrity to examine the issues we raised and potential solutions. It may also want to do the same with regard to post-government employment reporting. 
We provided a draft of this report to DOD for comment. The DPAP director wrote that DOD concurs with our recommendation. Specifically, he wrote that the recommendation will be referred to the Panel on Contracting Integrity for consideration and action. DOD’s Acting General Counsel also provided written technical comments, which we incorporated into the report as appropriate. DOD’s comments are reproduced in appendix II. We are sending copies of this report to the Secretary of Defense, the Director of the Office of Management and Budget, the Director of the Office of Government Ethics, and other interested parties. We will make copies available to others upon request. We will make this report available to the public at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Congress included a provision in the John Warner National Defense Authorization Act for Fiscal Year 2007 requiring us to report on the recent employment of former Department of Defense (DOD) officials by major defense contractors. In response, our report objectives were to (1) develop information on how many former DOD military and civilian personnel recently worked for major defense contractors and develop an estimate of how many of these were former DOD senior or acquisition officials who worked on defense contracts for these employers that were related to their former positions at DOD and (2) identify the practices used to monitor compliance with post-government employment restrictions and the information challenges that contractors and DOD face in monitoring the movement of former DOD employees to defense contractors. 
This report does not address any government employment restrictions which might be applicable when former private sector employees are employed by DOD or other federal government agencies. In November 2007, in part to meet our reporting requirement, we provided an interim briefing to the Senate and House Armed Services Committees. Section 851 of the National Defense Authorization Act for Fiscal Year 2007 defined major defense contractors as any company that received at least $500 million in contract awards from DOD in fiscal year 2005. To identify those contractors, we analyzed data on the values of contracts awarded to all companies from DOD’s Statistical Information Analysis Division. As a result, we identified the 52 contractors meeting the major defense contractor criteria to include in our review. As shown in table 10, which ranks the 52 major defense contractors by the value of their fiscal year 2005 DOD contract awards, these companies accounted for more than half of DOD’s total contract awards in 2005—$142.8 billion of the total $269.2 billion. We conducted this performance audit from November 2006 through May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The scope and methods we used to perform this audit are described in greater detail in the remainder of this appendix. To develop information on how many former DOD military and civilian personnel worked for major defense contractors, section 851 required us to report on employment during the most recent year for which data were available. 
Through initial discussions with five of the major defense contractors and the IRS, we determined that data on former DOD officials’ employment were reasonably available for 2006. To determine how many military and civilian personnel left DOD service, as agreed with committee staff, we limited our analysis to data from the Defense Manpower Data Center’s databases for all military and civilian employees who left DOD service for any reason other than being deceased in the 6-year period between January 1, 2001, and December 31, 2006 (N=1,857,004). We determined that data from the data center were sufficiently reliable to support our analysis for this objective. DOD’s data included personally identifiable characteristics for each former employee, such as name, social security number (SSN), end date of employment, branch of service, military rank, civilian grade, and whether the employee’s job specialty was coded as any of the several defense acquisition workforce positions. To analyze defense contractors’ post-government employment of a subgroup of former DOD senior and acquisition officials, we used DOD’s personnel data to include in the subgroup the following range of former DOD officials: senior officials, such as military officers ranked O-7 and above (e.g., generals, admirals) and members of the Senior Executive Service (SES), regardless of whether they also were coded as serving in a defense acquisition workforce position; and acquisition officials, that is, military (O-3 to O-6) and civilian (grades GS-12 through GS-15) officials whose status DOD coded as members of its acquisition workforce, including program managers, deputy program managers, and contracting officers (N=35,192 individuals). 
To determine how many of the 1,857,004 former military and civilian personnel (including the 35,192 former DOD senior and acquisition officials) worked for the 52 major defense contractors in 2006, we matched DOD’s personnel data with (1) income tax data from the IRS and (2) personnel data from the contractors on former DOD senior or acquisition officials they directly compensated in 2006 as employees or independent contractors. The data obtained from the IRS included Form W-2 and Form 1099-Miscellaneous information. We used data from the returns identifying the contractor who submitted the income tax data and the SSN and name of each individual taxpayer for whom the 52 major defense contractors reported taxable income in the 4-year period between 2003 and 2006. Because contractor-supplied data identified DOD officials hired between 2004 and 2006, we compared SSNs from the 2003 income tax data to the 2006 income tax data and eliminated the SSNs of individuals who appeared in both years, because this showed the contractors had hired those individuals prior to 2004. We also obtained data from 51 of the 52 contractors on individuals who we or they matched to our criteria for former DOD senior and acquisition officials compensated in 2006 and hired between 2004 and 2006. Contractors were permitted to provide the SSNs for either (1) all individuals compensated in 2006 and hired in the 3-year period between 2004 and 2006 or (2) only the individuals they identified as matching our criteria for being a former DOD senior or acquisition official and hired between 2004 and 2006. In either case, we analyzed the contractors’ SSN data to match against the SSNs in DOD’s personnel data. From the matches we determined that 1,263 individuals met our criteria as former DOD senior and acquisition officials. 
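The SSN-based matching logic described above can be sketched as simple set operations. This is an illustrative simplification, not GAO’s actual matching system; the function name, data structures, and toy SSN values are all assumptions:

```python
# Illustrative sketch of the record-matching logic: former DOD personnel
# who appear in a contractor's 2006 income tax data, excluding anyone the
# contractor already paid in 2003 (i.e., hired before the 2004-2006 window).
# All names and values here are hypothetical.

def match_former_officials(dod_ssns, paid_2006_ssns, paid_2003_ssns):
    """Return SSNs of former DOD personnel a contractor compensated in 2006
    and hired no earlier than 2004."""
    # Payees in 2006 who were NOT already payees in 2003
    hired_2004_to_2006 = paid_2006_ssns - paid_2003_ssns
    # Of those, keep only SSNs that appear in DOD's separation records
    return dod_ssns & hired_2004_to_2006

# Toy example with placeholder SSNs
dod = {"111", "222", "333", "444"}        # left DOD service, 2001-2006
paid_2006 = {"222", "333", "555"}         # contractor payees in 2006
paid_2003 = {"333"}                       # already employed in 2003, so excluded
print(match_former_officials(dod, paid_2006, paid_2003))  # {'222'}
```

In practice the matching would run over millions of records and include name verification, but the core logic is this intersection-and-difference operation.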
For each of the 51 contractors who provided us this SSN data, we assessed the accuracy and completeness of their information by comparing how many former DOD senior and acquisition officials their information showed were employed in 2006 (N=1,263) with our analysis of IRS data for the same purpose (N=2,435). We based our analysis of demographic data for this objective on the IRS and DOD data. To develop an estimate of how many of the former DOD senior or acquisition officials subject to post-government employment restrictions the major defense contractors assigned to work on defense contracts related to their former DOD agencies or their direct responsibilities, as shown in table 11, we used the contractor-identified population of 1,263 individuals. To ensure that we had adequate representation of these officials from contractors with fewer former DOD officials, we stratified the population into two strata based on the number of former DOD officials each contractor reported as employees: contractors reporting 50 or more former DOD officials were assigned to one stratum, and contractors reporting fewer than 50 were assigned to the other. From this population we selected a statistically based random sample of 125 individuals who worked for 32 of the contractors. We asked the contractors to respond to a questionnaire on related DOD and contractor job histories for the sampled individuals. We analyzed responses from 30 contractors on job histories and contractor work assignments for their respective individuals in our sample. Based on the sample size and the response rate, the resulting estimates achieve a precision of ±8 percent at a 95-percent confidence level. To obtain the job histories, we used a Web-based questionnaire to collect data on the work histories of the individuals in our sample. (App. IV reproduces the Web-based questionnaire used for this survey.) 
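As a rough check on the stated precision, the margin of error for a proportion estimated from a random sample can be approximated with a standard finite population correction. This sketch ignores the stratification and nonresponse adjustments GAO’s statisticians would have applied, so it only approximates the reported ±8 percent figure:

```python
import math

def moe_proportion(n, N, p=0.5, z=1.96):
    """Approximate margin of error for a proportion estimated from a simple
    random sample of size n drawn from a finite population of size N,
    at the confidence level implied by z (1.96 ~ 95 percent)."""
    se = math.sqrt(p * (1 - p) / n)        # standard error; p = 0.5 is the worst case
    fpc = math.sqrt((N - n) / (N - 1))     # finite population correction
    return z * se * fpc

# With the report's figures: 125 individuals sampled from a population of 1,263
print(round(moe_proportion(125, 1263), 3))  # 0.083, i.e., roughly +/- 8 percent
```

The simplified calculation lands close to the ±8 percent the report cites, which is what one would expect for a sample of this size from a population of 1,263.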
The questionnaire was designed to obtain individual information for each of the former DOD senior and acquisition officials in our sample, such as his or her previous DOD assignments and contractor assignments over a 3-year period, as well as to request a copy of any DOD ethics advice letters. Our questionnaire was intended to develop information on the defense contracts or programs to which former DOD officials were assigned in order to consider whether the former officials were assigned to work on contracts they or their agencies had previously been responsible for. Recognizing that the contractors responding to our survey were self-reporting on a sensitive issue, the information sought from contractors was not designed or expected to identify specific violations of post-government employment restrictions. Instead, the survey asked contractors for information on the circumstances surrounding the post-DOD work in relationship to prior DOD positions and responsibilities. To protect the confidentiality of the responses concerning these individuals, we took steps to remove personally identifiable information from our analysis and evidentiary files. We projected the results of our sample to estimate the extent to which former DOD officials in our study group population of 1,263 individuals engaged in post-government employment tied to their former DOD agencies or to their direct responsibilities. We used these estimates to assess the magnitude of such post-DOD work tied to former DOD agencies, offices, or commands or to direct responsibilities. To identify the practices major defense contractors report using to ensure awareness of and compliance with post-government employment restrictions when employing former DOD officials, we interviewed ethics and personnel officials at five of the contractors to gain an initial understanding of the variety and scope of information reasonably available concerning the range of practices used for these purposes. 
We also conducted a survey to collect additional information from all 52 contractors on personnel assignment record-keeping and practices for identifying, screening, tracking, and training former DOD officials for purposes of compliance with post-government employment restrictions. To conduct this survey, we pre-tested it with three contractors before e-mailing a questionnaire to all 52 contractors to collect information on their reported practices. (Appendix V reproduces the questions used for this survey as well as the aggregated responses.) The survey was designed to obtain information on contractors’ reported practices to ensure awareness and compliance in various key ways, such as (1) how contractors identified new hires with potential post-government employment restrictions, (2) how they tracked post-DOD assignments of former DOD officials during their cooling-off periods, (3) whether they collected and maintained copies of DOD ethics advisory letters for former DOD officials, and (4) whether they provided training in post-government employment restrictions to various employee categories in their workforce. We analyzed responses from the 47 contractors who responded to the survey, a response rate of 90 percent. Our survey results cannot be generalized for the purpose of describing nonrespondent contractors’ practices. To identify the information challenges contractors and DOD face, we reviewed post-government employment laws and implementing regulations, prior GAO reports, and other studies, and held discussions with and obtained information from officials at the Office of Government Ethics concerning requirements and performance problems DOD and defense contractors have had regarding the adequacy of monitoring former DOD officials’ compliance with restrictions. 
To identify the information challenges that defense contractors face in monitoring employees’ compliance with post-government employment restrictions, we analyzed the extent to which the 52 major defense contractors were able to submit sufficient information to us in response to our data requests. Specifically, we analyzed the extent to which the contractors were able to submit sufficient data on how many former DOD officials worked for them in 2006 and to provide us with copies of DOD’s written ethics opinions and related job histories for the pre-selected former DOD officials sampled for our survey on post-government employment. We also met with and reviewed information from ethics officials in the Office of the Secretary of Defense’s Standards of Conduct Office and Defense Procurement and Acquisition Policy (DPAP) officials from the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics). We held these discussions to obtain information and views on DOD’s practice of providing written ethics advice concerning prospective employment and restrictions to former DOD senior and acquisition officials who request it. We discussed the sufficiency of this information for DOD transparency regarding certain former DOD officials’ compliance with post-government employment restrictions after these officials begin their new jobs. We also discussed their views on the sufficiency of the information available to DOD’s contracting officials from defense contractors regarding the names of former DOD senior and acquisition officials who are working on a particular pending defense procurement or on defense contracts and whether or not they are in compliance with their post-government employment restrictions. We used this information to assess whether DOD has sufficient insight into post-government employment to reduce the risk of conflicts of interest or apparent conflicts of interest that could undermine public trust in the integrity of defense contracting. 
Table 12 presents our analysis of how many former DOD senior and acquisition officials were employed by each of the 52 defense contractors in 2006, ranked in descending order according to how many they employed. To prevent reporting of information that could be used to identify specific former officials with post-DOD employment at contractors, the table presents a summary analysis that discloses which of the 28 major defense contractors employed 11 or more such individuals in 2006. For the 20 major defense contractors who employed fewer than 11 such individuals, the table presents a limited summary disclosing only that they employed “10 or fewer” such individuals. As also shown in table 12, four of the contractors did not employ any former DOD senior and acquisition officials in 2006. Table 13 shows in greater detail our analysis of the major defense contractors who employed more than 50 former DOD senior and acquisition officials in 2006 and a breakout of their former status as DOD military and civilian employees.

The purpose of the questionnaire was to understand how defense contractors identify former DOD officials, maintain information about the job assignments of former DOD employees, and provide training on post-government employment restrictions.

Q1. How many individuals did your company compensate directly either as employees, independent contractors (individuals for whom a Form 1099 was generated), or members of the board of directors during any part of calendar year 2006?

Q2. How many of the individuals listed in question 1 were hired directly, engaged as independent contractors, or added to the board of directors on or after January 1, 2001?

Q3. For each of the following compensated positions, does your company ask job candidates whether or not they are former DOD military or civilian employees before offering employment?
a. Permanently hired employees: Asked, 38; Not asked, 9; Total, 47
b. Directly hired contractors (Form 1099 contractors): Asked; Not asked; Subtotal; Not applicable
c. Directly hired temporary employees: Asked; Not asked; Subtotal; Not applicable
d. Members of the Board of Directors: Asked; Not asked; Subtotal; Not applicable; Total, 47

Q3e. If your company currently asks candidates whether they are former DOD employees, what means does it use to collect this information? Application only, 8; Form only, 10; Other only, 6; App and form, 8; App and other, 3; Form and other, 1; All three, 3; Subtotal, 39; Not applicable

Q3f. If your company does collect information on an individual’s prior DOD employment status, in what form is this information maintained? Electronic; Paper only; Other only; Electronic and paper; Electronic and other; Subtotal; Not applicable; Total, 47

Q4. How many of the individuals who were compensated by your company during calendar year 2006 and who joined your company in some capacity on or after January 1, 2001, were previously employed by DOD? Do not know; Number given; Subtotal; Not applicable; Total

Q5. Does your company ask individuals it is compensating or considering for each of the following positions if they have any restrictions on their employment as a result of being former DOD military or civilian employees?
a. Permanently hired employees: Asked; Not asked; Not sure; Total
b. Directly hired contractors (Form 1099 contractors): Asked; Not asked; Not sure; Subtotal; Not applicable; Total, 47
c. Directly hired temporary employees: Asked; Not asked; Not sure; Subtotal; Not applicable
d. Members of the Board of Directors: Asked; Not asked; Not sure; Subtotal; Not applicable

Q6. Does your company request compensated individuals in each of the following positions who have current employment restrictions as a result of previous DOD employment to provide a copy of the written advice from DOD Ethics Counselors regarding post-government employment restrictions, known as a “Safe Haven” letter?
a. Permanently hired employees: Requested, 34; Not requested, 6; Not sure, 7; Total, 47
b. Directly hired contractors (Form 1099 contractors): Requested; Not requested; Not sure; Subtotal; Not applicable
c. Directly hired temporary employees: Requested; Not requested; Not sure; Subtotal; Not applicable
d. Members of the Board of Directors: Requested; Not requested; Not sure; Subtotal; Not applicable; Total, 47

Q6e. How long, if at all, does your company keep “Safe Haven” letters on file for individuals it compensates? Not kept, 3; Employment, 23; Other, 15; Employment + Other, 2; Subtotal, 43; Not applicable

Q7. What steps, if any, does your company take to ensure that former DOD employees comply with their post-government employment restrictions? N=45 (Open-ended responses)

Personnel Record Systems for Compensated Individuals

Q8. Did your company compensate directly any INDEPENDENT CONTRACTORS (individuals for whom a Form 1099 was generated) during 2006?

Q8a. How, if at all, does your company maintain records of which government project related assignments independent contractors worked on while paid by your company? N=42 (Open-ended responses)

Q9. In what form does your company maintain records of which government project related job assignments EMPLOYEES have worked on? Not kept; Electronic only; Paper only; Other only; Electronic and paper; Electronic and other; Three forms used; Subtotal; Not applicable

Q10. How is information on which government project related job assignments employees have worked on entered into your records? N=36 (Open-ended responses)

Q11. What procedures, if any, are in place to ensure that the record of government project related job assignments for each employee accurately records ALL of the assignments the employee has worked on? N=35 (Open-ended responses)

Q11a. Are any of these procedures documented? Total, 47

Q12. Are any audit checks performed to assure that ALL of an employee’s government project related job assignments are included in their record? Yes; No; Subtotal; Not applicable
a. What checks are performed to assure all assignments are included? N=28 (Open-ended responses)
b. How often are these checks performed? N=26 (Open-ended responses)
c. Who performs these checks? N=26 (Open-ended responses)
d. What are the procedures to correct any errors found? N=26 (Open-ended responses)

Q13. How often are the records of government project related job assignments updated? N=36 (Open-ended responses)

Q14. How would you characterize the completeness of your personal data records regarding the government project related job assignments employees have worked on at your company? Very complete; Somewhat complete; Not very complete; Subtotal; Not applicable; Total, 47

Q15. What limitations, if any, are there of the government project related job assignments data your company maintains? N=36 (Open-ended responses)

Q16. What reviews, if any, have there been of the integrity of your company’s government project related job assignments record keeping system? N=36 (Open-ended responses)

Q17. Does your company require training that informs and reinforces its policies regarding post-employment restrictions for former federal government employees? Yes; No; Subtotal; Not applicable (Continue to question 18.)

Which groups are required to take this training? Yes; No; Subtotal; Not applicable; Total, 47

Q17b1. About how often are they required to take this training? 1 per year; <1 per 2 yrs; Other; Subtotal; Not applicable (Continue to question 17c.)

If all employees are not required to take this training, which of the following groups of employees are? Yes; No; Subtotal; Not applicable

Q17b5. About how often are they required to take this training? 1 per year, 5; 1 per 2 yrs, 2; <1 per 2 yrs, 2; Other, 3; Subtotal, 12; Not applicable; Total, 47

Q17a6. Other [please specify]: Yes; No; Subtotal; Not applicable

Q17b6. About how often are they required to take this training? 1 per year, 3; 1 per 2 yrs, 4; <1 per 2 yrs, 1; Other, 6; Subtotal, 14; Not applicable

Q17c. Does your company maintain records of whether people who are required to take the training have completed it? Yes; No; Subtotal; Not applicable; Total, 47

Q18. Has any government agency or independent entity assessed the adequacy of your company’s procedures for hiring current and former government employees?
Department of Defense (DOD) officials who serve in senior and acquisition positions and then leave for jobs with defense contractors are subject to the restrictions of post-government employment laws, which are intended to protect against conflicts of interest. Congress required GAO to report on the employment of such officials by contractors who received at least $500 million in DOD's fiscal year 2005 contract awards. In response, this report (1) provides information on how many former DOD employees worked for contractors in 2006 and estimates how many worked on contracts that were related to their former agencies or to their direct responsibilities and (2) identifies the practices used to monitor restrictions and the information challenges in monitoring post-DOD employment. To do this work, GAO matched data from DOD for all employees who left DOD over a 6-year period with data from the Internal Revenue Service (IRS) and from 52 contractors; conducted surveys; and interviewed DOD and contractor officials. In 2006, 52 contractors employed 2,435 former DOD senior and acquisition officials who had previously served as generals, admirals, senior executives, program managers, contracting officers, or in other acquisition positions that made them subject to restrictions on their post-DOD employment. Most of the 2,435 former DOD officials were employed by seven contractors. On the basis of a stratified random sample of contractor-supplied information, GAO estimates that at least 422 former DOD officials could have worked on defense contracts related to their former agencies and that at least nine could have worked on the same contracts for which they had oversight responsibilities or decision-making authorities while at DOD. The information GAO obtained from contractors was not designed to identify violations of the restrictions. 
While contractors could have employed many former DOD officials on assignments related to their prior DOD positions, there could be appropriate justification for each of these situations. Most of the contractors who responded to our survey reported using a range of practices to ensure awareness of and compliance with post-government employment restrictions, although contractors found it challenging to provide accurate information identifying their former DOD officials in response to GAO's request. According to the surveyed contractors, they can identify former DOD officials with post-government employment restrictions and track their assignments during their cooling-off periods. However, GAO's analysis found significant underreporting of the contractors' employment of former DOD officials: contractor-supplied data showed they employed 1,263 former DOD officials in 2006, while IRS data showed the contractors employed 2,435. New post-government employment requirements enacted in January 2008 are likely to make written ethics opinions for former DOD officials more readily available to contractors. DOD also must now keep ethics opinions in a central database. This information was not designed, however, to provide a mechanism for DOD to effectively monitor former DOD officials' post-government employment compliance after they begin working for contractors on specific contracts.
The NGJ is DOD’s program to replace the ALQ-99 tactical jamming system. The ALQ-99 is a five-pod jamming system that is capable of automatically processing and jamming radio frequency signals. It counters a variety of threats in low-, mid-, and high-band frequency ranges. Figure 1 shows the radars that operate in different frequency bands and ranges. The ALQ-99 was originally flown on EA-6B aircraft, which are expected to be fully retired in 2019, and is transitioning to the EA-18G, an electronic attack variant of the Navy’s F/A-18 fighter jet. Figure 2 shows the ALQ-99 on the EA-18G. EA-6Bs and EA-18Gs can be based on aircraft carriers or in expeditionary squadrons that are deployed to land-based locations as needed. The ALQ-99/EA-6B combination was originally developed for use in major combat operations, and in 1995, the EA-6B was selected to become the sole tactical radar support jammer for all services after the Air Force decided to retire its fleet of EF-111 aircraft. The role of the EA-6B has continued to expand over time. According to DOD officials, when Operation Iraqi Freedom began, EA-6Bs were used in irregular warfare environments along with another aircraft, the EC-130H Compass Call, because they provided needed jamming capabilities and there were no other airborne electronic attack assets available for this role. These and other demands have strained DOD’s airborne electronic attack capacity and increased the stress on systems such as the ALQ-99. Like the ALQ-99, the NGJ will be composed of jamming pods that will fly on the Navy’s EA-18G. Its main purpose will be to counter integrated air defense systems in major combat operations. The EA-18G with NGJ is to be based primarily on aircraft carriers at sea, where it is to be employed in U.S. Navy carrier strike groups to counter both sea- and land-based weapon systems. DOD also plans for it to support joint expeditionary warfare missions. 
The EA-18G with NGJ is currently planned to serve primarily in a modified escort role, in which it is expected to jam enemy radars while the aircraft is outside the range of known surface-to-air missiles. It is also expected to be capable of conducting stand-off jamming missions, in which the aircraft is located outside of defended airspace. In both cases, the idea is to protect or “hide” other systems from enemy radars. The EA-18G with the NGJ is also intended to be used for other purposes, such as communications jamming. Figure 3 shows the NGJ with other airborne electronic attack systems countering enemy air defense systems. In July 2013, DOD conducted a milestone A review for the NGJ program, which is a planned major defense acquisition program, and authorized it to enter the technology development phase. Subsequent to the milestone A review, the Navy awarded a $279.4 million contract to Raytheon for NGJ technology development. Figure 4 shows the time line for the milestone A review and other key NGJ events. The NGJ program plans to use an incremental approach to development in which the most critical capabilities are to be delivered first. In total, the Navy’s acquisition strategy calls for three increments: mid-, low-, and high-band. The specific frequency ranges covered by these bands are classified. Both federal statute and DOD policies include provisions designed to help prevent unnecessary duplication of investments. Section 2366a of title 10 of the U.S. Code provides that a major defense acquisition program may not receive milestone A approval until the Milestone Decision Authority certifies, after consultation with the Joint Requirements Oversight Council, that if the program duplicates a capability already provided by an existing system, the duplication provided by such program is necessary and appropriate. 
In addition, DOD’s JCIDS Manual directs that initial capabilities documents, which describe capability gaps that require a materiel solution, identify proposed capability requirements for which overlaps or redundancies exist. Initial capabilities documents should also assess whether the overlap is advisable for operational redundancy or whether it should be evaluated as a potential trade-off or alternative to satisfy identified capability gaps. The manual also states that, when validating key requirements documents, the chair of the group responsible for that capability area is also certifying that the proposed requirements and capabilities are not unnecessarily redundant to existing capabilities in the joint force. This applies to initial capabilities documents, capability development documents, and capability production documents, which helps ensure that potential redundancies are discussed at multiple points in the acquisition process. However, assessing duplication among airborne electronic attack investments is challenging for a variety of reasons: there is a lack of documentation comparing all existing and planned airborne electronic attack capabilities; electronic warfare investments are distributed among the services; systems in the electronic warfare portfolio are classified at multiple levels; future needs, threats, and plans to address them change quickly; planned programs of record or upgrades are not always known until funding is requested; and some overlap among systems is intentional. DOD has assessed whether the planned NGJ program is duplicative using a variety of means, but none of them address all of the system’s planned roles or take into account the military services’ evolving airborne electronic attack investment plans. 
DOD’s analyses of its airborne electronic attack capability gaps over the last decade, as well as the NGJ analysis of alternatives, support its conclusion that the NGJ is not duplicative of existing capabilities in its primary role, the joint suppression of enemy air defenses. However, these analyses do not address potential duplication or overlap between the NGJ and other systems being developed for other roles, such as communications jamming in irregular warfare environments. The military services also plan to invest in additional airborne electronic attack systems, so new duplication issues could emerge. Several ongoing DOD efforts could provide a mechanism for updating its analysis of potential overlap and duplication related to the NGJ. However, we found weaknesses in the execution of some of these efforts. According to DOD and Joint Staff officials, the NGJ addresses a clear capability gap and is not duplicative of other airborne electronic attack systems. It is a direct replacement for the Navy’s ALQ-99 tactical jamming system and addresses validated capability gaps. DOD analyses dating back a decade have identified capability gaps and provided a basis for service investments in airborne electronic attack capabilities, such as the NGJ. None of these documents is specifically an assessment of duplication; each serves other purposes. DOD outlined its findings in reports that included analyses of alternatives and initial capabilities documents. For example, the two initial capabilities documents, the 2004 Airborne Electronic Attack and 2009 Electronic Warfare Initial Capabilities Documents, identified the capability gaps that the NGJ is intended to address. Table 1 lists key documents and describes the extent to which they assessed duplication and overlap for the NGJ. 
According to DOD and Joint Staff officials, the analyses contained in these documents provided support for the certification the department is required to make that the NGJ is not unnecessarily duplicative before receiving milestone A approval to begin technology development. In addition, Joint Staff officials stated that they reviewed the NGJ and its potential capabilities for duplication before endorsing the NGJ Analysis of Alternatives. We were not able to review the Joint Staff’s analysis due to its classification level. DOD analyses of NGJ capabilities and potential duplication do not reflect all of its planned roles, particularly in irregular warfare environments, or evolving service acquisition plans. Section 2366a of title 10 of the U.S. Code provides that a major defense acquisition program may not receive milestone A approval until the Milestone Decision Authority certifies, after consultation with the Joint Requirements Oversight Council, that if the program duplicates a capability already provided by an existing system, the duplication provided by such program is necessary and appropriate. DOD’s analyses support its conclusion that the NGJ is not duplicative of existing capabilities in its primary role—the joint suppression of enemy air defenses in a modified escort setting, which includes defended airspace outside the range of known surface-to-air missiles. In fact, the NGJ Analysis of Alternatives found that the planned system would complement other DOD investments in electronic warfare and stealth. However, these analyses do not address potential duplication or overlap between the NGJ and systems being developed for other roles, such as communications jamming in irregular warfare environments—an area where we have found potential duplication in our prior work. Most of these systems have been developed or incorporated into military service investment plans since these analyses were conducted. 
Since the preparation of key NGJ-related documents, DOD has focused on increasing its airborne electronic attack capabilities and capacity, resulting in several systems that were not considered in those analyses. When these analyses were being completed, DOD had few airborne electronic attack systems and programs of record, none of which were specifically designed for the irregular warfare environment. Table 2 shows existing and planned airborne electronic attack systems and whether they were discussed in key NGJ-related documents. Based on our analysis of DOD airborne electronic attack systems and missions, none of the systems we reviewed that have emerged since DOD’s NGJ analysis was completed duplicate planned capabilities; however, there is some overlap in the roles that the systems are intended to perform. For example, according to the F-35 program office, some aircraft with electronic attack enabled AESA radar may be able to perform some jamming functions in a modified escort role. However, unlike the NGJ, they are not designed to be dedicated jamming systems. In addition, NGJ is to be capable of communications jamming in an irregular warfare environment, as are systems such as CEASAR and Intrepid Tiger II, which were fielded under rapid acquisition authorities and in very limited quantities. Army and Marine Corps officials explained that their systems are a more suitable and economical alternative to the NGJ for these missions. For example, Army officials stated that the systems the Army is investing in, such as CEASAR and Multi-Function Electronic Warfare, would provide the right amount of power for their needs, be more readily available to units, and cost less. According to DOD, these systems also provide additional capacity in an area where there has been significant demand. However, as DOD and the military services continue to invest in additional airborne electronic attack capabilities, the potential for duplication and overlap to occur increases. 
DOD has several ongoing efforts that could provide a mechanism for updating its analysis of potential overlap and duplication related to the NGJ and other airborne electronic attack investments, including its annual Electronic Warfare Strategy Report to Congress, a U.S. Strategic Command review of DOD’s portfolio of electronic warfare systems, and the NGJ capability development document. However, we found weaknesses in two of the three efforts. DOD could address new duplication issues as they emerge and, if necessary, explain the need for overlapping capabilities in its electronic warfare strategy report to Congress. Section 1053 of the National Defense Authorization Act for Fiscal Year 2010 requires that for each of fiscal years 2011 through 2015, the Secretary of Defense, in coordination with the Joint Chiefs of Staff and secretaries of the military departments, submit to the congressional defense committees an annual report on DOD’s electronic warfare strategy. Each report must provide information on both unclassified and classified programs and projects, including whether or not the program or project is redundant or overlaps with the efforts of another military department. DOD has produced two reports in response to this requirement. In these reports, DOD assessed duplication of airborne electronic attack systems, including NGJ. However, the analysis was limited and did not examine potential overlap between capabilities or explain why that overlap was warranted. DOD officials explained that the report relied primarily on the military services to self-identify overlap and duplication. Redundancy in some of these areas may, in fact, be desirable, but pursuing multiple acquisition efforts to develop similar capabilities can also result in the same capability gap being filled twice or more, which may contribute to other warfighting needs going unfilled. 
This report is supposed to be submitted at the same time the President submits the budget to Congress, but DOD has not yet issued its report for fiscal year 2013 and could not provide a definitive date for when it plans to do so. The U.S. Strategic Command also has an ongoing review that could help assess duplication and overlap issues related to the NGJ and other systems. Joint Staff officials stated that, during the course of our review, they began, in collaboration with U.S. Strategic Command, a review of DOD’s portfolio of electronic warfare systems at all levels of classification. They explained that the review will examine capability requirements in select approved warfighting scenarios as well as potential redundancy within the portfolio. According to the Joint Staff, the review should be completed sometime in fiscal year 2013. Capability development documents, which define the performance requirements of acquisition programs, are another vehicle to discuss potential redundancies across proposed and existing programs. The Navy must produce and the Joint Requirements Oversight Council must validate a capability development document for the NGJ program before it can receive approval to enter system development—currently planned for fiscal year 2015. The JCIDS manual provides that, when validating capability development documents, the chair of the group responsible for that capability area is also certifying that the proposed requirements and capabilities are not unnecessarily redundant to existing capabilities in the joint force. The draft NGJ capability development document addresses potential redundancies by stating that the NGJ is fully synchronized with existing systems and will be synchronized with future systems, and that individual airborne electronic attack systems all concentrate on unique portions of the electromagnetic spectrum—frequency ranges—for different mission sets. 
However, the Navy did not identify the systems that it considered in its analysis, so it will be difficult for others to validate this conclusion or determine whether it applies to all of the NGJ’s planned roles. The NGJ is not a joint acquisition program, but it is planned to provide airborne electronic attack capabilities that will support all military services in both major combat operations and irregular warfare environments. The NGJ is not intended to meet all of the military services’ airborne electronic attack needs, and the services are planning to make additional investments in systems that are tailored to meet their specific warfighting roles. The military services may be able to leverage the NGJ program in support of their own acquisition priorities and programs because its current acquisition strategy is based on a modular open systems approach, which allows system components to be added, removed, modified, replaced, or sustained by different military customers or manufacturers without significantly impacting the remainder of the system. This approach could make it easier to integrate the NGJ or its technologies into other systems in the future. Despite its role in joint military operations, the NGJ program is led and funded by the Navy and is not a joint acquisition program. The definition of a joint acquisition program is related to whether it is funded by more than one DOD component, not whether other organizations have provided input on it. In the case of the NGJ, the Joint Requirements Oversight Council, which is chaired by the Vice Chairman of the Joint Chiefs of Staff and includes one senior leader from each of the military services, such as the Vice Chief of Staff of the Army or the Vice Chief of Naval Operations, has validated that the need exists for the program. The Marine Corps, Army, Air Force, and Joint Staff have provided input into the program as part of DOD’s requirements and acquisition processes. 
This included collaboration on requirements documents and the NGJ Analysis of Alternatives. The Air Force’s 2004 Airborne Electronic Attack Initial Capabilities Document and Strategic Command’s 2009 Electronic Warfare Initial Capabilities Document, which informed NGJ requirements, included input from senior-level oversight boards representing all the military services. In addition, advisors from various parts of the Office of the Under Secretary of Defense, the Joint Staff, and all services provided input into the NGJ Analysis of Alternatives through forums such as working groups, integrated product teams, and a high-level executive steering committee. DOD plans to use Navy EA-18Gs with the NGJ to support multiple military services in joint operational environments. In the joint operational environment, each service relies on the capabilities of the others to maximize its own effectiveness while minimizing its vulnerabilities. For example, in conducting military operations, U.S. aircraft are often at risk from enemy air defenses, such as surface-to-air missiles. EA-18Gs can use the NGJ jamming capabilities in these settings to disrupt enemy radar and communications and suppress enemy air defenses. Because aircraft, such as the EA-18G, are to protect aircraft of all services in hostile airspace, the joint suppression mission necessarily crosses individual service lines. The system the NGJ is replacing—the ALQ-99—has also been used extensively in irregular warfare environments, including in Iraq and Afghanistan in response to electronic attack requests from all the military services. DOD has placed an emphasis on increasing airborne electronic attack capacity and capabilities. 
While the Navy’s NGJ is expected to provide airborne electronic attack capabilities to support all military services in both major combat operations and irregular warfare environments, the other services are also planning to make additional investments in airborne electronic attack systems that are tailored to their specific warfighting roles. The services’ airborne electronic attack plans vary in part because of these roles. For example, DOD officials explained that the Navy is responsible for ensuring freedom of navigation in the world’s oceans and has a key role in force projection; the Marine Corps is a rapid expeditionary force; the Air Force provides long range strike and close air support and is responsible for establishing air superiority; and the Army is the primary force for land operations in war and usually enters a battle area after the Air Force has established air superiority. Military service officials characterized their airborne electronic attack plans and the role of the NGJ in them as follows: Air Force: The Air Force is focused on developing long range strike capabilities, enabling the electronic attack capabilities of its F-22A and F-35 aircraft for penetrating escort roles, and investing in improvements to self protection systems for its fighter aircraft, including the F-15 and the F-16. Air Force requirements officials stated that the planned capabilities of NGJ will complement the other systems it is developing. Army: Officials from the Army’s Electronic Warfare Division stated that although the NGJ-equipped EA-18Gs would have a role in helping to establish air superiority before the Army enters an area, the Army plans to rely on its own airborne electronic attack systems to perform the necessary jamming in support of its ground forces. According to Army officials, the service plans to invest in less expensive, less powerful systems that will be readily available at the brigade combat team level. 
The Army developed CEASAR, a jamming pod on C-12 aircraft, and is now developing a more capable successor to CEASAR under the Multi-Function Electronic Warfare program, which is early in the acquisition process. Marine Corps: Officials from the Marine Corps’ Electronic Warfare Branch stated that each Marine Air Ground Task Force commander must possess organic airborne electronic attack capabilities and the Marine Corps does not plan to rely solely on Navy EA-18Gs with NGJ to support its air and ground forces. Historically, the Marines have relied on their own expeditionary EA-6B squadrons to meet joint electronic warfare requirements, but the EA-6Bs will be phased out by 2019 and the Marine Corps does not plan to acquire the EA-18G, which will be equipped with the NGJ. According to the Marine Corps, it will coordinate the use of NGJ support from the Navy when appropriate but it expects to rely on its own systems for its core missions. The Marine Corps plans to upgrade its Intrepid Tiger II jamming pods to support both communications and radar jamming, and develop a system to integrate air and ground electronic warfare units with other payloads designed to be used on any platform. The current acquisition strategy for the NGJ program calls for it to be integrated on one aircraft—the EA-18G. However, the program plans to pursue a modular open systems approach to development that could make it easier to integrate the NGJ or its technologies into other systems in the future. An open systems approach allows system components to be added, removed, modified, replaced, or sustained by the military customer or different manufacturers, in addition to the prime manufacturer that developed the system. It also allows independent suppliers to build components that can plug in to the existing system through the open connections. Fundamental elements of an open systems approach include the following: Designing a system with modular components that isolate functionality. 
This makes the system easier to develop, maintain, and modify because components can be changed without significantly impacting the remainder of the system. Developing and using open, publicly available standards for the key interfaces, or connections, between the components. According to NGJ program officials, a modular open systems approach would allow the NGJ to be designed so that it could adapt to threat and technology changes. It also enables future growth of the system. Furthermore, Navy officials stated that the approach could make it possible for NGJ components to be used and modified for application on significantly different platforms, including unmanned aerial vehicles. This approach is encouraged by DOD guidance, including its Better Buying Power initiative, as well as Navy guidance. The NGJ Analysis of Alternatives also examined integrating the NGJ onto the F-35, which is being acquired by the Air Force, Marine Corps, and Navy, but the option was found to be too risky and costly for a near-term solution. Navy officials explained that, even with an open systems approach, integrating the NGJ with any platform is difficult. Even the integration associated with moving the ALQ-99 to the EA-18G was challenging. The effort cost about $2 billion and took 5 years. Part of the integration challenge was adapting the operator workload system because the EA-6B is a four-operator aircraft while the EA-18G is a two-operator aircraft. The F-35 is a single-operator aircraft, which officials explained would cause significant integration challenges for the NGJ. Airborne electronic attack is an important enabling capability for U.S. military forces in both major combat operations and irregular warfare environments. In response to rapidly evolving threats and mission needs, DOD is making investments to increase both its airborne electronic attack capacity and capabilities. 
At an estimated cost of over $7 billion, the NGJ represents a significant investment in airborne electronic attack capabilities. Investments of this size must be well-justified and are required by statute and DOD policy to be examined for unnecessary redundancy. DOD’s analysis of its airborne electronic attack capability gaps over the last decade, as well as the NGJ analysis of alternatives, supports its conclusion that the NGJ meets a valid need and is not duplicative of existing capabilities in its primary role. However, in the time since DOD completed some of these analyses, the investment plans of the military services have changed, particularly in the irregular warfare area. The military services are quick to differentiate their airborne electronic attack needs and justify individual service, rather than joint or common, solutions to meet them. While none of the planned new programs duplicate NGJ capabilities, new areas of overlap and potential duplication could emerge as these plans continue to evolve. Redundancy in some of these areas may, in fact, be desirable, but pursuing multiple acquisition efforts to develop similar capabilities can also result in the same capability gap being filled twice or more, lead to inefficient use of resources, and contribute to other warfighting needs going unfilled. DOD has mechanisms, such as the Electronic Warfare Strategy Report to Congress, U.S. Strategic Command Annual Electronic Warfare Assessment, and NGJ capability development document, that it can use to continue to assess overlap and duplication between the NGJ and other airborne electronic attack capabilities at key points in the acquisition process and communicate its evolving airborne electronic attack investment plans to Congress. Identifying existing and planned systems across all of the NGJ’s planned roles in its capability development document could help ensure that DOD’s analysis of potential overlap and duplication is complete. 
Moreover, providing Electronic Warfare Strategy Reports to Congress as required and incorporating information on potentially overlapping systems and why such overlap is warranted would provide Congress with more complete information about the relationship between electronic warfare programs. We recommend that the Secretary of Defense take the following two actions: To help ensure that the NGJ does not unnecessarily duplicate existing or planned capabilities, require the Navy, in coordination with the Joint Staff, to address overlap and duplication between the NGJ and other systems in all of its planned roles in the NGJ capability development document. The NGJ capability development document should identify the existing and planned systems that the Navy assessed for potential redundancies to help determine if its analysis was comprehensive. To provide Congress complete information about the relationship between electronic warfare programs, ensure that the Electronic Warfare Strategy Reports to Congress include information on potentially overlapping capabilities among systems, such as the NGJ and Electronic Attack Enabled AESA Radar, CEASAR, Intrepid Tiger II, and Multi-Function Electronic Warfare, and why that overlap is warranted. We provided a draft of this report to DOD for review and comment. In its written comments, which are reprinted in full in appendix II, DOD partially concurred with our first recommendation and concurred with our second recommendation. DOD also provided technical comments that were incorporated as appropriate. DOD partially concurred with our recommendation to address overlap and duplication between the NGJ and other systems in all of its planned roles in the NGJ capability development document. DOD responded that it concurs with the need to continue to assess unnecessary duplication and redundancy, but it does not concur with including the assessment in the capability development document. 
Rather, DOD stated that it will address unnecessary duplication and redundancy in accordance with its existing processes, such as the Joint Capabilities Integration and Development System (JCIDS), and statutory requirements. DOD explained that changes it made to the JCIDS process in January 2012 address the concerns about potential capability overlaps and redundancies raised in this and other GAO reports. For example, the revised JCIDS manual emphasized the role of functional capabilities boards in assessing potential unnecessary capability redundancy prior to forwarding a program’s requirements documents for approval. In addition, DOD stated that the Joint Staff is further improving these processes through a pending update to JCIDS that will include increased emphasis on functional area portfolio management. DOD also reiterated in its comments and in a classified enclosure that NGJ’s capabilities are not unnecessarily duplicative. We acknowledged the existing JCIDS mechanisms that address potential overlap and duplication in this report and have discussed the value of effective portfolio management in prior reports. However, as we point out in our recommendation, documenting the assessments that support these processes is important because it allows others to determine if DOD’s analysis was comprehensive. We identified the NGJ capability development document as the appropriate vehicle to document DOD’s assessment of potential duplication because DOD already requires that potential overlap and duplication be considered before the document can be validated and the program can move forward in the acquisition process. Finally, while DOD’s current analysis indicates that none of its current or planned programs duplicate NGJ capabilities, new areas of overlap and potential duplication could emerge as military service investment plans continue to evolve. 
DOD concurred with our second recommendation regarding providing complete information about the relationship between electronic warfare programs in its Electronic Warfare Strategy Reports to Congress. DOD did not provide details regarding how it plans to implement this recommendation. We are sending copies of this report to interested congressional committees, the Secretary of Defense, the Secretary of the Army, the Secretary of the Navy, the Secretary of the Air Force, and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To determine the extent to which the Department of Defense (DOD) assessed duplication among the Next Generation Jammer (NGJ), existing capabilities, and other acquisition programs, we reviewed key NGJ and DOD electronic warfare documents, including the 2004 Airborne Electronic Attack Initial Capabilities Document, the 2009 Electronic Warfare Initial Capabilities Document, the NGJ Analysis of Alternatives (AOA), and the DOD Annual Electronic Warfare Strategy Report to Congress, to determine whether potential duplication was considered as DOD developed NGJ requirements and prepared for initiation of the NGJ acquisition program. We interviewed DOD, military service, and program officials and the Joint Staff about how these analyses were conducted. We assessed DOD’s analysis of duplication against DOD’s Joint Capabilities Integration and Development System (JCIDS) Manual. 
In addition, we reviewed information up to the SECRET level provided by the military services regarding the capabilities and missions of existing and planned airborne electronic attack systems. Our analysis was limited to non-kinetic airborne electronic attack systems as opposed to kinetic capabilities which focus on destroying forces through the application of physical effects. To determine the extent to which the NGJ is being managed as a joint solution, we reviewed key requirements and acquisition documents reflecting military service and Joint Staff input into NGJ requirements and the acquisition program. We also interviewed DOD, military service, and program officials to determine the extent to which the military services provided input into NGJ requirements and the acquisition program. In addition, we analyzed documents, such as memorandums of agreement among the military services, and interviewed military service and Joint Staff officials to obtain an understanding of how NGJ is expected to operate in the joint force. We also reviewed the NGJ AOA and interviewed program officials to determine if the system is intended to be used on multiple platforms. We conducted this performance audit from November 2012 to August 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. Our analysis was also limited to information classified no higher than SECRET, but we believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, the following individuals made key contributions to this report: Ronald E. Schwenn, Assistant Director; Teakoe Coleman; Laura Greifner; John Krump; Laura Holliday; Brian Lepore; Anh Nguyen; Madhav Panwar; Mark Pross; and Roxanna Sun.
At an estimated cost of over $7 billion, the Navy's NGJ program represents a significant investment in airborne electronic attack capabilities. Jammers, like the planned NGJ, fly on aircraft, such as Navy EA-18Gs, and transmit electronic signals that can neutralize or temporarily degrade enemy air defenses and communications, thus aiding combat aircraft and ground forces' freedom to maneuver and strike. Senate Report 112-196 mandated that GAO review the NGJ program and potential duplication. This report examines the extent to which (1) DOD assessed whether there is duplication among NGJ, existing capabilities, and other acquisition programs, and (2) NGJ is being managed as a joint solution. GAO reviewed key NGJ requirements and acquisition documents and DOD and military service documents describing airborne electronic attack capabilities. DOD has assessed whether the Next Generation Jammer (NGJ) program is duplicative using a variety of means, but none of them address all of the system's planned roles or take into account the military services' evolving airborne electronic attack investment plans. DOD analyses support its conclusion that the NGJ meets a valid need and is not duplicative of existing capabilities in its primary role--suppressing enemy air defenses from outside the range of known surface-to-air missiles. However, these analyses do not address all planned NGJ roles, such as communications jamming in irregular warfare environments, or take into account the military services' evolving airborne electronic attack investment plans. According to GAO's analysis, none of the systems that have emerged since DOD completed its NGJ analyses duplicate its planned capabilities; however, there is some overlap in the roles they are intended to perform. Redundancy in some of these areas may, in fact, be desirable. 
However, pursuing multiple acquisition efforts to develop similar capabilities can result in the same capability gap being filled twice or more, lead to inefficient use of resources, and contribute to other warfighting needs going unfilled. Therefore, continued examination of potential overlap and duplication among these investments may be warranted. DOD has several ongoing efforts that could provide a mechanism for updating its analysis of potential overlap and duplication to address these shortcomings as the program moves forward. However, GAO found weaknesses in two of these efforts as well. Electronic Warfare Strategy Report to Congress: DOD could address new duplication issues as they emerge and, if necessary, explain the need for overlapping capabilities in this report. However, to date, the analysis of overlap and duplication in this report has been limited and did not examine potential overlap between capabilities or explain why overlap was warranted. NGJ Capability Development Document: Redundancies are required to be considered when a capability development document--which defines the performance requirements for an acquisition program--is validated. The draft NGJ capability development document does not identify the systems the Navy considered when analyzing potential redundancies, so it is difficult to evaluate whether its analysis includes existing and proposed programs across all of the NGJ's planned roles. The NGJ is not being managed as a joint acquisition program, which is a distinction related to funding, but it is expected to provide the Navy with airborne electronic attack capabilities that will support all military services in both major combat operations and irregular warfare environments. The NGJ's capabilities are not intended to meet all of the military services' airborne electronic attack needs and the services are planning to make additional investments in systems that are tailored to meet their specific warfighting roles. 
The military services might be able to leverage the NGJ program in support of their own acquisition priorities because it plans to use a modular open systems approach, which allows for components to be added, removed, or modified without significantly impacting the rest of the system. This approach could make it easier to integrate the NGJ or its technologies into other systems in the future. To help ensure that DOD’s analysis of potential overlap and duplication is complete, GAO recommends that the Secretary of Defense: (1) require the NGJ capability development document to discuss potential redundancies between NGJ and existing and proposed programs across all of its planned roles and (2) ensure that the Electronic Warfare Strategy Report to Congress includes information on potentially overlapping capabilities and why that overlap is warranted. DOD agreed to continue to assess duplication and redundancies but not with using the capability development document to do so. GAO believes the recommendation remains valid as discussed in the report. DOD agreed with the second recommendation.
The practice of advertising prescription drugs to consumers has been controversial. The United States is one of only two nations that allow DTC advertising (the other is New Zealand). In the United States, there have been concerns about the impact of DTC advertising on prescription drug spending and about potential safety issues, particularly with regard to the advertising of new drugs. These concerns have led to calls to restrict DTC advertising. For example, the Institute of Medicine recently recommended that DTC advertising be restricted during the first two years a new drug is marketed because some of the health risks of new drugs are not fully understood. FDA regulates the content of all prescription drug advertising, whether directed to consumers or medical professionals. Advertising that is targeted to consumers includes both DTC and “consumer-directed” materials. DTC advertising includes, for example, broadcast advertisements (such as those on television and radio), print advertisements (such as those in magazines and newspapers), and Internet advertisements (such as consumer advertising on drug companies’ Web sites). In contrast, consumer-directed advertisements are designed to be given by medical professionals to consumers and include, for example, patient brochures provided in doctors’ offices. FDA requires that drug companies submit all final prescription drug advertising materials to the agency when they are first disseminated to the public. Drug companies are generally not required to submit advertising materials to FDA before they are disseminated. However, drug companies sometimes voluntarily submit draft DTC advertising materials to FDA in order to obtain advisory comments from the agency. Advertising materials must contain a “true statement” of information including a brief summary of side effects, contraindications, and the effectiveness of the drug. 
To meet this requirement, advertising materials must not be false or misleading, must present a fair balance of the risks and benefits of the drug, and must present any facts that are material to the use of the drug or claims made in the advertising. With the exception of broadcast advertisements, materials must present all of the risks described in the drug’s approved labeling. Broadcast materials may present only the major side effects and contraindications, provided the materials make “adequate provision” to give consumers access to the information in the drug’s approved or permitted package labeling. Within FDA, DDMAC is responsible for implementing the laws and regulations that apply to prescription drug advertising. The division, which had 41 staff as of July 2006, is responsible for the oversight of both advertising directed to consumers and advertising directed to medical professionals. In March 2002, DDMAC created a DTC Review Group, which is responsible for oversight of advertising materials that are directed to consumers. Four Professional Review groups are responsible for oversight of promotional materials targeted to medical professionals. The DTC Review Group was allocated a group leader, four reviewers, and two social scientists when it was created. This group’s responsibilities include reviewing final DTC materials and reviewing and providing advisory comments on draft DTC materials. The group also monitors television, magazines, and consumer advertising on drug companies’ Web sites to identify advertising materials that were not submitted to FDA at the time they were first disseminated and reviews advertising materials cited in complaints submitted by competitors, consumers, and others. 
The two social scientists support reviewers in both the DTC and professional groups in their assessment of the content of advertising materials and conduct research related to DTC advertising, such as surveys of consumer and physician attitudes toward DTC advertising. Once submitted to FDA, final and draft DTC advertising materials are distributed to a reviewer in the DTC Review Group. For final materials, if the reviewer identifies a concern, the agency determines whether it represents a violation and merits a regulatory letter. For draft materials submitted by drug companies, FDA may provide the drug company with advisory comments to consider before the materials are disseminated to consumers if, for example, the reviewers identify claims in materials that could violate applicable laws and regulations. If FDA identifies violations in disseminated DTC materials, the agency can issue two types of regulatory letters—either a “warning letter” or an “untitled letter.” Warning letters are typically issued for violations that may lead FDA to pursue enforcement action if not corrected; untitled letters are issued for violations that do not meet this threshold. FDA generally posts issued letters on its Web site within several days of issuance. Both types of letters—which ranged from 2 to 9 pages, from 1997 through 2005—cite the type of violation identified in the company’s advertising material, request that the company submit a written response to FDA within 14 days, and request that the company take specific actions. Untitled letters request that companies stop disseminating the cited advertising materials and other advertising materials with the same or similar claims. In addition, warning letters further request that the company issue advertising materials to correct the misleading impressions left by the violative advertising materials. 
While FDA does not have explicit authority to require companies to act upon these letters, if the companies continue to violate applicable laws or regulations, the agency has other administrative and judicial enforcement avenues that could encourage compliance or result in the product being taken off the market. For example, FDA, through the Department of Justice, may seek additional remedies in the courts resulting in the seizure of drugs deemed to be misbranded because their advertising is false or misleading. As reviewers from the DTC Review Group draft the regulatory letters, they sometimes obtain consultations from other FDA experts. For example, they may consult with the social scientists in the DTC Review Group about how consumers might interpret the violative materials, with the regulatory counsel in DDMAC about regulatory issues, or with a medical officer in FDA’s Office of New Drugs who has knowledge of a drug’s clinical testing and approval history. The reviewers may also consult with reviewers in DDMAC’s Professional Review groups. The draft regulatory letters are subsequently reviewed by officials in DDMAC, FDA’s Office of Medical Policy (which oversees DDMAC), and OCC. In January 2002, at the direction of the Deputy Secretary of HHS, FDA implemented a policy change requiring OCC to review and approve all regulatory letters prior to their issuance, including letters drafted by the DTC Review Group, to ensure “legal sufficiency and consistency with agency policy.” In its written comments on a draft of our 2002 report, FDA stated that, prior to the policy change, there had been complaints that FDA would not follow up on many of its regulatory letters, and that the goal of the policy change was to promote voluntary compliance by ensuring that drug companies who receive a regulatory letter understand that the letter has undergone legal review and the agency is prepared to go to court if necessary. 
Drug company spending on DTC advertising has increased twice as fast as spending on promotion to physicians or on research and development. IMS Health estimated that, from 1997 through 2005, spending on DTC advertising in the United States increased from $1.1 billion to $4.2 billion—an average annual increase of almost 20 percent. In contrast, over the same time period, IMS Health estimated that spending on drug promotion to physicians increased by 9 percent annually. Further, PhRMA reported that spending on the research and development of new drugs increased by about 9 percent annually during the same period. While spending on DTC advertising has grown rapidly, companies continue to spend more on promotion to physicians and on research and development. In addition, IMS Health reports that the retail value of the free drug samples that companies provide to medical professionals to distribute to their patients has increased by about 15 percent annually. (See table 1.) Some types of promotional spending are not captured in the data we report. For example, figures for spending on DTC advertising do not include spending to develop and maintain drug companies’ Web sites or spending on sponsorship of sporting events. In addition, some spending on promotion to medical professionals is not captured. For example, the data do not include drug company spending on meetings and events, or spending on promotion that targets medical professionals other than physicians, such as nurse practitioners and physician assistants. Drug companies concentrate their spending on DTC advertising in specific forms of media and on relatively few drugs. Television and magazine advertising represented about 94 percent of all spending on DTC advertising in 2005. DTC advertising also tends to be concentrated on relatively few brand name prescription drugs—in 2005, the top 20 DTC advertised drugs accounted for more than 50 percent of all spending on DTC advertising. 
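The "almost 20 percent" figure is a compound annual growth rate implied by the IMS Health estimates cited above. As a quick illustrative check (the `cagr` helper is our own, not from any source in this report), the 1997–2005 DTC figures work out to roughly 18 percent per year:

```python
def cagr(start, end, years):
    """Compound annual growth rate over `years` periods."""
    return (end / start) ** (1 / years) - 1

# DTC advertising spending grew from $1.1 billion (1997) to
# $4.2 billion (2005) -- 8 years of growth.
dtc_growth = cagr(1.1, 4.2, 2005 - 1997)
print(f"DTC advertising CAGR: {dtc_growth:.1%}")  # roughly 18 percent
```

The same helper applied to the promotion-to-physicians and research-and-development figures would yield the roughly 9 percent annual rates the report cites.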
Many of the drugs most heavily advertised to consumers in 2005 were for the treatment of chronic conditions, such as high cholesterol, asthma, and allergies. Several of the drugs that have high levels of DTC advertising are also often promoted to physicians, and the drug companies often provide physicians with free samples of these drugs to be given to consumers. Studies we reviewed suggest that DTC advertising increases prescription drug spending and utilization. It increases utilization by prompting some consumers to request the drugs from their physicians and some physicians to prescribe the requested drugs. Evidence about increased utilization prompted by DTC advertising suggests it can have both positive and negative effects on consumers. Studies we reviewed suggest that DTC advertising can increase drug spending for both the advertised drug and for other drugs that are used to treat the same condition. Studies have found that, for many drugs, DTC advertising increases sales of the drug itself, though the amount varies substantially. Across the studies we examined, estimates for certain drugs range from little change in sales to an increase of more than $6 for every $1 spent to advertise the specific drug. For example, one study of 64 drugs found a median increase in sales of $2.20 for every $1 spent on DTC advertising. The impact of DTC advertising on the sales of an individual drug depends on many factors. For example, one study found that, for the 63 drugs with the largest revenues in 2000, DTC advertising for newer drugs—launched in 1998 or 1999—increased sales more than DTC advertising for drugs launched from 1994 through 1997. Further, research suggests that the sales of a specific drug may be affected by DTC advertising for other drugs that treat the same condition. For example, one study found that every $1,000 spent on advertising for allergy drugs was associated with 24 new prescriptions for one specific allergy drug. 
The studies we reviewed also suggest that DTC advertising increases prescribing by prompting some consumers to request the drugs from their physicians, and that physicians are generally responsive to the patient requests. Across the consumer and physician surveys that we reviewed, about 90 percent of consumers report having seen a DTC advertisement. Studies have found that about 30 percent (ranging from 18 to 44 percent) of consumers who have seen DTC advertising reported discussing with their physician either the condition seen in an advertisement or an advertised drug. Of consumers who reported discussing an advertised condition or drug, about one quarter (ranging from 7 to 35 percent) reported requesting a prescription for the advertised drug. Surveys have found that of consumers who requested a drug they saw advertised, generally more than half (ranging from 21 to 84 percent) reported receiving a prescription for the requested drug. The surveys we reviewed found that between 2 and 7 percent of consumers who see a DTC advertisement requested and ultimately received a prescription for the advertised drug. Studies suggest that physicians are generally responsive to consumers’ requests, and that decisions to prescribe a drug are influenced by a variety of factors in addition to a patient’s medical condition. For example, studies have found that advertising in medical journals and visits from drug sales representatives may influence physician prescribing to a greater degree than DTC advertising. Studies about DTC advertising and the increased utilization of prescription drugs it can prompt suggest that its effect on consumers can be both positive and negative. Some research suggests that DTC advertising can have benefits for consumers, such as encouraging them to talk to their doctors about previously undiagnosed conditions. For example, one study found that DTC advertising is associated with the diagnosis and treatment of high cholesterol with prescription drugs. 
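The survey funnel described above can be sanity-checked by multiplying the reported proportions. The specific values below are illustrative midpoints of the cited ranges, not figures from any single survey:

```python
# Illustrative check: chain midpoint survey proportions to estimate the
# share of ad-exposed consumers who ultimately received the drug.
discussed  = 0.30  # ~30% of consumers who saw an ad discussed it with a physician
requested  = 0.25  # ~1/4 of those who discussed requested a prescription
prescribed = 0.50  # more than half of requesters received the prescription

share = discussed * requested * prescribed
print(f"{share:.2%}")  # 3.75% -- within the 2 to 7 percent range the surveys found
```

Because each stage's reported range is wide, plugging in the low or high ends of each range produces estimates spanning roughly the 2 to 7 percent interval the surveys report.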
Similarly, another study found that DTC advertising for antidepressant drugs was associated with an increase in the number of people diagnosed with depression and who initiated drug therapy, as well as with a small increase in patients who received the appropriate duration of therapy. In contrast, other research suggests that DTC advertising can have negative effects, such as encouraging increases in prescriptions for advertised drugs when alternatives may be more appropriate. For example, one study found that consumers who requested a pain medication as a result of DTC advertising were more likely to get the requested drug than a drug more appropriate for those consumers. Another study, using actors posing as patients, found that 55 percent of those who presented with symptoms of adjustment disorder and requested a specific antidepressant received an antidepressant, even though treatment with drugs may not have been appropriate given their symptoms. FDA reviews a small portion of the increasingly large number of DTC materials it receives. FDA attempts to target available resources by focusing its reviews on the DTC advertising materials that have the greatest potential to impact public health, but the agency has not documented criteria for prioritizing the materials it receives for review. FDA officials told us that agency reviewers consider several informal criteria when prioritizing the materials. However, FDA does not apply these criteria systematically to the materials it receives. Instead, FDA relies on each of the reviewers to be aware of the materials the agency has received and accurately apply the criteria to determine the specific materials to review. Further, the agency does not document if a particular DTC material was reviewed. As a result, the agency cannot ensure that it is identifying or reviewing the materials that are the highest priority. 
FDA reviews a small portion of the increasingly large number of DTC materials submitted to the agency by drug companies. In 2005, FDA received 4,600 final DTC materials (excluding Internet materials) and 6,168 final Internet materials. FDA also received 4,690 final consumer-directed materials—such as brochures given to consumers by medical professionals. As shown in figure 1, FDA has received a steadily increasing number of final materials from 1999 through 2005. We could not determine whether there has been a similar increase in the number of draft DTC materials FDA has received because the agency does not track this information. FDA officials told us that the agency receives substantially more final and draft materials than the DTC Review Group can review. The total number of final materials has almost doubled since FDA formed its DTC Review Group in March 2002. FDA officials told us that the group was not fully staffed until September 2003 and that turnover has been a problem, temporarily reducing the number of reviewers in the group from four to one in late summer 2005. FDA has since filled all of the positions in the group and it added a fifth reviewer in September 2006. Officials told us that it can take 6 months to a year for new reviewers to become fully productive. FDA officials estimate that reviewers spend the majority of their time reviewing and commenting on draft materials. However, we were unable to determine the number of final or draft materials FDA reviews, because FDA does not track this information. In the case of final and draft broadcast materials, FDA officials told us that the DTC group reviews all of the materials it receives; in 2005, it received 337 final and 146 draft broadcast materials. However, FDA does not document whether these or other materials it receives have been reviewed. As a result, FDA cannot determine how many materials it reviews in a given year. 
FDA cannot ensure that it is identifying and reviewing the highest-priority DTC materials because it does not have documented criteria that it systematically uses to select DTC materials for review. FDA officials told us that, to target available resources, the agency prioritizes the review of the DTC advertising materials that have the greatest potential to impact public health. However, FDA has not documented criteria for reviewers in the DTC Review Group to consider when prioritizing materials for review. Instead, FDA officials identified informal criteria that reviewers use to prioritize their reviews. For example, FDA officials told us that the DTC Review Group reviews all final and draft broadcast DTC advertising materials because they are likely to be disseminated to a large number of people. In addition, FDA officials told us that the agency places a high priority on reviewing other draft materials because they provide the agency with an opportunity to identify problems and ask drug companies to correct them before the materials are disseminated to consumers. In addition, FDA officials told us that reviewers consider whether: a nonbroadcast material is likely to be widely disseminated to consumers; a drug has been cited in previous regulatory letters; a drug is being advertised to consumers for the first time; a drug is one of several drugs that can be used to treat the same condition, which FDA believes increases the likelihood that advertising will use comparative claims that may not be supported by available scientific evidence; a drug is cited in a complaint submitted by a competitor, consumer, or others; a drug has had recent labeling changes, such as the addition of new risk information; or a drug was approved under FDA’s accelerated approval process. FDA officials indicated that the agency does not systematically apply its informal criteria to all of the materials that it receives. 
Specifically, at the time FDA receives the materials, the agency does not identify the materials that meet its various criteria. FDA officials told us that the agency does identify all final and draft broadcast materials that it receives, but does not have a system for identifying any other high-priority materials. Absent such a system for all materials, FDA relies on each of the reviewers—in consultation with other DDMAC officials—to be aware of the materials that have been submitted and to accurately apply the criteria to determine the specific materials to review. This creates the potential for reviewers to miss materials that the agency would consider to be a high priority for review. Furthermore, because FDA does not track information on its reviews, the agency cannot determine whether a particular material has been reviewed. As a result, the agency cannot ensure that it is identifying and reviewing the highest-priority materials. Since the 2002 policy change requiring legal review by OCC of all draft regulatory letters, the agency’s process for drafting and issuing letters has taken longer and FDA has issued fewer regulatory letters per year. As a result of the policy change, draft regulatory letters receive additional levels of review and the DTC reviewers who draft the letters must do substantially more work to prepare for and respond to comments resulting from review by OCC. Since the policy change, FDA has issued fewer regulatory letters per year than it did in any year prior to the change. FDA officials told us that the agency issues letters for only the violative DTC materials that it considers the most serious and most likely to impact consumers’ health. Since the 2002 policy change requiring legal review of all draft regulatory letters, FDA’s process for issuing letters has taken longer. Once FDA identifies a violation in a DTC advertising material and determines that it merits a regulatory letter, FDA takes several months to draft and issue a letter. 
(See fig. 2.) For letters issued from 2002 through 2005, once DDMAC began drafting a letter for violative DTC materials it took an average of about 4 months to issue the letter. The length of this process varied substantially across these regulatory letters—one letter took around 3 weeks from drafting to issuance, while another took almost 19 months. In comparison, for regulatory letters issued from 1997 through 2001, it took an average of 2 weeks from drafting to issuance. During this earlier time period, 11 letters were issued the day they were drafted, and the longest time from drafting to issuance was slightly more than 6 months. The primary factor contributing to the increase in the length of FDA’s process for issuing regulatory letters is the additional work that resulted from the 2002 policy change. In addition to the time required of OCC, DDMAC officials told us that the policy change has created the need for substantially more work on their part to prepare the necessary documentation for legal review. According to DDMAC officials, to prepare for initial meetings with OCC on draft regulatory letters reviewers prepare extensive background information describing the violations as well as the drug and its promotional history. As a part of this process, DDMAC reviewers sometimes seek consultations with regulatory and clinical experts within FDA. For example, reviewers may request consultations with the medical officers in FDA’s Office of New Drugs in order to determine whether available data from the drug approval process are sufficient to support the advertising claims being made in DTC materials. After incorporating comments from the requested consultations, DDMAC reviewers hold their initial meeting with OCC and subsequently revise the draft regulatory letter to reflect the comments from OCC. Once these initial revisions are complete, DDMAC formally submits a draft regulatory letter to OCC for legal review and approval. 
All DDMAC regulatory letters are reviewed by both OCC staff and OCC’s Chief Counsel. OCC often requires additional revisions to the draft regulatory letter before OCC will concur that a letter is legally supportable and can be issued. Depending on comments provided by OCC, the DDMAC reviewers may request additional consultations with FDA experts at each stage of review. OCC officials told us that the office has given regulatory letters that cite violative DTC materials higher priority than other types of regulatory letters, but that the attorneys have many other responsibilities. Prior to 2005, OCC had two staff attorneys and one supervising attorney assigned to review all of the regulatory letters submitted by DDMAC, including the letters that cite DTC materials. However, OCC officials told us that the review of DDMAC’s draft regulatory letters is a small portion of their total responsibilities and must be balanced with other requests, such as the examination of legal issues surrounding the approval of a new drug. OCC officials told us that, in 2005, the office assigned two additional attorneys in an attempt to help issue the DDMAC regulatory letters more quickly. Prior to September 2005, OCC had a goal of providing initial comments to DDMAC within 15 business days from the date that a letter citing DTC materials was formally submitted. Based on our review of DDMAC’s and OCC’s documentation for the 19 letters issued from 2004 through 2005, we estimated that OCC generally met its 15-day goal for providing initial comments. However, the goal OCC established is not directly relevant to the total amount of time it takes FDA to issue the regulatory letter once it has been formally submitted to OCC because DDMAC must make changes to the letters to respond to OCC’s comments and OCC may review letters more than once. 
For regulatory letters issued from 2004 through 2005 that cited violative DTC materials, we found that, once DDMAC had formally submitted a draft letter to OCC, it took an average of about 3 months for the letter to receive final OCC concurrence and be issued. FDA does not have a goal for how long it should take the agency to issue a letter from the time that OCC first formally receives a draft of the letter. The number of regulatory letters FDA issued per year for violative DTC materials decreased after the 2002 policy change lengthened the agency’s process for issuing letters. From 2002 to 2005, the agency issued between 8 and 11 regulatory letters per year that cited DTC materials. (See fig. 3.) Prior to the policy change, the agency issued about twice as many such regulatory letters per year. From 1997 through 2001, FDA issued between 15 and 25 letters citing DTC materials per year. An FDA official told us that both the lengthened review time resulting from the 2002 policy change and staff turnover within the DTC Review Group contributed to the decline in the number of issued regulatory letters. In addition, from 2002 through 2005, FDA did not ultimately issue 10 draft regulatory letters citing DTC materials that DDMAC had submitted to OCC for the required legal review. For 5 letters, OCC determined that there was insufficient legal support for issuing the letters and, therefore, did not concur with DDMAC. DDMAC withdrew the other 5 letters from OCC’s consideration but could not provide us with information on why it withdrew these letters. Although the total number of regulatory letters FDA issued for violative DTC materials decreased, the agency issued relatively more warning letters—which cite violations FDA considers to be more serious—in recent years. Historically, almost all of the regulatory letters that FDA issued for DTC materials were untitled letters for less serious violations. 
From 1997 through 2001, FDA issued 98 regulatory letters, 6 of which were warning letters. From 2002 through 2005, 8 of the 37 regulatory letters were warning letters. FDA regulatory letters may cite more than one DTC material or type of violation for a given drug. Of the 19 regulatory letters FDA issued from 2004 through 2005, 7 cited more than 1 DTC material, for a total of 31 different materials. These 31 materials appeared in a range of media, including television, radio, print, direct mail, and Internet. Further, FDA identified multiple violations in 21 of the 31 DTC materials cited in the letters. The most commonly cited violations related to a failure of the material to accurately communicate information about the safety of the drug. For example, FDA wrote in 5 letters that distracting visuals in cited television advertisements minimized important information about the risk of the drug. The letters also often cited materials for overstating the effectiveness of the drug or using misleading comparative claims. FDA officials told us that the agency issues regulatory letters for DTC materials that it believes are the most likely to negatively impact consumers and does not act on all of the concerns that its reviewers identify. When reviewers have concerns about DTC materials, they discuss them with others in DDMAC and may meet with OCC and medical officers in FDA’s Office of New Drugs to determine whether a regulatory letter is warranted or on the content of the letter itself. FDA officials told us that the agency issues regulatory letters only for the violative materials that it considers the most likely to negatively impact public health. For example, they said the agency may be more likely to issue a letter when a false or misleading material was broadly disseminated to a large number of consumers. 
In addition, FDA officials told us that they are more likely to issue a regulatory letter when the drug is one of several drugs that can be used to treat the same condition; they said that the issuance of a regulatory letter in this situation may enhance future voluntary compliance by promoters of the competing drugs. However, because FDA does not document decisions made at the various stages of its review process about whether to pursue a violation, officials were unable to provide us with an estimate of the number of materials about which concerns were raised but the agency did not issue a letter. FDA regulatory letters have been limited in their effectiveness at halting the dissemination of false and misleading DTC advertising materials. We found that, from 2004 through 2005, FDA issued regulatory letters an average of about 8 months after the violative DTC materials they cited were first disseminated. By the time these letters were issued, drug companies had already discontinued more than half of the cited materials. For the materials that were still being disseminated, drug companies removed the cited materials in response to FDA’s letter. Drug companies also identified and removed other materials with claims similar to the materials cited in the regulatory letters. Although drug companies complied with FDA’s requests to create materials that correct the misimpressions left by the cited materials, these corrections were not disseminated until 5 months or more after FDA issued the regulatory letter. Despite halting the dissemination of both cited and other violative materials at the time the letter was issued, FDA’s issuance of these letters did not always prevent drug companies from later disseminating similar violative materials for the same drugs. FDA’s regulatory letters have been limited in their effectiveness at halting the dissemination of the violative DTC materials they cite. 
Because of the length of time it took FDA to issue these letters, violative advertisements were often disseminated for several months before the letters were issued. From 2004 through 2005, FDA issued regulatory letters citing DTC materials an average of about 8 months after the violative materials were first disseminated. FDA issued one letter less than 1 month after the material was first disseminated, while another letter took over 3 years. The cited materials were usually disseminated for 3 or more months, though there was substantial variability across materials. Of the 31 violative DTC materials cited in these letters, 16 were no longer being disseminated by the time the letter was issued. On average, these letters were issued more than 4 months after the drug company stopped disseminating these materials, and therefore had no impact on their dissemination. For the 14 DTC materials that were still in use when FDA issued the letter, the drug companies complied with FDA’s request to stop disseminating the violative materials. However, by the time the letters were issued, these 14 materials had been disseminated for an average of about 7 months. See figure 4 for information on the timeliness of the 19 regulatory letters relative to the dissemination of the DTC advertising materials they cited. As requested by FDA in the regulatory letters, drug companies often identified and stopped disseminating other materials with claims similar to those in the violative materials. For 18 of the 19 regulatory letters issued from 2004 through 2005, the drug companies indicated to FDA that they had either identified additional similar materials or that they were reviewing all materials to ensure compliance. Some of these drug companies indicated in their correspondence with FDA which similar materials they had identified. 
Specifically, drug companies responding to 13 letters indicated that they had identified and stopped disseminating between 1 and 27 similar DTC and other materials directed to consumers that had not been cited in the regulatory letter. In addition to halting materials directed to consumers, companies responding to 11 letters also stopped disseminating materials with similar claims that were targeted directly to medical professionals. Drug companies disseminated the corrective advertising materials requested in FDA warning letters, but took 5 months or more to do so. In each of the six warning letters FDA issued in 2004 and 2005 that cited DTC materials, the agency asked the drug company to disseminate truthful, nonmisleading, and complete corrective messages about the issues discussed in the regulatory letter to the audiences that received the violative promotional materials. In each case, the drug company complied with this request by disseminating corrective advertising materials. For four warning letters we were able to examine the resulting corrective materials and found that they each contained an explicit reference to the regulatory letter and a message intended to correct misleading impressions created by the violative claim. In addition, the drug companies provided evidence to FDA that the materials would be disseminated to a consumer population similar to the one that received the original violative advertising materials. For example, one drug company provided FDA with the broadcast schedule for the violative television advertisement and the planned schedule for the corrective advertising material to demonstrate that it would run on similar channels, at similar times, and with similar frequency. For the six warning letters FDA issued in 2004 and 2005 that cited DTC materials, the corrective advertising materials were first disseminated from more than 5 months to almost 12 months after FDA issued the letter. 
For example, for one allergy medication, the violative advertisements ran from April through October 2004, FDA issued the regulatory letter in April 2005, and the corrective advertisement was not issued until January 2006. FDA officials told us that the process of issuing a corrective advertisement is lengthy because the agency and the drug company negotiate the content and format of the corrective advertisements. They also said that, in some cases, FDA reviewers work closely with the drug company to develop, and sometimes suggest specific content for, the corrective advertisement. See figure 4 for more detail on the dissemination of the corrective advertisements. FDA regulatory letters do not always prevent the same drug companies from later disseminating violative DTC materials for the same drug, sometimes using the same or similar claims. From 1997 through 2005, FDA issued regulatory letters for violative DTC materials used to promote 89 different drugs. Of these 89 drugs, 25 had DTC materials that FDA cited in more than one regulatory letter, and one drug had DTC materials cited in eight regulatory letters. For 15 of the 25 drugs, FDA cited similar broad categories of violations in multiple regulatory letters. For example, FDA issued regulatory letters citing DTC materials for a particular drug in 2000 and again in 2005 for “overstating the effectiveness of the drug.” However, the specific claims cited in each of these regulatory letters differed. In 2000, FDA wrote in its regulatory letter that the “totality of the image, the music, and the audio statements” in a television advertisement overstated the effectiveness of the drug. The 2005 letter stated that a different television advertisement overstated effectiveness by suggesting that the drug was effective for “preventing or modifying the progression of arthritis” when the drug was approved for the “relief of the signs and symptoms” of arthritis. 
For 4 of the 15 drugs, FDA cited the same specific violative claim for the same drug in more than one regulatory letter. (See table 2.) For example, in 1999 FDA cited a DTC direct mail piece for failing to convey important information about the limitations of the studies used to approve the promoted drug. In 2001, FDA cited a DTC broadcast advertisement for the same drug for failing to include that same information. Given substantial increases in drug company spending on DTC advertising in recent years, and evidence that DTC advertising can influence consumers’ behavior, it is important to develop a full understanding of its impact on the U.S. health care system. It is also important that FDA effectively limit the dissemination of DTC advertising that is false or misleading. Because FDA reviews a small portion of the final and draft DTC materials that it receives, it is important that the agency have a process to identify and review the materials that are the highest priority. However, FDA lacks documented criteria for identifying and prioritizing DTC materials for review, a process to ensure that criteria are applied systematically to all materials received, and a system for tracking whether materials have been reviewed. As a result, FDA cannot be assured that the highest-priority materials have been identified or reviewed. Given the length of time it takes FDA to issue regulatory letters and the potential for repeated use of violative claims, we are concerned about FDA’s effectiveness at limiting consumers’ exposure to false or misleading DTC advertising. In our 2002 report, we recommended that HHS take steps to reduce the time that FDA’s DTC draft regulatory letters are under review. 
In its written response to the recommendation in that report, HHS agreed that it needs to issue DTC regulatory letters more quickly and established a goal of issuing the letters “within 15 working days of review at OCC.” However, we have now found that it takes FDA months to complete the process of drafting and reviewing the letters. As we previously recommended, we believe that regulatory letters must be issued more quickly. To improve FDA’s processes for identifying and reviewing final and draft DTC advertising materials, we recommend that the Acting Commissioner of the Food and Drug Administration take the following three actions:

document criteria for prioritizing the materials that it receives for review,

systematically apply its documented criteria to all of the materials it receives, and

track which materials have been reviewed.

HHS reviewed a draft of this report and provided comments, which are reprinted in appendix II. In its comments, HHS generally agreed with our description of FDA’s oversight of DTC advertising, but disagreed with our recommendations and some aspects of our conclusions. First, HHS disagreed with our recommendations that it systematically prioritize and track the DTC advertising materials it reviews. HHS stated that DDMAC now reviews all materials of certain high-priority types, especially final and draft broadcast advertisements. HHS also commented that, although DDMAC has not documented its selection criteria, those criteria are systematically applied by its reviewers to determine workload priorities. HHS also noted that reviewing each DTC material received according to selection criteria and tracking the reviews that DDMAC conducts would require a vast increase in DDMAC’s staff. 
We recognize that, with current staffing, DDMAC’s DTC Review Group cannot review in detail the more than 10,000 DTC materials that are submitted to the agency each year and that DDMAC now focuses its review efforts specifically on broadcast materials and draft materials. However, it is because DDMAC’s reviewers are only able to review selected materials that we believe it is important for FDA to develop a more complete and systematic process for screening the materials the agency receives. To do so, the informal criteria that reviewers now consider when prioritizing reviews should be formalized to help ensure consistent application. Contrary to HHS’s comments, we do not agree that systematically applying these criteria would require that every DTC material be reviewed in detail. Instead, FDA should apply the criteria as a screening mechanism to all materials it receives. Furthermore, FDA already has most of the information that would be necessary to establish a system to screen submitted materials against these criteria. For instance, when drug companies submit DTC materials to FDA, the agency records information about the drug being advertised and the type of material submitted. Additionally, for most of the priority criteria described in our report, FDA already has the information—such as whether the drug has been the subject of a previous regulatory letter or a recent label change—needed to determine how the criteria would apply to materials used to promote a given drug. Second, HHS expressed concern that our draft report criticized the agency for the length of time it takes to issue regulatory letters and for declines in the number of letters issued since the policy change requiring review by OCC, without adequately addressing the underlying purpose of that review. HHS commented that its policy change has led to more defensible regulatory letters and better compliance after issuance. 
We agree with HHS that it is important to ensure that FDA’s regulatory letters are legally supportable, and, as HHS noted, we did not examine the effect of the policy change on the legal sufficiency of the letters in this report. However, we also believe that it is important for letters to be issued in a timely manner if they are to have an impact on halting the dissemination of the violative materials that the letters cite. In 2002, HHS agreed with the recommendation of our earlier report that DTC regulatory letters be issued more quickly. Nonetheless, as we noted in the draft of this report, we found that violative advertisements had often been disseminated for several months before letters were issued in 2004 and 2005. More than half of the violative DTC materials cited in the 2004 and 2005 letters were no longer being disseminated by the time the letter was issued. Delays in issuing regulatory letters limit FDA’s effectiveness in overseeing DTC advertising and in reducing consumers’ exposure to false or misleading advertising. Finally, HHS commented that our discussion of research on DTC advertising implies that we statistically aggregated data from different studies to generate summary figures on the impact of DTC advertising on various types of consumer requests to their physicians. We have revised the report to clarify that the information we present is from the studies we reviewed and that we did not aggregate data across studies. HHS also provided technical comments which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to the Secretary of Health and Human Services, the Acting Commissioner of the Food and Drug Administration, and other interested parties. We will also make copies available to others who request them. 
In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7119 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To examine trends in pharmaceutical industry spending on direct-to-consumer (DTC) advertising, promotion to medical professionals, and research and development of new drugs, we reviewed publicly reported data. For overall drug company spending from 1997 through 2005 on DTC advertising and promotion to medical professionals, we obtained data from IMS Health. We interviewed knowledgeable IMS Health officials to verify the data’s accuracy and the methodologies used for collecting them, reviewed related documentation, and determined that the data were sufficiently reliable for the purposes of this report. In addition, we obtained data on drug company spending from 1997 through 2005 on research and development of new drugs from the Pharmaceutical Research and Manufacturers of America (PhRMA), which represents U.S. pharmaceutical research and biotechnology companies. For 2005, we reviewed more detailed data on DTC advertising by prescription drug from Nielsen Monitor-Plus, which were reported in the May 2006 edition of Med Ad News, a publication targeted to the pharmaceutical industry. For the PhRMA and Nielsen Monitor-Plus data, we reviewed related documentation and determined that the data were sufficiently reliable for the purposes of this report. Our analysis focuses on trends since 1997 because in that year the Food and Drug Administration (FDA) issued its draft guidance clarifying the requirements for broadcast advertising. 
To examine the relationship between DTC advertising and prescription drug spending and utilization, we conducted a literature review. We conducted a structured search of 33 databases that included peer-reviewed journal articles, dissertations, and industry articles issued from January 2000 through February 2006. We searched these databases for articles whose titles or abstracts contained key words related to DTC advertising, such as variations of the words “advertising,” “consumer,” “patient,” “physician,” “doctor,” and “return on investment.” We supplemented this list with searches of the references in articles identified through the database search. We also included articles cited during our interviews with representatives from advocacy organizations—Consumers Union and Public Citizen—and industry representatives from PhRMA, AstraZeneca Pharmaceuticals LP, and Pfizer Inc. From all of these sources, we identified over 600 articles published from 1982 through 2006. Of these, we identified for detailed review 64 journal articles and dissertations that presented original research directly relevant to the relationship between DTC advertising and prescription drug spending and utilization. To examine the DTC advertising materials that FDA reviews, we reviewed applicable laws and regulations and data from FDA on the number and type of advertising materials that the agency receives and reviews. For materials submitted from 1997 through 2005, we obtained data from FDA’s Advertising Management Information System database, which tracks the number of final advertising materials the drug companies submit to FDA at the time of their dissemination to the public. FDA officials told us that these data may contain errors because drug companies do not always properly identify the type of advertising material in their submission to FDA. For example, a DTC material may be incorrectly coded as a material directed to professionals. 
Although FDA officials do not know the extent to which such errors are entered into the database, based on our review of their data collection methods and our interviews with knowledgeable agency officials, we determined that these data were sufficiently reliable for reporting on trends in the volume of materials submitted to FDA. We also obtained data from FDA’s Marketing, Advertising, and Communications Management Information System database—which tracks correspondence between the agency and drug companies—to determine the number of submissions of draft materials received by FDA from 1997 through 2005. We discussed these data with the responsible FDA official, and determined that they were sufficiently reliable for their use in this report. We also interviewed FDA officials, including staff who are directly responsible for reviewing DTC materials, about their processes for reviewing advertising materials. We did not examine the effectiveness of FDA’s review of draft materials at preventing potentially violative materials from being disseminated. To examine the number of FDA regulatory letters that cited DTC materials and FDA’s process for issuing regulatory letters, we reviewed all letters issued by FDA from 1997 through 2005 citing prescription drug promotion and identified those that cited DTC advertising materials. We excluded regulatory letters that cited only materials intended to be given to consumers by medical professionals or that cited only materials directed to medical professionals. We then asked FDA officials to review our list and add letters we had not identified and remove letters that did not specifically cite DTC materials. As a result of this process, we identified 135 regulatory letters—citing materials promoting 89 different drugs—that cited a violative DTC material. In our review of the regulatory letters, we did not evaluate the appropriateness of the cited violations or evaluate the legal sufficiency of the letters. 
We examined the content of FDA’s most recent regulatory letters—the 19 regulatory letters (6 warning letters and 13 untitled letters) that FDA issued from 2004 through 2005—in order to determine the types of violations that FDA identified and the actions that the agency requested the drug companies to take. (See table 3.) Of these 19 regulatory letters, 18 cited violative materials for a single drug. In one instance, the letter cited materials promoting two drugs marketed by a single company. We also reviewed FDA documentation to determine how long it took the agency to draft and issue the 135 regulatory letters it issued from January 1997 through December 2005. We used information from FDA records to obtain the date on which reviewers first began drafting a regulatory letter. These records also contained information about key meetings that occurred, internal consultations requested by FDA’s Division of Drug Marketing, Advertising, and Communications (DDMAC), and the comments obtained during the drafting and review of each regulatory letter. Because FDA does not track when the agency identifies a violation, we considered the date on which reviewers first began drafting a regulatory letter as the earliest date in the letter drafting and review process. For each of the 19 regulatory letters issued from 2004 through 2005, we obtained the date DDMAC formally submitted the draft letter to the Office of the Chief Counsel (OCC) from FDA’s Agency Information Management System database. This system is designed to document the dates of key interactions between OCC and other FDA offices. OCC officials told us that the date DDMAC submitted draft regulatory letters to OCC was consistently documented in the system. Based on our discussions with OCC officials and our review of similar dates recorded in DDMAC’s case files, we determined that these data were sufficiently reliable for the purposes of this report. 
To examine the effectiveness of FDA’s regulatory letters, we focused on the 19 regulatory letters issued from 2004 through 2005 that cited DTC materials. We reviewed the files that FDA maintains for each advertised drug cited in these letters. These files contain correspondence from the drug companies, copies of advertising materials, and documentation of FDA actions. We reviewed FDA’s correspondence with the drug companies to obtain information regarding the regulatory letters, the dates the violative advertisements started and ended, and the drug companies’ compliance with any corrective action requested by FDA. The information we collected is based both on what drug companies reported in correspondence with FDA and, in some cases, what we obtained directly from the sponsoring drug company. We did not confirm the accuracy of the information drug companies reported to FDA or to us. We also identified the violations cited in the 135 regulatory letters FDA issued from 1997 through 2005. We conducted our work from January 2006 through November 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, Martin T. Gahart, Assistant Director; Chad Davenport; William Hadley; Cathy Hamann; Julian Klazkin; and Eden Savino made key contributions to this report.
The Food and Drug Administration (FDA) is responsible for overseeing direct-to-consumer (DTC) advertising of prescription drugs. If FDA identifies a violation of laws or regulations in a DTC advertising material, the agency may issue a regulatory letter asking the drug company to take specific actions. GAO was asked to discuss (1) trends in drug company spending on DTC advertising and other activities; (2) what is known about the relationship between DTC advertising and drug spending and utilization; (3) the DTC advertising materials FDA reviews; (4) the number of regulatory letters that cited DTC materials and FDA's process for issuing those letters; and (5) the effectiveness of these letters at limiting the dissemination of violative DTC advertising. GAO reviewed research literature, analyzed FDA's processes, and examined FDA documentation. Drug company spending on DTC advertising--such as that on television and in magazines--of prescription drugs increased twice as fast from 1997 through 2005 as spending on promotion to physicians or on research and development. Over this period, drug companies spent less each year on DTC advertising ($4.2 billion in 2005) than on promotion to physicians ($7.2 billion in 2005) or research and development ($31.4 billion in 2005). Studies GAO reviewed suggest that DTC advertising has contributed to increases in drug spending and utilization, for example, by prompting consumers to request the advertised drugs from their physicians, who are generally responsive to these requests. Evidence suggests that the effect of DTC advertising on consumers can be both positive, such as encouraging them to talk to their doctors, and negative, such as increased use of advertised drugs when alternatives may be more appropriate. FDA reviews a small portion of the DTC materials it receives. To identify materials that have the greatest potential to impact public health, FDA has informal criteria to prioritize materials for review. 
However, FDA has not documented these criteria, does not apply them systematically to all of the materials it receives, and does not track information on its reviews. As a result, the agency cannot ensure that it is identifying or reviewing those materials that it would consider to be the highest priority. FDA has taken longer to draft and review regulatory letters and the agency has issued fewer letters per year since 2002, when legal review of all draft regulatory letters was first required. From 2002 through 2005, from the time FDA began drafting a regulatory letter for a violative DTC material, it took the agency an average of 4 months to issue a regulatory letter, compared with an average of 2 weeks from 1997 through 2001. FDA has issued about half as many regulatory letters per year since the 2002 policy change. The effectiveness of FDA's regulatory letters at halting the dissemination of violative DTC materials has been limited. The 19 regulatory letters FDA issued in 2004 and 2005 were issued an average of 8 months after the materials were first disseminated. By the time FDA issued these letters, companies had already discontinued use of more than half of the violative materials. When the cited materials were still being disseminated, drug companies complied with FDA's requests to remove the materials, and identified and removed other materials with similar claims. FDA's issuance of regulatory letters did not always prevent drug companies from later disseminating similar violative materials for the same drugs. These issues are not new. In 2002, GAO reported that, by delaying the issuance of regulatory letters, the 2002 policy change had adversely affected FDA's ability to enforce compliance. At that time, GAO recommended, and FDA agreed, that letters be issued more quickly. GAO continues to believe this is necessary in order to limit consumers' exposure to false or misleading advertising.
The District of Columbia Family Court Act of 2001 (P.L. 107-114) was enacted on January 8, 2002. The act stated that, not later than 90 days after the date of the enactment, the chief judge of the Superior Court shall submit to the president and Congress a transition plan for the Family Court of the Superior Court and shall include in the plan the following:

The chief judge’s determination of the role and function of the presiding judge of the Family Court.

The chief judge’s determination of the number of judges needed to serve on the Family Court.

The chief judge’s determination of the number of magistrate judges of the Family Court needed for appointment under Section 11-1732, District of Columbia Code.

The chief judge’s determination of the appropriate functions of such magistrate judges, together with the compensation of and other personnel matters pertaining to such magistrate judges.

A plan for case flow, case management, and staffing needs (including the needs for both judicial and nonjudicial personnel) for the Family Court, including a description of how the Superior Court will handle the one family/one judge requirement pursuant to Section 11-1104(a) for all cases and proceedings assigned to the Family Court.

A plan for space, equipment, and other physical needs and requirements during the transition, as determined in consultation with the Administrator of General Services.

An analysis of the number of magistrate judges needed under the expedited appointment procedures established under Section 6(d) in reducing the number of pending actions and proceedings within the jurisdiction of the Family Court. 
A proposal for the disposition or transfer to the Family Court of child abuse and neglect actions pending as of the date of enactment of the act (which were initiated in the Family Division but remain pending before judges serving in other Divisions of the Superior Court as of such date) in a manner consistent with applicable federal and District of Columbia law and best practices, including best practices developed by the American Bar Association and the National Council of Juvenile and Family Court Judges.

An estimate of the number of cases for which the deadline for disposition or transfer to the Family Court cannot be met and the reasons why such deadline cannot be met.

The chief judge’s determination of the number of individuals serving as judges of the Superior Court who meet the qualifications for judges of the Family Court and are willing and able to serve on the Family Court.

If the chief judge determines that the number of individuals described in the act is less than 15, the plan is to include a request that the Judicial Nomination Commission recruit and the president nominate additional individuals to serve on the Superior Court who meet the qualifications for judges of the Family Court, as may be required to enable the chief judge to make the required number of assignments. The Family Court Act states that the number of judges serving on the Family Court of the Superior Court cannot exceed 15. These judges must meet certain qualifications, such as having training or expertise in family law and certifying to the chief judge of the Superior Court that he or she intends to serve the full term of service and will participate in the ongoing training programs conducted for judges of the Family Court. The act also allows the court to hire and use magistrate judges to hear Family Court cases. Magistrate judges must also meet certain qualifications, such as holding U.S. citizenship, being an active member of the D.C. 
Bar, and having not fewer than 3 years of training or experience in the practice of family law as a lawyer or judicial officer. The act further states that the chief judge shall appoint individuals to serve as magistrate judges not later than 60 days after the date of enactment of the act. The magistrate judges hired under this expedited appointment process are to assist in implementing the transition plan and, in particular, assist with the transition or disposal of child abuse and neglect proceedings not currently assigned to judges in the Family Court. The Superior Court submitted its transition plan on April 5, 2002. The plan consists of three volumes. Volume I contains information on how the court will address case management issues, including organizational and human capital requirements. Volume II contains information on the development of IJIS and its planned applications. In volume III, the court addresses the physical space it needs to house and operate the Family Court. The D.C. Courts includes three main entities—the Superior Court, the Court of Appeals, and the Court System—and provides the overall organizational framework for judicial operations. The Superior Court contains five major operating divisions: Civil Division, Criminal Division, Family Court, Probate Division, and the Tax Division, as well as the following additional divisions and units: Crime Victims Compensation Program, the Domestic Violence Unit, the Multi-Door Dispute Resolution Division, and the Special Operations Division. The Court of Appeals reviews all appeals from the Superior Court, as well as decisions and orders of District of Columbia government administrative agencies. The Executive Office performs various administrative management functions, and directly supervises the Court System divisions, which support both the Court of Appeals and the Superior Court. 
Also, the Joint Committee on Judicial Administration in the District of Columbia serves as the policymaking entity for the D.C. Courts. The chief judges of the Superior Court and the Court of Appeals serve on this committee. In addition, a second Court of Appeals judge, elected by the Court of Appeals judges, and two Superior Court judges, elected by their colleagues, serve on the Joint Committee. Courts interact with various organizations and operate in the context of many different programmatic requirements. In the District, the Family Court frequently interacts with the District’s child welfare agency—the Child and Family Services Agency (CFSA)—a key organization responsible for helping children obtain permanent homes. CFSA must comply with federal laws and other requirements, including the Adoption and Safe Families Act (ASFA), which placed new responsibilities on child welfare agencies nationwide. ASFA introduced new time periods for moving children who have been removed from their homes to permanent home arrangements and penalties for noncompliance. For example, ASFA requires states to hold a permanency planning hearing not later than 12 months after the child is considered to have entered foster care. Permanent placements include return home to the birth parents and adoption. The Family Court transition plan provides information on most, but not all, elements required by the Family Court Act; however, some aspects of case management, training, and performance evaluation are unclear. For example, the plan describes the Family Court’s method for transferring child abuse and neglect cases to the Family Court, its one family/one judge case management principle, and the number and roles of judges and magistrate judges. 
However, the plan does not (1) include a request for judicial nomination, (2) indicate the number of nonjudicial staff needed for the Family Court, (3) indicate if the 12 judges who volunteered for the Family Court meet all of the qualifications outlined in the act, and (4) state how the number of magistrate judges to hire under the expedited process was determined. In addition, although not specifically required by the act, the plan does not describe the content of its training programs and does not include a full range of measures by which the court can evaluate its progress in ensuring better outcomes for children. The transition plan establishes criteria for transferring cases to the Family Court and states that the Family Court intends to have all child abuse and neglect cases pending before judges serving in other divisions of the Superior Court closed or transferred into the Family Court by June 2003. According to the plan, the court has asked each Superior Court judge not serving in the Family Court to review his or her caseload to identify those cases that meet the criteria established by the court for the first phase of case transfer back to the Family Court for attention by magistrate judges hired under the expedited process provided in the act. Cases identified for transfer include those in which (1) the child is 18 years of age or older, the case is being monitored primarily for the delivery of services, and no recent allegations of abuse or neglect exist; and (2) the child is committed to the child welfare agency and is placed with a relative in a kinship care program. Cases that the court believes may not be candidates for transfer by June 2002 include those in which the judge believes transferring the case would delay permanency. The court expects that older cases will first be reviewed for possible closure and expects to transfer the entire abuse and neglect caseloads of several judges serving in other divisions of the Superior Court to the Family Court. 
Using the established criteria to review cases, the court estimates that 1,500 cases could be candidates for immediate transfer. The act also requires the court to estimate the number of cases that cannot be transferred into Family Court in the timeframes specified. The plan provides no estimate because the court’s proposed transfer process assumes all cases will be closed or transferred, based on the outlined criteria. However, the plan states that the full transfer of all cases is partially contingent on hiring three new judges. The transition plan identifies the way in which the Family Court will implement the one family/one judge approach and improve its case management practices; however, some aspects of case management, training, and performance evaluation are unclear. The plan indicates that the Family Court will implement the one family/one judge approach by assigning all cases involving the same family to one judicial team, composed of a Family Court judge and a magistrate judge. This assignment will begin with the initial hearing by the magistrate judge on the team and continue throughout the life of the case. Juvenile and family court experts indicated that this team approach is realistic and a good model of judicial collaboration. One expert said that such an approach provides for continuity if either team member is absent. Another expert added that, given the volume of cases that must be heard, the team approach can ease the burden on judicial resources by permitting the magistrate judge to make recommendations and decisions, thereby allowing the Family Court judge time to schedule and hear trials and other proceedings more quickly. 
Court experts also praised the proposed staggered terms for judicial officials—newly hired judges, magistrate judges, and judges who are already serving on the Superior Court will be appointed to the Family Court for varying numbers of years—which can provide continuity while recognizing the need to rotate among divisions in the Superior Court. The plan also describes other elements of the Family Court’s case management process, such as how related cases will be assigned and a description of how many judges will hear which types of cases. For example, the plan states that, in determining how to assign cases, preference will generally be given to the judge or magistrate judge who has the most familiarity with the family. In addition, the plan states that (1) all Family Court judges will handle post-disposition child abuse and neglect cases; (2) 10 judges will handle abuse and neglect cases from initiation to closure as part of a judicial team; (3) 1 judge will handle abuse and neglect cases from initiation to closure independently (not as part of a team); and (4) certain numbers of judges will handle other types of cases, such as domestic relations cases, mental health trials, and complex family court cases. However, because the transition plan focuses primarily on child abuse and neglect cases, this information does not clearly explain how the total workload associated with the approximately 24,000 cases under the court’s jurisdiction will be handled. One court expert we consulted commented on the transition plan’s almost exclusive focus on child welfare cases, making it unclear, the expert concluded, how other cases not involving child abuse and neglect will be handled. In addition to describing case assignments, the plan identifies actions the court plans to take to centralize intake. 
According to the plan, a centralized office will encompass all filing and intake functions that various clerks' offices—such as juvenile, domestic relations, paternity and support, and mental health—in the Family Court currently carry out. As part of centralized intake, case coordinators will identify any related cases that may exist in the Family Court. To do this, the coordinator will ensure that a new "Intake/Cross Reference Form" is completed by the various parties to a case and will also check the computer databases serving the Family Court. As a second step, the court plans to use alternative dispute resolution to resolve cases more quickly and to expand initial hearings to address many of the issues that the court previously handled later in the life of the case. As a third step, the plan states that the Family Court will provide all affected parties speedy notice of court proceedings and implement strict policies for the handling of cases—such as those for granting continuances—although it does not indicate who is responsible for developing the policies or the status of their development. The plan states that the court will conduct evaluations to assess whether components of the Family Court were implemented as planned and whether modifications are necessary; the court could consider using additional measures that focus on outcomes for children. One court expert said that the court's development of a mission statement and accompanying goals and objectives provides the basis for developing performance standards. The expert also said that the goals and standards are consistent with those of other family courts that strive to prevent further deterioration of a family's situation and to focus decision-making on the needs of those individuals served by the court. However, the evaluation measures listed in the plan are oriented more toward the court's processes, such as whether hearings are held on time, than toward outcomes. 
According to a court expert, measures must also account for the outcomes the court achieves for children. Such measures could include the number of finalized adoptions that do not disrupt, reunifications that do not fail, children who remain safe and are not abused again while under court jurisdiction or in foster care, and the proportion of children who successfully achieve permanency. In addition, the court will need to determine how it will gather the data necessary to measure each team's progress in ensuring such outcomes or in meeting the requirements of ASFA, and the court has not yet established a baseline from which to judge its performance. The transition plan states that the court has determined that 15 judges are needed to carry out the duties of the court and that 12 judges have volunteered to serve on the court, but it does not address recruitment and the nomination of the three additional judges. Court experts stated that the court's analysis to identify the appropriate number of judges is based on best practices identified by highly credible national organizations and is, therefore, pragmatic and realistic. However, the plan provides calculations only for how the court determined that it needed 22 judges and magistrate judges to handle child abuse and neglect cases. The transition plan does not include a methodology for how the court determined that it needed a total of 32 judges and magistrate judges for its full caseload of child abuse and neglect cases as well as other family cases, such as divorce and child support, nor does it explain how anticipated increases in cases will be handled. In addition, the plan does not include a request that the Judicial Nomination Commission recruit and the president nominate the additional three individuals to serve on the Superior Court, as required by the Family Court Act. 
At a recent hearing on the court’s implementation of the Family Court Act, the chief judge of the Superior Court said that the court plans to submit its request in the fall of 2002. The Superior Court does not provide in the plan its determination of the number of nonjudicial staff needed. The court acknowledges that while it budgeted for a certain number of nonjudicial personnel based on current operating practices, determining the number of different types of personnel needed to operate the Family Court effectively is pending completion of a staffing study. Furthermore, the plan does not address the qualifications of the 12 judges who volunteered for the court. Although the plan states that these judges have agreed to serve full terms of service, according to the act, the chief judge of the Superior Court may not assign an individual to serve on the Family Court unless the individual also has training or expertise in family law and certifies that he or she will participate in the ongoing training programs conducted for judges of the Family Court. The act requires judges who had been serving in the Superior Court’s Family Division at the time of its enactment to serve for a term of not fewer than 3 years, and that the 3-year term shall be reduced by the length of time already served in the Family Division. Since the transition plan does not identify which of the 12 volunteers had already been serving in the Family Division prior to the act and the length of time they had already served, the minimum remaining term length for each volunteer cannot be determined from the plan. In commenting on this report, the Superior Court said it will provide information on each judge’s length of tenure in its first annual report to the Congress. The transition plan describes the duties of judges assigned to the Family Court, as required by the act. Specifically, the plan describes the roles of the designated presiding judge, the deputy presiding judge, and the magistrate judges. 
The plan states that the presiding and deputy presiding judges will handle the administrative functions of the Family Court, ensure the implementation of the alternative dispute resolution projects, oversee grant-funded projects, and serve as back-up judges to all Family Court judges. These judges will also have a post-disposition abuse and neglect caseload of more than 80 cases and will continue to consult and coordinate with other organizations (such as the child welfare agency), primarily by serving on 19 committees. One court expert has observed that the list of committees to which the judges are assigned seems overwhelming and said that strong leadership by the judges could result in consolidation of some of the committees’ efforts. The plan also describes the duties of the magistrate judges, but does not provide all the information required by the act. Magistrate judges will be responsible for initial hearings in new child abuse and neglect cases and the resolution of cases assigned to them by the Family Court judge to whose team they are assigned. They will also be assigned initial hearings in juvenile cases, noncomplex abuse and neglect trials, and the subsequent review and permanency hearings, as well as a variety of other matters related to domestic violence, paternity and support, mental competency, and other domestic relations cases. As noted previously, one court expert said that the proposed use of the magistrate judges would ease the burden on judicial resources by permitting these magistrate judges to make recommendations and decisions. However, although specifically required by the act, the transition plan does not state how the court determined the number of magistrate judges to be hired under the expedited process. In addition, while the act outlines the qualifications of magistrate judges, it does not specifically require a discussion of qualifications of the newly hired magistrate judges in the transition plan. 
As a result, no information was provided, and whether these magistrate judges meet the qualifications outlined in the act is unknown. In commenting on this report, the Superior Court said that it considered the following in determining how many magistrate judges should be hired under the expedited process: optimal caseload size, available courtroom and office space, and the safety and permanency of children. In addition, the court determined, based on its criteria, that 1,500 child abuse and neglect cases could be safely transferred to the Family Court during the initial transfer period and that a caseload of 300 cases each was appropriate for these judicial officers. As a result, the court appointed five magistrate judges on April 8, 2002. A discussion of how the court will provide initial and ongoing training for its judicial and nonjudicial staff is also not required by the act, although the court does include relevant information about training. For example, the plan states that the Family Court will develop and implement a quarterly training program for Family Court judges, magistrate judges, and staff covering a variety of topics and that it will promote and encourage participation in cross-training. In addition, the plan states that new judges and magistrate judges will participate in a 2- to 3-week intensive training program, although it does not provide details on the content of such training for the five magistrate judges hired under the expedited process, even though they were scheduled to begin working at the court on April 8, 2002. One court expert said that a standard curriculum for all court-related staff and judicial officers should be developed and that judges should have manuals available outlining procedures for all categories of cases. In commenting on a draft of this report, the Superior Court said that the court has long had such manuals for judges serving in each division of the court. 
In our report on human capital, we said that an explicit link between an organization's training offerings and curricula and the competencies the organization has identified for mission accomplishment is essential. Organization leaders can show their commitment to strategic human capital management by investing in professional development and mentoring programs that can also assist in meeting specific performance needs. These programs can include opportunities for a combination of formal and on-the-job training, individual development plans, and periodic formal assessments. Likewise, organizations should make fact-based determinations of the impact of their training and development programs to provide feedback for continuous improvement and to ensure that these programs improve performance and help achieve organizational results. In commenting on this report, the Superior Court said that—although not included in the plan—it has an extensive training curriculum that will be fine-tuned prior to future training sessions. Two factors are critical to fully transitioning to the Family Court in a timely and effective manner: obtaining and renovating appropriate space for all new Family Court personnel and developing and installing a new automated information system, currently planned as part of the D.C. Courts IJIS system. The court acknowledges that its implementation plans may be slowed if appropriate space cannot be obtained in a timely manner. For example, the plan addresses how the abuse and neglect cases currently being heard by judges in other divisions of the Superior Court will be transferred to the Family Court but states that the complete transfer of cases hinges on the court's ability to hire, train, and provide appropriate space for additional judges and magistrate judges. 
In addition, the Family Court’s current reliance on nonintegrated automated information systems that do not fully support planned court operations, such as the one family/one judge approach to case management, constrains its transition to a Family Court. The transition plan states that the interim space plan carries a number of project risks. These include a very aggressive implementation schedule and a design that makes each part of the plan interdependent with other parts of the plan. The transition plan further states that the desired results cannot be reached if each plan increment does not take place in a timely fashion. For example, obtaining and renovating the almost 30,000 occupiable square feet of new court space needed requires a complex series of interrelated steps—from moving current tenants in some buildings to temporary space, to renovating the John Marshall level of the H. Carl Moultrie Courthouse by July 2003. The Family Court of the Superior Court is currently housed in the H. Carl Moultrie Courthouse, and interim plans call for expanding and renovating additional space in this courthouse to accommodate the additional judges, magistrate judges, and staff who will help implement the Family Court Act. The court estimates that accommodating these personnel requires an additional 29,700 occupiable square feet, plus an undetermined amount for security and other amenities. Obtaining this space will require nonrelated D.C. Court entities to vacate space to allow for renovations, as well as require tenants in other buildings to move in order to house the staff who have been displaced. The plan calls for renovations under tight deadlines, and all required space may not be available, as currently planned, to support the additional judges the Family Court needs to perform its work in accordance with the act, making it uncertain as to when the court can fully complete its transition. For example, D.C. 
Courts recommends that a portion of the John Marshall level of the H. Carl Moultrie Courthouse, currently occupied by civil court functions, be vacated and redesigned for the new courtrooms and court-related support facilities. Although some space is available on the fourth floor of the courthouse for the four magistrate judges to be hired by December 2002, renovations to the John Marshall level are tentatively scheduled for completion in July 2003—2 months after the court anticipates having three additional Family Court judges on board. Another D.C. Courts building—Building B—would be partially vacated by non-Court tenants and altered for use by displaced civil courts functions and other units temporarily displaced in future renovations. Renovations to Building B are scheduled to be complete by August 2002. Space for 30 additional Family Court-related staff, approximately 3,300 occupiable square feet, would be created in the H. Carl Moultrie Courthouse in an as yet undetermined location. Moreover, the Family Court's plan for acquiring additional space does not include alternatives that the court will pursue if its current plans for renovating space encounter delays or problems that could prevent it from using targeted space. The Family Court Act calls for an integrated information technology system to support the goals it outlines, but a number of factors significantly increase the risks associated with attaining this goal, as we reported in February 2002. For example, the D.C. Courts had not yet implemented the disciplined processes necessary to reduce the risks associated with acquiring and managing IJIS to acceptable levels. A disciplined software development and acquisition effort maximizes the likelihood of achieving the intended results (performance) on schedule using available resources (costs). 
The requirements contained in a draft Request for Proposal (RFP) for the information system lacked the necessary specificity to ensure that any defects in these requirements had been reduced to acceptable levels and that the system would meet its users' needs. Studies have shown that problems associated with requirements definition are key factors in software projects that do not meet their cost, schedule, and performance goals. In addition, the requirements contained in the draft RFP did not directly relate to industry standards. As a result, inadequate information was available for prospective vendors and others to readily map systems built upon these standards to the needs of the D.C. Courts. Prior to issuing our February 2002 report, we discussed our findings with D.C. Courts officials, who generally concurred with them. The officials said that the D.C. Courts would not go forward with the project until the necessary actions had been taken to reduce the risks associated with developing the new information system. In our report, we made several recommendations designed to reduce the risks. In April 2002, we met with D.C. Courts officials to discuss the actions taken on our recommendations and found that significant actions have been initiated that, if properly implemented, will help reduce the risks associated with developing the new system. For example, D.C. Courts is beginning the work to provide the needed specificity for its system requirements. This includes soliciting requirements from the users and ensuring that the requirements are properly sourced (e.g., traced back to their origin). According to D.C. Courts officials, this work has identified significant deficiencies in the original requirements that we discussed in our February report. These deficiencies relate to new tasks D.C. Courts must undertake. For example, the Family Court Act requires D.C. Courts to interface IJIS with several other District government computer systems. 
These tasks were not within the scope of the original requirements that we reported on in our February 2002 report. The D.C. Courts is also issuing a Request for Information to obtain additional information on commercial products that should be considered during its acquisitions. This helps the requirements management process by identifying requirements that are not supported by commercial products so that the D.C. Courts can reevaluate whether it needs to (1) keep the requirement or revise it to be in greater conformance with industry practices or (2) undertake a development effort to achieve the needed capability. In addition, the D.C. Courts is developing a systems engineering life-cycle process for managing its information technology efforts. This process will help define the activities that should be performed from the time a system is conceived until it is no longer needed; examples include requirements development, testing, and implementation. The D.C. Courts is also developing policies and procedures that will help ensure that its information technology investments comply with the requirements of the Clinger-Cohen Act of 1996 (P.L. 104-106), as well as the processes that will enable it to achieve a level 2 rating—meaning that basic project management processes are established to track performance, cost, and schedule—on the Software Engineering Institute's Capability Maturity Model. In addition, D.C. Courts officials told us that they are developing a program modification plan that will allow the use of the existing (legacy) systems while the IJIS project proceeds. Although the officials recognize that maintaining two systems concurrently is expensive and requires additional resources, such as staff and training, they believe that the legacy systems are needed to mitigate the risk associated with any delays in system implementation. Although these are positive steps forward, D.C. 
Courts still faces many challenges in its efforts to develop an IJIS system that will meet its needs and fulfill the goals established by the act. The following sections discuss these challenges.

Ensuring that the Systems Interfacing with IJIS Do Not Become the Weak Link

The Family Court Act calls for effectively interfacing information technology systems operated by the District government with IJIS. According to D.C. Courts officials, at least 14 District government systems will need to interface with IJIS. However, several of our reviews have noted problems in the District's ability to develop, acquire, and implement new systems. The District's difficulties in effectively managing its information technology investments could lead to adverse impacts on the IJIS system. For example, the interface systems may not be able to provide the quality of data necessary to fully utilize IJIS's capabilities or provide the necessary data to support IJIS's needs. The D.C. Courts will need to ensure that adequate controls and processes have been implemented to mitigate the potential impacts associated with these risks.

Effectively Implementing the Disciplined Processes Needed to Reduce the Risks Associated with IJIS

The key to having a disciplined effort is to have disciplined processes in multiple areas. This is a complex task and will require the D.C. Courts to maintain its management commitment to implementing the necessary processes. In our February 2002 report, we highlighted several processes, such as requirements management, risk management, and testing, that appeared critical to the development of IJIS.

Ensuring that the Requirements Used to Acquire IJIS Contain the Necessary Specificity to Reduce Requirement-Related Defects to Acceptable Levels

Although D.C. 
Courts officials have said that they are adopting a requirements management process that will address the concerns expressed in our February 2002 report, maintaining such a process will require management commitment and discipline.

Ensuring that Users Receive Adequate Training

As with any new system, adequately training the users is critical to its success. As we reported in April 2001, one problem that hindered the implementation of the District's financial management system was the District's difficulty in adequately training the users of the system. In commenting on this report, the Superior Court said that $800,000 has been budgeted for staff training during the 3 years of implementation.

According to D.C. Courts officials, the Family Court Act establishes ambitious timeframes to convert to a family court. Although schedules are important, it is critical that the D.C. Courts follow an event-driven acquisition and development program rather than adopting a schedule-driven approach. Organizations that are schedule-driven tend to reduce or inadequately complete activities such as business process reengineering and requirements analysis. These tasks are frequently not considered "important" because many people view "getting the application in the hands of the user" as one of the more productive activities. However, the results of this approach are very predictable: projects that do not perform planning and requirements functions well typically have to redo that work later, and the costs associated with delaying the critical planning and requirements activities are anywhere from 10 to 100 times the cost of doing the work correctly in the first place. With respect to requirements, court experts report that effective technological support is critical to effective family court case management. 
One expert said that, at a minimum, the system should include the (1) identification of parties and their relationships; (2) tracking of case processing events through on-line inquiry; (3) generation of orders, forms, summons, and notices; and (4) production of statistical reports. The State Justice Institute's report on how courts are coordinating family cases states that automated information systems, programmed to inform a court system of a family's prior cases, are a vital ingredient of case coordination efforts. The National Council of Juvenile and Family Court Judges echoes these findings by stating that effective management systems (1) have standard procedures for collecting data; (2) collect data about individual cases, each judge's aggregate caseload, and the systemwide caseload; (3) assign an individual the responsibility of monitoring case processing; and (4) are user friendly. While anticipating technological enhancements through IJIS, Superior Court officials said that the current information systems do not have the functionality required to implement the Family Court's one family/one judge case management principle. In providing technical clarifications on a draft of this report, the Superior Court reiterated a statement that the presiding judge of the Family Court made at the April 24, 2002, hearing. The presiding judge said that the Family Court is currently implementing the one family/one judge principle but that existing court technology is cumbersome to use in identifying family and other household members. Nonetheless, staff are using the different databases, forms, intake interviews, questions from the bench, and other nontechnological means of identifying related cases within the Family Court. Overall, even though some important issues are not discussed, the Superior Court's transition plan represents a good effort at outlining the steps the court will take to implement a Family Court. 
While the court has taken important steps to achieve efficient and effective operations, it still must address several statutory requirements included in the Family Court Act to achieve full compliance with the act. In addition, opportunities exist for the court to adopt other beneficial practices to help ensure that it improves the timeliness of decisions in accordance with ASFA, that judges and magistrate judges are fully trained, and that case information is readily available to aid them in their decision making. Given the complex series of events that must occur in a timely way to achieve optimal implementation of the Family Court, the court recognizes that its plan for obtaining and renovating needed physical space warrants close attention to reduce the risk of project delays. In addition, the court has initiated important steps that begin to address many of the shortcomings we identified in our February 2002 report on its proposed information system. The effect of these actions will not be known for some time. The court's actions reflect its recognition that developing an automated information system for the Family Court will play a pivotal role in the court's ability to implement its improved case management framework. By following through on the steps it has begun to take and by evaluating its performance over time, the court may improve its implementation of the Family Court Act and provide a sound basis for assessing the extent to which it achieves desired outcomes for children. To help ensure that the District of Columbia Superior Court complies with all statutory requirements contained in the District of Columbia Family Court Act, we recommend that the chief judge of the District of Columbia Superior Court supplement the court's transition plan by providing the following information: A determination of the number of nonjudicial staff needed for the Family Court when the staffing study is complete. 
A determination of the number of individuals identified in the transition plan to serve on the Family Court who meet the qualifications for judges on the Family Court. An analysis of how the Family Court identified the number of magistrate judges needed under the expedited appointment procedures. While not required by the Family Court Act to be included in the Family Court's transition plan, the practices of courts in other jurisdictions, if fully adopted, could optimize the court's performance. Toward achieving more efficient and effective operations, we recommend that the chief judge of the Superior Court of the District of Columbia consider identifying performance measures to track progress toward positive outcomes for the children and families the Family Court serves. We obtained comments on a draft of this report from the chief judge of the Superior Court. These comments are reproduced in appendix I. The court also provided technical clarifications, which we incorporated when appropriate. The Superior Court generally agreed with the findings of our report and concurred with our recommendations. Regarding our recommendation on the number of nonjudicial staff needed for the Family Court, the Superior Court said that the results of the staffing study will be available shortly and will assist the Family Court in finalizing its staffing request. With regard to providing a determination of the number of individuals identified in the plan who meet the qualifications for judges on the Family Court, the Superior Court said that assignments are based on the judges' expressed preferences, an evaluation of judicial competencies, and the court's needs. The court also said that the chief judge had determined that all 12 Family Court judges were qualified, either through experience or training, or both, to serve on the Family Court. 
Regarding our recommendation that the Superior Court provide its analysis of how the Family Court identified the number of magistrate judges needed under the expedited appointment procedures, the Superior Court provided an explanation that we incorporated in this report. In commenting on the need to develop a training plan, the court said that it has developed training programs that are closely aligned with the mission, goals, and objectives of the Family Court. Therefore, we deleted this recommendation in our final report. Finally, regarding the development of outcome measures, the court said that it will include information on child-related outcomes and agrees that this type of information would contribute to a greater understanding of how children and families before the court are faring. The Superior Court also commented that the presiding judge of the Family Court, in consultation with the chief judge of the Superior Court, is responsible for implementation of all aspects of the Family Court Act. In addition, the court said that, while it has not yet completed its development of baseline data for all components of the Family Court, it has data in two critical areas: case processing times for abuse and neglect cases prior to the implementation of ASFA and after its implementation. We are sending copies of this report to the Office of Management and Budget; the Subcommittee on Oversight of Government Management, Restructuring, and the District of Columbia, Senate Committee on Governmental Affairs; and the Subcommittee on the District of Columbia, House Committee on Government Reform. We are also sending copies to the Joint Committee on Judicial Administration in the District of Columbia, the chief judge of the Superior Court of the District of Columbia, the presiding judge of the Family Court of the Superior Court of the District of Columbia, and the executive director of the District of Columbia Courts. 
Copies of this report will also be made available to others upon request. If you have any questions about this report, please contact me on (202) 512-8403. Other contacts and staff acknowledgments are listed in appendix II. The following individuals made important contributions to this report: Steven J. Berke, Richard Burkard, William Doherty, Nila Garces-Osorio, John C. Martin, Susan Ragland, James Rebbe, and Norma Samuel.

DC Family Court: Progress Made Toward Planned Transition, but Some Challenges Remain. Washington, D.C.: 2002.
DC Courts: Disciplined Processes Critical to Successful System Acquisition. Washington, D.C.: 2002.
District of Columbia: Weaknesses in Financial Management System Implementation. Washington, D.C.: 2001.
District of Columbia Child Welfare: Long-Term Challenges to Ensuring Children's Well-Being. Washington, D.C.: 2000.
Foster Care: Status of the District of Columbia's Child Welfare System Reform Efforts. Washington, D.C.: 2000.
Foster Care: States' Early Experiences Implementing the Adoption and Safe Families Act. Washington, D.C.: 2000.
Human Capital: A Self-Assessment Checklist for Agency Leaders. Washington, D.C.: 2000.
D.C. Courts: Staffing Level Determination Could Be More Rigorous. Washington, D.C.: 1999.
District of Columbia: The District Has Not Adequately Planned for and Managed Its New Personnel and Payroll System. Washington, D.C.: 1999.
Management Reform: Elements of Successful Improvement Efforts. Washington, D.C.: 1999.
District of Columbia: Software Acquisition Processes for a New Financial Management System. Washington, D.C.: 1998.
The District of Columbia Family Court Act of 2001 was enacted to (1) redesignate the Family Division of the Superior Court as the Family Court of the Superior Court, (2) recruit trained and experienced judges to serve in the Family Court, and (3) promote consistency and efficiency in the assignment of judges and in the court's actions and proceedings. The act requires the chief judge of the Superior Court to submit a transition plan outlining the proposed operation of the Family Court. The plan shows that the Superior Court has made progress transitioning its Family Division to a Family Court, but challenges remain. The transition plan addresses most, but not all, of the act's required elements. For example, the plan identifies the number of judges and magistrate judges needed and outlines an approach for closing or transferring cases from other divisions to the Family Court. However, the plan does not include (1) a request that the Judicial Nomination Commission recruit and the president nominate the additional judges the court believes are necessary, (2) the number of nonjudicial staff needed for the Family Court, (3) information on the qualifications of the judges selected for the court, and (4) information on how the court determined the number of magistrate judges to hire under the expedited process provided for in the act. Although not specifically required by the act, the plan includes information on performance management and enumerates performance measures that are oriented more toward the court's processes than toward outcomes. Measures that focus on outcomes for children and families could help to optimize the court's performance. This testimony is based on a May 2002 report (GAO-02-534).
In recent years, federal agencies, the Advisory Committee on Human Radiation Experiments, GAO, and others have documented hundreds of secret, intentional government releases of radiation and other pollutants into the environment in connection with the Cold War. The releases occurred in the years after World War II at locations around the country, including Tennessee, New Mexico, Washington, Alaska, and Utah. (See app. I.) Such releases typically occurred at remote federal installations, in an era when there was little federal or state environmental regulation of such activities. Today, an extensive environmental oversight framework is in place. In accordance with NEPA, the Council on Environmental Quality’s (CEQ) implementing regulations, and the Clean Air Act, EPA shares with CEQ the responsibility for overseeing federal agencies’ environmental planning, including their classified planning. For example, under NEPA, federal agencies must assess the environmental impact of major federal actions significantly affecting the environment before they proceed and must submit environmental impact statements (EIS) for review by the public and other federal agencies; EPA is supposed to review these EISs, including those portions containing classified information. CEQ, within the Executive Office of the President, conducts administrative oversight of agencies’ NEPA programs. NEPA also places public disclosure requirements on agencies. However, NEPA and its implementing regulations allow agencies to avoid public disclosure of classified proposals in the interest of national security. NEPA still requires agencies to prepare EISs and other NEPA assessments for classified actions, but CEQ regulations allow agencies to segregate information from public oversight in fully classified EIS documents or appendixes. 
Federal agencies are also subject to the requirements of federal pollution control laws, such as the Clean Water Act, the Clean Air Act, and the Resource Conservation and Recovery Act (RCRA). EPA has a mandate to oversee the enforcement of the environmental laws at federal facilities, including those that conduct highly classified research operations. EPA’s Office of Federal Facilities Enforcement is the agency’s focal point for enforcement, including developing strategies and participating in enforcement oversight and litigation. EPA has some resources for inspecting highly classified facilities and storing classified documents, including headquarters and field personnel with the appropriate security clearances. Under some laws, such as the Clean Water Act and RCRA, EPA can authorize states to carry out their own programs under these laws if they meet certain requirements. Whether EPA or a state acts as the regulatory authority, federal agencies with facilities that are releasing pollutants into the environment must obtain required permits and are subject to inspections and enforcement actions. Radioactive materials regulated under the Atomic Energy Act are exempt from RCRA and the Clean Water Act; DOE regulates these materials under its Atomic Energy Act authority. Over the years, we have issued numerous reports addressing how well various EPA, DOE, and DOD programs implement this framework. (See app. II.) We found that although EPA was given many additional pollution prevention, control, abatement, and enforcement initiatives, its budget for carrying out these activities did not keep pace with the increased responsibilities. The Advisory Committee’s report therefore recommended that (1) an independent panel review planned secret environmental releases and (2) EPA permanently keep key documents related to its environmental oversight of classified programs and report periodically to the Congress on its oversight of such programs. 
A February 1996 draft response by the Human Radiation Interagency Working Group questions the need for the recommended independent review panel but agrees that EPA should keep permanent files of key environmental documents. EPA has responsibilities for overseeing federal facilities’ activities, including classified federal research planning and operations. However, the agency’s capability to conduct such oversight is limited. In large measure, under NEPA and other laws, EPA relies on the agencies themselves to have their own internal environmental monitoring programs. In part because of secrecy requirements, EPA is especially dependent on the cooperation of agencies in identifying their facilities and activities and reporting on the environmental impacts of their classified research planning and operations. EPA’s Office of Federal Activities reviews hundreds of EISs each year, but according to activities office staff, only a tiny fraction of these—perhaps two or three a year—are either partially or fully classified. According to EPA, classified EISs are submitted almost exclusively by DOE and DOD. The activities office has two people with high-level clearances who review these classified EISs. EPA does not keep records of classified EISs that have been sent to it for review and does not store them, although it does have some classified storage capability. Classified EISs are stored at the agencies themselves. Officials in EPA’s activities office said there is little incentive to establish such recordkeeping or more such storage at EPA because classified EIS submittals are rare. Neither EPA nor CEQ has the responsibility or the resources to closely monitor and direct the EIS submittal process. Agencies are required to submit unclassified and classified EISs for EPA’s review, but according to activities office officials, EPA is not charged with conducting outreach to ensure that all such EISs are submitted. 
Also, EPA is not responsible for reviewing the thousands of other lower level environmental planning documents—such as environmental assessments—which agencies generate each year; its review is limited to EISs, which are required for “major” actions only. As a result, EPA activities office staff said their overview of agencies’ internal NEPA planning is very limited. According to EPA records and activities office officials, historically some agencies have not been sending EISs to EPA for review, either classified or unclassified, as required. Such agencies include the Central Intelligence Agency (CIA), the National Security Agency (NSA), and the Defense Intelligence Agency. According to EPA officials who have been assigned the responsibility to review EISs for the CIA and NSA over the past several years, they have not had contact with these agencies concerning EISs and do not know who these agencies’ liaisons are for NEPA matters. Furthermore, environmental compliance officials within the agencies may not be reviewing all classified research activities. According to a responsible Air Force NEPA compliance official, although his office is charged with reviewing classified EISs internally, historically the office has rarely received such documents for review. He said his office may not have a need-to-know for all such documents. He also could not recall his office receiving for review any unclassified or classified NEPA documents prepared for proposed projects at the classified Air Force operating location near Groom Lake, Nevada. Agencies may conduct environmental planning secretly, and a proposed action may proceed without prior public comment. For example, in 1994, the government conducted Project Sapphire, a classified nuclear nonproliferation action that transferred highly enriched uranium from Kazakhstan in the former Soviet Union to storage at Oak Ridge, Tennessee. 
DOE conducted internal NEPA planning for Project Sapphire in the form of a detailed classified environmental assessment, but because it was an environmental assessment and not an EIS, EPA was not required to review the assessment and prior public comment was not possible for national security reasons. The public was fully apprised of the Project Sapphire environmental assessment after the uranium transfer was completed. According to EPA headquarters and regional enforcement officials, EPA and the states have been conducting enforcement activities at known classified federal research facilities, but management oversight of such enforcement has not been systematic. According to EPA, known facilities are inspected and required through EPA and/or state oversight to comply with environmental laws. However, neither EPA headquarters nor its regions have complete inventories of all classified federal facilities subject to environmental requirements, either nationally or at a regional level. Instead, EPA headquarters and field enforcement officials said they depend on agencies to report the existence of their classified facilities, to report environmental monitoring data, and to cooperate with EPA and authorized states in assuring that such facilities are in compliance. They said they receive a degree of cooperation at known DOE and DOD classified facilities but are constrained by secrecy and need-to-know considerations. When they receive cooperation, they conduct appropriate field enforcement activities. In this regard, an ongoing lawsuit by former employees at an Air Force facility near Groom Lake, Nevada, alleged violations of RCRA, including EPA’s failure to conduct a RCRA inspection there. EPA has affirmed that its field inspectors conducted an inspection of the location pursuant to RCRA from December 1994 to March 1995. In August 1995, the U.S. 
District Court for the District of Nevada ruled that the plaintiffs’ objectives in bringing the suit had been accomplished, in that EPA had performed its duties under RCRA to inspect and inventory the site. In May 1995, EPA and the Air Force affirmed by a memorandum of agreement that EPA will continue to have access at the Groom Lake facility for purposes of administering the environmental laws and that the Air Force is committed to complying with RCRA at the location. The details of the issues resulting in the agreement are classified. According to the director of EPA’s Office of Federal Facilities Enforcement, EPA is fulfilling its oversight responsibility at the facility. However, he said he was uncertain of the extent to which other such highly classified federal facilities—or areas within facilities—may exist and whether their research operations are in environmental compliance. According to the director of federal facilities enforcement, the degree of EPA’s involvement in classified activities may broaden in the future. The agency is currently working with the Air Force on a broader memorandum of agreement applicable to all classified Air Force facilities. Also, the director said that EPA held a meeting in 1995 with other agencies, including intelligence agencies, concerning further possible memorandums of agreement similar to the one signed with the Air Force for Groom Lake. Also, EPA, in conjunction with agencies that have highly classified programs, is working on procedures for improved environmental regulation at classified installations. Nevertheless, it is not clear that EPA will have the resources to oversee additional environmental compliance by any federal facilities. EPA’s Office of Federal Facilities Enforcement is currently responsible for overseeing the cleanup of the 154 federal sites included in the National Priorities List under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA). 
EPA has stated that it has the resources to oversee federal facilities’ overall environmental management and compliance, but few additional resources for greater oversight of classified facilities. Although federal environmental laws allow the President to provide exemptions from environmental requirements in cases involving the paramount interest of the U.S. or in the interest of national security, federal agencies appear to have rarely sought these exemptions. In more than 15 years, we found only two cases of federal agencies obtaining presidential exemptions from environmental laws. While it is possible that exemptions were sought and obtained in secret, those with whom we spoke, including an official of the National Security Council, generally indicated they did not know of any such exemptions. Under NEPA, numerous less formal special arrangements have been obtained through emergency agreements with CEQ. Presidential exemption provisions are contained in some environmental laws, including the Clean Water Act, the Clean Air Act, RCRA, the Safe Drinking Water Act, CERCLA, and the Noise Control Act. These provisions differ in detail but generally provide that the President can declare a facility or activity exempt from applicable environmental standards. Depending on the law, the President may do so in the paramount interest of the nation or in the interest of national security. A presidential exemption can suspend the applicable pollution standards in the laws for whole facilities or specific sources of pollution. Generally, exemptions are for 1 to 2 years, may be renewed indefinitely, and must be reported to the Congress. Executive Order 12088 gives agencies guidance on complying with the laws and contains implementation procedures. Generally, the head of an executive agency may recommend to the President, through the Director of the Office of Management and Budget (OMB), that an activity or facility be exempt from an applicable pollution control standard. 
According to an EPA official, the exemption mechanism is a “last resort” for agencies that may not be able to comply with environmental laws. We found only two cases in which federal facilities have been exempted by the President from compliance with environmental laws. Responsible officials at several agencies and in the Executive Office of the President were aware of only these two exemptions: In October 1980, President Carter exempted Fort Allen in Puerto Rico from applicable sections of four environmental statutes—the Clean Water Act, the Clean Air Act, the Noise Control Act, and RCRA. The exemption was determined to be in the paramount interest of the U.S., allowing time for the relocation of thousands of Cuban and Haitian refugees to the fort from Florida. The exemption was renewed once, in October 1981, by President Reagan. In September 1995, President Clinton exempted the Air Force’s classified facility near Groom Lake, Nevada, from the public disclosure provisions of RCRA, determining that the exemption was in the paramount interest of the United States. According to OMB and the National Security Council (NSC), the most recent exemption was routed through NSC for presidential attention, not through OMB as provided in Executive Order 12088. NEPA does not contain explicit exemption provisions related to paramount national interest or national security. The CEQ regulations implementing NEPA permit special arrangements when NEPA’s procedures might impede urgent agency actions. According to CEQ’s records, there have been at least 22 instances of emergency NEPA agreements between an agency and CEQ, usually for reasons of time criticality. 
Three of these recorded emergency arrangements concerned national policy or national security issues: In 1991, the Air Force and CEQ agreed to alternative measures instead of a written EIS—including noise abatement steps—so that aircraft launches from Westover Air Force Base, Massachusetts, toward the Persian Gulf could proceed in a timely manner. In 1991, the Air Force and CEQ agreed that an EIS was not required before conducting a Desert-Storm-related test of aerial deactivation of land mines at the Tonopah Range in Nevada. In 1993, DOE and CEQ agreed on alternative NEPA arrangements for U.S. acceptance of spent nuclear fuel from a reactor in Belgium. Subsequently, Belgium declined the U.S. offer of acceptance. This concludes our testimony. We would be pleased to respond to any questions you or other Members of the Committee may have.

Nuclear Waste: Management and Technical Problems Continue to Delay Characterizing Hanford’s Tank Waste (GAO/RCED-96-56, Jan. 26, 1996).
Department of Energy: Savings From Deactivating Facilities Can Be Better Estimated (GAO/RCED-95-183, July 7, 1995).
Department of Energy: National Priorities Needed for Meeting Environmental Agreement (GAO/RCED-95-1, Mar. 3, 1995).
Nuclear Cleanup: Difficulties in Coordinating Activities Under Two Environmental Laws (GAO/RCED-95-66, Dec. 22, 1994).
Environment: DOD’s New Environmental Security Strategy Faces Barriers (GAO/NSIAD-94-142, Sept. 30, 1994).
Nuclear Health and Safety: Consensus on Acceptable Radiation Risk to the Public Is Lacking (GAO/RCED-94-190, Sept. 19, 1994).
Environmental Cleanup: Better Data Needed for Radioactively Contaminated Defense Sites (GAO/NSIAD-94-168, Aug. 24, 1994).
Environmental Cleanup: Too Many High Priority Sites Impede DOD’s Program (GAO/NSIAD-94-133, Apr. 21, 1994).
Federal Facilities: Agencies Slow to Define the Scope and Cost of Hazardous Waste Site Cleanups (GAO/RCED-94-73, Apr. 15, 1994). 
Pollution Prevention: EPA Should Reexamine the Objectives and Sustainability of State Programs (GAO/PEMD-94-8, Jan. 25, 1994).
Air Pollution: Progress and Problems in Implementing Selected Aspects of the Clean Air Act Amendments of 1990 (GAO/T-RCED-94-68, Oct. 29, 1993).
Environmental Enforcement: EPA Cannot Ensure the Accuracy of Self-Reported Compliance Monitoring Data (GAO/RCED-93-21, Mar. 31, 1993).
Environmental Enforcement: Alternative Enforcement Organizations for EPA (GAO/RCED-92-107, Apr. 14, 1992).
Environmental Enforcement: EPA Needs a Better Strategy to Manage Its Cross-Media Information (GAO/IMTEC-92-14, Apr. 2, 1992).
GAO discussed its review of the Environmental Protection Agency's (EPA) capability to conduct environmental oversight of classified federal research. GAO noted that: (1) EPA conducts limited oversight of classified federal research, primarily relying on agencies' internal environmental monitoring programs; (2) although agencies are required to submit environmental impact statements (EISs) to EPA for review, EPA does not ensure that agencies submit all EISs, nor does it know the liaisons for some agencies' environmental issues; (3) environmental compliance officials within agencies may not be reviewing all classified research activities; (4) EPA conducts environmental enforcement activities at known classified federal facilities when the agencies cooperate, but it does not have a complete inventory of all facilities and is sometimes hindered by secrecy and need-to-know considerations; (5) while it is possible that federal agencies have secretly sought exemptions from environmental requirements, it appears that they have rarely sought such exemptions; and (6) agencies have occasionally sought special emergency arrangements concerning environmental standards because of national security concerns.
Under the United States Housing Act of 1937, as amended, Congress created the federal public housing program to assist communities in providing decent, safe, and sanitary dwellings for low-income families. Today, more than 4,100 public housing agencies provide housing for low-income households. Over 3,100 agencies operate low-rent or a combination of low-rent and tenant-based Section 8 units, and about 1,000 provide housing through tenant-based Section 8 units only. Public housing agencies are typically municipal, county, or state agencies created under state law to develop and manage public housing units for low-income families. Housing agencies that participate in the low-rent program contract with HUD to provide housing in exchange for federal grants and subsidies. HUD provides funding to agencies to operate and repair low-rent units through the Operating Fund and the Capital Fund. The Operating Fund provides annual subsidies to housing agencies to make up the difference between the amount they collect in rent and the cost of operating the units. The Capital Fund provides grants to public housing agencies for the major repair and modernization of the units. Under the tenant-based Section 8 program, eligible households select their own units in the private housing market and receive subsidies to cover part of the rent. Public housing agencies that participate in the tenant-based Section 8 program enter into contracts with HUD and receive HUD funds to provide rent subsidies to the owners of private housing on behalf of the assisted households. Fiscal year 2000 was the first year that public housing agencies were required to submit a five-year plan and an annual plan. This requirement applies only to public housing agencies that receive HUD funds to provide housing under the low-rent or tenant-based Section 8 programs. The five-year plan describes the agency's mission and its long-range goals and objectives for achieving its mission over the subsequent 5 years. 
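The Operating Fund subsidy described above is a simple shortfall calculation. As an illustrative sketch only (the function name, the dollar figures, and the clamp at zero for agencies whose rent covers their costs are our assumptions, not taken from the report):

```python
def operating_subsidy(operating_cost: float, rent_collected: float) -> float:
    """Annual Operating Fund subsidy: the difference between the cost of
    operating the units and the rent the agency collects.

    Clamping at zero (no subsidy when rent covers costs) is an assumption
    for illustration, not a rule stated in the report."""
    return max(operating_cost - rent_collected, 0.0)

# Hypothetical agency: $1.0 million to operate its units, $600,000 collected in rent.
subsidy = operating_subsidy(1_000_000, 600_000)  # 400000.0
```

The Capital Fund, by contrast, is distributed as grants for major repair and modernization rather than computed from an operating shortfall.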
The annual plan details the agency's immediate objectives and strategies for achieving these goals, as well as the agency's policies and procedures. For agencies that manage low-rent units, the annual plan also serves as the application for the capital fund and public housing drug elimination grant programs. HUD distributes these grants on a formula basis. The Public Housing Reform Act sets forth requirements governing the submission, review, and approval of agency plans. Plans must be submitted to HUD 75 days before the start of the agency's fiscal year. In addition, the plans are to be developed by the public housing agency in consultation with a resident advisory board and be consistent with other HUD-required community planning documents. Public housing agencies are also required to hold a public hearing on the plans and to address comments received during the hearing before submitting the plans to HUD. HUD, in turn, must review submitted plans to determine that they contain the information required by the act, agree with information from other data sources available to HUD, such as community planning documents, and comply with other applicable laws. HUD must issue a written notice either approving or disapproving the plans within 75 days of its receipt of the plans. If HUD does not meet this deadline, the plans are considered approved. For fiscal year 2000, 4,055 required plans had been submitted to and approved by HUD, and 89 required plans had not been approved. The 89 unapproved plans were in varying stages: 53 plans had not been submitted; 34 plans had been submitted, disapproved due to cited deficiencies, and not yet resubmitted with the deficiencies corrected; and 2 plans were in the process of being reviewed by HUD. Of the housing agencies that should have had approved plans but did not, 76 provide housing through tenant-based Section 8 units only. The remaining 13 manage low-rent units only or a combination of low-rent and tenant-based Section 8 units. 
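The submission and review deadlines described above reduce to simple date arithmetic. A minimal sketch (the function names are ours; the two 75-day windows and the default-approval rule are as summarized from the act above):

```python
from datetime import date, timedelta
from typing import Optional

SUBMISSION_LEAD_DAYS = 75  # plans due 75 days before the agency's fiscal year starts
REVIEW_WINDOW_DAYS = 75    # HUD must issue a written notice within 75 days of receipt


def submission_deadline(fiscal_year_start: date) -> date:
    """Latest date an agency may submit its plans to HUD."""
    return fiscal_year_start - timedelta(days=SUBMISSION_LEAD_DAYS)


def plan_status(received: date, notice_issued: Optional[date], today: date) -> str:
    """Status of a submitted plan: if HUD issues no written notice within
    the review window, the plan is considered approved."""
    review_deadline = received + timedelta(days=REVIEW_WINDOW_DAYS)
    if notice_issued is not None and notice_issued <= review_deadline:
        return "decided by HUD"
    if today > review_deadline:
        return "approved by default"
    return "under review"
```

For example, an agency whose fiscal year begins October 1, 2000, would owe its plans by July 18, 2000, and a plan HUD received but never acted on would be deemed approved once 75 days had elapsed.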
HUD is considering sanctions against all public housing agencies that do not have approved fiscal year 2000 agency plans. Since agencies that manage low-rent units use the annual plan as the application for their capital fund and public housing drug elimination formula grants, HUD does not plan to release the fiscal year 2000 formula grants to agencies without approved plans. Although these grant funds have been committed to the agencies based on the formula allocation, the funds have not been released to agencies without approved fiscal year 2000 plans and are not available for those agencies’ use. According to a HUD official, any agency that manages low-rent units and did not submit its annual plan to HUD by September 30, 2001, may lose its capital fund and public housing drug elimination program formula grants for fiscal year 2000. Fourteen public housing agencies may lose about $2.6 million in fiscal year 2000 capital fund grants and one of these agencies may also lose a $39,426 public housing drug elimination program grant. HUD is considering a similar sanction for those public housing agencies that administer only tenant-based Section 8 units and do not have approved fiscal year 2000 plans. While tenant-based Section 8-only agencies make up 24 percent of all housing agencies, they represent 85 percent of agencies without approved plans. For these agencies, HUD could withhold a portion of the administrative fees these public housing agencies receive for managing the tenant-based Section 8 program. In addition, HUD requires these public housing agencies to have approved fiscal year 2000 plans to be eligible for additional Section 8 vouchers in fiscal year 2002. The majority of HUD field locations reported that they experienced some problems with the fiscal year 2000 plan review process but were able to complete almost all reviews. Some of these problems were addressed in the fiscal year 2001 process. 
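The disproportion noted above, that Section 8-only agencies are 24 percent of all agencies but 85 percent of those without approved plans, follows directly from the counts reported earlier. A quick arithmetic check using the report's rounded figures:

```python
# Rounded counts taken from the report text.
total_agencies = 4100           # "more than 4,100 public housing agencies"
section8_only = 1000            # "about 1,000 provide housing through ... Section 8 units only"
unapproved_plans = 89           # fiscal year 2000 plans not approved
section8_only_unapproved = 76   # of those, agencies with Section 8 units only

share_of_all = 100 * section8_only / total_agencies                      # ~24 percent
share_of_unapproved = 100 * section8_only_unapproved / unapproved_plans  # ~85 percent
```

Since the totals are rounded ("more than 4,100", "about 1,000"), the percentages are approximate, but they match the figures cited in the report.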
A majority of respondents reported that the fiscal year 2000 plans were useful in helping HUD field locations identify certain housing agency needs but believed the plans were more important to housing agencies with low-rent units than to housing agencies that administer only tenant-based Section 8 units. Most respondents also believed that agencies are implementing their fiscal year 2000 plans, but many also believed that agencies are having difficulty implementing some portions of the plans. Seventy-four percent of field locations that responded to our survey reported problems or difficulties with the fiscal year 2000 plan review and approval process. For example, over 50 percent of respondents said that the electronic transmission of plans from housing agencies to HUD and the conversion of plans into a readable format once received at HUD had a negative or very negative effect on their ability to review and approve plans. Respondents also reported that HUD-provided guidance on the plan process was less than adequate. One respondent reported that headquarters guidance at the beginning of the process was not very good and was delayed in getting to the field locations, while another reported that changing rules made it difficult to know what the housing agencies should do and what the field locations should look for in reviewing plans. Changes that have been made for the fiscal year 2001 plan process suggest that lessons learned and experience gained during the first year resulted in some improvements, but it is too early to determine whether these changes have fully resolved the problems. For example, several respondents reported that technical data transmission and conversion problems were less frequent for fiscal year 2001. They also reported that HUD headquarters had streamlined guidance and provided it in a timelier manner. 
HUD headquarters officials also cited several initiatives undertaken as a result of lessons learned during the first year, including developing a database to better track agency plan information, hiring a new contractor to manage the database, and providing consolidated guidance in the form of a desk guide to assist housing agencies and field locations. Respondents reported that, for fiscal year 2000, almost half of the plans reviewed had to be resubmitted by the housing agencies because of deficiencies. The majority of field locations said that deficiencies requiring correction and resubmission commonly occurred in the plans’ sections documenting capital improvement needs, the housing needs of the community, and the fulfillment of resident participation requirements. Among the problems with the capital improvement sections was the omission or incompleteness of required documentation, such as plans for the use of the agency’s capital funds. Regarding the sections on determining housing needs, some agencies submitted data sources on housing availability that were unclear or conflicted with other local planning documents. Regarding the sections describing resident participation, one field location that has a large number of small housing agencies in its jurisdiction reported that its agencies had trouble finding residents willing to participate in the planning process and that this was reflected in their plans. Between 60 and 72 percent of survey respondents indicated they found the plans helpful in identifying public housing agency needs relative to setting operational priorities, developing resident participation, and planning strategically. Some also reported that the planning process helped field locations provide technical assistance to housing agencies on identified problem areas. 
For example, one respondent reported that the plan review process enabled the field locations to provide technical assistance to public housing agencies in the areas of setting priorities and effective strategic planning. Responses to our survey suggested that field locations think that the plans are more important for agencies with low-rent units than for agencies with only tenant-based Section 8 units. Specifically, about 70 percent of respondents thought the plans were important in setting operational priorities for agencies that maintain low-rent units, while only 40 percent thought they were important in setting operational priorities for agencies with tenant-based Section 8 units only. One respondent commented that operating a tenant-based Section 8 program has substantially different planning needs than operating a low-rent housing program. According to this respondent, because tenant-based Section 8 units are located in privately owned housing, there is no “physical asset” for the tenant-based Section 8 agency to maintain, and other problems with being a landlord or owner are not present. The fact that the plan serves as a grant application for agencies that operate the low-rent program, but not for agencies that operate the tenant-based Section 8 program only, may also contribute to the respondents’ opinion that plans are less important to these agencies. About 72 percent of respondents believed that, for the most part, housing agencies can implement the plans they developed, submitted, and had approved. At the same time, about 54 percent of respondents said housing agencies are having difficulty implementing the resident participation requirement. A recurring theme from several respondents was that housing agencies had difficulty getting residents interested in forming or participating on resident advisory boards. Several respondents emphasized that getting participation in small and tenant-based Section 8-only housing agencies was especially difficult. 
In addition, some respondents said that it is difficult to get residents appointed to the housing agencies’ board of directors in some areas, as is required. Staff at the eight public housing agencies we visited described varying experiences with the fiscal year 2000 plan process. For example, some found the process useful, while others did not; some found HUD guidance helpful, while others did not. Generally, larger agencies had more positive responses than did smaller agencies. While the information collected on our visits cannot be generalized to the universe of public housing agencies, it provides insight into individual public housing agencies’ concerns. The public housing agencies we visited held varying views on the usefulness of the fiscal year 2000 process. Four had positive experiences, two did not, and two had no comment. One of the larger agencies told us that the first year of the plan process was useful because it forced the agency to review and update its policies. This agency also uses the plan as a training aid for newly hired staff and believes the plan is useful as a vehicle for obtaining resident input. The other larger agency said that the plan is useful in the agency’s strategic planning. In contrast, the two small agencies we visited reported that they did not find the process useful: One said that it took time away from the staff’s essential day-to-day operational duties. The other said it perceived no value in the plan process. Although the amount and type of resources that agencies devoted to the plan process for fiscal year 2000 varied, seven of the eight public housing agencies we visited told us they used additional staff or resources in developing their fiscal year 2000 plans. Three of the eight used consultants to develop their plans. One extra-large agency hired an additional staff person specifically to coordinate development of its fiscal year 2000 plans. 
In contrast to the other seven public housing agencies we visited, a medium-sized agency told us that it did not spend significantly more staff time or additional resources preparing the plans because most of the required updating of operational policies had been completed earlier. All eight housing agencies we visited expressed some frustration with the quantity or quality of HUD guidance for the first year, particularly regarding the agency plan template that HUD provided electronically to serve as a guide to developing and formatting the agency plans. Although each of the eight agencies had some negative feelings about the template, some balanced their comments with positive remarks. For example, one extra-large agency told us that the template provided guidance for formatting the plan submission. A large agency we visited told us that the template was sufficiently easy to use and added that, in its opinion, HUD had improved the template for fiscal year 2001. On the other hand, one of the small agencies told us that the template does not give individual housing agencies the flexibility to describe unusual situations relating to local needs. In addition, one of the medium-sized agencies told us that the template was not user friendly. Agencies also had mixed experiences with the resident participation requirement for the fiscal year 2000 plan. For example, one extra-large public housing agency, with a widely dispersed housing inventory and several different types of resident populations, had a positive experience. Staff at this agency said that the resident participation requirement brought together a cross-section of residents that would otherwise not have met and provided these residents with an appreciation of the competing needs of resident populations and the commensurate difficulty the housing agency faced in meeting those needs. 
The other extra-large agency told us that its experience with this requirement was positive because the planning process generally encouraged resident participation. In contrast, one of the small agencies told us that resident apathy made it difficult to meet this requirement. Our work raised questions about the relative value and burden of the planning process for two groups of public housing agencies. Survey responses highlighted questions about the value of the plans to those agencies that administer only tenant-based Section 8 units, while comments received during our visits to eight agencies suggested that small agencies may find less value in the planning process and that the process puts a greater burden on their resources. As we did not visit a representative sample of small public housing agencies, further examination of these agencies’ experiences, including those that provide housing only through the tenant-based Section 8 program, would be needed to determine the value of annual plans to these agencies. As agreed with your offices, we are planning to further investigate the challenges facing small housing agencies, especially the impact and benefits of regulatory and administrative requirements. As many of the smaller agencies provide housing only through the tenant-based Section 8 program, this work might also provide some insights into the usefulness and applicability of the plans for this type of public housing agency. The mandate in Section 511 of the Quality Housing and Work Responsibility Act of 1998 required that we review and audit a representative sample of the nation’s housing agencies that are required to submit agency plans. This is a universe of over 4,000 housing agencies. When we met with you and your office to clarify our reporting requirements under the mandate, we agreed that available resources and reporting deadlines would not permit us to review and audit a representative sample of these housing agencies and their plans. 
We also agreed that a survey of HUD field locations to assess HUD’s management of the fiscal year 2000 agency plan process would serve as a proxy to auditing the universe of housing agencies, as each HUD field location has direct knowledge of all housing agencies within its respective jurisdiction and was responsible for reviewing and approving those agencies’ plans. We agreed to supplement this survey by collecting data on the status of all required plans and by visiting a nonrepresentative sample of public housing agencies to gain insight into particular agencies’ experiences. To determine the status of plans submitted to and approved by HUD for fiscal year 2000, we interviewed HUD Public and Indian Housing policy development, Grants Management Center, and program officials. We also obtained data from several Public and Indian Housing databases on public housing agencies and fiscal year 2000 approved plans. We analyzed the data, discussed it with HUD staff, and resolved any discrepancies in the data with HUD staff. To assess HUD’s management of the fiscal year 2000 agency plan review process, we developed an automated survey instrument that we posted on our Web site. We requested that all 43 HUD Public and Indian Housing field offices and both troubled agency recovery centers complete the survey. These HUD field locations are responsible for reviewing and approving agency plans. We sent E-mail messages asking officials at these field offices and recovery centers to fill out the questionnaire. We received responses from 41 field offices and both troubled agency recovery centers, which is a 96 percent response rate. Field locations responding to our survey were responsible for reviewing 4,033 or about 97 percent of the plans required to be submitted in fiscal year 2000. Our survey results reflect the information provided by the HUD officials. We did not independently verify the field locations’ responses to our questions. 
During the design of the questionnaire, we pretested our questionnaire with officials from two field offices and modified it on the basis of the feedback and comments we received during the pretests. In addition, we obtained comments on the questionnaire from HUD’s Office of Public and Indian Housing. To assess selected public housing agencies’ experiences with the fiscal year 2000 agency plan process, we visited eight geographically dispersed agencies with low-rent and tenant-based Section 8 units. We selected the eight housing agencies based on criteria such as size and performance designation, which determines the type of plans each agency is required to submit. We interviewed the executive director or other staff responsible for preparing the agency plans, residents, and resident board members. We also reviewed documents supporting the agencies’ fiscal year 2000 plans. In addition, we contacted public housing industry groups to obtain their constituents’ perspectives on the first year of the required planning process. We conducted our review from January 2001 through March 2002 in accordance with generally accepted government auditing standards. We provided a draft of this report to HUD to obtain comments. On May 2, 2002, the deputy assistant secretary for policy, programs, and legislative initiatives, Office of Public and Indian Housing, provided oral comments. HUD generally agreed with the draft and provided editorial and clarifying comments that were incorporated in the report, as appropriate. We are sending copies of this report to interested congressional committees and members of Congress; the secretary of HUD; and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have further questions, please call me at (202) 512-7631. The key contact and other contributors to this report are listed in appendix II. 
HUD continued to work with PHAs without approved plans in an effort to obtain fiscal year 2000 plans. A HUD official added that 14 Low-Rent/Combined PHAs that did not submit fiscal year 2000 plans by September 30, 2001, may forfeit their fiscal year 2000 formula grant funds: the 14 PHAs may forfeit about $2.6 million in capital fund program grants, and one PHA also may forfeit a $39,426 public housing drug elimination program grant. Because tenant-based Section 8-only PHAs do not receive formula funds, HUD could not take the same action against the 76 tenant-based Section 8-only PHAs that do not have approved fiscal year 2000 plans. To address this issue, HUD could withhold a portion of the administrative fees tenant-based Section 8-only PHAs receive for managing the program, pending submission and approval of the required plans, and it requires PHAs to have an approved fiscal year 2000 plan in order to apply for additional Section 8 vouchers for fiscal year 2002. 
Seventy-four percent of HUD field offices that responded to our survey reported they experienced problems with the fiscal year 2000 review process. Specific problems included the following: data transmission delays, with technical problems occurring during PHAs’ transmission of plans to HUD headquarters, HUD headquarters’ transmission of plans to HUD field offices, and HUD headquarters’ posting of plan approval notification; a general lack of guidance from HUD headquarters, including delayed guidance on how to review plans; and changing guidance on how to help PHAs complete plans. HUD took action to address these reported problems for fiscal year 2001 plan submissions. Specific changes included developing a new database to track plan approval and hiring a contractor to manage it, and providing more timely guidance, such as a field office desk guide for reviewing the plans. 
Almost half of the agency plans had to be resubmitted. The most common deficiencies for which plans had to be resubmitted related to PHAs’ completion of the following plan components: capital improvement needs, statement of housing needs, and the resident participation requirement. 
Most field offices reported that the plans were useful in helping them identify a number of PHA needs. 
Plans Moderately to Extremely Useful in Identifying Specific PHA Needs (percentage of field offices) 
Plan Is Important for Setting Management Priorities for Types of Units (percentage of field offices) 
Most field offices also reported that PHAs are implementing their fiscal year 2000 plans. The most commonly cited problem areas concerned the following plan components. 
Plan Components PHAs Reported Difficulty in Implementing (percentage of field offices) 
Field offices described PHAs’ difficulty in implementing particular plan components: Resident participation: resident apathy made it difficult for some PHAs, especially small and Section 8-only PHAs, to fulfill this requirement. Capital improvement plans: PHAs were affected by funding constraints or shortages. Statement of housing needs: small and rural PHAs with limited resources had difficulty gathering the relevant information, such as local demographics. 
Public housing industry groups told us that PHAs found the plan process quite difficult for fiscal year 2000, the first year. Problems cited included the following: PHAs were unable to obtain meaningful information from HUD on reasons plans were disapproved; some PHAs found it hard to establish resident advisory boards; and small PHAs lacked the resources and staff to complete the plans. 
PHAs’ assessments of the usefulness of the plans varied at the eight PHAs we visited; larger PHAs generally had more positive assessments than smaller PHAs. Positive remarks: the process and plans helped the PHA get other local funding, forced the PHA to review and update policies, gave PHA residents a vehicle for input, and are used for strategic planning, as a training aid, and as an information source for HUD field offices. Negative remarks: the process and plans took time away from other duties and are not used. One medium-sized PHA did not spend significantly more staff time or resources preparing the plans because most of the required updating of policies had already been completed before HUD provided guidance for plans. 
The PHAs’ assessments of the agency plan template also varied. Positive remarks: the template provided guidance for formatting, was sufficiently easy to use, and was improved for fiscal year 2001. Negative remarks: the template lacked flexibility; did not sufficiently define terms such as “affordability” and “quality”; and was not user friendly, as PHAs had to go to several HUD sources to complete it. 
The PHAs we visited also varied in their assessment of the resident participation requirement. Positive remarks: the requirement brought a cross-section of residents together and encouraged resident participation. Negative remark: the requirement was difficult to sustain because of resident apathy. 
Views on the burden and value of the plans varied. Smaller PHAs we visited viewed the process and plans as consuming a larger portion of their resources and as having limited value. Most HUD field offices and some larger PHAs we visited saw the plans as a valuable tool to help PHAs define their strategic vision and monitor their progress toward management goals, but as having limited value to tenant-based Section 8-only PHAs. HUD made changes for fiscal year 2001 plans, including simplified plans for small PHAs and modified requirements for tenant-based Section 8-only PHAs. 
Although most PHAs have had their fiscal year 2000 plans approved, tenant-based Section 8-only PHAs have a higher rate of noncompliance: tenant-based Section 8-only PHAs are 24 percent of all PHAs but 85 percent of PHAs without approved plans. HUD has recently determined that it can sanction tenant-based Section 8-only PHAs that fail to submit plans; HUD could withhold a portion of the administrative fee, and it requires an approved fiscal year 2000 plan for PHAs to be eligible for additional vouchers for fiscal year 2002. 
In addition to the individual named above, Johnnie Barnes, Sherrill Dunbar, Gloria Hernandez-Saunders, Miko Johnson, John McGrail, Luann Moy, Don Watson, and Alwynne Wilbur made key contributions to this report. 
The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily E-mail alert for newly released products” under the GAO Reports heading.
The Quality Housing and Work Responsibility Act of 1998 was designed to improve the quality of public housing and the lives of its residents. Since fiscal year 2000, housing agencies managing low-rent or tenant-based Section 8 units have been required to develop and submit five-year and annual plans. As of January 2002, 98 percent of public housing agency plans for fiscal year 2000 had been submitted and approved. The Department of Housing and Urban Development (HUD) had mixed views about the fiscal year 2000 plan process and its value. The field locations that responded to GAO's survey reported that their review of fiscal year 2000 plans was hampered by several factors, including difficulty in transmitting data between public housing agencies and HUD. Most field locations responded that public housing agencies are implementing their plans but acknowledged that there may be some problems, particularly in fulfilling requirements related to resident participation in the process. The eight public housing agencies GAO visited had differing views on the usefulness of the planning process, the level of resources required to prepare the plans, the sufficiency of HUD's guidance on completing the plans, and the difficulty of meeting the resident participation requirement.
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business. It is especially important for government agencies, where maintaining the public’s trust is essential. The dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet have revolutionized the way our government, our nation, and much of the world communicates and conducts business. Although this expansion has created many benefits for agencies such as IRS in achieving their missions and providing information to the public, it also exposes federal networks and systems to various threats. Without proper safeguards, computer systems are vulnerable to individuals and groups with malicious intent who can intrude and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. The risks to these systems are well founded for a number of reasons, including the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, and steady advances in the sophistication and effectiveness of attack technology. The Federal Bureau of Investigation has identified multiple sources of threats, including foreign nation states engaged in intelligence gathering and information warfare, domestic criminals, hackers, virus writers, and disgruntled employees or contractors working within an organization. In addition, the U.S. 
Secret Service and the CERT® Coordination Center studied insider threats in the government sector and stated in a January 2008 report that “government sector insiders have the potential to pose a substantial threat by virtue of their knowledge of, and access to, employer systems and/or databases.” Our previous reports, and those by federal inspectors general, describe persistent information security weaknesses that place federal agencies, including IRS, at risk of disruption, fraud, or inappropriate disclosure of sensitive information. Accordingly, we have designated information security as a governmentwide high-risk area since 1997, most recently in 2009. Recognizing the importance of securing federal agencies’ information systems, Congress enacted the Federal Information Security Management Act (FISMA) in December 2002 to strengthen the security of information and systems within federal agencies. FISMA requires each agency to develop, document, and implement an agencywide information security program for the information and information systems that support the operations and assets of the agency, using a risk-based approach to information security management. Such a program includes assessing risk; developing and implementing cost-effective security plans, policies, and procedures; providing specialized training; testing and evaluating the effectiveness of controls; planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies; and ensuring continuity of operations. IRS has demanding responsibilities in collecting taxes, processing tax returns, and enforcing the federal tax laws, and relies extensively on computerized systems to support its financial and mission-related operations. 
In fiscal years 2009 and 2008, IRS collected about $2.3 trillion and $2.7 trillion, respectively, in tax payments, processed hundreds of millions of tax and information returns, and paid about $438 billion and $426 billion, respectively, in refunds to taxpayers. Further, the size and complexity of IRS add unique operational challenges. The agency employs tens of thousands of people in its Washington, D.C. headquarters, 10 service center campuses, 3 enterprise computing centers, as well as numerous other field offices throughout the United States. IRS also collects and maintains a significant amount of personal and financial information on each American taxpayer. Protecting the confidentiality of this sensitive information is paramount; otherwise, taxpayers could be exposed to loss of privacy and to financial loss and damages resulting from identity theft or other financial crimes. The Commissioner of Internal Revenue has overall responsibility for ensuring the confidentiality, integrity and availability of the information and information systems that support the agency and its operations. FISMA requires the Chief Information Officer or comparable official at federal agencies to be responsible for developing and maintaining an information security program. IRS has delegated this responsibility to the Associate Chief Information Officer for Cybersecurity, who heads the Office of Cybersecurity. This group is responsible for ensuring IRS’s compliance with federal laws, policies and guidelines governing measures to assure the confidentiality, integrity, and availability of IRS electronic systems, services and data. It manages IRS’s information security program, including activities associated with identifying, mitigating, and monitoring cybersecurity threats; determining strategy and priorities; and monitoring security program implementation. 
Within the Office of Cybersecurity, the Computer Security Incident Response Center (CSIRC) is tasked with preventing, detecting, and responding to computer security incidents targeting IRS’s information technology enterprise. IRS develops and publishes its information security policies, guidelines, standards, and procedures in the Internal Revenue Manual and other documents in order for IRS divisions and offices to carry out their respective responsibilities in information security. During fiscal year 2009, IRS made progress toward correcting previously reported information security control weaknesses and information security program deficiencies at its three computing centers, another facility, and enterprisewide. IRS had corrected or mitigated 28 of the 89 previously identified weaknesses and deficiencies that were unresolved at the end of our prior audit. This includes 21 of 74 control weaknesses and 7 of 15 program deficiencies. To illustrate, IRS corrected weaknesses related to user identification and authentication and physical access, among others. For example, it has changed vendor-supplied user accounts and passwords, avoided storing clear-text passwords in scripts, deactivated proximity cards for separated employees in a timely manner, and ensured that security guards follow established procedures and screen packages and briefcases for prohibited items. In addition, IRS has improved aspects of its information security program. For example, IRS has enhanced its policies and procedures for configuring mainframe operations and established an alternate processing site for its procurement system. IRS has also continued to take other actions to improve information security. The agency is in the process of implementing a comprehensive plan to address numerous information security weaknesses, such as those associated with network and system access, audit trails, system software configuration, and contingency planning. 
According to the plan, the last of these weaknesses is scheduled to be resolved in the first quarter of fiscal year 2014. Further, for fiscal year 2010, IRS has targeted initiatives to improve information security controls in areas such as identity and access management, auditing and monitoring, and disaster recovery. These efforts, if fully and effectively implemented, are positive steps towards improving the agency’s overall information security posture. Nonetheless, of the previously identified security weaknesses and program deficiencies reported as unresolved at the completion of our prior year’s audit, 61 of them—or about 69 percent—remain unresolved or unmitigated. For example, IRS continues to use passwords that are not complex, does not always remove application accounts for separated employees in a timely manner, allows personnel excessive file and directory permissions, allows the unencrypted transmission of user and administrator login information, does not install patches in a timely manner, does not effectively verify that remedial actions are complete, and does not always review risk assessments annually. As a result, IRS is at increased risk of unauthorized disclosure, modification, or destruction of financial and taxpayer information. Although IRS has continued to make progress toward correcting previously reported information security weaknesses at its three computing centers, another facility, and enterprisewide, many deficiencies remain. These deficiencies, and new weaknesses identified during this year’s audit, relate to access controls, configuration management, and segregation of duties. A key reason for these weaknesses is that IRS has not yet fully implemented its agencywide information security program to ensure that controls are appropriately designed and operating effectively. 
These weaknesses—both old and new—continue to jeopardize the confidentiality, integrity, and availability of IRS’s systems and were the basis of our determination that IRS had a material weakness in internal controls over financial reporting related to information security in fiscal year 2009. A basic management objective for any organization is to protect the resources that support its critical operations from unauthorized access. Organizations accomplish this objective by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. Inadequate access controls potentially diminish the reliability of computerized information and increase the risk of unauthorized disclosure, modification, and destruction of sensitive information and disruption of service. Access controls include those related to user identification and authentication, authorization, cryptography, audit and monitoring, and physical security. However, IRS did not fully implement effective controls in these areas. A computer system must be able to identify and authenticate different users so that activities on the system can be linked to specific individuals. When an organization assigns unique user accounts to specific users, the system is able to distinguish one user from another—a process called identification. The system also must establish the validity of a user’s claimed identity by requesting some kind of information, such as a password, that is known only by the user—a process known as authentication. The combination of identification and authentication— such as user account/password combinations—provides the basis for establishing individual accountability and for controlling access to the system. According to the Internal Revenue Manual, maximum password age should be 60 days for administrator accounts and strong passwords for authentication to IRS systems should be enforced. 
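The password controls drawn from the Internal Revenue Manual above can be illustrated with a short sketch. The 60-day maximum administrator password age comes from the text; the specific complexity rule (minimum length and character classes) is an assumption for illustration only, not the manual's actual definition of a strong password:

```python
import re
from datetime import date

# 60-day maximum age for administrator passwords, as cited from the
# Internal Revenue Manual above. The complexity rule below is an
# illustrative assumption, not the IRM's actual requirement.
MAX_ADMIN_PASSWORD_AGE_DAYS = 60
COMPLEXITY_RE = re.compile(
    r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^A-Za-z\d]).{12,}$"
)

def password_age_compliant(last_changed: date, today: date) -> bool:
    """Return True if the password is within the 60-day maximum age."""
    return (today - last_changed).days <= MAX_ADMIN_PASSWORD_AGE_DAYS

def password_is_strong(password: str) -> bool:
    """Check length and character-class mix against the illustrative rule."""
    return COMPLEXITY_RE.match(password) is not None

# The 118-day age found on the two servers would fail this check:
print(password_age_compliant(date(2009, 1, 1), date(2009, 4, 29)))  # 118 days -> False
print(password_is_strong("Tr0ub4dor&3xample"))                      # True
```

A real compliance scan would pull the last-changed date from the system's account database (for example, the shadow file on UNIX systems) rather than taking it as a parameter.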
In addition, the Internal Revenue Manual states that passwords should be protected from unauthorized disclosure and modification when stored and transmitted. IRS did not always enforce strong identification and authentication controls. For example, administrator passwords for two servers located at one center were not set to comply with IRS’s password age policy. In both instances the administrator password age was set to 118 days, which exceeded IRS’s requirement by 58 days. Consequently, an increased risk exists that compromised administrator passwords will be used by unauthorized individuals for a longer period of time to gain unauthorized access to server resources. In addition, IRS employees continued to use weak passwords for UNIX systems at two centers and stored clear text passwords in computer program scripts at another center. Further, IRS did not sufficiently protect passwords during transmission. For example, IRS implemented weak authentication protocols for network logons. Ten servers, including domain controllers, located at five sites, were configured to accept an authentication protocol that was vulnerable to widely published attacks for obtaining user passwords. As a result, increased risk exists that malicious individuals could capture user passwords and use them to gain unauthorized access to IRS systems. Authorization is the process of granting or denying access rights and permissions to a protected resource, such as a network, a system, an application, a function, or a file. A key component of granting or denying access rights is the concept of “least privilege.” Least privilege is a basic principle for securing computer resources and information. This principle means that users are granted only those access rights and permissions they need to perform their official duties. To restrict legitimate users’ access to only those programs and files they need to do their work, organizations establish access rights and permissions. 
“User rights” are allowable actions that can be assigned to users or to groups of users. File and directory permissions are rules that regulate which users can access a particular file or directory and the extent of that access. To avoid unintentionally authorizing users’ access to sensitive files and directories, an organization must give careful consideration to its assignment of rights and permissions. IRS’s manual states that the configuration and use of system utilities are based on least privilege and are limited to those individuals who require them to perform their assigned functions. IRS permitted excessive access to systems and files by granting rights and permissions that gave users more access than they needed to perform their assigned functions. For example, about 120 IRS employees had access to key documents, including cost data for input to its administrative accounting system and a critical process-control spreadsheet used in IRS’s cost allocation process. However, fewer than 10 employees needed this access to perform their jobs. The large number of employees with access to these documents increases the chances that they may intentionally or unintentionally corrupt the data in these documents, which could result in incorrect input and data processing, thus jeopardizing the accuracy of the cost allocation output and, ultimately, the information presented in IRS’s annual financial statements. In addition, accounts on three servers supporting the accounting system and used for data transfer at two centers were given remote login access; this access was not needed for these types of accounts and reduces IRS’s ability to control access to the servers. Further, IRS had not corrected previously reported weaknesses related to not restricting users’ ability to bypass application controls for its procurement system and allowing excessive access to server shares that contained sensitive information. 
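Part of a least-privilege review of file and directory permissions like the one described above can be automated. The sketch below is illustrative only: it flags files whose POSIX permission bits grant write access beyond the owner, whereas a real review would also compare each grant against documented job-function needs and would use different APIs for Windows ACLs:

```python
import os
import stat

def find_overly_permissive(root: str):
    """Walk a directory tree and flag files that grant write access to
    group or other users -- a simple proxy for a least-privilege check.
    Returns (path, octal mode) pairs for each flagged file."""
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            # S_IWGRP / S_IWOTH: write permission for group / other users
            if mode & (stat.S_IWGRP | stat.S_IWOTH):
                flagged.append((path, oct(stat.S_IMODE(mode))))
    return flagged
```

Run against a share holding cost-allocation spreadsheets, this kind of scan would surface files writable by users who have no business need to modify them; the access list itself would still have to be reviewed by hand.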
As a result, increased risk exists that unauthorized users will gain access to sensitive information or circumvent security controls. Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of critical and sensitive information. A basic element of cryptography is encryption. Encryption can be used to provide basic data confidentiality and integrity by transforming plain text into cipher text using a special value known as a key and a mathematical process known as an algorithm. The Internal Revenue Manual requires the use of encryption for transferring sensitive but unclassified information between IRS facilities. The National Security Agency also recommends disabling protocols that do not encrypt information transmitted across the network. IRS configured routers to use protocols that allow unencrypted transmission of sensitive information. For example, 18 routers we reviewed at the three computing centers used a protocol that was configured to authenticate information using plain text. In addition, IRS did not use encryption for routing table messages for six routers we reviewed at two of the centers. Enabling encryption on routing table messages helps to prevent someone from purposely or accidentally adding an unauthorized router to the network and either corrupting routing tables or launching a denial of service attack. Further, IRS had not corrected a previously identified weakness related to encrypting administrator login data to a key application. By not encrypting these data, IRS is at increased risk that an unauthorized individual could view and then use the data to gain unwarranted access to its system and/or sensitive information. To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is crucial to know what, when, and by whom specific actions have been taken on a system. 
Organizations accomplish this by implementing system or security software that provides an audit trail, or logs of system activity, that they can use to determine the source of a transaction or attempted transaction and to monitor users’ activities. The way in which organizations configure system or security software determines the nature and extent of the information the audit trail can provide. To be effective, organizations should configure their software to collect and maintain audit trails that are sufficient to track security-relevant events. The Internal Revenue Manual requires that audit records be created, protected, and retained to enable the monitoring, analysis, investigation, and reporting of unlawful, unauthorized, or inappropriate information system activity. The manual also states that IRS shall monitor its networks for security events. IRS did not always log and monitor important security events on its systems. For example, IRS did not have event logging enabled for an application that supports its procurement system. In addition, although IRS’s CSIRC was successful in logging most security events, it did not monitor activity on all critical ports. By not logging and monitoring system activities, IRS has limited assurance that it will be able to detect security-relevant events that could adversely affect operations. Physical access controls are used to mitigate the risks to systems, buildings, and supporting infrastructure related to their physical environment and to control personnel entry into and exit from buildings and data centers containing agency resources. Examples of physical security controls include perimeter fencing, surveillance cameras, security guards, and locks. Without these protections, IRS computing facilities and resources could be exposed to espionage, sabotage, damage, and theft. 
The Internal Revenue Manual requires department managers of restricted areas to review, validate, sign, and date the authorized access list for restricted areas monthly and then forward the list to the physical security office for review of employee access. The manual also requires that users activate a password-protected screen saver or lock their workstations when leaving machines unattended. Although IRS had implemented numerous physical security controls, certain controls were not working as intended, such as the following:

- Department managers did not always validate and sign access lists within the required monthly time frame. We have previously reported this weakness and recommended that managers sign and date authorized access lists for restricted areas.

- The physical security office at one center did not promptly remove access to restricted areas for 5 of 15 employees after managers requested their removal. Specifically, 4 employees whose managers marked their names for removal from the authorized access lists between March and June 2009 still had access as of July 2009. A fifth employee was removed 2 months after department managers noted the employee for removal from the access list.

- Two of five consoles that were part of the operating environment for a key system were not locked with password-protected screen savers while left unattended, which could have allowed unauthorized access to this system used for accessing taxpayer information.

Because employees still had unnecessary access to restricted areas and computers in the restricted areas were not always secured when left unattended, IRS has reduced assurance that computing resources and taxpayer information are adequately protected from unauthorized access. In addition to access controls, other important controls should be in place to ensure the confidentiality, integrity, and availability of an organization’s information. 
These controls include policies, procedures, and techniques for securely configuring information systems and segregating incompatible duties. However, IRS weaknesses in these areas have increased the risk of unauthorized use, disclosure, modification, or loss of information and information systems. Configuration management involves, among other things, (1) verifying the correctness of the security settings in the operating systems, applications, or computing and network devices and (2) obtaining reasonable assurance that systems are configured and operating securely and as intended. Patch management is an important element in mitigating the risks associated with software vulnerabilities. When software vulnerabilities are discovered, the software vendor may develop and distribute a patch or work-around to mitigate the vulnerability. Outdated and unsupported software is more vulnerable to attack and exploitation because vendors no longer provide updates, including security updates. Accordingly, the Internal Revenue Manual states that system administrators will ensure the operating system version is one for which the vendor still offers standardized technical support. IRS was running outdated and unsupported software, exposing servers to known vulnerabilities. For example, the operating system software supporting the administrative accounting system reached its “end of service” life on March 31, 2009. As a result, IRS may receive limited or no vendor maintenance support, including security patches, thus increasing the risk that known information security vulnerabilities may be exploited. In addition, IRS used outdated and unsupported software on the five critical servers we reviewed at two centers, exposing the organization to a vulnerability that could allow a malicious user to capture user IDs and passwords by redirecting internal users’ access requests to other systems without their knowledge. 
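The end-of-service check described above can be automated by comparing a server inventory against vendor support dates. The following is a minimal sketch, not IRS’s actual process; the operating system names, end-of-service dates, and server inventory are illustrative assumptions.

```python
from datetime import date

# Hypothetical vendor end-of-service dates, for illustration only.
END_OF_SERVICE = {
    "ExampleOS 4": date(2009, 3, 31),
    "ExampleOS 5": date(2012, 6, 30),
}

def unsupported_systems(inventory, as_of):
    """Return (server, os, end-of-service date) for software past vendor support."""
    flagged = []
    for server, os_name in inventory.items():
        eos = END_OF_SERVICE.get(os_name)
        if eos is not None and as_of > eos:
            flagged.append((server, os_name, eos))
    return flagged

# Illustrative inventory: one server runs software that reached end of service.
inventory = {"acct-srv-1": "ExampleOS 4", "acct-srv-2": "ExampleOS 5"}
for server, os_name, eos in unsupported_systems(inventory, date(2009, 7, 1)):
    print(f"{server}: {os_name} unsupported since {eos}")
    # acct-srv-1: ExampleOS 4 unsupported since 2009-03-31
```

A recurring report of this kind would surface unsupported software, such as the accounting system’s operating system noted above, before the vendor stops issuing security patches.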
Segregation of duties refers to the policies, procedures, and organizational structures that help ensure that no single individual can independently control all key aspects of a process or computer-related operation and thereby gain unauthorized access to assets or records. Often, organizations achieve segregation of duties by dividing responsibilities among two or more individuals or organizational groups. This diminishes the likelihood that errors and wrongful acts will go undetected, because the activities of one individual or group will serve as a check on the activities of the other. Inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes implemented, and computer resources damaged or destroyed. The Internal Revenue Manual requires that IRS divide and separate duties and responsibilities of incompatible functions among different individuals, so that no individual shall have all of the necessary authority and system access to disrupt or corrupt a critical security process. Furthermore, the manual specifies that the primary security role of any database administrator is to administer and maintain database repositories for proper use by authorized individuals and that database administrators shall not have system administration capabilities. IRS did not always segregate incompatible duties. Specifically, IRS permitted an individual to hold and execute the roles and responsibilities of both a database and system administrator for the procurement system. By not properly segregating incompatible duties, IRS may have an increased risk that improper program changes could be intentionally or inadvertently implemented. Subsequent to our site visit, IRS informed us that it had corrected this weakness. However, we have not yet evaluated the action taken. 
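The incompatible-roles rule described above — that no one individual hold both database administrator and system administrator capabilities — lends itself to a simple automated check over role assignments. This sketch is illustrative; the role names and assignments are assumptions, not IRS data.

```python
# Role pairs treated as incompatible under a segregation-of-duties policy
# (names are illustrative, not IRS's actual role taxonomy).
INCOMPATIBLE = {frozenset({"database_admin", "system_admin"})}

def find_conflicts(assignments):
    """Return (user, role_pair) for every user holding an incompatible pair."""
    conflicts = []
    for user, roles in assignments.items():
        for pair in INCOMPATIBLE:
            if pair <= set(roles):  # user holds both roles in the pair
                conflicts.append((user, tuple(sorted(pair))))
    return conflicts

assignments = {
    "alice": ["database_admin"],
    "bob": ["database_admin", "system_admin"],  # violates segregation of duties
}
print(find_conflicts(assignments))
# [('bob', ('database_admin', 'system_admin'))]
```

Running such a check against actual account data would have flagged the combined database/system administrator role on the procurement system before it became an audit finding.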
A key reason for the information security weaknesses in IRS’s financial and tax processing systems is that it has not yet fully implemented its agencywide information security program to ensure that controls are effectively established and maintained. FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes

- periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems;

- policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements;

- plans for providing adequate information security for networks, facilities, and information systems;

- security awareness training to inform personnel of information security risks and of their responsibilities in complying with agency policies and procedures, as well as training personnel with significant security responsibilities for information security;

- periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems;

- a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in its information security policies, procedures, or practices; and

- plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency.

IRS has made important progress in developing and documenting elements of its information security program. 
However, not all components of its program have been fully implemented. According to the National Institute of Standards and Technology (NIST), risk is determined by identifying potential threats to the organization and vulnerabilities in its systems, determining the likelihood that a particular threat may exploit vulnerabilities, and assessing the resulting impact on the organization’s mission, including the effect on sensitive and critical systems and data. Identifying and assessing information security risks are essential to determining what controls are required. Moreover, by increasing awareness of risks, these assessments can generate support for the policies and controls that are adopted in order to help ensure that these policies and controls operate as intended. Consistent with NIST guidance, IRS requires its risk assessment process to detail the residual risk assessed, as well as potential threats, and to recommend corrective actions for reducing or eliminating the vulnerabilities identified. The Internal Revenue Manual also requires system risk assessments be reviewed annually. IRS had implemented a documented methodology for conducting risk assessments that includes threat and vulnerability identification, impact analysis, risk determination, and recommended corrective actions. The risk assessments for the six systems we reviewed included the identification of threats and vulnerabilities. The assessments also included impact analysis, risk determination, and recommended corrective actions for mitigating or eliminating the threats and vulnerabilities that were identified. However, IRS officials indicated that they had not corrected a weakness we previously reported regarding not annually reviewing system risk assessments. Until IRS annually reviews such assessments, potential risks to these systems and the adequacy of their security controls to reduce risk may be unknown. 
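The NIST-style assessment process described above — estimating the likelihood that a threat exploits a vulnerability and the resulting impact on the mission — is often summarized in a qualitative risk matrix. The sketch below is illustrative only; the three-level scales, thresholds, and threat examples are assumptions in the spirit of NIST guidance, not IRS’s methodology.

```python
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def risk_score(likelihood, impact):
    """Combine qualitative likelihood and impact into a risk rating
    (illustrative thresholds on a 1-9 product scale)."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

# Hypothetical threats for illustration.
threats = [
    ("stolen administrator password", "high", "high"),
    ("unpatched server exploit", "moderate", "high"),
    ("lost visitor badge", "low", "moderate"),
]
for name, likelihood, impact in threats:
    print(f"{name}: {risk_score(likelihood, impact)} risk")
```

Reviewing such ratings annually, as the Internal Revenue Manual requires, is what keeps the resulting corrective-action priorities current.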
Another key element of an effective information security program is to develop, document, and implement risk-based policies, procedures, and technical standards that govern security over an agency’s computing environment. If properly implemented, policies and procedures should help reduce the risk associated with unauthorized access or disruption of services. Technical security standards can provide consistent implementation guidance for each computing environment. Developing, documenting, and implementing security policies are the primary mechanisms by which management communicates its views and requirements; these policies also serve as the basis for adopting specific procedures and technical controls. In addition, agencies need to take the actions necessary to effectively implement or execute these procedures and controls. Otherwise, agency systems and information will not receive the protection that the security policies and controls should provide. Although IRS had developed and documented information security policies, standards, and guidelines that generally provide appropriate guidance to personnel responsible for securing information and information systems, it did not always provide needed guidance for securing network devices or informing CSIRC of network changes. For example, IRS policy lacked specific guidance on how to more securely configure routers to encrypt network traffic and help protect the network from denial-of-service, spoofing, and man-in-the-middle attacks. In addition, IRS did not have guidance on how to configure network switches to defend against certain attacks that could crash an entire network or network segment. Further, IRS had not developed and implemented procedures for notifying CSIRC of changes that would affect the center’s ability to detect unauthorized access. For example, IRS instructed administrators to change a certain port from the default port number to a lesser-known port number. 
However, according to an IRS official, administrators were never instructed to inform CSIRC of the change, and therefore, the new port number was not being monitored. As a result, IRS’s ability to detect unauthorized access and trace or recreate events was diminished. An objective of system security planning is to improve the protection of information technology resources. A system security plan provides an overview of the system’s security requirements and describes the controls that are in place or planned to meet those requirements. The Office of Management and Budget’s (OMB) Circular A-130 requires that agencies develop system security plans for major applications and general support systems, and that these plans address policies and procedures for providing management, operational, and technical controls. Furthermore, the Internal Revenue Manual requires that security plans be developed, documented, implemented, and periodically updated for the controls in place or planned for an information system. IRS had developed, documented, and updated the plans for six systems we reviewed. Furthermore, those plans documented the management, operational, and technical controls in place and included information required per OMB Circular A-130 for applications and general support systems. People are one of the weakest links in attempts to secure systems and networks. Therefore, an important component of an information security program is providing sufficient training so that users understand system security risks and their own role in implementing related policies and controls to mitigate those risks. IRS’s manual requires that all system users, including contractors, receive security awareness training within the first 10 working days. Although IRS provided security awareness training to new employees as part of its new hire orientation process, IRS did not always provide security awareness training to its contractors. 
We reviewed training documentation for five contractors newly assigned between January and May 2009 and found that four of them had not received any security awareness training, as required. As a result, IRS has less assurance that contractors are aware of the information security risks and responsibilities associated with their activities. Another key element of an information security program is to test and evaluate policies, procedures, and controls to determine whether they are effective and operating as intended. This type of oversight is a fundamental element because it demonstrates management’s commitment to the security program, reminds employees of their roles and responsibilities, and identifies and mitigates areas of noncompliance and ineffectiveness. Although control tests and evaluations may encourage compliance with security policies, the full benefits are not achieved unless the results are used to improve the security program. FISMA requires that the frequency of tests and evaluations be based on risk and occur no less than annually. The Internal Revenue Manual also requires periodic testing and evaluation of the effectiveness of information security policies and procedures. Although IRS had tested and evaluated the six systems we reviewed, the test results were not always clearly documented or thoroughly reviewed. IRS has developed a process to test and evaluate its applications annually. However, several tests were labeled “pass” based on draft documents or on actions to be completed in the future, and several other tests did not address the entire documented control. In addition, according to IRS, there were a few instances where the tester misinterpreted the control or did not include enough detail in the test results to conclude whether a control was effective. Further, the results of these tests were not effectively reviewed. 
Although a review and approval were indicated, these shortcomings would likely have been identified had the review been effective. As a result, IRS has limited assurance that controls over its systems are being effectively implemented and maintained. A remedial action plan is a key component of an agency’s information security program as described in FISMA. Such a plan assists agencies in identifying, assessing, prioritizing, and monitoring progress in correcting security weaknesses that are found in information systems. In its annual FISMA guidance to agencies, OMB requires agency remedial action plans, also known as plans of action and milestones, to include the resources necessary to correct identified weaknesses. According to the Internal Revenue Manual, the agency should document weaknesses found during security assessments, as well as planned, implemented, and evaluated remedial actions to correct any deficiencies. The manual further requires that IRS track the status of resolution of all weaknesses and verify that each weakness is corrected. Although remedial action plans were in place, corrective actions were not always appropriately verified. IRS had developed system-specific remedial action plans for the six systems and had also developed and implemented a remedial action process to address deficiencies in its information security policies, procedures, and practices. However, the verification process used to determine whether remedial actions were implemented was not always effective. To illustrate, IRS informed us that it had corrected 42 of the 89 previously reported weaknesses. However, our tests determined that IRS had not fully implemented the remedial actions it reported for 14 of the weaknesses it considered corrected; consequently, those weaknesses had not been effectively mitigated. 
We have previously reported a similar weakness and recommended that IRS revise its remedial action verification process to ensure actions are fully implemented, but the condition continued to exist. Until IRS takes additional steps to fully implement our previous recommendation to improve its remedial action process, it will have limited assurance that weaknesses are being properly corrected and that controls are operating effectively. Continuity of operations planning, which includes developing and testing contingency plans and disaster recovery plans, is a critical component of information protection. To ensure that mission-critical operations continue, organizations develop the ability to detect, mitigate, and recover from service disruptions while preserving access to vital information. In developing this ability, organizations prepare plans that are to be clearly documented, communicated to potentially affected staff, and updated to reflect current operations. In addition, system documentation and operating procedures should be available to adequately provide for the recovery and reconstitution of information systems to their original state after a disruption or failure. IRS’s manual requires, among other things, that contingency plans be reviewed and tested at least annually and that individuals with responsibility for disaster recovery be provided copies of or access to application disaster recovery plans. Although contingency plans were tested for the six systems we reviewed, IRS could not readily locate a critical disaster recovery document. Specifically, IRS could not provide, in a timely manner, the appropriate contact for or the location of the keystroke manual containing the application recovery steps. A keystroke manual provides detailed step-by-step instructions, including keystroke-by-keystroke details, used by individuals with responsibility for disaster recovery to fully recover an application after a significant event. 
Without a contact and appropriate access to the manual, increased risk exists that IRS could be unable to restore its administrative accounting system to full operational status after a major disruption. IRS has made progress in correcting or mitigating previously reported weaknesses, implementing controls over key financial systems, and developing and documenting a framework for its agencywide information security program. IRS also has targeted initiatives covering identity and access management, auditing and monitoring, and disaster recovery for fiscal year 2010. However, information security weaknesses—both old and new—continue to impair the agency’s ability to ensure the confidentiality, integrity, and availability of financial and taxpayer information. These deficiencies represent a material weakness in IRS’s internal controls over its financial and tax processing systems. A key reason for these weaknesses is that the agency has not yet fully implemented certain elements of its agencywide information security program. The financial and taxpayer information on IRS systems will remain particularly vulnerable to insider threats until the agency (1) addresses and corrects prior weaknesses across the service and (2) fully implements a comprehensive agencywide information security program that ensures policies and procedures are appropriately specific, contractors receive security awareness training, tests and evaluations are effectively documented and reviewed, and key documents are readily available to support disaster recovery. Until IRS takes these steps, financial and taxpayer information is at increased risk of unauthorized disclosure, modification, or destruction, and the agency’s management decisions may be based on unreliable or inaccurate financial information. 
In addition to implementing our previous recommendations, we recommend that you take the following four actions to fully implement an agencywide information security program:

- Develop and implement policies and procedures for more securely configuring routers to encrypt network traffic, configuring switches to defend against attacks that could crash the network, and notifying CSIRC of network changes that could affect its ability to detect unauthorized access.

- Ensure that contractors receive security awareness training within the first 10 working days.

- Ensure that the results of testing and evaluating controls are effectively documented and reviewed.

- Ensure that key disaster recovery documentation, such as keystroke manuals, is available in a timely manner and that appropriate contacts are readily identified.

We are also making 23 detailed recommendations in a separate report with limited distribution. These recommendations consist of actions to be taken to correct specific information security weaknesses related to access controls, configuration management, and segregation of duties identified during this audit. In providing written comments (reprinted in app. II) on a draft of this report, the Commissioner of Internal Revenue stated that he appreciated that the draft report recognized the progress IRS has made in improving its information security program and that the security and privacy of taxpayer and financial information are of the utmost importance to the agency. He also noted that IRS is committed to securing its computer environment and will continually evaluate processes, promote user awareness, and apply innovative ideas to increase compliance. Further, he stated that IRS will develop a detailed corrective action plan addressing each of our recommendations. This report contains recommendations to you. As you know, 31 U.S.C. 
720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Oversight and Government Reform not later than 60 days from the date of this report, and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. Because agency personnel serve as the primary source of information on the status of recommendations, we request that you also provide us with a copy of your agency’s statement of action to serve as preliminary information on the status of open recommendations. We are sending copies of this report to interested congressional committees, the Secretary of the Treasury, and the Treasury Inspector General for Tax Administration. This report is also available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact Nancy R. Kingsbury at (202) 512-2700 or Gregory C. Wilshusen at (202) 512-6244. We can also be reached by e-mail at [email protected] and [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objectives of our review were to determine (1) the status of the Internal Revenue Service’s (IRS) actions to correct or mitigate previously reported information security weaknesses and (2) whether controls over key financial and tax processing systems were effective in protecting the confidentiality, integrity, and availability of financial and sensitive taxpayer information. This work was performed in connection with our audit of IRS’s financial statements for the purpose of supporting our opinion on internal controls over the preparation of those statements. 
To determine the status of IRS’s actions to correct or mitigate previously reported information security weaknesses, we reviewed our prior reports to identify previously reported weaknesses and examined IRS’s corrective action plans to determine which corrective actions IRS reported as completed as of April 30, 2009. For those instances where IRS reported it had completed corrective actions, we assessed the effectiveness of those actions by, for example,

- reviewing databases to determine if vendor-supplied accounts and passwords were changed;

- examining scripts to determine if they contained clear text passwords;

- analyzing system registry keys to determine whether access was properly controlled and they were configured properly;

- examining application accounts to determine whether the accounts of separated employees had been removed in a timely manner;

- observing data transmissions across the network to determine whether sensitive data were being encrypted;

- reviewing physical access records to determine if proximity cards for separated employees were deactivated in a timely manner and whether managers were periodically evaluating employees’ access to restricted areas;

- observing security guards to determine whether procedures for screening packages and briefcases were followed;

- examining system software to determine if it was patched in a timely manner; and

- reviewing mainframe policies and procedures to determine if they provide the necessary detail for controlling and logging changes.

We evaluated IRS’s implementation of these corrective actions at the Enterprise Computing Centers in Detroit, Martinsburg, and Memphis, and an additional facility in Oxon Hill, Maryland. 
To determine whether controls over key financial and tax processing systems were effective, we considered the results of our evaluation of IRS’s actions to mitigate previously reported weaknesses and performed new audit work at the three computing centers as well as at IRS facilities in New Carrollton, Maryland; Oxon Hill, Maryland; and Beckley, West Virginia. We concentrated our evaluation primarily on threats emanating from sources internal to IRS’s computer networks and focused on six critical applications/systems and their general support systems that directly or indirectly support the processing of material transactions that are reflected in the agency’s financial statements. Our evaluation was based on our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information; National Security Agency guidance; and IRS’s policies and procedures. We evaluated controls by

- reviewing the complexity and expiration of password settings to determine if password management was enforced;

- analyzing users’ system access to determine whether they had more permissions than necessary to perform their assigned functions;

- observing physical access controls to determine if computer facilities and resources were being protected;

- inspecting key servers to determine whether critical patches had been installed or software was up to date;

- examining user access and responsibilities to determine whether incompatible functions were segregated among different individuals; and

- reviewing system backup and recovery procedures to determine if they adequately provide for recovery and reconstitution of systems to their original state after a disruption or failure. 
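An audit step like the password-expiration review above can be partially automated. The sketch below parses a UNIX-style /etc/login.defs excerpt for the PASS_MAX_DAYS setting and compares it with a policy ceiling; the 60-day ceiling, file contents, and the treatment of a missing setting as noncompliant are illustrative assumptions, not IRS’s actual tooling.

```python
import io

MAX_PASSWORD_AGE_DAYS = 60  # assumed policy ceiling for illustration

def check_pass_max_days(login_defs_text, ceiling=MAX_PASSWORD_AGE_DAYS):
    """Return (configured value, compliant?) from login.defs-style text."""
    for line in io.StringIO(login_defs_text):
        line = line.strip()
        if line.startswith("PASS_MAX_DAYS"):
            value = int(line.split()[1])
            return value, value <= ceiling
    return None, False  # setting absent: treat as noncompliant

# Hypothetical excerpt mirroring the 118-day finding reported earlier.
sample = """
# /etc/login.defs (excerpt)
PASS_MAX_DAYS   118
PASS_MIN_DAYS   1
"""
value, ok = check_pass_max_days(sample)
print(f"PASS_MAX_DAYS={value}, compliant={ok}")
# PASS_MAX_DAYS=118, compliant=False
```

Run across a server inventory, such a script turns a manual configuration review into a repeatable compliance check.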
Using the requirements in the Federal Information Security Management Act, which establishes elements for an effective agencywide information security program, we reviewed and evaluated IRS’s implementation of its security program by analyzing IRS’s risk assessment process and risk assessments for six IRS financial and tax processing systems, which are key to supporting the agency’s financial statements, to determine whether risks and threats were documented; comparing IRS’s policies, procedures, practices, and standards to actions taken by IRS personnel to determine whether sufficient guidance was provided to personnel responsible for securing information and information systems; analyzing security plans for six systems to determine if management, operational, and technical controls were documented and if security plans were updated; examining the security awareness training process for employees and contractors to determine if they received system security orientation within the first 10 working days; analyzing test plans and test results for six IRS systems to determine whether management, operational, and technical controls were tested at least annually and based on risk; reviewing IRS’s system remedial action plans to determine if they were complete, and reviewing IRS’s actions to correct weaknesses to determine if they effectively mitigated or resolved the vulnerability or control deficiency; and examining contingency plans for six IRS systems to determine whether those plans had been tested or updated. We also reviewed and analyzed our previous reports. In addition, we discussed with management officials and key security representatives, such as those from IRS’s Computer Security Incident Response Center, whether information security controls were in place, adequately designed, and operating effectively. 
In addition to the individuals named above, David Hayes (Assistant Director), Jeffrey Knott (Assistant Director), Angela Bell, Clayton Brisson, Mark Canter, Larry Crosland, Saar Dagani, Rebecca Eyler, Mickie Gray, Nicole Jarvis, Sharon Kittrell, George Kovachick, Sean Mays, Mark Reid, Eugene Stevens, and Michael Stevens made key contributions to this report.
The Internal Revenue Service (IRS) relies extensively on computerized systems to carry out its demanding responsibilities to collect taxes, process tax returns, and enforce the nation's tax laws. Effective information security controls are essential to protect financial and taxpayer information from inadvertent or deliberate misuse, improper disclosure, or destruction. As part of its audit of IRS's fiscal years 2009 and 2008 financial statements, GAO assessed (1) the status of IRS's actions to correct or mitigate previously reported information security weaknesses and (2) whether controls over key financial and tax processing systems are effective in ensuring the confidentiality, integrity, and availability of financial and sensitive taxpayer information. To do this, GAO examined IRS information security policies, plans, and procedures; tested controls over key financial applications; and interviewed key agency officials at six sites. IRS has continued to make progress during fiscal year 2009 in correcting previously reported information security weaknesses that GAO reported as unresolved at the conclusion of its fiscal year 2008 audit. Specifically, IRS has corrected or mitigated 28 of the 89 weaknesses and deficiencies--21 of 74 previously identified information security control weaknesses and 7 of 15 previously identified program deficiencies. For example, it has (1) changed vendor-supplied user accounts and passwords; (2) avoided storing clear-text passwords in scripts; (3) enhanced its policies and procedures for configuring mainframe operations; and (4) established an alternate processing site for its procurement system. While IRS has corrected 28 control weaknesses and program deficiencies, 61 of them--or about 69 percent--remain unresolved or unmitigated. For example, IRS continued to install patches in an untimely manner and used passwords that were not complex. 
In addition, IRS did not always verify that remedial actions were implemented or effectively mitigated the security weaknesses. According to IRS officials, they continued to address uncorrected weaknesses and, subsequent to GAO's site visits, had completed additional corrective actions on some of them. Despite these actions, newly identified and previously unresolved information security control weaknesses in key financial and tax processing systems continue to jeopardize the confidentiality, integrity, and availability of financial and sensitive taxpayer information. IRS did not consistently implement controls that were intended to prevent, limit, and detect unauthorized access to its systems and information. For example, IRS did not always (1) enforce strong password management for properly identifying and authenticating users; (2) authorize user access to permit only the access needed to perform job functions; (3) log and monitor security events on a key system; and (4) physically protect its computer resources. A key reason for these weaknesses is that IRS has not yet fully implemented its agencywide information security program to ensure that controls are appropriately designed and operating effectively. Although IRS has made important progress in developing and documenting its information security program, it did not, among other things, review risk assessments at least annually for certain systems or ensure that contractors received security awareness training. Until these control weaknesses and program deficiencies are corrected, the agency remains unnecessarily vulnerable to insider threats related to the unauthorized access to and disclosure, modification, or destruction of financial and taxpayer information, as well as the disruption of system operations and services. 
The new and unresolved weaknesses and deficiencies are the basis for GAO's determination that IRS had a material weakness in internal controls over financial reporting related to information security in fiscal year 2009.
All outpatient therapy providers are subject to Medicare part B payment and coverage rules. Payment amounts for each type of outpatient therapy service are based on the part B physician fee schedule. In 2000, Medicare paid approximately $2.1 billion for all outpatient therapy services, of which $87.1 million was paid to CORFs. To meet Medicare reimbursement requirements, outpatient therapy services must be: appropriate for the patient’s condition; expected to improve the patient’s condition; reasonable in amount, frequency, and duration; furnished by a skilled professional; provided with a physician available on call to furnish emergency medical services; and part of a written treatment program that is reviewed periodically by a physician. CMS relies on its claims administration contractors to monitor provider compliance with program requirements. Contractors regularly examine claims data to identify billing patterns by specific providers or for particular services that are substantially different from the norm. Claims submitted by these groups of providers—or for specific services—are then selected for additional scrutiny. Whether such reviews occur prior to payment (prepayment reviews) or after claims have been paid (postpayment reviews), the provider is generally required to submit patient records to support the medical necessity of the services billed. This routine oversight may lead to additional claim reviews or provider education about Medicare coverage or billing issues. With 567 facilities nationwide at the end of 2002, the CORF industry is relatively small. Although CORFs operated in 41 states at the end of 2002, the industry is highly concentrated in Florida, where 191 (one-third) of all Medicare-certified CORFs are located. By contrast, the state with the second largest number of CORFs at the end of 2002 was Texas, with 53 CORFs. 
The number of CORF facilities in Florida grew about 30 percent during 2002 and the industry is now largely composed of relatively new, for-profit providers. The CORF industry in Florida continued to grow in 2003, reaching 220 facilities by year’s end, of which 96 percent were for profit. The growth in Florida CORFs came after a period of substantial turnover among CORF owners (many closures and new entrants). From 1999 to 2002, Medicare payments to Florida CORFs rose substantially and far outpaced growth in the number of beneficiaries that used CORFs. The number of Medicare beneficiaries receiving services from CORFs grew 13 percent, increasing from 33,653 in 1999 to 38,024 in 2002. However, during the same time period, Medicare expenditures for services billed by CORFs rose significantly, with total payments increasing 61 percent, from $48.1 million to $77.4 million. Half of all Florida CORFs received an annual payment of $91,693 or more from Medicare in 1999; by 2002, the median annual payment more than doubled to $187,680. Although CORFs were added to the Medicare program to offer beneficiaries a wide range of nontherapy services at the same location where they receive therapy, most Florida CORFs do not provide these types of services. For those that do, only a small proportion of Medicare payments are accounted for by these services. In 2002, 98 percent of Medicare payments to Florida CORFs went to furnish physical and occupational therapy or speech-language pathology services. The mix of services reimbursed by Medicare was very different in 1999, when such therapy accounted for 68 percent of all payments, and the remainder paid for nontherapy services, such as pulmonary treatments and psychiatric care. In recent years, payments to Florida CORFs have increasingly shifted toward those made for patients with back and musculoskeletal conditions. 
Most notably, patients who presented with back disorders accounted for 16 percent of all Medicare payments to Florida CORFs in 1999 and 29 percent of payments in 2002. In addition, payments for treating patients diagnosed with soft tissue injuries increased from 8 percent of Florida CORF payments in 1999 to 24 percent in 2002. One diagnosis group for which there was a notable decrease in the proportion of Medicare payments was pulmonary disorders, which fell from 30 percent of all payments in 1999 to 2 percent in 2002. In 2002, most of the 191 CORFs in Florida were small, with the median CORF in the state treating 150 beneficiaries. CORFs accounted for 15 percent of all Florida Medicare beneficiaries who received outpatient therapy from facility-based providers that year, and 30 percent of Medicare’s payments for outpatient therapy services to Florida facility- based providers. In a few areas, however, CORFs represented a substantial share of the outpatient therapy market, particularly in south Florida. For example, CORFs were the predominant providers of outpatient therapy services in Miami, with 53 percent of all facility-based outpatient therapy patients, and treated 29 percent of patients who received outpatient therapy from facility-based providers in nearby Fort Lauderdale. In 2002, Medicare’s therapy payments per patient to Florida CORFs were several times higher than therapy payments made to other facility-based outpatient therapy providers in the state. This billing pattern was evident in each of the eight Florida MSAs that accounted for the majority of Medicare CORF facilities and patients. Differences in prior hospitalization diagnoses and patient demographic information did not explain the disparities in per-patient therapy payments. 
Our analysis of claims payment data showed that per-patient therapy payments to Florida CORFs were about twice as high as therapy payments to rehabilitation agencies and SNF outpatient departments, and more than 3 times higher than therapy payments to hospital outpatient departments. (See table 1.) Specifically, at $2,327 per patient, therapy payments for CORF patients were 3.1 times higher than the per-patient payment of $756 for those treated by outpatient hospital-based therapists. Higher therapy payments for Medicare patients treated at CORFs were largely due to the greater number of services that CORF patients received. As shown in table 1, on average, CORF patients received 108 units of therapy compared with 37 to 59 units of outpatient therapy, on average, at the other types of outpatient providers. Typically, a unit of therapy service represents about 15 minutes of treatment with a physical therapist, occupational therapist, or speech-language pathologist. The pattern of relatively high payments to CORFs was evident in all of the localities where CORFs were concentrated. In 8 of the 14 MSAs in Florida that had CORFs in 2002, CORF payments per patient were higher than payments to all other types of facility-based outpatient therapy providers. These MSAs together accounted for 86 percent of all Florida CORF beneficiaries and 90 percent of the state’s CORF facilities. In these localities, per-patient payments to CORFs ranged from 1.2 to 7.4 times higher than payments to the provider type with the next highest payment amount. For example, in Fort Lauderdale, the 2002 average CORF therapy payment was $2,900—more than twice the average payment of $1,249 made for beneficiaries treated by rehabilitation agencies. (See table 2.) 
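The multiples in the comparison above are simple quotients of the average per-patient payments; for example, using the figures from the text:

```python
# 2002 per-patient therapy payments, in dollars (from the text).
corf, hospital_opd = 2_327, 756
ratio = corf / hospital_opd
print(f"CORF vs. hospital OPD: {ratio:.1f}x")   # 3.1x

# Fort Lauderdale example: CORF vs. rehabilitation agency.
ft_laud_corf, ft_laud_rehab = 2_900, 1_249
print(ft_laud_corf / ft_laud_rehab > 2)         # more than twice
```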
Some factors that could account for differences in therapy payment amounts—patient diagnosis and indicators of patient health care needs— did not explain the higher payments that some Florida CORFs received compared with other types of facility-based outpatient therapy providers. We found that CORFs received higher per-patient therapy payments than other facility-based providers for patients in each of the four leading diagnosis categories treated at CORFs. For patients with neurologic disorders, arthritis, soft tissue injuries, and back disorders, payments to CORFs were 66 percent to 159 percent higher than payments to rehabilitation agencies and SNF OPDs and higher yet than payments to hospital OPDs. (See table 3.) Patients treated for back disorders made up the largest share of Florida CORF patients, at 25 percent. For patients with this diagnosis, average payments to CORFs—at $1,734—were twice as high as the average payment of $867 made to rehabilitation agencies—the next highest paid provider type. The higher therapy payments to CORFs were driven by the higher volume of therapy services that CORFs provided to their Medicare patients, compared with the volume of services other facility-based outpatient therapy providers furnished to patients in the same diagnosis group. As shown in table 4, for all four leading diagnosis categories, CORF Medicare patients received far more units of therapy, on average, than Medicare patients treated by other outpatient therapy providers. Differences across provider types were particularly pronounced for Medicare patients with arthritis. CORFs furnished an average of 100 units of therapy to beneficiaries treated for arthritis. In contrast, non-CORF outpatient therapy providers delivered an average of 33 to 53 units of therapy to Medicare arthritis patients. 
Differences in patient demographic characteristics and prior-year hospital diagnoses—factors that could indicate variation in patient health care needs—did not explain most of the wide disparities in therapy payments per patient across settings. When we considered differences in patient age, sex, disability, Medicaid enrollment, and 2001 inpatient hospital diagnoses across provider types, the data showed that patients served by CORFs could be expected to use slightly more health care services than patients treated by other facility-based therapy providers. However, we found that, after controlling for these patient differences, average payments for CORF patients remained 2 to 3 times greater than for those treated by other provider types. Consistent with this finding, therapy industry representatives we spoke with—including those representing CORFs—reported that, in the aggregate, CORF patients were not more clinically complex or in need of more extensive care than patients treated by other outpatient therapy providers. They told us that patients are referred to different types of outpatient therapy providers based on availability and convenience rather than on their relative care needs. One private consultant to CORFs and other outpatient provider groups noted that there are no criteria to identify and direct patients to a particular setting for outpatient care, and that physicians generally refer patients to therapy providers with whom they have a relationship. Despite the Florida contractor’s increased scrutiny of CORF claims, our analysis of Florida CORFs’ 2002 billing patterns suggests that some providers received inappropriate payments that year. In late 2001, after finding widespread billing irregularities among CORF claims, the Florida claims administration contractor implemented new strategies for reviewing claims that were maintained throughout 2002. 
Although these strategies were successful at ensuring appropriate claims payments for a limited number of beneficiaries, our analysis of 2002 CORF claims found that many CORFs continued to receive very high per-patient payments. In 2001, the Medicare claims administration contractor for Florida reviewed about 2,500 claims submitted by CORFs and other facility-based outpatient therapy providers for services provided from January 1999 through February 2001. Among these claims, the contractor found widespread billing for medically unnecessary therapy services—services aimed at maintaining, rather than improving, a patient’s functioning, contrary to Medicare reimbursement requirements for covering outpatient therapy. Reviews also found claims for the same beneficiary, made by more than one CORF, sometimes on the same day. Because each CORF is equipped to provide a patient’s full range of needed services, the contractor considered it unlikely that a patient would receive treatment from more than one CORF and investigated further. After interviewing a sample of beneficiaries treated by multiple CORFs, the contractor found that some of the facilities treating these beneficiaries had common owners. It reported that the common ownership was significant, suggesting efforts by the owners to distribute billings for a patient’s services across several providers. The contractor stated that this would allow the CORFs’ owners to avoid the scrutiny of the Medicare contractor, which typically screens claims aggregated by facility rather than by beneficiary. After conducting additional reviews of a sample of paid claims from these CORFs, it found that 82 percent of payments made were inappropriate, largely due to questions about medical necessity. As a result, the contractor required these CORFs to repay Medicare approximately $1 million and referred some of the CORFs to CMS and the HHS OIG for further investigation. 
In late 2001, the Florida claims administration contractor implemented additional claim review strategies targeting CORF claims. For any new CORF, the contractor began reviewing for medical necessity, prior to payment, about 30 of the first claims submitted. The contractor also began reviewing all therapy claims submitted on behalf of about 650 beneficiaries identified as having high levels of therapy use from multiple CORFs and other facility-based outpatient therapy providers during the 2001 investigation. CORFs and other providers submitting therapy claims for these beneficiaries had to supply documentation of medical necessity before claims were paid. The contractor also conducted prepayment reviews for specific therapy services determined to be at high risk for inappropriate payments, regardless of the beneficiary receiving services. The contractor maintained these intensified claim documentation and review requirements throughout 2002. The contractor indicated that the oversight measures put in place for specific beneficiaries were effective at improving the appropriateness of claims payments for therapy services made for those beneficiaries. Specifically, the contractor reported that Florida CORFs billed Medicare $12.1 million for this group in 2000, $10.2 million in 2001, and $7.3 million during 2002. In addition, the contractor denied an increasing percentage of the amount billed each year—46 percent in 2001, and 53 percent in 2002—based on its medical records reviews. While the contractor succeeded in ensuring that payments to CORFs for this limited group of beneficiaries met Medicare rules, our own analysis of CORF claims submitted in 2002 found several indications that billing irregularities continued. The indicators included a high rate of beneficiaries who received services from multiple CORFs, some CORFs that did not provide any therapy services, and many facilities with very high per-patient payments. 
Our analysis of 2002 Florida CORF claims by facility showed that the Florida claims administration contractor’s efforts to ensure appropriate CORF payments were not completely effective. We found that 11 percent of the beneficiaries who received CORF services in Florida were treated by more than one CORF facility during the year. While Medicare rules do not prohibit beneficiaries from receiving services from multiple providers in a single year, this occurs much more frequently among Florida CORFs than among CORFs in other states. Specifically, in the five other states with the greatest numbers of CORFs at the end of 2001 (Alabama, California, Kentucky, Pennsylvania, and Texas), fewer than 4 percent of beneficiaries received services from more than one CORF during 2002, and in most of these states, the rate was 1 percent or less. Although many CORFs treated a few patients who received services from multiple providers during 2002, a small group of Florida CORFs had very high rates of “shared” patients that year—suggesting that some CORFs may have continued to operate in the patterns first detected by the Florida contractor during its 2001 review. Of the CORFs operating in Florida in 2002, 32 facilities shared more than half of their patients with other CORF providers. At four CORFs, more than 75 percent of the beneficiaries were treated by multiple CORF providers during the year. Staff from the Florida contractor told us that these patterns of therapy use—receiving services from multiple providers during the same time period—complicate their ability to monitor appropriate use of therapy services. Contractor staff routinely analyze claims data to evaluate appropriate levels of service use and identify trends that may suggest excessive use. However, these analyses are normally conducted on claims data aggregated by CORF provider, not aggregated per beneficiary. 
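The beneficiary-level analysis described above—aggregating claims per beneficiary rather than per facility—can be sketched as follows. This is a simplified illustration, not the contractor's or GAO's actual code, and the identifiers are hypothetical:

```python
from collections import defaultdict

# Illustrative claim records: (beneficiary_id, corf_id). A real analysis
# would draw these fields from the NCH claims file.
claims = [("b1", "c1"), ("b1", "c2"), ("b2", "c1"),
          ("b3", "c1"), ("b3", "c2"), ("b4", "c2")]

# Aggregate per beneficiary: which distinct CORFs treated each one?
corfs_by_bene = defaultdict(set)
for bene, corf in claims:
    corfs_by_bene[bene].add(corf)
shared_benes = {b for b, corfs in corfs_by_bene.items() if len(corfs) > 1}

# Per-facility "shared patient" rate: the fraction of a CORF's patients
# who were also treated by at least one other CORF during the year.
patients_by_corf = defaultdict(set)
for bene, corf in claims:
    patients_by_corf[corf].add(bene)
shared_rate = {corf: len(patients & shared_benes) / len(patients)
               for corf, patients in patients_by_corf.items()}
```

The point of aggregating by beneficiary first is exactly the one the contractor staff made: a review that only totals claims per facility cannot see a patient whose high service use is spread across several providers.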
When beneficiaries receive outpatient therapy services from multiple providers, traditional methods of oversight are less likely to detect high levels of service use and payments. Our review of 2002 Florida claims data also showed that some CORFs were not complying with Medicare program rules about furnishing required services. Although CORFs are permitted to provide nontherapy services, they must be delivered as part of a beneficiary’s overall therapy plan of care. However, three Florida CORFs received payments exclusively for nontherapy services—such as pulmonary treatment and oxygen saturation tests—in 2002. Four additional providers billed Medicare primarily for nontherapy services, with therapy care accounting for less than 10 percent of their annual Medicare payments. In addition, we found that a number of the CORFs identified during the Florida contractor’s 2001 investigation continued to have very high average payments for all services provided in 2002. As shown in table 5, several of these facilities were among 21 CORFs with per-patient payments that exceeded the statewide CORF average by more than 50 percent. Among this group of high-cost facilities, the per-patient payment in 2002 ranged from $3,099 to $6,080, substantially above the average payment of $2,036 across all Florida CORFs. These relatively high 2002 payments suggested that Florida CORFs responded to the contractor’s targeted medical reviews selectively by reducing the services provided to the small number of patients whose claims were under scrutiny. Other patients, outside the scope of the contractor’s criteria for medical review, continued to receive high levels of services. The contractor continues to rely on the medical review criteria originally established in late 2001. However, contractor staff reported ongoing concerns about the extent to which CORFs bill for services that may not meet the program’s requirements for payment. 
In particular, they cited the practice of delivering therapy services over relatively long periods of time that only maintain, rather than improve, a patient’s functional status. Sizeable disparities between Medicare therapy payments per patient to Florida CORFs and other facility-based outpatient therapy providers in 2002—with no clear indication of differences in patient needs—raise questions about the appropriateness of CORF billing practices. After finding high rates of payment to CORFs for medically unnecessary therapy services, CMS’s claims administration contractor for Florida took steps to ensure appropriate claim payments for a small, targeted group of CORF patients. Although these steps had some success, billing irregularities continued among some CORFs, and many CORFs continued to receive relatively high payments the following year. This suggests that the contractor’s efforts were too limited in scope to be effective with all CORF providers. To ensure that Medicare only pays for medically necessary care as outlined in program rules, CMS should direct the Florida claims administration contractor to medically review a larger number of CORF claims. CMS officials reviewed a draft of this report and agreed with its findings. Specifically, the agency noted that “disproportionately high payments made to CORFs indicate a need for medical review of these providers.” The agency also pointed out that, given the high volume of claims submitted by providers, contractors must allocate their limited resources for medical review in such a way as to maximize returns. Furthermore, CMS stated that the Florida claims administration contractor is already taking appropriate steps to address concerns about CORF billing and is prepared to take additional steps if necessary. We recognize that contractors can achieve efficiencies by targeting their medical review activities at providers or services that place the Medicare trust funds at the greatest risk. 
However, the impact of medical review comes, in part, from the sentinel effect of consistently applying medical review to providers’ claims. Thus, while we support the contractor’s focus on new CORF providers, we continue to believe that enlarging the number of CORF claims reviewed would promote compliance with medical necessity requirements. Given that Florida CORFs continued to bill significantly more per beneficiary than other outpatient therapy providers even after the contractor took steps to examine some claims, compliance could be enhanced by aggressively addressing this vulnerability. CMS’s comments appear in appendix II. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its issue date. At that time, we will send copies of this report to the Administrator of CMS and to other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. We will also make copies available to others upon request. If you or your staff have any questions about this report, please call me at (312) 220-7600. Another contact and key contributors are listed in appendix III. In this report we (1) compared Medicare’s outpatient therapy payments to CORFs in 2002 with its payments that year to other facility-based outpatient therapy providers and (2) assessed the program’s effectiveness in ensuring that payments to CORFs complied with Medicare rules. As agreed with the requester’s staff, we limited the scope of our review to facility-based outpatient therapy providers and beneficiaries in Florida. Florida accounted for one-third of all CORF facilities at the end of 2002. Our primary data source was CMS’s National Claims History (NCH) 100% Nearline File. 
The NCH file contains all institutional and noninstitutional claims from the Common Working File (CWF)—the system that CMS uses to process and pay Medicare claims through its contractors across the country. We also reviewed data from CMS’s Medicare Provider of Service Files, which contain descriptive information on CORF facility characteristics, such as location, type of ownership, and the date of each provider’s initial program certification. Finally, we interviewed representatives of CMS’s central and regional offices, the Florida claims administration contractor, federal law enforcement agencies, and the therapy industry. To describe the Florida CORF industry and operations, we gathered Medicare claims data from CMS’s NCH File for the years 1999 through 2002. In addition to reviewing trends in total Medicare payments to CORFs, we examined changes in the patient case mix by identifying the primary diagnoses listed on claims for beneficiaries treated by CORFs. We also obtained descriptive information on CORFs’ characteristics from the Provider of Service Files for 1999 through 2003. This work was performed from May 2003 through July 2004 in accordance with generally accepted government auditing standards. In this analysis, we compared Medicare therapy payments to four types of facility-based outpatient therapy providers: CORFs, rehabilitation agencies, hospital OPDs, and SNF OPDs. Although CORFs are authorized to offer a wide range of services, we limited our comparison to a common set of therapy services: physical therapy services, occupational therapy services, and speech-language pathology services. To compare Medicare’s therapy payments to Florida CORFs with therapy payments to other types of facility-based outpatient rehabilitation therapy providers, we examined 2002 Medicare beneficiary claims data from the NCH File. 
We used the NCH file to identify all beneficiaries who resided in Florida and received outpatient therapy services from in-state providers during 2002. By limiting our review to beneficiaries who were enrolled in part B for all 12 months of the year, we excluded those in managed care and those with less than a full year of fee-for-service coverage. Using beneficiary identification numbers, we aggregated each beneficiary’s total outpatient therapy claims from all provider types. We summed the annual number of therapy units billed for each beneficiary as well as the annual line-item payment amounts. This allowed us to assign each beneficiary to a provider comparison group. To compare Medicare expenditures for similar patients, we assigned each beneficiary to a diagnosis category based on the primary diagnoses listed in their outpatient therapy claims for the year. Our diagnosis groups included stroke, spinal cord injury, neurologic disorders, hip fractures, back disorders, amputation, cardiovascular disorders—circulatory, cardiovascular disorders—pulmonary, rehabilitation for unspecified conditions, arthritis, soft tissue/musculoskeletal injuries, ortho-surgical, multiple diagnoses and other. To consider differences in payment by provider type at the substate level, we compared annual per-patient payments for CORFs and other outpatient facility providers in each of Florida’s 20 metropolitan statistical areas. Variation in treatment patterns and payments (for the same diagnosis category) across provider types may suggest that one type of provider treats a patient population with greater needs for service. To consider patient differences, we applied CMS’s Principal Inpatient Diagnostic Cost Group (PIP-DCG) model. 
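The aggregation step described above—summing each beneficiary's annual therapy units and line-item payments across claims, then assigning the beneficiary to a provider comparison group—might look like the following sketch. The field names and records are illustrative, not the actual NCH file layout:

```python
from collections import defaultdict

# Illustrative line items: (beneficiary_id, provider_type, units, payment).
line_items = [
    ("b1", "CORF", 12, 360.0),
    ("b1", "CORF", 8, 240.0),
    ("b2", "hospital OPD", 10, 250.0),
]

# Sum annual units and payments per beneficiary, tracking provider types seen.
totals = defaultdict(lambda: {"units": 0, "payment": 0.0, "providers": set()})
for bene, ptype, units, payment in line_items:
    t = totals[bene]
    t["units"] += units
    t["payment"] += payment
    t["providers"].add(ptype)

# Per-patient average payment by provider comparison group (beneficiaries
# seen by a single provider type; mixed users would need a separate group).
by_group = defaultdict(list)
for t in totals.values():
    if len(t["providers"]) == 1:
        by_group[next(iter(t["providers"]))].append(t["payment"])
avg_payment = {g: sum(p) / len(p) for g, p in by_group.items()}
```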
Using patients’ inpatient diagnoses and use of hospital services in the calendar year prior to the year they received therapy, along with demographic information such as age, sex, disability, and Medicaid enrollment, the PIP-DCG model allowed us to compare anticipated patient care needs across provider types. We used the PIP-DCG score developed for each beneficiary in combination with the 2002 therapy payment data to conduct an analysis of covariance. To review strategies used by the Florida claims administration contractor to ensure proper CORF payments, we interviewed representatives of CMS’s central and regional offices and representatives from the contractor. The contractor provided us with the results of its 2001 investigation of Florida CORFs and its subsequent reports on CORF billing patterns. In addition, we interviewed federal law enforcement agencies involved in investigations of Florida CORF facilities. To assess the effectiveness of the contractor’s oversight strategies, we reviewed information developed by the contractor on changes in CORF billing practices. We also analyzed 2002 claims data for CORF services to identify any CORFs with disproportionately high Medicare payments. This analysis included payment data for all claims—for both therapy and nontherapy services. In contrast to our comparison of per-patient payments by provider type, in this analysis we included all beneficiaries, regardless of their total annual therapy payments and duration of Medicare fee-for-service enrollment. We did not independently verify the reliability of CMS’s Medicare claims data. However, we determined that CMS’s Medicare claims data were sufficiently reliable for the purposes of this engagement. CMS operates a Quality Assurance System designed to ensure the accuracy of its Medicare NCH and CWF data files. 
Specifically, the agency has procedures in place to (1) ensure that files have been transmitted properly and completely, (2) check the functioning of contractor claims edits, and (3) sample claims from the files that exhibit unusual or inconsistent coding practices (indicating that data elements may be unreliable). In addition, we consulted with CMS’s technical staff as necessary to ensure the accuracy and relevance of the data elements used in our analysis. We also screened the files and excluded claims that were denied, claims superseded by an adjustment claim, and claims for services in other years. In addition to the contact named above, Jennifer Grover, Rich Lipinski, and Hannah Fein made key contributions to this report.
Comprehensive Outpatient Rehabilitation Facilities (CORF) are highly concentrated in Florida. These facilities, which provide physical therapy, occupational therapy, speech-language pathology services, and other related services, have been promoted as lucrative business opportunities for investors. Aware of such promotions, the Chairman, Senate Committee on Finance, raised concerns about whether Medicare could be vulnerable to overbilling for CORF services. In this report, focusing our review on Florida, we (1) compared Medicare’s outpatient therapy payments to CORFs in 2002 with its payments that year to other facility-based outpatient therapy providers and (2) assessed the program’s effectiveness in ensuring that payments to CORFs complied with Medicare rules. In Florida, CORFs were by far the most expensive type of outpatient therapy provider in the Medicare program in 2002. Per-patient payments to CORFs for therapy services were 2 to 3 times higher than payments to other types of facility-based therapy providers. Higher therapy payments were largely due to the higher volume of services—more visits or more intensive therapy per visit—delivered to CORF patients. This pattern of relatively high CORF payments was evident in each of the eight metropolitan statistical areas (MSA) of the state where nearly all Florida CORFs operated and the vast majority of CORF patients were treated. A consistent pattern of high payments and service levels was also evident for patients in each of the diagnosis categories most commonly treated by CORFs. Differences in patient characteristics—age, sex, disability, and prior inpatient hospitalization—did not explain the higher payments that Florida CORFs received compared to other types of outpatient therapy providers. Steps taken by Medicare’s claims administration contractor for Florida have not been sufficient to mitigate the risk of improper billing by CORFs. 
After examining state and national trends in payments to CORFs in 1999, the contractor increased its scrutiny of CORF claims to ensure that Medicare payments made to CORFs were appropriate. It found widespread billing irregularities in Florida CORF claims, including high rates of medically unnecessary therapy services. Since late 2001, the contractor has intensified its review of claims from new CORF providers and required medical documentation to support certain CORF services considered at high risk for billing errors. It has also required that supporting medical records documentation be submitted with all CORF claims for about 650 beneficiaries who had previously been identified as receiving medically unnecessary services. The contractor’s analysis of 2002 claims data for this limited group of beneficiaries suggests that, as a result of these oversight efforts, Florida CORFs billed Medicare for substantially fewer therapy services than in previous years. However, our analysis of all CORF therapy claims for that year indicates that the contractor’s program safeguards were not completely effective in controlling per-patient payments to CORFs statewide. With oversight focused on a small fraction of CORF patients, CORF facilities continued to provide high levels of services to beneficiaries whose claims were not targeted by the contractor’s intensified reviews.
PBGC was created as a self-financing, nonprofit, wholly owned government corporation under ERISA to protect the retirement income of workers with private-sector defined benefit plans—that is, plans that promise a set benefit amount upon retirement to vested employees typically based on a formula. PBGC administers two separate insurance programs for these pension plans: a single-employer program and a multiemployer program. The single-employer program is the larger of the two, and as of fiscal year 2010 covered about 34 million participants in just over 26,000 plans. The multiemployer program covered over 10 million participants in about 1,500 collectively bargained plans that are maintained by two or more unrelated employers. If the sponsor of a single-employer plan meets the statutory requirements for financial distress and the plan does not have sufficient assets to pay all promised (vested accrued) benefits that have become due, the plan will be terminated and PBGC will likely become the plan’s trustee, assuming responsibility for paying benefits to participants, up to certain limits specified in ERISA and related regulations. If the sponsors of a multiemployer pension plan are unable to pay all promised benefits that have become due, PBGC will provide financial assistance to the plan, usually a loan, so that retirees can receive the guaranteed portion of their benefits, but PBGC does not assume trusteeship of the plan. PBGC’s Director, who is appointed by the President and subject to Senate confirmation, is responsible for managing the agency’s daily operations. A three-member Board of Directors, consisting of the Secretaries of the Departments of Commerce, Labor, and the Treasury, is charged with providing policy direction and oversight of PBGC’s finances and operations. 
PBGC is self-financed through insurance premiums set by Congress and paid by companies that sponsor defined benefit plans, the assets of underfunded single-employer plans terminated and trusteed by PBGC, recoveries from companies formerly responsible for those plans, and the returns earned on the investment of these funds. Thus, PBGC’s primary responsibilities are to collect premiums from the sponsors of defined benefit plans, monitor the financial status of the plans it insures, assume administration of underfunded single-employer plans that terminate, calculate benefit amounts and make payments to participants in those plans when due, and manage the investment of plan assets under its control. In our 2001 high-risk update, we included PBGC in a list of examples of agencies that were facing human capital challenges, stating: “Because the agency did not adequately link its contracting decisions to long-term strategic planning, it may not have the cost-effective mix of contractor and federal employees needed to meet future workload challenges. Further, PBGC employees who monitor contractors lack adequate guidance and policies essential to monitoring contractor performance.” Subsequently, we designated PBGC’s single-employer program as a “high-risk” program in 2003 due to PBGC’s net deficit, as well as the continuing likelihood of future terminations of large, underfunded pension plans, and the program has remained on the list with each subsequent update. In 2009, we also designated the multiemployer program as high risk. Between fiscal years 2008 and 2010, the single-employer program’s deficit grew from $10.7 billion to nearly $22 billion, and the multiemployer program’s deficit grew from $473 million to just over $1.4 billion. We noted in our February 2011 high-risk report that PBGC’s current strategic planning does not adequately incorporate goals in several key management areas, including goals to determine the optimal mix of contract and federal workers. 
We reported that PBGC could take steps—such as including procurement decision making in its corporate-level strategic planning—to strengthen strategic management of its contractor workforce to better manage the challenges of its unstable financial condition and increasing workload. We have also issued a number of reports on ways to improve contracting practices by federal agencies governmentwide, and by PBGC in particular. For example, in 2005, we published Framework for Assessing the Acquisition Function at Federal Agencies in response to federal agencies’ increasing reliance on contractors to perform their missions and the systemic weaknesses identified in key areas of contracting by us, IGs, and other accountability organizations (see app. I). With respect to PBGC in particular, in September 2000, we reported on a variety of challenges facing its contracting activities, including that the agency did not adequately link decisions to contract for services to longer-term strategic planning considerations. We recommended that PBGC develop a strategic approach to contracting by conducting a review of its future human capital needs and linking contracting decisions to PBGC’s long-term strategic plan. More recently, in 2008, we reported that while PBGC had taken steps to improve its acquisition infrastructure, most of the agency’s contracts still lacked performance incentives and methods to hold contractors accountable. We recommended that PBGC revise its strategic plan to reflect the importance of contracting. PBGC is organized into different program and administrative departments that are responsible for different aspects of its pension plan insurance programs, including the termination of defined benefit plans and administration of plan benefits, and other internal functions such as legal services, financial operations, and procurement. 
Four program departments account for most contract expenditures at PBGC—the Benefits Administration and Payment Department (BAPD), the Corporate Investment Department (CID), the Department of Insurance Supervision and Compliance (DISC), and the Office of Information Technology (OIT) (see fig. 1). Other departments provide support services for the program departments. For example, PBGC’s Procurement Department manages all contract award activities for the agency. Only contracting officers in the Procurement Department may sign and award contracts on behalf of PBGC. The program departments develop requests for the purchase of goods and services needed to accomplish their objectives, which they submit to the Procurement Department to initiate the contracting process. In addition, PBGC’s Budget and Organization Performance Department (Budget Department) manages the formulation and execution of the PBGC budget and establishes and implements policies, regulations, and guidelines related to organizational performance. According to the Budget Director, when PBGC departments submit their annual budget requests, the Budget Department identifies any changes in contract service requirements and workforce requests to inform its budget recommendation to PBGC executive management. PBGC’s workload has increased in the last 20 years as the cumulative number of plans terminated and trusteed, number of participants eligible for or receiving benefits in those plans, and amount and complexity of plan assets taken over by PBGC have grown. As of fiscal year 2010, PBGC had terminated and trusteed a total of 4,150 underfunded pension plans (see fig. 2). Following the economic downturn, during the combined 2009 and 2010 fiscal years, a total of 301 underfunded single-employer plans were trusteed by PBGC. By comparison, during the combined 2007 and 2008 fiscal years, only 189 underfunded single-employer plans were trusteed. 
In addition, during fiscal years 2009 and 2010, PBGC provided assistance to 93 multiemployer pension plans, up from 78 plans during the prior 2 years. As a result of the increase in terminated plans, the number of participants in plans terminated and trusteed by PBGC has also grown over the last decade (see fig. 3). By fiscal year 2010, PBGC paid or owed benefits to nearly 1.5 million total participants in 4,150 trusteed plans. During fiscal years 2009 and 2010 alone, PBGC became responsible for the retirement benefits of an additional 300,000 pension plan participants when their underfunded plans were terminated and trusteed. The increase in terminated plans has also contributed to the amount of assets PBGC manages. Total assets managed by PBGC have grown from less than $22 billion in fiscal year 2000 to nearly $80 billion in fiscal year 2010 (see fig. 4). In recent years, managing PBGC’s investment portfolio has become more challenging, as the portfolio now includes complex financial instruments and requires the oversight of additional managers. PBGC has come to rely heavily on contracting to conduct much of the core work of the agency, yet PBGC does not make decisions to use contractors in accordance with an agency-wide strategic plan or focus. Rather, individual departments develop their own rationales for contracting, largely based on historical practice rather than on an assessment that using contractors would be more advantageous than using federal employees to conduct its work. PBGC’s long-term, extensive reliance on contractors raises concern that the agency may be eroding its in-house expertise and management control over core functional areas. In 2005, we issued a framework for assessing federal agencies’ contracting activities that identified four cornerstones to promote an efficient, effective, and accountable acquisition function. 
One of these cornerstones was “organizational alignment and leadership,” with “organizational alignment” defined as appropriately placing the acquisition function within the agency, having clearly defined roles and responsibilities for stakeholders, aligning contracting with the agency’s mission and needs, and organizing the contracting function. Our 2008 report on PBGC’s contracting activities found that PBGC was falling short in this area. The agency had not involved its Procurement Department in helping the agency make strategic decisions about contracting early in the process or in developing long-term strategic approaches, thus leaving the agency less able to effectively identify, analyze, prioritize, and coordinate agency-wide acquisition needs. We recommended that PBGC take several steps to better incorporate contracting into its strategic planning. In our work for this report, we found that while PBGC has taken certain steps to improve its acquisition infrastructure, such as adding staff to the Procurement Department to help manage and monitor contract awards and developing staff training, the agency has not fully integrated its contracting function at the corporate level. Instead, PBGC has continued to leave contracting decisions to the agency’s individual program departments. In our 2008 report, we recommended that one way PBGC could incorporate contracting in its strategic planning would be to include the Procurement Department in agency-wide strategic planning and ensure that the Procurement Director sits on PBGC’s three strategic teams. The teams, now called “governing bodies,” are known as the Executive Management Committee, the Budget and Planning Integration Team (BPIT), and the Information Technology Investment Review Board. 
These bodies, respectively, review corporate-wide programs, projects, and internal policies; approve corporate-wide resource allocations and align resources to the agency’s strategic objectives; and review information technology investments to assure alignment with strategic objectives. In response to our 2008 recommendation, PBGC maintained that the Procurement Director need not be a member of the three bodies to be effective, as its Chief Management Officer (to whom the Procurement Director reports) represents contracting on these teams. Despite including the Chief Management Officer in these corporate-wide meetings, however, corporate-level strategic planning regarding contracting remains limited. Without some way of better integrating contract decision making into the corporate-level strategic planning process and aligning contract activities with the agency’s mission and goals along the lines outlined in the four cornerstones, the program departments remain responsible for contracting decisions without meaningful top-level management involvement to identify, analyze, prioritize, and coordinate agency-wide contracting needs. In addition, the agency has provided guidance to its departments about how contracting decisions should be made, but not how to link such decisions to agency-wide strategic planning. An August 2009 policy memo from the Chief Management Officer to PBGC managers discussed whether to use contractors or government employees for services. This memo provided a list of factors for departments to consider when deciding whether or not to use contractors, but only one of these factors called on departments to evaluate the appropriateness of using contractors. Specifically, departments are required to ensure that the contractors would not be performing duties that could be considered “inherently governmental functions,” reflecting a requirement contained in the FAR. 
All other factors addressed the limitations on using federal employees, such as lack of expertise and full-time equivalents (FTE). Moreover, the memo did not include any requirements to ensure decisions to use contractors are linked to agency-wide strategic planning. PBGC also has an agency-wide strategic plan and a human capital strategic plan, but neither of these plans discusses the division of labor between federal employees and contractors, or how to determine the optimal mix of each type of worker. For example, PBGC’s Strategic Plan, FY 2011-2016, describes human capital management under its goal of effective and efficient stewardship of agency resources, but does not reflect the important role contracting plays in achieving the agency’s mission. Similarly, PBGC’s Human Capital Strategic Plan, FY 2010-2014, acknowledges the importance of contracting and the challenges of balancing the workforce between federal and contract workers, but it does not describe how it plans to achieve that balance; rather, it focuses primarily on recruiting, knowledge retention, and succession planning for PBGC’s federal employees. The plan stated that a strategic focus on human capital requires, among other things, a balanced workforce, succession plans for potential workforce gaps, and an evaluation of maintaining a significant number of contractor workers versus converting those positions to permanent staff. The plan noted that “the gaps in tenure and the heavy use of contracting staff present unique human capital planning challenges in sustaining critical organizational knowledge.” However, the plan did not outline a strategic approach to retaining organizational knowledge, address an optimal mix of federal versus contract workers, or provide specifics about when and how the evaluation would be accomplished. 
As of May 2011, the proposed evaluation of the potential for converting positions from contractors to federal employees is under review by the Executive Management Committee. The newly hired Chief Management Officer indicated that she plans to study various options regarding the appropriate mix of contractor and federal staff necessary to accomplish the agency mission, but this study was not yet under way. Findings from a recent IG report highlight the need for further action to incorporate contracting decision making into the agency’s strategic planning process. In November 2010, PBGC’s IG found the agency’s strategic planning for workload surges to be inadequate, as it did not reflect, among other things, the importance of contractors, even though PBGC had concluded informally that it would handle such a surge mostly by expanding the contract workforce. The IG recommended that the agency develop a workforce strategy tailored to address gaps in numbers, deployment, and placement of the workers to be obtained through contracts. The IG also recommended that the workforce strategy should reflect the importance of the contract workforce to PBGC and support linkage of staffing and contracting decisions at the corporate level with an expanded coordinating role for the Procurement Department. In response, PBGC management noted the risk of a large influx of pension plans had decreased from early 2009 levels, and, therefore, as an alternative, proposed modifying an existing work group to plan for workload surges that involve more than just large cases. However, the IG has continued to express concerns to PBGC management that it is unclear how the agency would implement the proposed alternative, and noted that as of February 2011, the agency still had not committed to specific preplanned solutions for workload-surge events. 
The four PBGC program departments we reviewed decide individually, subject to annual budget approval, whether to accomplish their work through contractors or federal employees, and their rationales for deciding to use contractors vary (see table 1). In accordance with PBGC’s policy memo about how to decide whether to use contractors or government employees for services, managers are to consider various prescribed factors and submit their documented decisions annually to the Procurement Department. One decision factor included in this memo is for departments to consider whether the service can be provided more cost effectively by federal employees than by contractors, referring to OMB guidance for estimating the costs. However, department officials told us they do not routinely conduct an evaluation of the costs and benefits of performing work through contractors when making contracting decisions. Officials from all four of the program departments we reviewed cited the agency’s historical practice of using contractors to accomplish certain types of work among their primary reasons for using contractors. Officials often also cited the need to manage workload fluctuations more efficiently, as well as a lack of needed expertise among federal employees. Officials in all four main program departments we interviewed included historical practice as one of the primary reasons they use contractors. According to agency officials, even though PBGC requires its departments to evaluate the costs and benefits of continuing to perform the work through contractors each time a contract expires, some types of work have been performed by contractors for 10 years or more with no assessment as to whether the use of contractors is the most economical and effective way of getting the work done. For example, BAPD has awarded contracts for its work administering plans through its field offices since 1978, shortly after the agency was established. 
When PBGC first began taking over large, underfunded pension plans, it awarded contracts to either companies that sponsored a plan or former employees of those companies to continue administering the plan, including determining benefits and processing payments for plan participants. This practice allowed PBGC to take advantage of these workers’ familiarity with the pension plans being terminated and geographic proximity to the affected participants in those plans. In 1981, PBGC expanded the role of such contracts to cover pension plans of other sponsors and established a small number of field offices under contract. Over time, the link to plan expertise and geographic proximity has essentially disappeared. Now, plans are assigned to a field office based on the size of a plan and the capacity of an individual office to take on a plan of that particular size, with prior expertise in a particular industry and geography only occasionally entering into the decision. In response to recommendations from our 2000 report, PBGC commissioned the National Academy of Public Administration to study PBGC’s future workforce needs, with the goal of using the study’s results to better link staffing and contracting decisions to PBGC’s long-term strategic planning process. This study found that, prior to 2001, PBGC had conducted some cost-benefit analyses on the use of contractors that found it was more economical to obtain services through contractors than to hire federal employees for its field offices. However, PBGC officials interviewed for our current study could not identify any more recent analyses that compared the benefits and costs of contractors versus federal employees performing work in the field offices, or for any other contracts we examined. 
Although the need to manage workload fluctuation and plan complexity is often cited by PBGC officials as the key reason for needing to use contractors, we found that agency-wide, the number and percentage of contractor staff appear to exceed the amount needed to address such fluctuations. PBGC has used contractors to perform a substantial portion of its core tasks beyond the numbers of workers needed to address marginal increases and decreases in key workload indicators, including the number of participants, number of plans, and assets (see fig. 5). Among the four main program departments we reviewed, BAPD, DISC, and CID officials all cited the need to manage workload fluctuations as the primary reason they use contractors to supplement their departments’ federal workforce. Officials particularly emphasized the speed with which the contractor workforce can be reduced at times of lower workload. In determining the need for contractors, BAPD and DISC officials said they consider both incoming work and work in process in their decisions. For example, BAPD’s workload is tied to the number of plans being trusteed and the number of participants in those plans. In DISC, the workload is tied to the number of troubled pension plans potentially requiring termination and trusteeship by PBGC. In CID, the workload is tied to the volume and complexity of assets from plans that PBGC has terminated and trusteed, as well as a number of other factors. While it may be true that the number of contractor workers can more easily rise and fall with the workload than if all PBGC workers were federal employees, it appears that the extent of contracting is greater than the amount needed to respond to such fluctuation. 
For example, in the case of BAPD, between fiscal years 2005 and 2010, the number of terminated plans increased steadily, between 2 and 4 percent each year, and the number of participants in newly terminated plans increased between 2 and 20 percent of total cumulative participants. Meanwhile, over that same period, the proportion of contractor workers ranged between 72 and 80 percent of BAPD’s total workforce (see fig. 6). Officials from OIT, CID, and DISC also cited the need to acquire specific expertise as a primary reason for contracting. Officials from these departments said contractors are used particularly for certain types of IT, actuarial, legal, and investment work. For example, a senior OIT official told us it is general agency practice to have software design and development work done by contractors, as it is difficult to keep skilled software engineers on staff as government employees. In fiscal year 2010, OIT had 337 contractor workers providing software engineering services. Similarly, CID officials said they lack the in-house expertise to manage investment funds due to the complexity of investment instruments. CID officials also told us that since PBGC’s inception, the agency has required the use of outside investment managers under contract to invest PBGC assets as a safeguard to prevent government employees from affecting private companies and the market, consistent with a 1977 OMB memorandum, and this has been reaffirmed by the PBGC Board of Directors and investment policy statements ever since. CID officials stated that, as a government corporation, PBGC may not have to comply with the OMB memo, but that its principles—including the use of outside investment managers—have been the foundation of PBGC’s investment management approach for the agency’s entire history. 
Beyond the rationales discussed above, officials across the four departments we reviewed noted that contracting is necessary, to some extent, because there are simply not enough federal employees to perform the work, mostly due to a cap on the number of federal employees that PBGC can have at any one time. The cap is established each year as the result of PBGC’s annual budget process where the agency requests from OMB a specific number of FTE staff. PBGC is then allotted a certain number of FTEs, which are then assigned to each department based on program needs. According to a PBGC Budget Department official, the agency assigns FTEs to each department by program activity, then the departments decide how to fill FTEs based on the extent to which activities are comprised of inherently or noninherently governmental functions, and the ability of their existing federal staff to perform the work. The official told us that, as of October 2010, the number of on-board FTEs—the actual number of positions filled by federal employees—was above the agency’s FTE allotment for fiscal year 2010 (955 filled versus 941 allotted). Another temporary disincentive to hiring federal employees mentioned by some departmental officials was the imposition of additional review procedures on PBGC’s hiring process during a 12-month period between June 2008 and June 2009. The Office of Personnel Management (OPM) imposed these procedures after an evaluation found severe deficiencies in PBGC’s competitive recruitment process. After PBGC officials worked with OPM staff to review all phases of the process, including auditing selection certificates before extending job offers to candidates, the additional review procedures were lifted. Responding to the available allotment of FTEs and the lifting of OPM’s added review procedures, over the past 2 years, some departments have taken action to hire more federal employees to assume work previously performed by contractors. 
For example, BAPD officials told us that since fiscal year 2009, BAPD had taken steps to add new FTEs, hiring 12 new federal employees during this period. In addition, between fiscal years 2008 and 2010, BAPD requested a conversion of contractor dollars for 20 FTEs. BAPD officials told us that this shift helped fill the need for additional in-house knowledge of contracts in one of its divisions. In addition, DISC officials told us that federal employees could carry out the tasks performed by a particular actuarial contractor if sufficient FTEs were available to the department, but the department had not requested any new FTEs in fiscal years 2009 or 2010. PBGC’s history of heavy reliance on contractors to carry out its operations, achieve its goals, and meet its agency mission goes back to the mid-1980s. At that time, when faced with a significant influx of large pension plan failures, PBGC chose to award contracts for services rather than seek additional federal employees. Over time, PBGC continued contracting to address its expanding workload and quickly obtain necessary services. As of fiscal year 2010, nearly 80 percent of its total budget was spent on contracts (see fig. 7). Over time, such heavy reliance on contractors may be placing PBGC at risk of diminishing management control over contract activities and decreasing the level of expertise among its federal employees. In 2006, we convened a forum of government, industry, and academic participants to discuss federal acquisition challenges and opportunities. Subsequently, in 2007, the congressionally mandated Acquisition Advisory Panel issued its report based on its review of laws, regulations, and governmentwide acquisition policies regarding various aspects of contracting. Both of these groups noted how an increasing reliance on contractors to perform services for core government activities challenges the capacity of federal officials to supervise and evaluate the performance of these activities. 
In addition, some of our previous reports on contracting across various federal agencies—including the Department of Homeland Security and the Department of Defense—advised that long-term extensive reliance on contractors can diminish management control over contract activities. In guidance to agencies about how to better manage a workforce comprised of both contractors and federal employees, OMB also noted that agencies often lack adequate information on how contractors are deployed throughout their organizations, and that as a result, agencies risk underutilizing the full potential of their total workforce—both contract and federal. In light of PBGC’s extensive reliance on contractors, the agency may be at risk for the same types of problems mentioned earlier, particularly a lack of adequate management control and contract oversight—problems which could impede PBGC’s ability to manage increasing workloads, contractor costs, and program outcomes. As illustrated in figure 7, almost three-fourths of PBGC’s fiscal year 2010 budget was allocated to contracting. Moreover, our analysis of PBGC’s workforce data shows that based on the number of contractor workers being monitored, nearly two-thirds of its fiscal year 2010 workforce was comprised of contractor workers. But the actual total of contractor workers performing services is even higher, as these data do not include contractor workers providing services to the agency under some types of contracts. For example, in CID, asset management services are provided to PBGC under fixed price contracts by a team of investment managers supported by workers in numerous functional areas within the firms that were awarded contracts. The contracts for these services apply a percentage to the investment portfolio value to determine fees paid to the contractor—the contractor is not required to provide the staffing hours or number of staff tasked to support deliverables. 
Additionally, PBGC’s extensive reliance on its contractor workforce may be placing the agency at risk of not building institutional knowledge among its federal workforce in those areas in which the agency has come to rely on contractors. This is of greatest concern for work that is central to the mission of the agency—work that, if contractors are relied on too extensively, could result in the agency essentially ceding its core functions to its contractor workforce. Without taking action to address the potential effects of its extensive reliance on contracting, PBGC risks being unprepared to meet future workload changes amid ever-increasing financial liabilities. The agency’s contractor workforce performs an array of services, including core functions such as processing terminations of defined benefit plans, providing actuarial services, managing asset investments, and conducting IT-related activities. Although PBGC has not acknowledged or taken steps to address the potential risks of eroding expertise at an agency-wide level, such risks have been noted at the department level. For example, OIT officials told us that in fiscal year 2007, the department conducted a risk assessment that identified certain deficiencies, including a competency gap between contractor and federal employees. To build and retain institutional knowledge and expertise, and to provide better guidance to and oversight of contractors, OIT made a concerted effort to shift some funds from contractors to hiring additional federal employees. As a result, between fiscal years 2007 and 2010, OIT increased the number of federal employees from 84 to 104, and decreased its contractor workforce from 390 to 360 contractor workers. However, other departments we interviewed had not conducted similar risk assessments or identified similar concerns, nor has a risk assessment been conducted on an agency-wide basis. 
Also, PBGC has not undertaken an analysis at an agency-wide level to better understand how services being performed under contract are supporting its mission and operations and whether contractor workforce skills are being used in an appropriate manner in coordination with the skills of federal employees. At the department level, some units within PBGC have examined the costs of using contractors to provide certain functions compared with federal employees. As a result, PBGC officials told us that between fiscal years 2007 and 2010, nine departments submitted requests to convert contract dollars for hiring 102 federal employees to do the same work. OMB approved 73 of these new FTEs during this period, due in part to PBGC’s lag in bringing new employees on board. However, without accurate information on the type and extent of work being performed by contractors at an agency-wide level, including how contract work is being distributed by function and location across the entire agency, PBGC risks diminishing its management control over the contracting decision-making process. In 2009, Congress enacted a new federal provision requiring most agencies to give greater consideration to using federal employees to perform functions currently performed by contractors (referred to as “insourcing”). OMB subsequently issued guidance on how to manage decisions to contract and help mitigate the effects of extensive reliance on contracting. Steps outlined in this guidance included (1) developing more strategic acquisition strategies, (2) conducting a pilot human capital analysis of one program where the agency had concerns about the extent of reliance on contractors, and (3) conducting a service contract inventory to allow better understanding of how contracted resources are distributed and to identify contracts that may involve inherently governmental functions. 
As noted in a November 2010 OMB memo, the inventory is a tool for assisting an agency in better understanding how services awarded under contract are being used to support its mission and operations, and whether the contractors’ skills are being utilized in an appropriate manner. An agency manager can gain insight into where, and the extent to which, contractors are being used to perform activities by analyzing how contractor resources are distributed by function and location across the agency and within its components. This insight is especially important for components with contracts whose performance may involve critical functions or core work that is closely associated with inherently governmental functions. Moreover, while the fiscal year 2010 inventories conducted by federal agencies were not required to include the number of contractor workers or the role the services play in achieving agency objectives, such information is required for the fiscal year 2011 inventories. As part of the 2011 inventory process, covered agencies are required to determine if contractors are being used in an appropriate and effective manner and if the mix of federal employees and contractors in the agency is effectively balanced, with priority consideration given to professional and management services and IT support services. As a government corporation, PBGC is not subject to the new insourcing requirements and is not required to comply with OMB’s guidance on conducting a service contract inventory. Nevertheless, conducting such an inventory, as outlined in the guidance, could offer PBGC a useful tool for enhancing the agency’s contracting performance by strengthening its management controls and building institutional knowledge, which is essential to identifying and mitigating the effects and potential risks of its extensive reliance on contracting. 
With respect to the service contract inventories in particular, OMB has noted that when used as part of a balanced workforce analysis, such inventories can help identify whether an agency has an overreliance on contracting in certain areas that would require increased contract management or rebalancing to ensure the agency is effectively managing risks and obtaining the best results for the taxpayer. Over the past 2 years, PBGC has adopted several new tools and practices to strengthen its contracting process, including developing a comprehensive procurement standard operating procedures manual and various reporting tools to help managers and staff make well-informed acquisition decisions and to improve contract oversight. In addition, PBGC has increased its use of competitively awarded contracts and fixed price contracts. In our view, competitive awards and fixed price contracts have been shown to improve the contracting process by limiting the cost and performance risk assumed by the government. Over the past decade, both we and PBGC’s IG have made a number of recommendations to strengthen the agency’s contracting practices. In our 2000 report, we identified underlying management weaknesses regarding PBGC’s overall approach to selecting and managing contractors, as well as day-to-day contract administration activities, and we recommended that PBGC take action to address specific operational and procedural weaknesses identified in our review. In our 2008 report, we found that while PBGC had made efforts to improve its acquisition infrastructure, it had not developed a strategic approach to its contracting processes as envisioned in our 2000 report, and we recommended that PBGC improve its contract management and develop practices to help ensure the accountability of Procurement Department staff. 
Since 2008, in response to these recommendations as well as various recommendations from its IG, PBGC’s Procurement Department has made several structural changes, and has adopted new tools and practices to strengthen its contract award and oversight processes. In 2009, the Procurement Department was reorganized into separate divisions: an Acquisition Division responsible for the awarding of contracts for goods and services and a Policy and Contract Administration Division responsible for the management of awarded contracts. The Acquisition Division is charged with ensuring the integrity of the pre-award contracting process, which includes acquisition planning, proposal evaluation, and contract award. The Policy and Contract Administration Division is charged with ensuring that all aspects of the contract are fulfilled after award, including oversight of the contractor’s performance, contract modifications, proper payment of contractor billings, and contract termination or closeout when work is completed. One official told us that before the reorganization, a single person would be responsible for both the pre-award and postaward activities on each contract, and that postaward activity often received less attention as a result. PBGC officials told us the level of contract oversight being provided by PBGC staff has improved now that administrative contracting officers are focusing exclusively on postaward activity and have dispensed with other duties. However, PBGC officials were unable to provide us with any measurements or quantitative evidence of this improvement. In addition, the Procurement Department secured approximately $1.8 million in the agency’s fiscal year 2009 and 2010 budgets for the hiring of additional procurement staff and to make awards to several support contractors. Procurement Department staffing has increased from 14 in February 2008 to 17 as of March 2011, with 1 additional position still being recruited. 
Budgetary resources were also provided for contractor support to assist with completing contract closeout work, reviewing postaward contract files, and operating the agency’s contract writing system. Funding has also been provided to hire a contractor to conduct a capital asset study that may be used to support a future funding request for a new contract writing system or the resources to make improvements to the existing one. There are several steps for awarding contracts at PBGC that can be categorized in terms of four key stages: acquisition planning, proposal solicitation, proposal evaluation, and contract award (see fig. 8). The basic structure of PBGC’s contracting process is based on the FAR, PBGC’s own regulations, and guidance from OMB. As a government corporation with unique responsibilities, PBGC is not required to comply with many of the laws, federal regulations, policies, and procedures that may apply to other federal agencies. For example, while PBGC’s contracting activities for certain functions, such as those related to its role as insurer of defined benefit plans, may be subject to the FAR, PBGC contracting activities related to its role as trustee of terminated plans are not bound by the FAR. As a matter of policy, however, PBGC has decided to abide voluntarily by the FAR in procuring all goods and services. Since 2008, the Procurement Department has developed several new tools and practices designed to improve PBGC’s contracting process by fostering a closer working relationship with other PBGC departments. These new measures include requiring departments to submit advance procurement planning documents with realistic contract award milestones, share information on the progress of contract awards, provide estimates of acquisition planning needed for future contract awards, and adhere to a new standard operating procedures (SOP) manual for procurement so that contracting and agency staff carry out their responsibilities correctly (see table 2). 
PBGC staff have reacted positively to these new measures. For example, several contracting officers told us that requiring advance procurement plans to include realistic contracting process milestones was helpful and provided adequate lead time for the contract awards. The IG found the new SOP to be a useful “first step” toward improving procurement effectiveness, but maintained that PBGC leadership needs to develop ways to measure compliance with the new procedures and make corrections or adjustments. Also, the Procurement Director told us that before the Procurement Status chart was in place, the program departments complained they had little insight into how their contracting needs were being supported or whether the expected contract award dates would be met. Together, the Procurement Status and Expiring Contracts charts function as part of an integrated procurement data system that provides information to inform acquisition decisions and management. In our review of the eight contract files, we found that those contracts and related task orders awarded after the updated December 2009 SOP was issued showed a pattern of better compliance with documentation requirements and other internal controls and procedures compared with contract awards made before the updated SOP (see table 3). In response to the IG’s and our recommendations, the Procurement Department has also adopted several new tools and practices to strengthen contract oversight, including issuance of a new directive in December 2010 establishing uniform policies and procedures for the selection, appointment, training, and oversight of contracting officer technical representatives (COTR) (see table 4). A COTR at PBGC is the person the contracting officer relies upon to monitor a contractor’s work, ensuring it meets all contract requirements before approving the payment of contractor billings. 
Before issuance of the December 2010 COTR directive, PBGC had no specific policy for contracting officers to ensure COTRs performed their responsibilities and sufficiently documented their actions in a COTR file. Some contracting officers told us this resulted in inconsistent documentation and minimal reviews of COTR files, leaving questions about whether the COTR was assuring that all contract requirements were met. One contracting officer explained that before the COTR directive, COTR status reports were submitted only on an ad hoc basis. Another contracting officer said the new directive was already having a positive effect because it had resulted in more communication between procurement staff and COTRs. PBGC also has established more rigorous certification and training requirements for its COTRs. The December 2010 COTR directive requires COTRs to be properly certified at the time of appointment, or within 6 months if a waiver is granted. Certification requires 40 hours of relevant training from a structured program that meets OMB requirements for a newly appointed COTR, plus a minimum of 40 additional hours of job-related continuous learning every 2 years. Previously, the requirement at PBGC was completion of COTR refresher training every 3 years. Since June 2009, the Procurement Department has placed a greater emphasis on COTR training by sponsoring “Acquisition Excellence” workshops, covering such topics as the new COTR directive, acquisition planning, and use of the contractor performance reporting system. The Procurement Director told us that the Procurement Department has a COTR nomination process in place to determine whether individuals have completed the required COTR training before being appointed, and that the COTRs’ ongoing training is monitored through the annual COTR file review process, which is conducted by a contractor. 
Our evaluation of the COTR file review documents provided by the Procurement Director showed that the contractor had completed 54 COTR file reviews as of May 2011, identifying 47 instances of inadequate documentation of the COTR’s certification or of completion of the continuous learning needed to maintain this certification. In these instances, the Procurement Department sent written notification of these deficiencies to the COTRs, their immediate supervisors, and the contracting officers. The letter indicated that immediate corrective action was required to meet training and certification requirements and that the COTR was to notify the contracting officer within 30 days of the action taken to address any deficiency. New requirements to strengthen postaward contract oversight of contractor staff qualifications have also been adopted in response to a PBGC IG recommendation. In a September 2009 report, the IG recommended PBGC implement controls and procedures to ensure that required experience is verified and documented in personnel files for all contractor workers prior to their assignment to a PBGC contract. In response, PBGC has added an “Education and Experience Qualifications” clause to all contracts specifying contractor personnel qualifications in terms of education and/or experience, and has added to the COTR appointment letter a requirement for the COTR to review compliance with this new contract clause. Three of the eight contract files we reviewed were labor hour contracts where contractors must meet specific qualifications, and we found that, for all three, the COTRs were conducting the compliance reviews as required in their COTR appointment letters. However, in one case, we found the “Education and Experience Qualifications” clause missing from the contract. 
When we brought this to the Procurement Department’s attention, officials acknowledged this had been omitted in error and told us the required clause would be added to the contract in a future modification. In addition, the IG found in a December 2007 report that there was no formal system for measuring the COTR’s performance of contract monitoring duties and recommended PBGC officials collaborate on developing a COTR performance goal and objectives. In response, PBGC’s Procurement and Human Resources Departments worked together to develop new employee performance standards to more clearly establish the COTR’s responsibilities associated with effectively managing PBGC contracts. Beginning in fiscal year 2011, PBGC is requiring all staff who have been assigned COTR duties to have these performance standards added to their performance evaluations. When determining how to acquire needed goods and services, federal agencies—including PBGC—must determine whether it is appropriate to use competitive or noncompetitive procedures to award contracts, and the type of pricing arrangement, such as fixed price or cost reimbursement. These decisions are the principal means that PBGC has for allocating cost and performance risk between the agency and its contractors. With respect to various agencies’ contracting governmentwide, we have reported that awarding contracts without the benefits of competition, or with contract types chosen without adequately considering the risks involved, is an unsound procurement and management practice. Conversely, the use of sound procurement methods improves the integrity of the contracting process. Contracting officers are required, with limited exceptions, to utilize full and open competition in soliciting offers and awarding federal government contracts. 
Competitive procedures for awarding contracts call for the issuance of a solicitation or request for proposals, the receipt of competing proposals, and the subsequent evaluation of these proposals against evaluation factors stated in the solicitation to be used as the basis for the award decision. In contrast, a noncompetitive contract award is made without permitting all prospective firms to submit competing proposals, generally under an exception to full and open competition allowed by the FAR. Use of competitive contracting procedures thus encourages firms to offer their best proposals when competing for work in response to a solicitation issued by PBGC, thereby leveling the playing field for competitors and potentially reducing costs and protecting the interests of the agency. In our analysis of FPDS-NG data on PBGC contracting, we found that between fiscal years 2008 and 2010, the number of new contracts awarded competitively increased from 51 percent to 67 percent of all new contracts, and that the share of total contract obligations made on competitive contracts increased from 70 percent to 83 percent. Another issue related to competition is the exercise of options to continue services under an existing contract for a stated period of time. Options can be a useful tool to realize efficiencies in the contracting process, but they should be used appropriately. The FAR requires contracting officers to justify, in writing, among other things, the quantities or terms under the option and the notification period for exercising the option, and to include this justification document in the contract file. Before exercising the option, the FAR also requires contracting officers to make a written determination that the exercise of the option is in accordance with the option’s terms and relevant FAR provisions. 
However, we found that the required justifications were missing for the award of option periods for two contracts and a task order awarded under a third contract that we reviewed. Without these justifications in the contract files, it is more challenging to determine the contracting officer’s rationale for inclusion of option periods and be assured that it is in the government’s best interest to extend the contract, rather than seek new competition for the additional work. In addition, agencies can choose from a number of different pricing arrangements or contract types to acquire goods and services from contractors. For example, contract types can be grouped into two broad categories: fixed price contracts, where the government agrees to pay a set price for goods or services regardless of the actual cost to the contractor; and cost reimbursement contracts, where the government agrees to pay all allowable costs incurred by the contractor regardless of whether the deliverable or service is completed. As with competition, use of fixed price contracts is another tool that can help ensure government contracts are structured to “minimize risk and maximize value” for the taxpayer. In many cases, fixed price contracts are well suited for achieving this goal because they provide the contractor with the greatest incentive for efficient and economical performance. In contrast, cost reimbursement and labor hour contracts leave the agency exposed to a higher risk for cost overruns due to the allocation of cost risk between the government and the contractor. Over the past decade, PBGC’s Procurement Department has made efforts to increase the use of fixed price contracting. In 2000, we reported that about 60 percent of PBGC’s active contracts involved labor hour pricing and recommended that, where appropriate, PBGC should utilize more fixed price contracts. 
Furthermore, the PBGC IG told us that the agency has been utilizing some contractors on a labor hour basis for many years and should have a good enough understanding of how the work is being done to structure the statement of work differently and use a fixed price contract or something less risky than the current labor hour approach. Our analysis of FPDS-NG data found that PBGC has made some progress recently in its use of fixed price contracts. PBGC’s use of fixed price contracts increased from just under 85 percent of all new contracts in fiscal year 2008 to almost 91 percent in fiscal year 2010. In addition, the share of total contract obligations on new fixed price contracts at PBGC was 69 percent in fiscal year 2010, an increase from 50 percent in fiscal year 2008. PBGC’s procurement officials provided examples of its efforts to encourage departments to increase use of fixed price contracts over labor hour contracts. In 2010, the Procurement Department disagreed with a BAPD request to award a new labor hour contract for recurring actuarial services. Although a new 5-year labor hour contract was awarded for these services, BAPD officials also agreed to have a consultant conduct a study to determine if the services could be obtained more effectively under another contract type. The study, delivered to PBGC in December 2010, recommended that PBGC make incremental improvements to the current contracting approach and transition over time to a fixed price contract for these services if certain criteria, such as accurate cost estimates and successful implementation of a performance-based approach, are met. BAPD has agreed to comply. In another example, the Human Resources Department initially proposed a labor hour contract for support services, claiming that the labor hours needed to perform the services could not be precisely estimated. 
Procurement Department officials disagreed, suggesting that the labor hours could be estimated based on hours regularly worked by the government employee who formerly performed the tasks. The Human Resources Department adopted this suggestion and switched to a 3-year fixed price contract for these services. In addition, one of the contracts included in our file review was part of a follow-on requirement for all of BAPD’s field benefit administration contracts that were awarded in 2009. These contracts had been identified by a PBGC internal study and an IG report as areas where PBGC should give stronger consideration to using fixed price contracts. Altogether, these contracts were valued at more than $150 million and had contract lengths of several years before they were expected to be recompeted. The IG noted in a 2004 report that these contracts had been repeatedly awarded as labor hour contracts since the early 1980s, when the current field benefit administration structure was created. Similarly, an internal study conducted by PBGC in 2010 found that most statements of work for these contracts had been brought forward over the years with only slight updates in scope. Consistent with the more recent OMB guidance, the study recommended that data on the performance of contracts be accumulated and summarized to document the level of service performed over time, and examined closely to allow PBGC to possibly restructure its statements of work (or objectives) to accommodate different contract types. To promote the use of fixed price contracting governmentwide, OMB issued guidance in October 2009 recommending that agencies collect historical data on costs incurred on cost reimbursement, time and materials, and labor hour contracts, and, under certain circumstances, use the data to structure future contracts under a fixed price approach instead. 
However, in our limited review of contract files, we found little evidence that PBGC officials had made efforts to use experiences gained on past contracts to change contract type when recompeting. Three of the eight contracts we reviewed were labor hour contracts. In each of these files, we found justifications for the use of labor hour pricing, but we found no evidence of efforts by PBGC to apply past experience to inform future cost estimates and transition the work performed under these contracts to a fixed price basis. Among the cost reimbursement contracts we reviewed, we found that decisions regarding use of this contract type were not always documented, as required by the FAR. Under the FAR, a cost reimbursement contract is suitable only when circumstances do not allow the requirement to be sufficiently defined to allow for a fixed price contract or the uncertainties involved in contract performance do not permit costs to be estimated with sufficient accuracy to use a fixed price contract. However, in one contract file, we found no documentation to support the decision to use cost reimbursement pricing for four of the contract’s cost reimbursement task orders. Without such documentation, the contract file is incomplete and the reasoning used to support PBGC’s contracting process is not clearly justified. To achieve greater cost savings and better outcomes when agencies acquire services, Congress and the executive branch have encouraged greater use of performance-based contracting. The use of performance-based contracts to acquire services offers a number of potential benefits. Performance-based contracts can encourage contractors to be innovative and to find cost-effective ways of delivering services. Performance-based contracting also helps improve the agency’s internal controls over the contracting process by using performance metrics to assess contractor performance during contract monitoring. 
However, challenges to this method of contracting have been encountered governmentwide. Since 2008, PBGC has made progress in increasing its use of this method of contracting and has implemented new guidance and training to help expand its use further. In addition, PBGC has increased its incorporation of performance metrics across various types of contracts. However, such metrics—whether part of a performance-based or other type of contract—are not required to be linked to PBGC’s mission and goals. Such linkage is important to ensuring that contract work is well integrated into PBGC’s strategic plan, just as it is for work performed in house. In 2008, we reported that PBGC had begun awarding more contracts using the performance-based contracting method as a means to achieve better contract outcomes. Since then, FPDS-NG data show that use of the performance-based method of contracting has continued to increase—from 7 of 378 new contracts (about 2 percent) in fiscal year 2008 to 49 of 404 new contracts (about 12 percent) in fiscal year 2010 (see fig. 9). However, this contracting method is still used in less than 15 percent of new contracts. The performance-based contracting method has been acknowledged as creating challenges for contract oversight and monitoring efforts at agencies governmentwide, which may be deterring its use. In our 2008 report, we noted that PBGC would likely face technical challenges similar to other agencies that have attempted to increase their use of this contracting method, such as deciding which contracts are appropriate for a performance-based approach and which outcomes to measure and emphasize. Other common barriers included fear of change, lack of understanding of performance-based contracting methods, and fear of loss of control over the contracting process. 
More recently, a May 2010 Procurement Department internal briefing report stated that performance-based contracting was still considered a technical challenge because of the contract oversight and monitoring efforts required. In addition, PBGC may not be adequately prepared across all departments to increase use of performance-based contracting due to management challenges. A recent study conducted by a consulting firm for PBGC, issued in December 2010, found that BAPD’s workforce lacked the technical and cultural readiness needed to implement performance-based contracting. It stated that BAPD—the program department responsible for generating contracts for administering benefit services performed at field offices—lacked a performance management framework that would enable it to effectively link the quality of contract outcomes with organizational performance and to establish appropriate incentive mechanisms. In 2007, PBGC officials stated that they had initiated an effort to use performance-based contracting for the field offices, but had to abandon the effort for reasons unrelated to the attempt to use this contracting method. This solicitation process, which spanned more than 2 years, involved numerous staff from various departments and was one of the largest procurement efforts ever undertaken by PBGC. Had it been successful, it would have been a major step forward in the agency’s use of performance-based contracting at its field offices. However, as BAPD later reported, the strategy to issue a single request for proposals to encompass the work previously performed under all eight contracts created too much complexity when trying to evaluate the proposals. The solicitation was canceled in August 2008, and BAPD abandoned the effort to use performance-based contracting for these contracts. In our 2008 report, we recommended that PBGC provide increased guidance and training for staff on the use of the performance-based contracting method. 
Since then, PBGC has issued detailed guidance in its SOP and has offered training focused specifically on this contracting method to staff, managers, and the acquisition-related workforce. The Procurement Department’s SOP provides detailed guidance on the various elements of performance-based contracting, based on the FAR. It cites the FAR’s policy that performance-based contracting is the preferred method for acquiring services, provides definitions of terms associated with performance-based service acquisitions (PBSA), and outlines PBSA requirements to be included in the performance work statement (see table 5). According to the FAR, once an agency determines a contract should have a written acquisition plan, that plan must describe the strategies for implementing performance-based acquisition methods or must provide the rationale for not using those methods. PBGC has designated all contracts with an estimated value greater than $100,000 as those required to have written plans. However, PBGC allows departments to choose whether or not to use the performance-based method of contracting, and the SOP does not mention the FAR requirement to document the rationale for not using this acquisition method, even for large service contracts. Only when departments choose to use this method does the SOP provide detailed instructions on what is entailed. For example, the SOP instructs users that each performance requirement should have a performance standard and provides guidelines on the development of the summary of performance requirements, which is to document the desired outcomes, performance objectives, and performance standards developed for a performance-based contract. The ultimate goal is to describe the requirement in a way that allows a potential contractor to understand fully what is necessary to meet these standards, resulting in better performance by focusing on results rather than process. 
In addition, the SOP includes other key mechanisms that are critical to performance-based contracting. For example, the SOP provides information to assist users in developing the Quality Assurance Surveillance Plan, which specifies the surveillance schedule, methods, and performance metrics acquisition staff can use to assess the outcomes of contractor performance. The SOP provides commonly used assessment methods as well as guidelines on how to determine the most appropriate method for assessment. The SOP stresses that past performance is an important element of every evaluation and contract award. It also discusses remedies, such as reductions in fees, when services rendered do not meet the requirements of the contract. The SOP also includes a section on contract incentives for performance-based contracts and describes the flexibility of using different criteria to award fees to reflect changes in mission priorities. Incentives encourage contractors to develop innovative, cost-effective methods of performance while maintaining the quality of services provided. Through proper monitoring, the agency can take steps to correct performance that does not meet requirements or to negotiate changes to award fees to reflect changes in the agency’s mission and objectives. To help address the barriers to using the performance-based contracting method that stem from fear of change, lack of understanding of performance-based contracting methods, or fear of loss of control over the contracting process, our 2008 report recommended that PBGC provide comprehensive training on performance-based contracting for PBGC’s Procurement Department staff, managers, and acquisition-related workforce. As of 2010, PBGC noted that the PBGC Training Institute provided a wide range of procurement-related training for Procurement Department personnel and COTRs, including training on performance-based contracting. 
In addition, Procurement Department officials indicated the department had incorporated training on performance-based contracting in its Acquisition Excellence Workshops. In October 2009, PBGC contracted with an outside educational firm to provide this training. As of May 2011, implementation of this training was still under way. In addition to increasing its use of performance-based contracts, we found that PBGC is using performance metrics—one of the key elements of a PBSA—in various types of contracts as an alternative way of expanding its performance-based approach to contracting. In 2008, we reported that most of PBGC’s contracts at that time lacked performance incentives and methods to hold contractors accountable, and we recommended that PBGC ensure that contracts measure performance in terms of outcomes. Since then, we found that PBGC has taken steps to increase its use of performance metrics, but that links to PBGC’s strategic goals and objectives are still lacking. We have long stressed the need for agencies to use performance metrics as an internal management control and to link metrics to agency goals as a way to ensure proper stewardship and accountability for government resources and for achieving effective and efficient program results. Our tool for internal controls describes standards for agencies to establish and monitor performance metrics and indicators by taking specific actions to assess data on performance outcomes, including comparing data against planned goals and ensuring that the performance factors being analyzed are linked to agency mission and objectives. Management control activities such as the use of performance measures to evaluate outcomes are applicable to all services that an agency uses to meet its goals and objectives. PBGC has implemented a management control concerning the use of performance measures linked to agency mission and goals with respect to its in-house workforce, but not with respect to its contract work. 
PBGC’s Procurement Department’s SOP does not discuss the use of performance metrics in all contract types, only for performance-based service acquisitions, and—even with respect to performance-based contracts—the SOP does not require that the performance metrics be designed to link to specific agency goals. Nevertheless, we found some efforts in the program departments to increase the use of performance metrics. PBGC officials from two departments we spoke with noted that they had been increasing their use of performance metrics in various contract types, and we found evidence of this in our review of selected contract files. For example, our contract review included several cost plus fixed fee task orders that OIT awarded under a multiple-award contract using a PBSA approach, and they all included specific performance metrics to assess contractor performance. We also reviewed two CID awards for firm fixed price contracts for asset investment management services that did not use a PBSA method of contracting, but the contracts nevertheless incorporated metrics for monitoring contractor performance against the agency’s investment benchmarks based on the contractors’ monthly performance reports. In addition, we reviewed a BAPD labor hour contract for services at a field office that did not use the PBSA method, but still included a matrix of performance metrics that had not been included in the previous contract for these services. The matrix provided specific descriptions of desired activity outcomes, required services, acceptable quality levels, quantified and measurable performance standards, and monitoring methods to assess contractor performance, and BAPD officials report quarterly on how the performance metrics are being met. However, these performance metrics were not linked to specific measurable agency objectives and goals. 
In contrast, PBGC’s policies do require linkage between performance metrics and agency goals for work performed by its in-house federal workforce. For example, one goal listed in PBGC’s Human Capital Strategic Plan, FY 2010-2014 is to develop processes and procedures based on OPM’s Performance Appraisal Assessment Tool to ensure that individual employee performance and accountability are linked to PBGC’s strategic goals. Similarly, PBGC’s most recent 5-year strategic plan includes performance metrics and targets that are used to assess the performance of its federal workforce toward achieving its strategic goals. While performance metrics developed to measure the performance of in-house employees would not be appropriate for measuring the performance of individual contractor workers—as they work for the contractor, not directly for PBGC—these metrics may be useful in helping to develop metrics for contract work at the contract level, especially in areas where comparable work is performed both in house and under contract. For example, in interviews with PBGC officials, we learned that in-house DISC actuaries perform the same work that is performed by the contractor and that, to a certain extent, both internal and external actuaries have their work measured using the same standard metrics. However, the contracts we reviewed for work performed by external actuaries do not link performance metrics to specific agency goals. With nearly three-fourths of its budget allocated to contracts, PBGC relies heavily on contracting to achieve its corporate mission. We believe that PBGC needs to be more deliberative in making decisions to contract. Although we made recommendations in both 2000 and 2008 that PBGC include contract decision making in its strategic planning, PBGC continues to treat contracting as a supporting function for fulfilling its mission rather than as a key element of its corporate-level strategic planning. 
To this end, we reiterate our prior recommendations in this area, which have yet to be implemented. In addition, extensive use of contractors over time may diminish PBGC’s management control over contracts and its staff’s expertise with respect to critical mission activities. After steadily expanding its use of contracts over the past 20 years, with only occasional limited efforts to examine the cost effectiveness of this development, PBGC is overdue for a reassessment of its rationale for this arrangement. Once the decision to contract has been made, we applaud the many recent changes PBGC has made to the contracting process, which are intended to improve integrity; however, implementation of these new measures is still under way, and required documentation that would assure full implementation is lacking in some cases. Without full implementation of these new controls, PBGC may not be making well-informed decisions for efficient contract management, ultimately placing the agency’s assets at greater risk. PBGC has also made progress in implementing a performance-based approach to contracting. However, additional action is needed to fully implement past recommendations regarding incorporation of performance metrics linked to PBGC’s mission and goals. Unless contract metrics for work performance are linked to agency objectives—as is currently done for PBGC’s in-house workforce—the effectiveness of the contractors’ work in assisting PBGC to achieve its mission and goals is diminished. Federal regulations call on agencies to ensure that performance-based contracting methods are used to the maximum extent practicable, and we believe PBGC could be doing more to encourage greater use of this method of contracting and to include performance metrics linked to the agency’s mission and strategic goals in its major service contracts with an estimated value over $100,000. 
To improve PBGC’s performance in an environment of heavy contractor use, further efforts are needed to better integrate contract decision making and contract management into PBGC’s agency-level strategic planning process. While recognizing that OMB guidance is not binding for PBGC, to assist PBGC in reassessing its extensive reliance on contracting, we recommend that the Director of PBGC implement OMB guidance that calls on agencies to develop a service contract inventory, by function and location across the agency and within its departments, to identify the extent of its current reliance on contractors and enable a balanced workforce analysis. At a minimum, such reviews should capture the total dollar amount of service contract spending by function and the role services play in achieving agency objectives. Consistent with OMB guidance, PBGC should give priority consideration to functions that require increased management attention due to heightened risk of workforce imbalance; and undertake a risk analysis in areas identified as heavily reliant on contractors, including an evaluation of the costs and benefits of decisions to award work to contractors in such areas. In addition, to encourage expanded use of performance-based contracting with performance metrics linked to the agency’s mission and goals, we recommend that the Director of PBGC ensure that the rationale for not using a performance-based service acquisition approach is documented, consistent with the FAR; and ensure that the performance metrics for major service contracts are linked to specific corporate strategic goals to the maximum extent practicable. We obtained written comments on a draft of this report from PBGC, which are reproduced in appendix VI. PBGC also provided technical comments, which are incorporated into the report where appropriate. In addition, we provided a copy of the draft report to the Department of Labor for its comments. 
The Department of Labor did not provide written comments on our findings. In response to our draft report, PBGC generally concurred with our recommendations and outlined actions the agency has under way or plans to take to address each topic of concern. With respect to the first recommendation, PBGC agreed and noted that the agency will use internal systems for contracting to develop a sufficiently detailed service contract inventory to enable a better workforce analysis and assist in potentially rebalancing its workforce as challenges arise. With respect to the second recommendation, PBGC agreed and commented that it generally considers risks and costs in making contract decisions, but that it would conduct a more formal process as it relates to staffing and contracting. With respect to the third recommendation, PBGC agreed and noted that the agency has added a line to its advance procurement planning form to raise the issue in a deliberative manner prior to soliciting the contract. Finally, with respect to the fourth recommendation, PBGC agreed and noted that its management team understands the relationship between contracting and achieving the agency’s goals, and is comfortable with documenting that relationship. We are pleased to learn of the steps under way to address our recommendations and strengthen PBGC’s contracting process. Further monitoring will be required to ensure that the results of the service contract inventory and the risk and cost analyses are used effectively to better integrate decisions on contracting and management of contracts into PBGC’s agency-level strategic planning process, and that the linkage of performance metrics for major service contracts to specific corporate strategic goals provides greater assurance that contract work is used effectively to support PBGC’s mission. As agreed with your staff, we will send copies of this report to the Secretary of Labor, the Director of PBGC, and other interested parties. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Barbara Bovbjerg at (202) 512-7215 or [email protected], or William Woods at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix VII. The FAR is the primary regulation for all federal executive agencies in their acquisition of supplies and services with appropriated funds. The Office of Management and Budget’s (OMB) Office of Federal Procurement Policy establishes policy guidelines for some sections of the FAR, such as the policy for using a performance-based approach to service contracting. The FAR provides contracting policies and procedures for, among other things, acquisition planning and competing, awarding, and monitoring contracts; indicates a performance-based approach as the preferred acquisition method for services; prohibits contracting for services that constitute inherently governmental functions and offers lists of functions that are considered inherently governmental and functions that are not; and would be supplemented by a policy letter, as proposed by the Office of Federal Procurement Policy, to clarify when outsourcing for services is and is not appropriate and what functions are inherently governmental and must always be performed by federal employees. 
Memorandum expressed concern that agencies’ excessive reliance on contracts creates a risk that poorly designed contracts will not meet the needs of the federal government or the interests of taxpayers; noted that the line between inherently governmental functions—those that must be performed by federal employees—and commercial activities that may be subject to contract performance has been blurred; and directed OMB to lead a series of contracting-related efforts, including clarifying when outsourcing for services is and is not appropriate.

Memorandum described two actions that OMB was imposing, based on the Presidential Memorandum of March 4, 2009, that required agencies to review existing contracts and acquisition practices and develop a plan to save 7 percent of baseline contract spending by the end of fiscal year 2011; and to reduce by 10 percent the share of dollars obligated in fiscal year 2010 under new contract actions that are awarded with high-risk contracting authorities that pose special risks of overspending. Examples cited included noncompetitive contracts, cost reimbursement contracts, and labor hour contracts.

Memorandum discussed achieving the best mix of contractors and federal employees and required agencies to begin the process of developing and implementing policies, practices, and tools for managing the multisector workforce by adopting a framework for planning and managing the multisector workforce built on strategic human-capital planning; conducting a human-capital analysis of at least one program, project, or activity where the agency has concerns about the extent of reliance on contractors and reporting on the pilot by April 2010; and using guidelines that facilitate consistent and sound application of statutory requirements when considering insourcing as a tool to manage work.

OMB Memorandum, Increasing Competition and Structuring Contracts for the Best Results (Oct. 27, 2009): Memorandum provided initial guidelines to help Chief Acquisition Officers and Senior Procurement Executives evaluate the effectiveness of their agency’s competition practices and processes for selecting contract types. The guidelines focused on three key questions: How is the agency maximizing the effective use of competition and choosing the best contract type for the acquisition? How is the agency mitigating risk when noncompetitive, cost reimbursement, or time and materials/labor hour contracts are used? How is the agency creating opportunities to transition to more competitive and lower-risk contracts? The guidelines also included a set of considerations to help agencies address each of these questions.

Memorandum provided guidance for civilian agencies to augment and improve the skills of their acquisition workforce, which includes contract specialists, contracting officer’s technical representatives, and program and project managers. Required actions included the following mandates: each civilian agency covered by the Chief Financial Officers Act must submit an annual Acquisition Human Capital Plan to OMB by March 31, 2010, that identifies specific strategies and goals for increasing both the capacity and capability of its acquisition workforce for the period ending in fiscal year 2014, and agencies must use the plan to address needs for an acquisition workforce in their annual budget submissions.

Memorandum gave a status report on the federal contracting community’s actions toward meeting the President’s goals of saving $40 billion annually, reducing reliance on high-risk contracting, and achieving a more appropriate mix of in-house and contractor labor, including agencies’ fiscal year 2010 acquisition plans that identified a variety of strategies, such as new avenues for strategic sourcing; program terminations and reductions; use of online reverse auctions and electronic sealed bids; and more aggressive renegotiation of contracts. 
The memorandum also described initiatives intended to improve the acquisition workforce’s capability to manage high-risk contracts and to ensure use of the most appropriate contract type for each procurement.

Memorandum provided guidance for agencies in preparing their initial service contract inventories for fiscal year 2010. The inventories should serve as a tool to assist agencies in better understanding how their contracted services are being used to support mission and operations and in ascertaining whether contractors’ skills are being used in an appropriate manner. In particular, the inventories should provide insight into the extent to which contractors are being used to perform activities, by analyzing how contracted resources are distributed by function and location across an agency and within its components; such insight is especially important for contracts whose performance may involve critical functions or functions closely associated with inherently governmental functions.

Framework for Assessing the Acquisition Function at Federal Agencies, GAO-05-218G (Washington, D.C.: September 2005): Published in response to agencies’ increasing reliance on contractors and systemic contracting weaknesses that we and other accountability organizations identified, the framework enables high-level assessments of an agency’s contracting function. It consists of interrelated cornerstones essential to an efficient, effective, and accountable contracting process, including:

Organizational alignment and leadership that appropriately places the contracting function in the agency, with stakeholders having clearly defined roles and responsibilities; aligns contracting with the agency’s mission and needs; and organizes the contracting function.

Policies and processes that are clear, transparent, and implemented consistently in the planning, award, administration, and oversight of contracting efforts. 
Human capital, which involves thinking strategically about attracting, developing, and retaining talent, and creating a results-oriented culture within the contracting workforce.

Knowledge and information management that provides credible, reliable, and timely data to contracting process stakeholders, including the agency’s Procurement Department and program staff who decide which services to buy, project managers who receive the services, managers who maintain supplier relationships, contract administrators who oversee compliance, and the finance department that pays for the goods and services.

This manual implements the FAR and other statutory requirements at PBGC to guide procurement activities and establishes basic uniform procedures for the internal operation of acquiring supplies and services within PBGC.

This guidance provides a step-by-step process to assist managers in deciding whether to use contractors or government employees to perform the agency’s work and discusses the agency’s approach to applying the FAR’s guidance on inherently governmental functions.

This guidance establishes uniform policies and procedures for the selection, appointment, training, and management of COTRs and TMs.

PBGC has taken a number of steps to strengthen its contracting process in response to contract-related recommendations from previous GAO reports, as well as reports from PBGC’s Office of Inspector General (IG). The tables below provide a detailed summary of the recommendations from these reports and PBGC’s corresponding actions.

To obtain examples of recent improvements to PBGC’s contracting processes and help illustrate the extent to which PBGC is ensuring the integrity of its contracting process, we selected a small judgmental sample of eight contracts for review. Two contracts were selected from each of PBGC’s four main program departments. 
These four departments, listed below, accounted for more than 70 percent of the agency’s contract obligations in fiscal year 2010:

Benefits Administration and Payment Department (BAPD);
Corporate Investments Department (CID);
Department of Insurance Supervision and Compliance (DISC); and
Office of Information Technology (OIT).

To select specific contracts for review, we obtained a list of all active contracts from each of these four program departments, supplemented by data from PBGC’s Procurement Department and from the Federal Procurement Data System-Next Generation (FPDS-NG). In selecting contracts, we looked for the following characteristics:

contracts awarded relatively recently (if possible, in fiscal year 2009 or later);
contracts for an ongoing activity (including some for actuarial services);
contracts awarded for a large dollar amount;
some contracts awarded to the same contractor that held the contract previously and some that changed to a different contractor; and
proximity of the primary location where services provided under the contract are performed.

Table 8 provides an overview of the attributes of the eight contracts we chose for our review based on these selection criteria. To conduct our review of contract files, we used a standardized data collection instrument organized around certain indicators of key management controls that we developed based on provisions of the Federal Acquisition Regulation (FAR), PBGC’s own internal policies and procedures for contracting (see app. I), and past GAO work. These indicators are summarized in table 9. We also used structured interview guides to obtain information from PBGC officials familiar with each contract’s award process and postaward monitoring. In conducting our review, we examined the documentation in the files for evidence that PBGC’s contracting processes were adhering to these key contracting process management controls. 
We then summarized the results of our review into categories reflecting the various stages of PBGC’s contracting process (see table 10).

Appendix IV: Elements of a Performance-Based Service Acquisition Contract

To the maximum extent practicable, describe the work in terms of the required results rather than either “how” the work is to be accomplished or the number of hours to be provided. Agencies should structure performance work statements in solicitations around the purpose of the work to be performed, that is, what is to be performed rather than how to perform it. For example, instead of telling the contractor how to perform aircraft maintenance or stating how many mechanics should be assigned to a crew, the solicitation, which is incorporated into the contract, should state that the contractor is accountable for ensuring that 100 percent of flight schedules are met or that 75 percent of all aircraft will be ready for flight.

Include measurable performance standards (i.e., in terms of quality, timeliness, quantity, etc.). Performance standards should be set in terms of quality, timeliness, and quantity, among other things.

Include the methods of assessing contractor performance against the performance standards. Describe how the contractor’s performance will be evaluated in a quality assurance plan.

Include performance incentives where appropriate. When used, the performance incentives shall correspond to the performance standards set forth in the contract. Incentives should be used when they will induce better quality performance and may be either positive or negative, or a combination of both.

In addition to the contacts named above, Margie Shields, Assistant Director; Ted Burik, Analyst-in-Charge; Matt Drerup; Najeema Washington; and Paul Wright made significant contributions to this report. Susan Aschoff, Gena Evans, Sheila McCoy, Mimi Nguyen, Ken Patton, Sylvia Schatz, Walter Vance, and Craig Winslow also made important contributions. 
Pension Benefit Guaranty Corporation: Improvements Needed to Strengthen Governance Structure and Strategic Management. GAO-11-182T. Washington, D.C.: December 1, 2010.
Pension Benefit Guaranty Corporation: More Strategic Approach Needed for Processing Complex Plans Prone to Delays and Overpayments. GAO-09-716. Washington, D.C.: August 17, 2009.
Pension Benefit Guaranty Corporation: Financial Challenges Highlight Need for Improved Governance and Management. GAO-09-702T. Washington, D.C.: May 20, 2009.
Pension Benefit Guaranty Corporation: Improvements Needed to Address Financial and Management Challenges. GAO-08-1162T. Washington, D.C.: September 24, 2008.
Pension Benefit Guaranty Corporation: Need for Improved Oversight Persists. GAO-08-1062. Washington, D.C.: September 10, 2008.
Pension Benefit Guaranty Corporation: Some Steps Have Been Taken to Improve Contracting, but a More Strategic Approach Is Needed. GAO-08-871. Washington, D.C.: August 18, 2008.
PBGC Assets: Implementation of New Investment Policy Will Need Stronger Board Oversight. GAO-08-667. Washington, D.C.: July 17, 2008.
Pension Benefit Guaranty Corporation: A More Strategic Approach Could Improve Human Capital Management. GAO-08-624. Washington, D.C.: June 12, 2008.
Pension Benefit Guaranty Corporation: Governance Structure Needs Improvements to Ensure Policy Direction and Oversight. GAO-07-808. Washington, D.C.: July 6, 2007.
Private Pensions: The Pension Benefit Guaranty Corporation and Long-Term Budgetary Challenge. GAO-05-772T. Washington, D.C.: June 9, 2005.
Pension Benefit Guaranty Corporation Single-Employer Insurance Program: Long-Term Vulnerabilities Warrant "High Risk" Designation. GAO-03-1050SP. Washington, D.C.: July 23, 2003.
Pension Benefit Guaranty Corporation: Appearance of Improper Influence in Certain Contract Awards. T-OSI-00-17. Washington, D.C.: September 21, 2000.
Pension Benefit Guaranty Corporation: Contract Management Needs Improvement. T-HEHS-00-199. Washington, D.C.: September 21, 2000.
Pension Benefit Guaranty Corporation: Contracting Management Needs Improvement. GAO/HEHS-00-130. Washington, D.C.: September 18, 2000.
Sourcing Policy: Initial Agency Efforts to Balance the Government to Contractor Mix in the Multisector Workforce. GAO-10-744T. Washington, D.C.: May 20, 2010.
The Office of Management and Budget's Acquisition Workforce Development Strategic Plan for Civilian Agencies. GAO-10-459R. Washington, D.C.: April 23, 2010.
Defense Acquisitions: Further Actions Needed to Address Weaknesses in DOD's Management of Professional and Management Support Contracts. GAO-10-39. Washington, D.C.: November 20, 2009.
Civilian Agencies' Development and Implementation of Insourcing Guidelines. GAO-10-58R. Washington, D.C.: October 6, 2009.
Federal Contracting: Observations on the Government's Contracting Data Systems. GAO-09-1032T. Washington, D.C.: September 29, 2009.
Department of Homeland Security: Better Planning and Assessment Needed to Improve Outcomes for Complex Service Acquisitions. GAO-08-263. Washington, D.C.: April 22, 2008.
Department of Homeland Security: Progress and Challenges in Implementing the Department's Acquisition Oversight Plan. GAO-07-900. Washington, D.C.: June 13, 2007.
Highlights of a GAO Forum: Federal Acquisition Challenges and Opportunities in the 21st Century. GAO-07-45SP. Washington, D.C.: October 6, 2006.
Improvements Needed to the Federal Procurement Data System-Next Generation. GAO-05-960R. Washington, D.C.: September 27, 2005.
Framework for Assessing the Acquisition Function at Federal Agencies. GAO-05-218G. Washington, D.C.: September 2005.
Contract Management: Opportunities to Improve Surveillance on Department of Defense Service Contracts. GAO-05-274. Washington, D.C.: March 17, 2005.
Federal Procurement: Spending and Workforce Trends. GAO-03-443. Washington, D.C.: April 30, 2003.
Contract Management: Guidance Needed for Using Performance-Based Service Contracting. GAO-02-1049. Washington, D.C.: September 23, 2002.
Best Practices: Taking a Strategic Approach Could Improve DOD's Acquisition of Services. GAO-02-230. Washington, D.C.: January 18, 2002.
Internal Control Standards: Internal Control Management and Evaluation Tool. GAO-01-1008G. Washington, D.C.: August 2001.
High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011.
High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009.
High-Risk Series: An Update. GAO-03-119. Washington, D.C.: January 2003.
High-Risk Series: An Update. GAO-01-263. Washington, D.C.: January 2001.
The Pension Benefit Guaranty Corporation (PBGC) insures the pension benefits of more than 44 million U.S. workers and retirees in more than 27,500 private defined benefit plans. In response to growing workloads over the last 20 years, PBGC has come to rely heavily on contractors to perform its work. With the influx of plan terminations during the recent economic downturn, GAO was asked to examine: (1) how PBGC decides between contracting for services and performing services in house; (2) the steps PBGC has taken to strengthen its internal controls over the contracting process; and (3) PBGC's implementation of a performance-based approach in its recent contracts. To conduct this study, GAO reviewed federal and PBGC contracting policies; interviewed PBGC officials and selected contractors; examined a small judgmental sample of eight recent contracts selected based on type, amount, and location; and assessed PBGC's actions in response to past GAO and PBGC Inspector General (IG) recommendations. PBGC's contracting decisions are based primarily on historical practice within each of its departments rather than strategic assessment. Nearly three-fourths of PBGC's budget is allocated to contractors, yet PBGC does not have a strategic agency-level plan for contracting. PBGC often justifies extensive use of contractors based on the need to manage fluctuating workloads; however, historical data appear to indicate that PBGC has more contractor workers than needed to respond to workload fluctuations. Some of its contractor use is justified based on needed expertise or lower cost. However, because PBGC does not routinely conduct cost-benefit or risk analyses as part of its contract decision-making process, the efficiency and effectiveness of its contracting is unknown, and PBGC's long-term extensive reliance on contractors may be placing the agency at risk of eroding management control in core functions. 
At the same time, PBGC has adopted new policies and procedures to improve contractor oversight and ensure that federal contracting requirements are met, addressing past GAO and PBGC IG recommendations in this area. For example, PBGC has issued new standard operating procedures and is conducting training for staff involved in the agency's contracting activities. In addition, PBGC has increased its use of competitive and fixed-price contracts, which strengthen the integrity of the contracting process by limiting the government's cost and performance risk. PBGC has also implemented new guidance and training to improve staff knowledge and understanding of performance-based contracting and has expanded its use. Between fiscal years 2008 and 2010, PBGC increased its use of performance-based contracts from 2 percent to 12 percent. PBGC also increased its incorporation of performance metrics across various types of contracts to ensure performance is measured in terms of outcomes. Thus, past GAO and IG recommendations in this area have been partially addressed. However, unlike for work performed in house, PBGC does not require performance metrics for its contract work to be linked to agency mission and goals, which is important to ensuring such work is well integrated into its strategic plan. GAO recommends that PBGC improve its strategic approach to contracting by developing an inventory of contract resources, assessing risk in areas heavily reliant on contractors, documenting its consideration of performance-based contracting, and linking contractor performance to agency goals. PBGC agrees with our recommendations.
As federal agencies expand their use of information technology, they face an increasing challenge to protect the integrity, confidentiality, and availability of information that is vital to their missions. Like the nation as a whole, our government is becoming increasingly dependent on widely interconnected computer systems and the electronic data they maintain. These systems and data are essential to carry out critical operations, such as tax collections; safeguard billions of dollars in assets, such as military equipment and accounts receivable; and deliver basic services, such as social security payments and other benefits. Reliance on these systems and on electronic data is revolutionizing the way that agencies collect, process, store, and disseminate information. However, without effective controls, such reliance also can increase the risks of financial loss, unauthorized access to sensitive information, and devastating interruptions in service. To provide a governmentwide overview, this report summarizes the results of our reviews of information security at individual agencies and of similar assessments performed by others. The report also describes OMB’s oversight of federal agency practices regarding information security and identifies opportunities for improvement. We performed this review in response to a request from Senator John Glenn, Ranking Minority Member, Senate Committee on Governmental Affairs, that we examine a broad range of federal information security issues. Subsequently, Senator Ted Stevens, Committee Chairman, also expressed interest in these issues. Information security is a growing concern because the federal government, like the nation as a whole, is becoming increasingly dependent on computerized information systems and electronic records. These systems and records are fast replacing manual procedures and paper documents, which in many cases are no longer available as “backup” if automated systems should fail. 
The potential risks associated with reliance on electronic systems and records are exacerbated because more and more systems are being interconnected to form networks or are accessible through public telecommunication systems, making the systems themselves and the data they maintain much more difficult to protect from unauthorized users or outside intruders. All major agencies rely on computer systems to provide critical support for their operations, and even greater reliance is planned for the future. In addition, agencies are increasing their use of interconnected systems and electronically transmitted data in order to streamline operations, make federally maintained data more accessible, and reduce paperwork. Most notably, the Department of Defense has a vast information infrastructure that includes 2.1 million computers, 10,000 local networks, and 100 long-distance networks. The majority of the information maintained on Defense's computers is sensitive but unclassified data essential to daily operations, such as commercial transactions; payroll, personnel, and health records; operational plans; and weapons systems maintenance records. In addition, Defense uses the Internet, a global network interconnecting thousands of computer networks, to exchange electronic mail, log on to remote computer sites, and obtain files from remote locations. Civilian agencies are also increasingly reliant on interconnected systems, including the Internet, and on electronic data. The following examples illustrate just a few of the ways that agencies are expanding their use of information technology to support critical operations. Law enforcement officials throughout the United States and Canada rely on the Federal Bureau of Investigation's National Crime Information Center computerized database for access to sensitive criminal justice records on individual offenders.
According to the Bureau’s fiscal year 1997 budget submission, the system is available to 78,000 authorized users and processes an average of about 2 million transactions daily. The Internal Revenue Service (IRS), which relies on computers to process and store millions of taxpayer records, views electronic filing of tax returns as fundamental to its future operations. The number of individual income tax returns filed electronically increased from 4.2 million in 1990 to about 14.8 million for the first 3 and a half months of 1996. IRS goals include significantly increasing the number of electronically filed returns and eventually eliminating paper returns for a large segment of filers. The Customs Service relies on automated systems to process entry declarations, which totaled over 39 million in fiscal year 1994 and led to payment of over $20 billion in duties. Although many entry declarations are submitted as paper documents, a growing number are submitted electronically. The Department of Agriculture is reducing the use of paper food stamp coupons through its electronic benefits transfer program. Under the program, individual recipients’ monthly benefits are recorded in a central computer file. Individuals then use “credit card” type cards with secret personal identification numbers to draw on these benefits and pay for their groceries. During fiscal year 1995, about 630,000 households participated in the electronic benefit transfer food stamp program. According to the Federal Electronic Benefits Transfer Task Force, the program could potentially cover over 10 million households. Medicare part B claims that were submitted and processed electronically jumped from 36 to 72 percent between 1990 and 1994, and further increases are likely. Medicare part B covers physician services, outpatient hospital care, medical supplies, and other health benefits, such as emergency ambulance service. 
The program cost $60 billion in fiscal year 1994, and, according to OMB, costs are expected to double over the subsequent 7 years. Unfortunately, the same factors that are so important to streamlining federal operations—interconnected, often widely dispersed systems; readily accessible information; and paperless processing—are also factors that increase the vulnerability of these operations and data. Specifically, the threats to agency systems and the potential for harm have increased because the move to more interconnected systems has provided greater numbers of individuals access to extensive databases of information through widely distributed networks of computers; agencies are placing greater reliance on electronic records, in some cases eliminating paper records; and intruders, including criminals, are becoming more skilled at defeating security techniques designed to protect computer systems and electronic information. When systems are not adequately protected, the potential for malicious and criminal acts is enormous. For example, by obtaining access to data files, an individual could make unauthorized changes for personal gain, such as diverting payments or reducing amounts owed on debts. Similarly, an individual could obtain sensitive information about business transactions or individuals, which could then be sold or used for malicious purposes. By obtaining access to computer programs, an individual could make unauthorized changes to these programs, which in turn could be used to access data files or to process unauthorized transactions, such as improper payments. Also, an intruder could eliminate evidence of unauthorized activity, thus significantly reducing the likelihood that such activity would ever be detected.
Further, in an inadequately protected network environment, an agency’s operations could be sabotaged from remote locations by altering or destroying critical data and programs, or by introducing malicious code, such as viruses, to damage or congest system operations. Significant damage could also occur as a result of accidental errors and deletions by authorized users. Regardless of the individual user’s intent, in today’s high-speed, highly automated, and interconnected computing environment, thousands of transactions could be erroneously processed or enormous amounts of data could be destroyed or disclosed before an agency detected the damage. In addition to access control risks, computer facilities and electronic media can be damaged or otherwise rendered unusable by fires, floods, contamination, and other manmade and natural disasters. If an agency does not have adequate contingency plans and preparations for such unexpected events, it may be forced to suspend critical operations or it could lose data and software that are difficult and costly, or even impossible, to replace. The need to protect sensitive federal data maintained on automated systems has been recognized for years in various laws and in federal guidance. The Privacy Act of 1974, as amended; the Paperwork Reduction Act of 1980, as amended; and the Computer Security Act of 1987 all contain provisions requiring agencies to protect the confidentiality and integrity of the sensitive information that they maintain. 
The Computer Security Act (Public Law 100-235) defines sensitive information as “any information, the loss, misuse, or unauthorized access to or modification of which could adversely affect the national interest or the conduct of Federal programs, or the privacy to which individuals are entitled under the Privacy Act, but which has not been specifically authorized under criteria established by an Executive Order or an Act of Congress to be kept secret in the interest of national defense or foreign policy.” The adequacy of controls over computerized data is also addressed indirectly by the Federal Managers Financial Integrity Act (FMFIA) of 1982 (31 U.S.C. 3512(b) and (c)) and the Chief Financial Officers (CFO) Act of 1990 (Public Law 101-576). FMFIA requires agency managers to annually evaluate their internal control systems and report to the President and the Congress any material weaknesses that could lead to fraud, waste, and abuse in government operations. The CFO Act requires agency CFOs to develop and maintain financial management systems that provide complete, reliable, consistent, and timely information. Under the act, major federal agencies annually issue audited financial statements. In practice, such audits generally include evaluating and testing controls over information security. In accordance with the Paperwork Reduction Act of 1980 (Public Law 96-511), OMB is responsible for developing information security policies and overseeing agency practices. In this regard, OMB has provided guidance for agencies in OMB Circular A-130, Appendix III, “Security of Federal Automated Information Resources.” Since 1985, this circular has directed agencies to implement an adequate level of security for all automated information systems that ensures (1) effective and accurate operations and (2) continuity of operations for systems that support critical agency functions. 
The circular establishes a minimum set of controls to be included in federal agency information system security programs and requires agencies to review system security at least every 3 years. Responsibility for developing technical standards and providing related guidance for sensitive data belongs primarily to the National Institute of Standards and Technology (NIST), under the Computer Security Act. OMB, NIST, and agency responsibilities regarding information security were recently reemphasized in the Information Technology Management Reform Act of 1996. Our objectives were to (1) provide a general overview of the adequacy of federal information security at major federal agencies based on reported information, (2) identify and categorize the most significant information security weaknesses reported, (3) identify the general causes of reported weaknesses, and (4) assess OMB’s efforts to oversee agency information security practices. To accomplish these objectives we analyzed the results of our evaluations of computer-related controls at five major agencies since June 1993. These agencies included the Internal Revenue Service and the U.S. Customs Service, which are both part of the Department of the Treasury; the Department of Education; the Department of the Army; and the Department of Housing and Urban Development. We performed most of these assessments as part of our financial statement audits at these agencies. While such audits focus on the security of the data supporting the financial statements, they include evaluations and tests of general controls that affect a significant segment of the agencies’ computerized operations. A list of GAO reports and testimonies that address the adequacy of information security at federal agencies is provided at the end of this report. 
We supplemented reviews of our own audits with an analysis of 149 other reports on major federal agencies to determine if information security weaknesses had been reported and, if so, what types of weaknesses were reported. The reports we reviewed resulted from independent audits by agency inspectors general issued from September 1992 through March 1996 and from agency self-assessments required under FMFIA for fiscal years 1994 and 1995. The agencies covered included the Departments of Agriculture, Defense, Education, Energy, Health and Human Services, Housing and Urban Development, Justice, Labor, Transportation, the Treasury, and Veterans Affairs; the General Services Administration; the National Aeronautics and Space Administration; the Social Security Administration; and the Office of Personnel Management. Together, our analyses covered the 15 major departments and agencies that are responsible for spending or safeguarding the largest amounts of federal resources. In total, these agencies accounted for over 98 percent of all federal outlays during fiscal year 1995. We based our analyses almost exclusively on reported findings. Although we spoke with inspector general audit managers at several agencies to clarify information that had been reported, we did not assess the quality or completeness of any of the inspector general audits or agency self-assessments covered by our survey. To augment information included in reports on individual agencies, we met with members of the steering committee of the Federal Computer Security Managers Forum, an information-sharing group established by NIST, and we reviewed various OMB and NIST documents, as well as related laws. To obtain information on OMB's oversight efforts, we met with officials from OMB's Office of Information and Regulatory Affairs (OIRA), Office of Federal Financial Management, and Resource Management Office branches responsible for overseeing programs at 11 of the 15 agencies included in our review.
In addition, we met with senior information resource management officials and security program managers at five agencies to discuss their interactions with OMB and other agencies responsible for providing guidance and assistance regarding information security issues. These five agencies are the Departments of Agriculture, Health and Human Services, Treasury, and Transportation and the Office of Personnel Management. Our review was performed in Washington, D.C., from July 1995 through May 1996 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the Acting Director of OMB or his designee. OMB's Deputy Director for Management provided written comments on a draft of this report. These comments are discussed in the "Agency Comments and Our Evaluation" section of chapter 5 and are reprinted in appendix I. Recent audits show that weak information security is a serious governmentwide problem that is putting major federal operations at risk. Between September 1994 and April 30, 1996, serious weaknesses were reported for two-thirds of the agencies covered by our review, and for half of these agencies the weaknesses had been reported for at least 5 years. A fundamental cause of these weaknesses is that agencies have not implemented security programs that provide a systematic means of assessing risk, implementing effective policies and control techniques, and monitoring the effectiveness of these policies and techniques. Of the 15 agencies included in our review, serious information security control weaknesses were reported for 10 from September 1994 through April 1996. The two most commonly reported weaknesses indicate fundamental deficiencies in the ability of agencies to protect federal information and the continuity of federal operations.
The first was poor access control, which increases the risk that an individual or group could inappropriately modify or disclose sensitive data or computer programs for purposes such as personal gain or sabotage. The second most commonly reported weakness was inadequate disaster planning, which increases the risk that an agency will not be able to satisfactorily recover from an unexpected interruption in critical operations. Many of the identified weaknesses have remained uncorrected for years. Of the 10 agencies with reported weaknesses, FMFIA reports for 5 showed that the problems had remained uncorrected for 5 years or longer. Examples of reported problems include the following: Estimates by the Department of Defense indicate that attacks on unclassified computer systems and networks are a serious and growing threat to our national security, including Defense’s ability to execute military operations and protect sensitive information. Defense data indicate that Defense may have experienced as many as 250,000 attacks in 1995 and that the number of attacks is doubling each year. Successful attacks by outside intruders have shut down systems and corrupted sensitive data. However, estimates based on tests conducted since 1992 showed that less than 1 percent of attacks on Defense’s systems were detected and reported. Although no summary costs have been developed, Defense officials estimate that the cost of such incidents is at least tens of millions of dollars per year. During our audit of the IRS’ fiscal year 1995 financial statements, we found that, as reported since 1993, controls over sensitive information were inadequate. Although corrective actions are under way, as detailed in previous reports, IRS could not ensure that the confidentiality and accuracy of taxpayer data were protected and that the data were not manipulated for purposes of individual gain. 
Specifically, (1) controls did not prevent users from unauthorized access to sensitive programs and data files, (2) numerous users were allowed powerful access privileges that could allow circumvention of existing controls, and (3) security reports used to monitor and identify unauthorized access to the system were cumbersome and virtually useless to managers for monitoring activity. In addition, back-up and recovery plans were inadequate to provide reasonable assurance that IRS service centers could recover from disasters. In June 1994, we reported a variety of computer-related control weaknesses at the Customs Service, including that thousands of internal and external users had inappropriate access to critical and sensitive programs and data files. In May 1995, the Department of the Treasury Inspector General reported that despite attempts to correct the problem, the weaknesses continued to exist. In June 1994 and June 1995, we reported that controls over the Department of Education's Federal Family Education Loan Program (FFELP) did not adequately protect sensitive data files, application programs, and systems software from unauthorized access, change, or disclosure. These controls are critical to Education's ability to safeguard FFELP assets, maintain sensitive loan data, and ensure the reliability of financial management information about the program. The Department reported that FFELP had $77 billion in outstanding loan guarantees as of September 30, 1994. The Department of Health and Human Services (HHS) first reported the lack of a formal, well-coordinated system security program in its Administration for Children and Families in its fiscal year 1990 FMFIA report. In December 1995, HHS reported that the Administration still had not implemented fundamental computer security program elements such as risk assessments and independent reviews of contingency plans for sensitive systems supporting this $17 billion per year program.
The Department of Justice first recognized automated data processing security as a weakness in 1985. Although Justice reported in February 1996 that it has made departmentwide security improvements, it also reported that some components had not completed and tested continuity of operations plans, developed policies for computer and telecommunications security, or conducted required risk assessments of component computer systems. In March 1995, the Department of Agriculture’s Inspector General reported that controls over access to computer software programs and data were inadequate to prevent unauthorized activity at the Department’s National Finance Center. The Center processes billions of dollars in payments and sensitive information for itself and other agencies, including payroll, retirement savings, administrative and travel payments, and property management information. In March 1995, the Office of Personnel Management Inspector General reported that federal retirement program assets were “highly vulnerable to loss or misuse” because of electronic data processing weaknesses, primarily excessively broad user access privileges, related to systems that maintained 2.1 million annuitant files and generated $36 billion in benefit payments during fiscal year 1994. Serious information security weaknesses may also exist for some of the five agencies for which no weaknesses were reported. This is because audit reports at one agency specifically stated that computer-related controls had not been reviewed as part of the audit. Also, audit managers at two other agencies said that their computer audit capabilities were limited, and they could not readily determine what, if any, work they or their contractors had performed in this area. For the 10 agencies with serious reported weaknesses, auditors made 90 new recommendations for specific corrective actions in reports issued from September 1994 through May 1996. 
In addition, these reports referred to numerous recommendations made in prior years that had not yet been fully or effectively implemented. Although most agencies have reported actions initiated or planned to correct their weaknesses, a recurring condition reported in GAO, inspector general, and FMFIA reports is that agency actions, while resulting in some improvement, are not completed promptly and do not adequately address identified problems. Recent audits at IRS, Education, and Customs all found that, while some improvements had been made, corrective actions at those agencies had been repeatedly delayed or were incomplete. As with Defense, the costs of agencies’ information security weaknesses cannot be determined because agencies generally do not keep summary records of security violations or account for the cost of responding to such violations. In addition, due to poor controls and lack of user awareness, it is possible that many violations are not being detected or reported. A well designed and managed security program with senior-level support is essential for ensuring that an agency’s controls are appropriate and effective on a continuing basis. In this regard, managing information security is similar to managing risks associated with other aspects of agency operations. The program should establish a process and assign responsibilities for systematically (1) assessing risk, (2) promoting user awareness of security issues, (3) developing and implementing effective security policies and related control techniques, (4) monitoring the appropriateness and effectiveness of these policies and techniques, and (5) providing feedback to managers who may then make adjustments as needed. Such a program can provide senior officials a means of managing information security risks and the related costs rather than just reacting to individual incidents. 
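The assess-risk and feedback steps of such a program can be sketched in miniature. The likelihood-times-impact scoring model below is a common illustration and purely an assumption on our part (the report prescribes no particular scoring method), and the resources and scores are hypothetical:

```python
# Illustrative sketch of the risk-assessment step in a security program
# management cycle. The scoring model (likelihood x impact, each on a
# 1-5 scale) and the example resources are assumptions, not taken from
# the report.

RISKS = [
    # (resource, likelihood 1-5, impact 1-5)
    ("payroll data files", 4, 5),
    ("public web pages", 3, 2),
    ("archived press releases", 1, 1),
]

def assess(risks, threshold=8):
    """Rank resources by likelihood x impact so that controls can be
    focused on high-risk resources rather than spread evenly (or
    spent disproportionately on low-risk ones)."""
    scored = [(name, likelihood * impact) for name, likelihood, impact in risks]
    scored.sort(key=lambda item: item[1], reverse=True)
    high = [name for name, score in scored if score >= threshold]
    return scored, high

scored, needs_controls = assess(RISKS)
# Feedback step: managers adjust policies and controls for the
# high-risk items, then re-run the assessment on the next monitoring
# cycle, as the five-step process described above suggests.
```

The design point matches the report's argument: a systematic scoring pass, however simple, lets senior officials manage security risks and their costs deliberately instead of reacting to individual incidents.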
Without a well designed and managed program, security controls may be inadequate; responsibilities may be unclear, misunderstood, and improperly implemented; and controls may be ineffective or inconsistently applied. Such conditions generally result in insufficient protection of sensitive or critical resources and, conversely, may result in disproportionately high expenditures for controls over low-risk resources. Individual audit reports describe varying causes for specific control weaknesses at individual agencies. However, in our reviews of information security controls, we found that the major underlying factor was lack of a well managed information security program with senior management support. For example, in May 1996, we reported that Defense had not established a comprehensive computer security program and had not assigned responsibility for ensuring that such a program was implemented. As a result, Defense information security policies were dated, inconsistent, and incomplete; user awareness was insufficient; and security personnel were inadequately trained. Similarly, in August 1995, we reported that IRS had no proactive, independent information security group that was systematically deployed to review the adequacy and consistency of security over IRS’ computer operations. Instead, IRS was addressing information security issues on a reactive basis. In June 1995, we reported that information security weaknesses at Education resulted from the Department’s overall weak security administration and failure to develop and implement key policies and procedures. Several of the inspector general audit reports that we reviewed also indicated that agency managers were not taking the steps needed to ensure that controls had been implemented and were operating properly. 
To gain an additional perspective on the causes of poor controls, we met with selected members of the steering committee of the Federal Computer Security Managers Forum, an information-sharing group established by NIST. These officials said that additional support from senior management would allow them to establish more effective programs. According to forum members, a lack of management support can result in inadequate resources devoted to information security, a situation that limits the ability of security program managers to address security needs proactively. A number of factors can contribute to the perception of a lack of senior management support for information security. First, as with other types of internal controls, senior managers may view security efforts as impediments to the efficient accomplishment of the agency’s mission. This is because security controls cost money to implement and monitor, and, generally, they diminish the ease with which systems and data can be accessed and updated. In addition, some senior managers may be unaware of the full range of threats and vulnerabilities that must be considered when determining what level of information security is adequate. Others may not have the data they need to make informed decisions. As a result, they may want to adopt information technology for new applications without adequately considering the related risks, or they may be unwilling to strengthen security over existing procedures. A comprehensive security program can help senior managers maintain an appropriate balance between operational efficiency and security by systematically and continually fine-tuning policies and control procedures through a risk assessment, monitoring, and feedback cycle. 
As agency systems become more interconnected and open to large numbers of outside users and as more sophisticated technical controls become available, the effort needed to manage agency systems and monitor the effectiveness of related controls will become more complex and more time-consuming. The benefits of better service and lower processing costs should far outweigh the cost of these additional security efforts. However, it will be important for senior managers to recognize the security challenges involved and to help their organizations successfully meet these challenges. OMB has participated in a variety of efforts to develop governmentwide policies regarding federal information security, and it recently issued an updated version of its central guidance to agencies on minimum automated information security program requirements. OMB has also monitored agency efforts to address recognized security weaknesses or potential weaknesses related to individual agency programs or systems. However, OMB has not proactively attempted to identify and address the underlying causes of these problems, which often are rooted in the design and management of an agency's overall information security program. In addition, the depth and scope of OMB's monitoring efforts have varied significantly from one agency to another. Although security program management is primarily the responsibility of agency managers, under the Paperwork Reduction Act, OMB is charged with overseeing the use of federal information resources, including providing direction and overseeing the "privacy, confidentiality, security, disclosure, and sharing of information." OMB oversees and guides agency operations through its three statutory offices, which are primarily responsible for setting policy, and its five Resource Management Offices (RMO), which are primarily responsible for examining agency budget issues and overseeing agencies' implementation of governmentwide management policies. 
The Office of Information and Regulatory Affairs (OIRA) is the statutory office responsible for establishing governmentwide information resource management policies, including those related to information security, and assisting the RMOs in overseeing agency implementation of these policies. OIRA’s information security oversight efforts are conducted primarily by its Information Policy and Technology Branch, which employs 10 individuals who regularly deal with governmentwide information resource management issues. Three of these individuals have routinely spent a significant amount of their time on information security issues. Over the last few years the Branch has participated in various projects to address cross-cutting information security issues as part of its overall responsibility to establish information resource management policies. These include efforts to (1) develop federal policies on the use of cryptography, (2) define the federal role regarding the security of the national information infrastructure, (3) assist the General Services Administration in developing telecommunications security requirements, and (4) explore security issues related to electronic commerce. However, the Branch’s most basic and comprehensive accomplishment regarding federal agency security practices was developing an updated version of OMB Circular A-130, Appendix III, “Security of Federal Automated Information Resources.” Issued in February 1996, the revised Appendix III is intended to clarify guidance to agencies on managing information security as they increasingly rely on open and interconnected systems. Like the previous version, issued in 1985, the new appendix establishes a minimum set of controls that are to be included in federal automated information security programs. 
These include assigning responsibility for security, developing a system security plan, screening and training individual users, assessing risk, planning for disasters and contingencies, and reviewing security safeguards at least every 3 years. However, unlike the previous version, the revised appendix recognizes that all federal computer systems require some level of protection, not just systems judged to be “sensitive” by agency managers. It also requires agencies to clearly define responsibilities and expected behavior for all individuals with access to automated systems and to implement security incident response and reporting capabilities. In developing the revised appendix, OIRA obtained significant input from agency managers, NIST, and the Computer System Security and Privacy Advisory Board, including written comments from over 27 organizations and individuals, before issuing the final version. Comments on the exposure draft of the revised Appendix III indicate that it is generally considered to be a valuable and necessary update to this central federal policy document that recognizes the increasingly open and interconnected computer systems that support agency operations. The senior information resource managers and security program managers that we met also generally agreed that OIRA had done a good job of developing and communicating guidance regarding information security and responding to their individual requests for clarification of guidance. To assist in overseeing agency practices regarding information resource management, including security, analysts in OIRA’s Information Policy and Technology Branch communicate frequently with RMO program examiners to (1) help ensure that the examiners are aware of high-risk or problem areas that affect the agency programs and (2) provide technical assistance to the RMOs, sometimes at the request of individual examiners. 
The Branch also attempts to maintain an understanding of agency practices through informal discussions with agency personnel and participation in various conferences and meetings. For example, the Branch’s primary information security policy analyst estimates that he has made six to eight presentations at individual agencies per year and numerous presentations at professional conferences and meetings, such as those of the Computer System Security and Privacy Advisory Board. He has also routinely participated in the Federal Computer Security Managers Forum, which is sponsored by NIST and meets approximately every 4 to 6 weeks. At Forum meetings, he has the opportunity to talk directly with the individuals who are responsible for administering agency security programs. However, OIRA does not systematically monitor agency compliance with OMB information security guidance or assess the effectiveness of agency information security management practices that are fundamental elements in the agencies’ ability to effectively deal with information security risks and identified weaknesses. The most recent effort to methodically gain a relatively detailed overview of agency practices was completed in 1992. That effort involved a series of visits at each of 28 agencies by a team of OMB, NIST, and National Security Agency representatives. According to a January 1992 letter to the Director of OMB from the Computer System Security and Privacy Advisory Board, the visits were enthusiastically received and resulted in greater awareness on the part of senior officials, which, in turn, resulted in increased management support for agency computer security programs. In addition, the visits resulted in proposals for improving federal information security, most of which were incorporated in OMB’s February 1996 revision of Circular A-130, Appendix III. 
Despite the apparent success of the 1992 visits, Information Policy and Technology Branch officials said that they have no plans to repeat the effort because it was very resource intensive. They said that as a result, no systematic visits to agencies were currently planned and that any future efforts along this line would address a range of information resource management concerns in addition to security. Engaging the services of contractors on a limited basis would be one means by which OMB could supplement its staff resources and periodically take a closer look at individual agency practices. Information Policy and Technology Branch officials told us that OMB has not customarily used contractors to assist in carrying out its oversight responsibilities. At GAO, we have found that engaging contractors to assist on individual projects can be a cost-effective means of expanding our ability to review agency operations, especially in areas such as information security where very specific and often highly technical expertise may be needed. We met with branch chiefs and program examiners responsible for examining programs at 11 of the 15 agencies covered by our review and found that their attention to information security varied. Examiners for all but one agency said that they considered information security during their examination of agency budgets and programs to some extent, although examiners for eight agencies said that they only did so when it had been highlighted by agency management or in audit reports as a problem. These considerations were generally limited to monitoring agency progress in correcting recognized problems and did not involve examining an agency’s information security program or the effectiveness of agency security practices in general. 
For example, the RMO branches overseeing the Departments of Agriculture and Education and the Office of Personnel Management all said that they had paid special attention to security issues associated with certain systems or facilities because weaknesses had been recently reported. The program examiners and their branch chiefs said that information security is usually not closely examined because it is only one of many issues demanding their attention. The number of program examiners responsible for each agency varied from about 5 for the Department of Education to about 30 for the Department of Defense. There were a few cases where known problems were receiving virtually no attention from the RMOs. Most notably, the representative that we spoke with about the branches that oversee the Department of Defense said that the program examiners there almost never considered problems related to information systems, including security, because such issues did not seem to have a significant budget impact compared to other issues and programs. He emphasized that due to the Department of Defense’s size and variety of programs, the Defense examiners had to be very selective in deciding which items merited examination. Also, a long-standing problem regarding a lack of disaster recovery planning at the Department of Veterans Affairs (VA) appeared to have prompted little interest from the RMO branch responsible for overseeing the Department, although other security issues were considered. Officials in several branches indicated that they were becoming increasingly sensitized to the significance of information security due to recent operational issues within their agencies. For example, the VA Branch Chief said that VA’s efforts to streamline its processes by accessing needed information in other agencies’ systems had raised a number of concerns about the security of shared data and the related legal requirements. 
Similar concerns were expressed by the branches overseeing system modernization projects at the Department of Agriculture and the Health Care Financing Administration because these projects would result in increased accessibility of sensitive information on individuals. Despite the increasing importance of information security, few of the program examiners said that they had any significant experience or expertise in dealing with information systems or related security issues. Most said that due to their lack of expertise, they depended largely on OIRA to help them understand the issues and assess related agency actions. Most of the branches said they had good working relationships with OIRA, as well as the other statutory offices within OMB, and that when they needed technical assistance, it was available. Also, some branches had informally designated an individual with some experience in examining systems-related issues to review these issues and to serve as a resource for other examiners in the branch. Two of the branches we visited each had a relatively experienced individual to assist in the branch’s examinations. These individuals were very familiar with their agencies’ information processing operations and appeared to have performed a much more comprehensive review of information security than had been performed by other branches. OMB provides no formal training to the RMO program examiners regarding information systems management and related security issues. Each summer, OMB provides several days of seminars on issues of interest to examiners. However, only a few hours are devoted to topics handled by OIRA, including information resource management issues, such as system development issues and security. Officials in the Information Policy and Technology Branch believe that ad hoc on-the-job learning is more effective in increasing the expertise of program examiners than a more formal program of training or awareness sessions would be. 
This is because the examiners can be overwhelmed by the volume of information available to them, and they are more likely to absorb information that is immediately useful. However, one branch chief said that there are few on-the-job learning opportunities regarding security issues because his branch devotes little attention to such issues. To effectively oversee and influence any activity, it is essential to have meaningful, reliable, and routinely available information on the operations being examined. However, the documented information that OMB routinely obtains on the design and effectiveness of agency information security programs varies significantly in quality, quantity, and usefulness. Officials in OIRA’s Information Policy and Technology Branch said that they routinely obtain annual internal control assessments required under the Federal Managers’ Financial Integrity Act (FMFIA) and strategic information resource management plans. Since 1985, OMB Circular A-130, Appendix III, has directed agencies to review their sensitive systems at least every 3 years, certify the adequacy of security safeguards, and include identified weaknesses in the agencies’ annual reports on internal controls required by FMFIA. Also, the Computer Security Act requires each agency to include a summary of its information security plan in its strategic information resource management plan that it submits annually to OMB. However, these documents vary significantly in level of detail and were often of little value for oversight purposes. Our review found that the FMFIA reports tended to contain very cursory information that made it difficult to precisely understand the nature of the weakness reported. Similarly, most of the security program summaries were very brief, and, in most cases, they only described very general agency goals and policies, with little information on the effectiveness of the program or on planned improvements. 
Further, the reporting formats varied considerably among agencies. The RMO branches that we met with said that they attempted to obtain whatever information was available on the programs they examined, in addition to the agency budget documents that were the starting point for their examinations. However, most RMO examiners said that they did not routinely seek out information on or review agency security programs and that any investigation of security issues that they made was almost always prompted by issues raised by management or auditors. Most examiners said that they relied primarily on inquiries of agency officials and related documentation in examining agency programs, including any security issues that they were aware of. However, they also said that they used audit reports, usually issued by agency inspectors general and by GAO. Several examiners noted that such audit reports were useful both in providing them an independent assessment of agency operations and in strengthening their ability to encourage agency actions. Also, several of the branches said that their examinations benefited from good working relationships with agency inspector general officials, who would alert them to key inspector general reports and other issues. We found that, for the most part, at least one examiner in each of the branches we met with was familiar with the information security weaknesses that had been reported in inspector general, GAO, and FMFIA reports for their agencies. However, some examiners were unaware of related detailed reports that had been issued on these weaknesses. Until recently, independent audits of information security practices were performed largely at the discretion of inspector general offices and GAO and in response to congressional interest. As a result, OMB analysts and examiners could not rely on such reports being routinely available. 
However, program examiners at some agencies said that they have begun to review annual audits performed under the CFO Act as a means of monitoring agency control weaknesses, including those related to information security. These audits are discussed further in chapter 4 of this report. Two relatively new developments can serve to improve and facilitate OMB’s ability to oversee and influence the effectiveness of agency information security programs. One is an expansion of independent information security reviews prompted by financial statement audits required under the CFO Act. Another is the recently established CIO Council, which can serve as a forum for addressing governmentwide information security issues and raising security awareness. Although Inspector General offices and GAO have reviewed information security at federal agencies on a selective basis for decades, audits performed under the CFO Act promise to make such independent audit information more routinely available at all major agencies. Generally, CFO Act audits are required to include an evaluation of the auditee’s internal controls, including information security controls. Such evaluations can assist OMB and the Congress in their oversight roles and serve as useful tools for agency managers. In the early 1990s, selected segments of federal operations became subject to annual financial statement audits by agency inspector general offices under the CFO Act. In 1994, this audit requirement was extended to all major federal entities by the Government Management Reform Act (Public Law 103-356). As a result, the percentage of federal expenditures that is audited has been steadily growing, and, by fiscal year 1997, about 98 percent will be covered by such audits. The primary responsibility for monitoring information security programs rests with agency managers who must routinely assess their programs and adjust policies and practices as needed. 
However, independent audits, such as the CFO Act audits, can be useful to OMB because they provide an objective evaluation that may identify weaknesses that were overlooked by agency self assessments. For example, IRS did not report its information security weaknesses in its annual FMFIA report until after independent audits had identified the weaknesses. Although the reviews of computer security controls associated with CFO Act audits pertain to financial management systems, they usually cover a significant portion of each agency’s operations. This is because program and financial systems often are supported by common data centers and communications networks that are subject to the same general controls. For example, personnel responsible for making needed changes to software are likely to follow the same set of procedures for controlling such changes regardless of whether they pertain to a financial or nonfinancial system. Similarly, the adequacy of a disaster recovery plan for a large data center is likely to affect the security of all of that center’s operations—both financial and nonfinancial. Also, program management systems often are the source of many detailed financial transactions and, therefore, are included in the auditor’s review. However, there are significant aspects of some agencies’ operations involving sensitive computerized data that are not likely to be covered by financial statement audits. Examples include medical records and certain types of data supporting law enforcement operations. For this reason, it is important for OMB, as well as agency managers, to coordinate their reviews of CFO Act audit reports and their reviews of other information security assessments, such as self assessments conducted in accordance with FMFIA and OMB Circular A-130. When viewed together, these audits and assessments may provide a more comprehensive view of agency information security and allow OMB and agency officials to identify gaps in review coverage. 
The awareness and use of CFO Act audit reports as a means of identifying information security weaknesses varied among the OMB analysts and examiners that we spoke with. This is understandable since audits of many agency programs have not been required until recently, and the routine availability of annual financial audit reports is relatively new. OIRA officials told us that they had not viewed these reports as a source of information on agency compliance with federal policies, because they did not realize that information security reviews were generally included in financial statement audits. However, they said that in the future, they would obtain CFO audit reports from OMB’s Office of Federal Financial Management, where they are routinely received from agencies. The awareness of RMO program examiners was mixed. Most were aware of the CFO audit reports that affected the programs they were responsible for examining. However, a few were unaware of significant information security problems that had been reported. Another recent development that can facilitate OMB’s oversight is the recently established CIO Council. The Council, established in July 1996 through Executive Order, is intended to be “the principal interagency forum to improve agency practices on such matters as the design, modernization, use, sharing, and performance of agency information resources.” In this regard it is to support implementation of the Paperwork Reduction Act of 1995 and the Information Technology Management Reform Act of 1996. It is chaired by OMB’s Deputy Director for Management, and its membership includes CIOs from all major federal agencies. The senior information resource managers that we spoke with and officials at OIRA agreed that the Council would be an appropriate forum for addressing information security issues and raising awareness governmentwide. 
However, officials at two agencies expressed their opinions that to be effective, the Council must take an active role in addressing problems, such as security, and go beyond just promoting awareness and sharing information. With the support of the CIO Council and OMB, CIOs at individual agencies can raise the awareness of senior program officials to information security risks and serve as an important link between technical staff, who understand technical system and telecommunications vulnerabilities, and program managers, who understand the vulnerabilities associated with program activities, such as the risks of making inappropriate payments or inappropriately disclosing personal data on individuals. In addition, the CIOs can work together to identify and initiate efforts that benefit all of their agencies. Such efforts could include developing training programs, identifying best practices, and establishing interagency teams to review information security programs in multiple agencies. While agencies are moving toward greater reliance on computers and electronic data to improve operations, recent reports indicate that many are not adequately addressing the associated risks. Most importantly, these agencies have not instituted security programs that are the foundation for ensuring that specific control techniques are appropriately selected and effectively implemented. The potential risks and related management challenges will increase as reliance on networked systems and electronic data increases and as more sophisticated control techniques become available. For this reason, it is important that OMB and agencies move promptly to increase senior management awareness of this problem and institute effective programs for managing these risks. 
Implementing effective information security programs is primarily the responsibility of managers at individual federal agencies, since they are the most familiar with program risks and they have the ability to bring resources to bear where they will be most effective. However, OMB is responsible for overseeing these activities. OMB could strengthen its ability to fulfill this role if (1) it obtained more concise and meaningful information on the design of agency security programs and (2) RMO program examiners—the individuals with the most detailed understanding of agency operations—were more familiar with information security issues and did not have to depend as much on OIRA’s limited staff for assistance. To improve its oversight capability, it is important that OMB capitalize on every opportunity to leverage its resources and take advantage of all available information on agency information security practices. Some opportunities, including the increased number of annual financial statement audit reports and the recently established CIO Council, are already emerging as potential aids in overseeing and improving agency information security programs. However, there are additional steps that OMB can take to ensure that these opportunities are exploited and to increase the expertise of its staff. In this regard, we recommend that the Director of OMB take the following actions: Advocate and promote the CIO Council’s adoption of information security as one of its top priorities and development of a strategic plan for (1) increasing awareness of the importance of information security, especially among senior agency executives, and (2) improving information security program management governmentwide. 
Initiatives that the CIO Council should consider incorporating in its strategic plan include:

- developing information on the existing security risks associated with nonclassified systems currently in use;
- developing information on the risks associated with evolving practices, such as Internet use;
- identifying best practices regarding information security programs so that they can be adopted by federal agencies;
- establishing a program for reviewing the adequacy of individual agency information security programs using interagency teams of reviewers;
- ensuring adequate review coverage of agency information security practices by considering the scope of various types of audits and reviews performed and acting to address any identified gaps in coverage;
- developing or identifying training and certification programs that can be shared among agencies; and
- identifying proven security tools and techniques.

Direct the Office of Information and Regulatory Affairs, the Office of Federal Financial Management, and the Resource Management Offices to (1) supplement their current reviews of audit reports to include reviewing audits conducted under the CFO Act in order to identify any findings related to information security and (2) use this information, in conjunction with reports on agency self assessments, to assist in proactively monitoring the scope of such reviews and the effectiveness of agency information security practices. Encourage the development of improved sources of information with which to monitor compliance with OMB's guidance and the effectiveness of agency information security programs. This could include engaging assistance from private contractors or others with appropriate expertise, such as federally funded research and development centers. 
Direct the Office of Information and Regulatory Affairs to develop and implement a program for increasing program examiners’ understanding of information security management issues so that they can more readily identify and understand the implications of information security weaknesses on agency programs. In written comments on a draft of this report, OMB agreed that information security is an important management issue and stated that certain of the report’s recommendations are meritorious. In particular, OMB said that it will encourage the CIO Council to adopt information security as one of its top priorities and that it will review (1) the training and related materials provided to program examiners and (2) the availability of improved sources of information. However, OMB disagreed with the report’s tone, which it characterized as suggesting “that OMB has not been dedicating sufficient resources in the past to overseeing the agencies’ information security activities, and that therefore OMB in the future should dedicate more of its resources to this objective.” In addition, OMB stated its concern that the report overemphasizes OMB’s role and that this could distract federal agencies from their responsibilities as the primary managers of federal information security. We agree that agency managers are primarily responsible for information security. Our audit efforts related to information security over the past few years have focused almost exclusively on individual agency practices, and we have made dozens of related recommendations to agency officials. Thirty products resulting from this work and containing these recommendations are listed at the end of this report. The results of this work led us to identify a pattern of governmentwide information security weaknesses. 
In light of the pattern of weaknesses that we have identified and the increasing importance of information security in virtually every aspect of federal operations, OMB has a vital leadership role to play in promoting and overseeing agency security practices. This role was recently reemphasized in the Information Technology Management Reform Act of 1996 and in revisions to the Paperwork Reduction Act, which together explicitly outline OMB’s responsibilities for overseeing agency practices regarding information privacy and security. Information security has become a consideration in the management of virtually every major federal program and in billions of dollars in annual information technology investment decisions. For these reasons, we believe that information security, as well as other information management issues, merits a high priority relative to other budget and management issues. In this regard, our recommendations are focused primarily not on increasing the amount of OMB resources but on increasing the impact of OMB’s current resources by taking advantage of newly available audit information, discussed in chapter 4, and by expanding staff expertise. These actions, at a minimum, are needed to help address growing concerns over the adequacy of federal information security. We also believe that periodic oversight reviews of agency information security programs would be beneficial but that such reviews could be performed by interagency teams under the auspices of the OMB-chaired CIO Council, as we suggest in chapter 4.
Pursuant to a congressional request, GAO provided a general overview of the adequacy of information security at 15 major federal agencies, focusing on: (1) recent reviews and self-audits of information security at these agencies; (2) the most significant information security weaknesses and their causes; and (3) the Office of Management and Budget's (OMB) oversight of federal agency practices and opportunities for improvement. GAO found that: (1) recent audits and reviews indicate that weak information security is a serious governmentwide problem, with serious weaknesses reported for over two-thirds of the agencies reviewed; (2) commonly reported weaknesses include information access control problems and inadequate disaster planning; (3) at half of the agencies reviewed, information security problems remained uncorrected for 5 years or longer; (4) many agencies lack a well-managed information security program with senior management support; (5) although OMB has improved federal information security guidance and its monitoring of agency efforts to address identified weaknesses, the scope and depth of its oversight efforts vary considerably among agencies; (6) the information that OMB obtains on federal information security programs varies significantly in quality, quantity, and usefulness; (7) OMB could use expanded requirements under the Chief Financial Officers Act to further monitor agencies' information security programs and weaknesses; and (8) the recently established Chief Information Officers (CIO) Council can serve as a forum for addressing governmentwide information security issues.
In May 1985, the Secretary of Agriculture established EEP to address, in part, continuing declines in U.S. agricultural exports and to pressure foreign nations to reduce trade barriers and eliminate trade-distorting practices. Subsequently, the Food Security Act of 1985 (P.L. 99-198, Dec. 23, 1985) specifically authorized EEP as an export subsidy program. The program was reauthorized by the Food, Agriculture, Conservation, and Trade Act of 1990, which extended EEP through 1995. From May 1985 to May 1994, FAS awarded bonuses valued at $7.1 billion (in constant 1993 dollars) to EEP exporters to sell mainly bulk commodities, such as wheat or rice. To qualify for EEP funding, proposed commodities and countries must be approved under an interagency process. FAS receives oral and written recommendations for countries and commodities to target under EEP; most of the recommendations come from trade associations and from within FAS. Recommendations are also submitted by importing countries, exporters, U.S. and foreign government officials, and other members of the U.S. agricultural community. EEP regulations outline four criteria to be used by FAS, among other things, in determining if commodities and countries proposed for EEP participation meet the program's objectives:
- How will the proposal contribute to furthering trade policy negotiations with foreign competitor nations that use unfair trade practices?
- How will the proposal contribute toward developing, expanding, or maintaining U.S. agricultural export markets?
- What will be the impact on countries that do not subsidize their agricultural exports?
- What is the cost of the proposal compared to the expected benefits?
FAS recently changed the emphasis in its review of EEP proposals from furthering trade policy negotiations to market development. According to FAS, the implementing legislation for the GATT Uruguay Round agreement made furthering trade policy negotiations with competitor nations less significant.
If FAS recommends approving the proposal, the proposal must then be approved by the Department of Agriculture’s Under Secretary for Farm and International Trade Services and by the interagency Trade Policy Review Group. The Group includes representatives from agencies with an interest in foreign trade issues. Once a proposal is approved, FAS issues invitations for bids specifying the targeted country or countries, the commodity, the maximum quantity of the commodity eligible for a bonus, the eligible buyers, and the other terms and conditions of the sale. Exporters can then bid for an EEP bonus award. First, exporters must negotiate a sales price with an eligible buyer in the target country. After determining what bonus amount is needed to close the gap between the going price for the commodity in the targeted country (world price) and the U.S. price, the competing exporters then submit this information to FAS as bids. Next, FAS reviews the bids to determine if the price and bonus amounts are within FAS’ acceptable ranges. FAS calculates the prevailing price for the commodity in the target market using various information sources. FAS rejects bids proposing prices that undercut the world price it calculated for the commodity as well as those proposing bonus amounts that exceed the difference between the world price and the U.S. market price. FAS then awards bonuses starting with the lowest bonus amount requested per unit of the commodity and proceeds to the next highest bonus amount until the quantity of the commodity eligible for EEP bonuses is exhausted. To assess whether providing EEP bonuses to foreign-owned exporters is consistent with program goals and objectives, we researched the legislative and regulatory history of the program to identify (1) the objectives of the program and (2) the intended role of exporters in the program. 
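The bid-screening and award sequence described above can be sketched in code. This is an illustrative sketch only: the data structures, price figures, and function name are assumptions for exposition, not FAS systems or data.

```python
# Sketch of the EEP bid screening and award sequence described in the text.
# All names and dollar figures are illustrative assumptions, not FAS data.

def award_bonuses(bids, world_price, us_price, eligible_quantity):
    """Screen bids, then award bonuses from lowest bonus per unit upward."""
    # A bonus may not exceed the gap between the U.S. price and the world price.
    max_bonus = us_price - world_price
    # Reject bids that undercut the calculated world price or that request a
    # bonus exceeding the price gap.
    acceptable = [b for b in bids
                  if b["price"] >= world_price and b["bonus"] <= max_bonus]
    # Award starting with the lowest bonus per unit and proceed upward until
    # the quantity eligible for EEP bonuses is exhausted.
    awards = []
    remaining = eligible_quantity
    for b in sorted(acceptable, key=lambda b: b["bonus"]):
        if remaining <= 0:
            break
        qty = min(b["quantity"], remaining)
        awards.append({"exporter": b["exporter"], "bonus": b["bonus"], "quantity": qty})
        remaining -= qty
    return awards

bids = [
    {"exporter": "A", "price": 100, "bonus": 25, "quantity": 50},
    {"exporter": "B", "price": 95,  "bonus": 20, "quantity": 40},  # undercuts world price
    {"exporter": "C", "price": 102, "bonus": 18, "quantity": 80},
]
print(award_bonuses(bids, world_price=100, us_price=130, eligible_quantity=100))
```

In this hypothetical run, exporter B's bid is rejected for undercutting the world price, exporter C wins first with the lowest bonus per unit (80 units), and exporter A receives the remaining 20 eligible units.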
We also interviewed FAS headquarters officials to discuss those issues and whether changes to EEP contained in legislation recently passed by Congress would alter the role of exporters in the program. To assess whether restricting foreign-owned exporters from participation would adversely affect EEP, we obtained and analyzed fiscal year 1992 FAS data on EEP bids and awards for eight commodities. Fiscal year 1992 data were used because they were the most current and complete fiscal year data available at the start of our review. We also obtained and analyzed data from FAS on exporters participating in the program from May 1985 to May 1994. We did not verify the accuracy of data obtained from FAS. Because there is no standard definition of what constitutes a foreign- or domestic-owned firm, we used the location of company headquarters and parent company headquarters to categorize exporters as foreign- or domestic-owned. If the company was headquartered outside the United States or if it was the U.S. subsidiary of a company headquartered outside the United States, we classified the exporter as foreign-owned. We then used these data to determine (1) the extent to which foreign-owned exporters bid for and received EEP bonuses and (2) the quantity of EEP commodities exported by these foreign-owned companies on a commodity- and country-specific basis. We also reviewed economic literature regarding the relationship between the number of bidders and the extent of competition. To identify FAS’ internal controls for detecting unauthorized diversions of EEP shipments, we reviewed EEP regulations and FAS written guidelines and procedures on controls over EEP shipments. We also interviewed officials from FAS headquarters in Washington, D.C., and the Agricultural Stabilization and Conservation Service in Kansas City, Missouri, about features of the control system. 
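The ownership classification rule described above turns on the location of the company's headquarters and, for subsidiaries, the parent's headquarters. A minimal sketch of that rule follows; the function and argument names are illustrative assumptions.

```python
# Sketch of the classification rule described in the text: an exporter is
# treated as foreign-owned if it is headquartered outside the United States,
# or if it is the U.S. subsidiary of a parent headquartered outside the
# United States. Names are illustrative assumptions, not GAO's actual tool.

def classify_exporter(hq_country, parent_hq_country=None):
    if hq_country != "US":
        return "foreign-owned"
    if parent_hq_country is not None and parent_hq_country != "US":
        return "foreign-owned"
    return "domestic-owned"

print(classify_exporter("US"))                          # no parent indicated
print(classify_exporter("US", parent_hq_country="UK"))  # U.S. subsidiary of a British parent
print(classify_exporter("FR"))                          # headquartered abroad
```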
To assess the adequacy of the controls, we initially tested the controls by reviewing 25 judgmentally selected EEP shipments. The shipments reviewed were selected to cover the various commodities exported under EEP and to provide a mix of foreign- and domestic-owned exporters. On the basis of our preliminary results, we expanded our testing by randomly selecting 100 shipments from the 3,356 shipments that occurred under EEP during fiscal year 1992. During our testing, we compared data provided by exporters on EEP shipments with data maintained by Lloyd’s Maritime Information Services, Inc., on the movement of marine vessels. We did our review from July 1993 to September 1994 in accordance with generally accepted government auditing standards. We received written comments on a draft of this report from the FAS Administrator. They are summarized on page 13 and presented in full in appendix II. FAS’ award of EEP bonuses to foreign-owned corporations is consistent with program objectives set forth in the Food, Agriculture, Conservation, and Trade Act of 1990. These objectives are to “discourage unfair trade practices by making U.S. agricultural commodities competitive.” The nationality of an exporter’s ownership is not germane to the pursuit of these objectives, since both foreign- and domestic-owned EEP exporters act as intermediaries in the program’s sales of U.S. agricultural commodities in overseas markets. Exporters help ensure that U.S. agricultural commodities compete on the world market by negotiating sales and prices with potential foreign buyers and by arranging for commodity deliveries to foreign buyers. The 1990 statute does not preclude foreign-owned exporters from receiving cash payments or commodities under the program as long as such payments serve the stated purpose of discouraging unfair foreign trade practices by making the prices of U.S. agricultural commodities competitive. 
In addition, the statute does not make a distinction regarding the treatment of domestic- and foreign-owned exporters under the program. Pending changes to EEP resulting from the implementation of the GATT Uruguay Round agreement are unlikely to alter the role of exporters in the program, according to FAS officials. In April 1994, U.S. officials joined delegates from more than 100 other countries in signing the GATT Uruguay Round agreement. The agreement, among other things, requires participating developed countries to reduce their subsidies for agricultural exports by 36 percent in budgetary outlays and reduce the quantities of subsidized exports by 21 percent. The agreement also prohibits member nations from introducing or reintroducing subsidies for agricultural products that were not subsidized during the 1986 to 1990 base year period. In December 1994, Congress enacted implementing legislation for the Uruguay Round agreement (P.L. 103-465, Dec. 8, 1994). The legislation extended EEP through 2001 and refocused EEP so that it would not be limited to countries where the United States faces unfair foreign trade practices. While the Uruguay Round agreement established annual ceilings on the use of subsidies, it did not prohibit the use of agricultural export subsidies. Therefore, the Clinton administration recommended, and Congress agreed, that it was necessary to maintain EEP and other U.S. agricultural subsidy programs as a means of inducing other nations to negotiate further reductions on the use of agricultural export subsidies. According to FAS officials, the implementing legislation allows EEP to be used to export U.S. agricultural commodities to a greater number of countries. FAS officials we spoke with did not yet know how the change in EEP’s objectives would affect the program’s operation. However, they did not anticipate changes being made to the role of exporters in the program. 
Eliminating foreign-owned exporters from EEP participation could impair competition for EEP bonuses, which could ultimately lead to higher subsidies being paid for each unit of commodity exported under the program. In addition, our analysis of EEP award data suggested that restricting foreign-owned exporters from EEP participation could significantly lower the amount of barley malt, barley, and wheat exported under EEP unless the extent of foreign-owned exporter participation could be replaced by domestic-owned exporters. However, we could not determine whether domestic-owned exporters could easily replace foreign-owned exporters in the program. Currently, foreign-owned exporters receive a substantial portion of EEP bonuses—over 39 percent—as shown in table 1. It is important to note that of the 38 exporters we classified as foreign owned, 36 are the U.S. subsidiaries of parent companies located outside of the United States. Many of these U.S. subsidiaries have a substantial presence in the United States. For example, the Pillsbury Company, which is the subsidiary of a British firm, is headquartered in Minnesota and employs 8,000 workers throughout the United States. (See app. I for a complete listing of EEP exporters participating in the program from May 1985 to May 1994 and their ownership classification.) Eliminating foreign-owned exporters from the program would reduce the number of bidders for EEP bonuses. The economic studies we reviewed suggested that eliminating potential bidders from participating in EEP would reduce competition for EEP bonuses. Reduced competition among a smaller pool of bidders for EEP bonuses could lead to payment of larger EEP bonuses per unit of commodity subsidized under the program. FAS officials hold a similar view. They explained that strong competition for bonuses should result in smaller bonus awards as exporters vie for a fixed amount of EEP bonuses. 
These smaller awards per unit of export enable FAS to subsidize a greater quantity of EEP commodities with available EEP funds. Our analysis of bidding activity by exporters during fiscal year 1992 for eight commodities showed that foreign-owned exporters submitted over one-third of the bids for bonus awards. Foreign-owned exporters were particularly active bidders for wheat and barley malt bonuses, submitting 44 and 72 percent, respectively, of the bids for those commodities during fiscal year 1992. Foreign-owned exporters received a significant share of the winning bids, and their importance varied by commodity. As shown in figure 1, foreign-owned exporters accounted for about 79 percent of the quantity of barley malt sold under EEP during fiscal year 1992. As with barley malt and barley, a major portion (50 percent) of the quantity of wheat sold under EEP during fiscal year 1992 was exported by foreign-owned exporters. This is significant because wheat exports have overshadowed all other commodities in the EEP program. During fiscal year 1992, bonuses for wheat shipments accounted for about 84 percent of all EEP funds. Given the number of variables that affect whether an exporter participates in and receives bonuses under EEP, we could not determine if domestic-owned exporters could easily replace foreign-owned exporters in the program. For example, FAS does not know whether the domestic-owned exporters currently participating in the program would bid for the volume of EEP commodities currently exported by foreign-owned exporters. Domestic-owned exporters would still need to meet FAS' price and bonus thresholds for EEP bonuses. FAS also does not know to what extent domestic-owned exporters not currently participating in EEP would enter into the program and compete successfully for EEP bonuses.
Currently, exporters must provide FAS with documentation showing their experience in selling at least a minimal amount of the targeted commodity during the previous 3 calendar years to qualify for EEP participation. FAS issued a proposed rule on January 18, 1995, that would eliminate this requirement. According to FAS officials, some exporters have complained that the experience requirement prevented them from otherwise qualifying for program participation. FAS officials told us that eliminating the experience requirement should increase the number of exporters eligible to participate in the program. However, they stated that the number of additional exporters that would actually receive bonuses under the program and the extent of their participation are not known. FAS has only a limited ability to detect unauthorized diversions of EEP shipments. Unauthorized diversions occur when commodities do not arrive at the destination country and, instead, are sent to another country. Unauthorized diversions of EEP shipments are both illegal and counter to the current targeting aspects of the program. Internal FAS controls to detect unauthorized diversions primarily consisted of examining exporter-provided documentation to determine if EEP commodities arrived at the destination country. However, information the exporters provided was not reliable or accurate in some cases. While FAS is attempting to improve its monitoring of EEP shipments, key limitations hinder its ability to verify that shipments were not diverted. The possibility of unauthorized diversions of EEP shipments has long concerned Congress. The Food, Agriculture, Conservation, and Trade Act of 1990, which prohibits such diversions, requires exporters to maintain proof that EEP commodities arrived at the intended destination. The act also requires FAS to ensure that the agricultural commodities arrived at the intended destination country as provided for in the EEP agreement. 
FAS relied primarily on information supplied by exporters to monitor for possible unauthorized diversions. FAS required EEP exporters to provide bills of lading to document the export of EEP commodities. FAS also required exporters to provide documentation showing the receipt of EEP commodities in the intended destination countries. FAS officials told us that their staff then compared the certificates of entry to the bills of lading to monitor for possible diversions of EEP shipments and to ensure that EEP bonuses were paid only for commodities that actually had arrived at the intended destination. Our review of individual EEP shipments showed that exporters did not always provide reliable and accurate information regarding the arrival of EEP commodities in destination countries. To assess the reliability of documents submitted by exporters, we first reviewed the documentation provided by exporters in support of 25 EEP shipments made in fiscal year 1992. During our review of the 25 shipments, we found discrepancies that led us to question the accuracy and validity of the documentation provided by the exporters. For example, we compared the information on the bills of lading to the certificates of entry and found that one exporter had provided certificates of entry showing the arrival of the ship in the destination country before the cargo loading date shown on the bills of lading. We then expanded our analysis to include a review of 100 randomly selected fiscal year 1992 shipments. Although we did not find any discrepancies between the bills of lading and the certificates of entry upon our review of the 100 shipments, we did find 6 shipments for which the exporters had submitted questionable or inaccurate information. We used an on-line data service, known as SeaData, subscribed to by FAS, to verify the accuracy of the certificates of entry. 
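The document cross-check described above can be expressed as a simple consistency test: a certificate of entry showing arrival in the destination country before the cargo loading date on the bill of lading is a discrepancy, and a vessel-name mismatch warrants follow-up. The sketch below is illustrative; the field names and dates are assumptions, not FAS's actual records.

```python
# Sketch of the bill-of-lading vs. certificate-of-entry cross-check described
# in the text. Field names and dates are illustrative assumptions.
from datetime import date

def check_shipment(bill_of_lading, certificate_of_entry):
    """Return a list of discrepancies found between the two documents."""
    problems = []
    # A ship cannot arrive in the destination country before its cargo is loaded.
    if certificate_of_entry["arrival_date"] < bill_of_lading["loading_date"]:
        problems.append("arrival precedes cargo loading")
    # A vessel-name mismatch is not necessarily a diversion (cargo may have
    # been transshipped), but it requires further documentation.
    if certificate_of_entry["vessel"] != bill_of_lading["vessel"]:
        problems.append("vessel names differ")
    return problems

bol = {"vessel": "MV Example", "loading_date": date(1992, 3, 10)}
coe = {"vessel": "MV Example", "arrival_date": date(1992, 3, 1)}
print(check_shipment(bol, coe))  # arrival precedes loading: a discrepancy
```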
FAS had been testing and using the SeaData system, which is maintained by Lloyd’s Maritime Information Services, Inc., since January 1992 to obtain information on the movement of commercial trading vessels worldwide. We found six cases in which SeaData had reported that the vessels shown on the certificates of entry had been in different areas of the world and had not visited the ports or countries shown on the certificates of entry. At our request, FAS contacted the exporters for the six shipments and verified that five of the shipments had been taken off the vessel shown on the bill of lading and loaded onto another vessel for delivery to the target country. It also verified that the certificates of entry did not list the vessel from which the EEP commodity had actually been unloaded in the destination country. Instead, the certificates of entry showed the name of the vessel that the EEP commodity had been transferred from. The remaining case was not resolved because the exporter was unable to supply additional documentation to support the arrival of the EEP commodity in the destination country. FAS subsequently notified exporters of the need to provide further documentation whenever EEP commodities are transferred from one vessel onto another for delivery to the target country. Given that five of the six discrepancies identified in our random sample were resolved, we would not expect many of the 3,356 shipments to have unresolvable discrepancies. Any unauthorized diversion of EEP shipments undermines the targeting aspect of the program. According to FAS, EEP’s targeting aspect was intended to (1) demonstrate a direct response to subsidized competition; (2) minimize the impact on foreign competitor nations that do not subsidize their agricultural exports; and (3) provide a more focused and, therefore, effective use of EEP funds. 
By targeting markets where foreign nations are providing subsidized exports, EEP is intended to pressure subsidizing foreign nations to eliminate the use of subsidies and other trade-distorting practices. Although the United States has made progress in obtaining foreign competitor nations’ commitment to reduce the use of agricultural export subsidies, FAS officials told us that EEP is still necessary to induce foreign competitor nations to negotiate further reductions. As a result, any unauthorized diversions of EEP shipments reduce the program’s effectiveness as a trade policy tool. FAS plans to use SeaData to strengthen its ability to ensure that unauthorized diversions of EEP shipments do not occur. FAS officials told us that they will randomly select EEP shipments and use SeaData to verify the accuracy of the data provided by the exporters. However, SeaData has some significant limitations. The SeaData system provides information on ship movement but not on whether commodities were unloaded from the ship in the ports it visited. In addition, the SeaData system does not provide data on ship movement in certain parts of the world. For example, the SeaData system cannot be used to verify whether ships bound for some ports in the former Soviet Union arrived as shown on the exporter’s certificate of entry. FAS officials told us they were exploring other methods of verifying the arrival of EEP commodities in the destination countries. They said that random on-site inspections of EEP shipment arrivals were not feasible because of resource constraints and because some foreign countries would not allow U.S. government officials physical access to their ports. However, they said they were considering more cost-effective alternatives to on-site inspections. For example, FAS staff may be able to perform on-site reviews of documents maintained by some large EEP buyers in foreign countries. The Foreign Agricultural Service provided written comments on a draft of this report. 
It said that FAS had recently shifted the emphasis of its review of EEP proposals from the impact on furthering trade policy negotiations to market development. FAS said that the shift in emphasis was in accordance with the implementing legislation for the GATT Uruguay Round agreement. FAS pointed out that the draft report did not acknowledge that it had been testing and using the SeaData system for over a year before making it available to GAO. FAS provided some additional information on its efforts to obtain reliable third-party sources of information that could be used to verify the quantity of commodity discharged at the destination port. Lastly, FAS said that one of the EEP exporters shown in the draft report as being foreign-owned was currently owned by a U.S. company. Where appropriate, FAS' comments have been incorporated into the text of the report. The complete text of FAS' comments, along with our specific responses, is included in appendix II. We are sending copies of this report to the Secretary of Agriculture and other interested parties. Copies will be made available to others on request. The major contributors to this report are listed in appendix III. Please contact me at (202) 512-4812 if you have any questions concerning this report.

[Appendix I: table listing the EEP exporters that participated in the program from May 1985 to May 1994 and their parent companies. Parents named in the source data include AG Processing, Inc.; ConAgra, Inc.; Cargill, Inc.; Ferruzzi Finanziaria, S.p.A.; Louis Dreyfus et Cie, S.A.; Goldman Sachs Group Limited Partnership; Canada Malting Company, Ltd.; C. Itoh & Company Limited; Mitsui and Company, Ltd.; Toshoku, Ltd.; and others. N.P. = No parent company indicated in the source data.]

The following are GAO's comments on FAS' letter dated March 20, 1995.

1. The report was amended to show that FAS now emphasizes market development in its review of EEP proposals.
2. We changed the report to recognize FAS' earlier use of the SeaData system.
3. We acknowledged in our draft report that FAS routinely examined the bills of lading and other documents it receives to monitor for possible diversions. However, we believe that additional information is needed to show what was actually received at the export destination. We encourage FAS to continue its efforts to identify additional sources of information that will allow it to monitor for possible diversions of EEP shipments.
4. Appendix I and the corresponding statistics used in this report were modified to reflect the change in the ultimate parent company for Tradigrain.

Kane A. Wong, Assistant Director
Harry Medina, Evaluator-in-Charge
Gerhard C. Brostrom, Reports Analyst
Pursuant to a congressional request, GAO reviewed the participation of foreign-owned companies in the Foreign Agricultural Service's (FAS) Export Enhancement Program (EEP). GAO found that: (1) foreign exporters' participation in EEP is consistent with the program's basic objectives of discouraging other countries' unfair trade practices and increasing the competitiveness of U.S. agricultural commodities; (2) exporters help achieve these objectives by facilitating U.S. agricultural product sales in targeted countries; (3) restricting foreign exporters' EEP participation could reduce the effectiveness of the program; (4) eliminating foreign-owned exporters would reduce the number of bidders for EEP bonuses, which would reduce competition and result in higher program costs; (5) it is unclear whether domestic-owned exporters could easily replace foreign-owned exporters; and (6) FAS' ability to detect unauthorized diversions of EEP shipments relies mainly on checking exporters' documents, which may be unreliable or inaccurate, and is further limited by gaps in the ship-movement database it uses.
In 1980, the Comprehensive Environmental Response, Compensation, and Liability Act created the Superfund program to clean up highly contaminated hazardous waste sites. Under the act, EPA is authorized to compel the parties responsible for the contamination to perform the cleanup. EPA may also pay for the cleanup and attempt to recover the cleanup costs from the responsible parties. When EPA pays for the cleanup, the work is conducted by a private contractor who is directly hired by EPA, another federal entity, or a state. Superfund contractors study and design cleanups, as well as manage and implement cleanup actions at sites on the National Priorities List (EPA’s list of the nation’s worst hazardous waste sites) or at sites where there are immediate threats from hazardous wastes. In our 1998 report on contractor cleanup spending, we reported that for remedial action cleanups managed by EPA, about 71 percent of the costs charged by cleanup contractors was for the subcontractors who physically performed the cleanups—such as earthmoving and constructing treatment facilities. The remaining 29 percent went to the prime contractors for professional work, such as construction management and engineering services, and associated travel, overhead, and administrative costs and fees. For the purpose of this report, contractor cleanup work includes all Superfund spending for the study, design, and implementation of cleanups. The remaining Superfund spending is classified as cleanup support, which includes both site-specific and non-site-specific support. Site-specific support consists of Superfund activities linked to a specific hazardous waste site, such as supervising cleanup contractors and conducting site analyses. Non-site-specific support consists of activities related to the overall Superfund program, rather than a specific site, and includes activities such as financial management and policy development. 
The share of total Superfund expenditures for contractor cleanup work declined from about 48 percent in fiscal year 1996 to about 42 percent in fiscal year 1998. Over the same period, spending for site-specific support increased from about 16 percent of total Superfund expenditures to about 18 percent. Finally, the non-site-specific expenditures also increased from about 36 percent to over 39 percent. (See fig. 1.) As the figure shows, the share of Superfund expenditures used for contractor cleanup work decreased between fiscal year 1996 and fiscal year 1997, and again in fiscal year 1998. EPA officials could not explain these changes in detail because they had not analyzed Superfund costs in this manner and were unaware of this decline until we presented the results of our analysis. Similarly, EPA officials were unaware of, and therefore did not have an explanation for, the changes in the other cost categories shown in figure 1 above. The actual expenditures for contractor cleanup work, site-specific support, and non-site-specific support for fiscal years 1996 through 1998 are shown in table 1. Over the 3-year period of our analysis, the mix of spending for contractor cleanup work, site-specific support, and non-site-specific support varied substantially among EPA’s regions and headquarters units. (See fig. 2.) As shown in figure 2, the mix among contractor cleanup work, site-specific support, and non-site-specific support is substantially different between headquarters and the regions. This difference can be expected because headquarters functions are more related to administration and management, while the regions have primary responsibility for overseeing the implementation of cleanups. However, our analysis also identified substantial variation among the regions in the mix of their expenditures. 
Specifically, expenditures for contractor cleanup work ranged from a low of 42 percent in EPA’s Kansas City region to a high of 72 percent in EPA’s Boston and New York regions. Site-specific support spending ranged from a low of 12 percent in EPA’s New York region to a high of 29 percent in EPA’s Kansas City region. Non-site-specific support ranged from a low of 14 percent to a high of 30 percent among EPA’s regions. These differences in the relative shares of expenditures among these categories—more than double in some instances—raise questions about the factors underlying them. We discussed these variations with EPA headquarters officials. However, because EPA does not analyze Superfund expenditures in this manner, they did not have an explanation for the specific factors underlying these regional differences and whether they warrant action. We also examined EPA’s Superfund personnel costs because they account for a significant share of all Superfund support costs. In total, over the last 3 years, about 21 percent of EPA’s Superfund personnel expenses have been for site-specific functions and 79 percent for non-site-specific functions. As shown in figure 3, this breakdown varies substantially between regional personnel spending and headquarters personnel spending. Over the 3-year period of our analysis, Superfund personnel spending totaled about $722 million. Of this, about $547 million was for regional personnel spending, and the remaining $175 million was for headquarters personnel spending. Over this period, the breakdown between site-specific and non-site-specific personnel spending within the individual units (headquarters and each of the regions) remained relatively constant from year to year. However, we found that there was variation among the regions. 
For example, site-specific personnel spending for the 3-year period ranged from a low of 22 percent in one region to a high of about 33 percent in another region—a 50-percent difference between the lowest and highest regions. Because EPA headquarters does not analyze Superfund personnel costs in terms of the amount of site-specific and non-site-specific spending, the meaning of these differences is unclear. In 1996, EPA implemented improvements to its Superfund accounting system to better track Superfund expenditures. EPA expected that these improvements would help it compile more detailed cost information to support the agency’s efforts to recover costs from responsible parties and to improve internal tracking of Superfund financial data for management purposes. These improvements introduced over 100 categories to account for the activities that are paid for with Superfund money. Some of the categories capture activities that are site-specific, such as monitoring and supervising cleanups conducted by private parties, while other categories capture activities that are more administrative, such as maintaining automated data processing systems. We found that Superfund spending is not evenly distributed among all the activity categories. Three of the more than 100 categories accounted for over 60 percent of all Superfund support costs (both site-specific and non-site-specific). These three categories are defined by EPA as follows:

General support and management—includes all activities associated with managing and evaluating costs for site characterization. Also includes the general support activities required to operate and maintain the Superfund program. Activities include, but are not limited to, the following contractual services: establishing, maintaining, and revising automated data processing systems, and conducting special studies to help determine programmatic direction in future years.

General enforcement support—includes all activities associated with managing and evaluating the enforcement program. Activities include, but are not limited to, the following contractual services: establishing, maintaining, and revising automated data processing systems, and conducting special studies to help determine programmatic direction in future years.

Remedial support and management—includes all activities associated with managing and evaluating the remedial program.

Figure 4 shows EPA’s spending for non-site-specific and site-specific support. EPA’s non-site-specific spending was more concentrated in these three administrative categories than its site-specific spending. Specifically, about 78 percent of EPA’s non-site-specific spending was in the three administrative categories, compared to only 25 percent of the site-specific spending. Given the concentration of non-site-specific spending under these three categories, we conducted a detailed analysis of 1 year’s (fiscal year 1997) non-site-specific spending under these three administrative categories for three EPA regions and the three headquarters offices that had the highest amount of Superfund spending—the Office of Administration and Resources Management, the Office of Enforcement and Compliance Assurance, and the Office of Solid Waste and Emergency Response. For the three regions, most of the non-site-specific spending was on personnel items—such as management, administrative, and secretarial support—and general support activities, such as financial management, facility management, public affairs, and contract management. We found that some of this spending represented cost allocations to the Superfund program, while other spending was more directly related to specific program activities.
For example, in all three regions we found that some of the non-site-specific costs had been allocated to the Superfund program for its share of expenses, such as the regional administrator’s management, clerical, and administrative costs, regional motor pool expenses, and computer equipment and service costs. We also identified a few instances in which non-site-specific expenditures were more directly related to implementing cleanups, such as expenditures on annual physical examinations for staff who conduct field work at hazardous waste sites. Among the headquarters units, the Office of Administration and Resources Management had non-site-specific Superfund expenditures for items such as rent, information management, and facilities operations and maintenance. The Office of Enforcement and Compliance Assurance had non-site-specific expenditures for items such as overall program direction; policy development; and budgetary, financial and administrative support. This Office also incurred expenses for criminal investigations and for activities such as field sampling and laboratory and forensic analyses in support of criminal cases. These expenses were recorded as non-site-specific to protect the confidentiality of ongoing criminal investigations. The Office of Solid Waste and Emergency Response had non-site-specific expenditures for personnel functions, such as developing national strategy programs, technical policies, regulations and guidelines, and for providing program leadership for such activities as community involvement, program planning and analysis, contract management, information management, and human and organizational services. This Office also incurred non-site-specific expenditures for contracted functions such as worker training, analytic support for EPA’s contract laboratory program, and information management support. We also analyzed EPA’s spending for site-specific support activities for fiscal years 1996 through 1998. 
We found that about $184 million of the site-specific spending was in the three administrative categories. About $542 million was in the other more than 100 categories, for activities such as developing information for enforcement cases, overseeing cleanups at federal facilities, conducting site analyses and studies, overseeing private party cleanups, conducting laboratory analysis, and supervising cleanup contractors. EPA regularly monitors and performs analyses of Superfund spending. These analyses, however, do not examine the breakdown of Superfund expenditures in terms of contractor cleanup work, site-specific support, and non-site-specific support. The Director of the Superfund office responsible for resources and information management provided a summary of the activities EPA undertakes to manage Superfund spending, including: monitoring whether regions and units obligate funds at the expected rate and in accordance with the agency’s operating plan; conducting midyear reviews that focus on program accomplishments, contracts and grants, and resources management; reviewing contract management issues in all regions on a 3-year cycle; and monitoring inactive contracts to identify and deobligate funds that are no longer needed. EPA’s 1996 memorandum announcing improvements to its Superfund accounting system stated that one of the main benefits of the improvements would be to enable managers to more precisely account for site-specific and non-site-specific costs. The memo also stated that Superfund financial and programmatic managers would be able to track financial trends more accurately due to the increased level of financial detail now available in the accounting system. However, when we discussed our analyses with EPA officials, they told us that they do not perform the types of analyses we conducted. 
During the course of our work, we noted that another federal agency that deals with the cleanup of hazardous wastes—the Department of Energy—has been analyzing its costs using a functional cost reporting system since 1994. This system breaks costs down into functional categories—mission-direct and several categories of support costs, including site-specific support and general support. While not identical to the categories we used in our analyses, Energy’s functional cost categories are similar. In essence, Energy’s system compares the share of costs in the different categories among the agency’s operating units. If a unit’s costs in any given category vary significantly from the other operating units’, those costs are further analyzed to determine whether the differences are appropriate or whether they indicate areas for improvement. Department of Energy financial officials stated that the functional cost reporting system has resulted in support costs receiving increased attention by management and has been a helpful tool that has contributed to support costs declining faster than other costs—from 45 to 43 percent of total costs between fiscal years 1994 and 1997. Detailed analyses of expenditure trends over time and among regions and headquarters units can be a valuable tool in identifying potential cost savings. While EPA’s Superfund accounting system contains the data necessary to perform such analyses, EPA has not done so, even though tracking site-specific and non-site-specific costs more accurately was one of the major benefits anticipated when the 1996 system improvements were made. Given the variation in spending shares for contractor cleanup work, site-specific support, and non-site-specific support among EPA’s regional and headquarters units, we believe that conducting such analyses would be a valuable tool in helping the agency to ensure that its Superfund resources are being used as wisely as possible. 
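The Department of Energy comparison described above (compute each operating unit's share of spending by cost category, then examine units that deviate significantly from the others) can be sketched as follows. The unit names, dollar figures, and the 1.5x flagging threshold are invented for illustration; the report does not specify what cutoff Energy uses to judge a variance significant.

```python
# Rough sketch of a functional cost comparison in the spirit of the
# Department of Energy system the report describes: compute each operating
# unit's share of spending in a cost category, then flag units that deviate
# markedly from the cross-unit average. All names, figures, and the 1.5x
# threshold are hypothetical.

def flag_outlier_units(unit_costs, category, threshold=1.5):
    """Return units whose share of `category` spending exceeds
    `threshold` times the average share across all units."""
    shares = {}
    for unit, costs in unit_costs.items():
        shares[unit] = costs[category] / sum(costs.values())
    average = sum(shares.values()) / len(shares)
    return sorted(unit for unit, share in shares.items()
                  if share > threshold * average)

units = {  # hypothetical spending, in millions of dollars
    "Unit A": {"mission-direct": 70, "site support": 16, "general support": 14},
    "Unit B": {"mission-direct": 55, "site support": 15, "general support": 30},
    "Unit C": {"mission-direct": 40, "site support": 40, "general support": 20},
}

flagged = flag_outlier_units(units, "site support")  # flags Unit C
```

Flagged units would then be examined further, as in Energy's practice, to determine whether the difference is appropriate or indicates an area for improvement.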
In order to better identify opportunities for potential cost savings, we recommend that the Administrator, EPA, require the Assistant Administrator for Solid Waste and Emergency Response to expand the monitoring of Superfund expenditures to regularly analyze the breakdown of expenditures in terms of contractor cleanup work, site-specific spending, and non-site-specific spending. These analyses should compare such spending shares among EPA’s regional and headquarters units, and significant differences should be further analyzed to identify the underlying causes and to determine whether cost-saving corrective actions are warranted. We provided EPA with copies of a draft of this report for its review and comment. In a letter from EPA’s Acting Assistant Administrator for Solid Waste and Emergency Response, EPA disagreed with our characterization that EPA’s activities fall into three groups—contractor cleanup costs, site-specific support, and non-site-specific support—and stated that this division gives the erroneous impression that site-specific and non-site-specific support do not contribute substantially to the achievement of cleanups. We do not believe that our categorization of Superfund costs leads to this impression. In fact, the first paragraph of the report explicitly states that EPA undertakes a number of activities, both site-specific and non-site-specific, that support cleanups, including supervising cleanup contractors, compelling private parties to perform cleanups, and performing management and administrative activities. Furthermore, the body of the report provides numerous examples of the purposes served by both site-specific and non-site-specific spending. We believe that these examples demonstrate that many of the site-specific and non-site-specific support activities contribute to the achievement of cleanups. 
The purpose of our analyses was to disaggregate Superfund expenditures to provide more detailed information on the specific functions served by this spending. This analytic method can be used (and is being used by the Department of Energy) to identify cost category differences among operating units that can lead to potential cost savings. Our report does not attempt to define or determine which expenditures are “cleanup activities,” but rather to describe the purposes for which Superfund money has been expended. According to EPA, cleanup response spending includes “lab analysis, engineering and technical analyses, project manager salaries, State/Tribal activities, community involvement activities, and oversight of responsible parties and many other activities necessary to achieve cleanups.” We agree that these activities support the cleanup of sites, as stated in this report. However, when these support costs are aggregated into the larger category of cleanup response, it is unclear what share of these costs are for work related to specific sites, as opposed to general program expenditures. EPA also stated that our analyses failed to recognize Superfund appropriations used by other federal agencies. In fact, our analyses included Superfund expenditures by other federal agencies, and these expenditures were included under our site-specific and non-site-specific spending categories, as appropriate. The only substantial expenditures excluded from our review were made by the Agency for Toxic Substances and Disease Registry, because these expenditures are made directly by that agency and are not reported in EPA’s Superfund accounting system. EPA further stated that our analyses did not account for the expenditures private parties make to clean up Superfund sites that are the result of EPA’s enforcement expenditures. We did not analyze private parties’ expenditures to clean up hazardous waste sites because our focus was on federal Superfund expenditures. 
However, as part of our work for this assignment, we found that more than half of EPA’s fiscal year 1997 enforcement expenditures was for management and administrative activities. Notwithstanding EPA’s concerns as discussed above, the agency agreed to consider analyzing Superfund spending in terms of site-specific and non-site-specific obligations and expenditures, as we recommended. The full text of EPA’s comments is included as appendix II. We conducted our review from September 1998 through April 1999 in accordance with generally accepted government auditing standards. See appendix I for our scope and methodology. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to other congressional committees with jurisdiction over the Superfund program, and to the Honorable Carol M. Browner, Administrator, Environmental Protection Agency. We will also make copies available to others upon request. If you have any further questions about this report, please call me at (202) 512-6111. Major contributors to this report are listed in appendix III. To determine the share of annual Superfund spending for contractor cleanup work, site-specific support, and non-site-specific support for fiscal years 1996 through 1998, we obtained information from the Environmental Protection Agency’s (EPA’s) Integrated Financial Management System (IFMS). Using the IFMS information, we classified the cleanup support activities into spending for site-specific support and non-site-specific support for these fiscal years. We confirmed this classification with Office of Comptroller officials. In order to give a complete representation of cleanup support activities, we made one adjustment to the analyses included in our prior reports. 
Specifically, we included the costs for EPA personnel who supervise the cleanup contractors into the category for site-specific support. In our two prior reports, we had included these personnel in the contractor cleanup work category as EPA’s accounting system does. This change has the effect of reducing the percentage of contractor cleanup work by about 1 percent from the level we had previously reported. To determine what activities were carried out with EPA’s cleanup support spending, particularly its non-site-specific spending, we used the IFMS information. We categorized the spending by EPA’s budget action codes, which provided general activity descriptions for Superfund spending under the more than 100 action codes. To obtain more specific information for EPA’s non-site-specific spending, we selected three regional offices—Philadelphia, Chicago, and Kansas City—for sampling. Among EPA’s regions, the first two had the highest non-site-specific spending and the third had the lowest, based on fiscal year 1997 data, which was the most recent information for which we had a breakdown of total support spending at the time we made our selection. We also selected the three EPA headquarters units—the Office of Solid Waste and Emergency Response, the Office of Administration and Resources Management, and the Office of Enforcement and Compliance Assurance—with the highest levels of Superfund spending. We interviewed cognizant officials from the three regional offices and three headquarters units about the particular activities conducted under the various budget action codes for the non-site-specific spending, and obtained greater detail on the uses of this spending. In a 1995 report on the IFMS, we found instances of inaccurate and incomplete data in the system. While we did not consider these instances to be representative of the overall integrity of the IFMS data, we recommended that EPA conduct statistical testing of the data, which EPA has done. 
During the course of our current review, officials of EPA’s Office of Inspector General told us that in their opinion the IFMS has not led to any material misstatements in EPA’s 1996 and 1997 annual financial statements and that they believed that the IFMS information was reliable for the purposes of our review. Finally, in discussing spending activities with officials from EPA’s regional offices and headquarters units, we did not identify any material variations between the IFMS information and the underlying detailed records. To ascertain how EPA monitors and analyzes its regions’ and headquarters units’ spending of Superfund resources, particularly for contractor cleanup work, site-specific support, and non-site-specific support, we met with EPA headquarters officials. These officials included representatives from EPA’s Office of Solid Waste and Emergency Response—which is responsible for the Superfund program—and the Office of the Chief Financial Officer. We also obtained copies of pertinent documents describing EPA’s monitoring and analysis procedures and related reports. In addition, we met with Department of Energy officials and obtained documentation on their Functional Cost Reporting System.

Richard P. Johnson, Senior Attorney
Pursuant to a congressional request, GAO provided information on the Environmental Protection Agency's (EPA) Superfund Program expenditures, focusing on: (1) the relative shares of Superfund expenditures for contractor cleanup work, site-specific support, and non-site-specific support; (2) the activities carried out with EPA's cleanup support spending, particularly its non-site-specific spending; and (3) EPA's efforts to monitor and analyze how its regions and headquarters units spend Superfund resources, particularly the distribution of expenditures among contractor cleanup work, site-specific support, and non-site-specific support. GAO noted that: (1) over the last 3 years, the share of total Superfund expenditures for contractor cleanup work averaged about 45 percent, declining to about 42 percent in fiscal year 1998; (2) over this period, expenditures for non-site-specific support were about 38 percent, whereas those for site-specific support were about 17 percent; (3) however, GAO found substantial variation among EPA's regions in the shares of their expenditures devoted to each of these cost categories; (4) for example, spending for non-site-specific support ranged from a low of 14 percent in EPA's Boston region to 30 percent in EPA's San Francisco region; (5) EPA spends its support funds predominantly on administrative activities; (6) although EPA classifies its Superfund expenditures into over 100 separate activity categories, GAO found that over 60 percent of all Superfund support expenditures (both site-specific and non-site-specific) were accounted for by three activities--general support and management, general enforcement support, and remedial support and management; (7) moreover, almost 80 percent of EPA's non-site-specific spending was concentrated on these three administrative activities; (8) for the three regions that GAO reviewed in detail, these non-site-specific expenditures were primarily personnel expenses for activities such as management, administrative and secretarial
support, financial management, public affairs, and contract management; (9) for the three headquarters units that GAO reviewed in detail, this spending was on items such as rent, information management, facilities operations and maintenance, program and policy development, and budgetary, financial, and administrative support; (10) EPA monitors the Superfund spending of its regions and headquarters units in several ways, including tracking whether funds are obligated at the expected rate and in compliance with the approved operating plan, and monitoring program accomplishments; (11) however, EPA does not monitor or analyze the expenditures of its regions and units in terms of the relative shares of contractor cleanup costs, site-specific support costs, and non-site-specific support costs; and (12) conducting such analyses would provide EPA with an additional tool to identify potential cost savings in Superfund spending.
The Navy can maintain a 12-carrier force for less cost than that projected in the Bottom-Up Review (BUR) and the Navy’s Recapitalization Plan by using one of several options that consider cost and employment levels. The least expensive investment option that also maintains employment levels at or above minimum levels authorizes building the CVN-76 in fiscal year 1995 and then transitions to a conventional carrier construction program. This option costs approximately 25 percent less than the BUR and the Navy’s Recapitalization Plan options. Building CVN-76 in fiscal year 1995, as proposed by the BUR, the Navy’s Recapitalization Plan, and other options in our report (see table 1.1), stops the downward trend in Newport News Shipbuilding employment at about the minimum sustaining level of 10,000 employees. Options to delay building the carrier result in a continuing decline to about 7,500 employees. However, in the long term the employment levels in the BUR and the Navy’s Recapitalization Plan also fall below 10,000 employees. In addition, options that include building CVN-76 in fiscal year 1995 require building carriers sooner than they are needed for force structure purposes and therefore incur expenses sooner than necessary. Moreover, the option to build nuclear carriers at the historical rate of one every 3 years maintains stable employment levels but costs about 40 percent more than options in the BUR and the Navy’s Recapitalization Plan. Options for using carriers for their full service lives (options 1A and 1B) are less expensive than those in the BUR and the Navy’s Recapitalization Plan, especially if the force transitions to a conventional carrier construction program. However, in the near term, the employment levels fall below the Navy’s estimated critical minimum sustaining level of 10,000 employees. 
Since affordability of the future force is an important concern, a transition to constructing conventionally powered carriers would save the largest amount of investment resources (see table 1.1). A conventional carrier force structure would require less budget authority funding and fewer outlays than any force structure that continues to require building nuclear aircraft carriers. Costs are lower because all major cost elements—procurement, midlife modernization, and inactivation costs—are lower for a conventional carrier than for a nuclear carrier. Throughout the 1960s and most of the 1970s, the Navy pursued a goal of creating a fleet of nuclear carrier task forces. The centerpiece of these task forces, the nuclear-powered aircraft carrier, would be escorted by nuclear-powered surface combatants and nuclear-powered submarines. In deciding to build nuclear-powered surface combatants, the Navy believed that the greatest benefit would be achieved when all the combatant ships in the task force were nuclear powered. Nonetheless, the Navy procured the last nuclear-powered surface combatant in 1975 because this vessel was so expensive. More recently, relatively new and highly capable nuclear-powered surface combatants have been decommissioned because of the affordability problems facing the Navy. Affordability is an important, but not the only, criterion when comparing nuclear and conventional carriers. Important factors also include operational effectiveness, potential utilization, and other intangibles. Flexibility of operations, such as the ability to steam at high speeds for unlimited distances without refueling; increased capacity for aviation fuel; increased capacity for other consumables, such as munitions; and the higher speeds of the advanced nuclear carrier over conventional carriers are some of the factors that need to be considered when evaluating nuclear- and conventionally powered carriers. 
Other considerations include the availability and location of homeports and nuclear-capable shipyards for maintenance and repairs and other supporting infrastructure, such as for training; the effect of out-of-homeport maintenance on the amount of time personnel are away from their homeport; and the disposal of nuclear materials and radioactively contaminated materials. These issues and others will be addressed in our upcoming review on the cost-effectiveness of conventional versus nuclear carriers and submarines as mandated by the congressional conferees on the Defense Appropriations Act for 1994. Department of Defense (DOD) officials partially concurred with the results of our report. DOD agreed that affordability is an important, but not the only, criterion when comparing nuclear and conventional carriers. DOD stated that other factors, including operational effectiveness and potential utilization, need to be considered when comparing nuclear and conventional carriers. We agree, and these issues will be examined as part of our upcoming review of the cost-effectiveness of conventional versus nuclear carriers and submarines. DOD noted that we did not examine the impact of alternative investment strategies on the Newport News Shipbuilding nuclear carrier industrial base, nuclear construction skills and vendors, or the need to preserve the base. We noted those limitations to the report’s scope in our draft. Our report does reflect the employment levels resulting from the investment options, and the Navy’s comments on the likely effects of those employment curves are in our report. DOD also noted that our report compares only the investment-related cost of a nuclear-powered carrier with that of a conventionally powered carrier and not the operating and support component of total life-cycle costs, including the fuel cost. 
DOD stated that the potential requirement to build additional logistics support ships must be considered in the decision to build and operate a conventionally powered carrier force. As we noted in the draft report, our analysis focused on the investment-related costs of alternative procurement profile strategies. Although outside the scope of this review, we have estimated the operating and support costs of a nuclear carrier and a conventional carrier of the general type used in our investment analysis (see table 1.2). The annualized life-cycle cost of a modern fleet oiler is about $19.6 million. A recent Center for Naval Analyses study suggests that the conventional carrier’s incremental support requirements would be less than one fleet oiler per carrier. We have not verified this data. Our upcoming review will examine in greater detail the life-cycle costs of nuclear and conventional carriers, considering the incremental fuel-driven demand of conventional carriers for additional logistics support ships. The objective of the BUR strategy is to maintain a 12-carrier force, maintain the industrial base at NNS, avoid cost increases associated with a delay in construction, and preserve carrier force size flexibility. Under the BUR, the Navy would purchase CVN-76 in fiscal year 1995 consistent with a sustaining rate strategy but would shift to a replacement rate strategy beginning with CVN-77. The Navy’s Recapitalization Plan transfers resources from the Navy’s infrastructure and savings from a smaller fleet to fund the Navy’s protected major procurement accounts, including the carrier program, in order to maintain the BUR force structure and/or critical industrial capabilities. Under the Navy’s recapitalization strategy, the Navy would buy CVN-76 in fiscal year 1995 but would defer CVN-77 until fiscal year 2002 and then shift to a sustaining rate strategy of one carrier every 4 years. 
The BUR and the Navy’s Recapitalization Plan were analyzed to determine the effects of their strategies on the carrier force structure, financial investment requirements, and the Newport News Shipbuilding total employment level. In addition, we analyzed eight alternatives for structuring a 12-carrier force to achieve one of the following objectives:

1. Maximize budgetary savings through a carrier replacement rate strategy. This approach maximizes the carriers’ useful service lives and builds new carriers when actually needed to sustain force levels. (See the analysis and discussion of alternatives 1A and 1B.)

2. Maximize the stability of Newport News Shipbuilding (NNS) employment through a sustained rate construction and refueling/complex overhaul program. This approach requires forgoing useful service life by accelerating inactivations to maintain a sustained rate production program. (See the analysis and discussion of alternatives 2A and 2B.)

3. Optimize budgetary savings and employment level stability. This approach optimizes the service lives of nuclear carriers and provides a stable employment base. (See the analysis and discussion of alternative 3.)

4. Delay building the new carrier to defer near-term outlays and reduce overall carrier program costs. The new starts for a nuclear carrier force were planned for fiscal years 1998 and 2000 and fiscal year 2002 for a conventional carrier force. (See the analysis and discussion of alternatives 4A, 4B, and 4C.)

The following discusses our analyses of DOD’s and the Navy’s baseline force structure plans and the options we developed based on the four planning objectives and force structure investment strategies. We analyzed each option’s impact on force structure and the trade-offs between budgetary requirements and overall employment levels at NNS.
Under the BUR’s baseline force structure option to support a 12-carrier force (i.e., 11 active carriers and 1 operational reserve/training carrier), CVN-76 is funded in fiscal year 1995, necessitating the early retirement of the U.S.S. Kitty Hawk (CV-63). After CVN-76, the Navy plans to procure new carriers when needed to maintain force levels. This approach results in fluctuating intervals of 2 to 7 years for the construction of new carriers but maximizes the notional 50-year service life of current and planned nuclear-powered carriers. To sustain their full 50-year service life, nuclear carriers will be refueled after approximately 23 years of service. (See fig. 2.1.) Figure 2.2 shows that this option halts the rapid decline in employment at NNS at just above the 10,000-employee level, the minimum level needed to sustain the shipyard’s viability, according to the Navy. If scheduled CVN construction is delayed, the Navy stated it would, at a minimum, have to expand the number of regular overhauls at NNS and take action to preserve the nuclear component and shipbuilding industrial base. The BUR option provides a near-term solution to the employment level decline, although it may be difficult for the shipyard to economically administer the drastic shifts in employment levels at the yard between fiscal years 1998 and 2033. Substantial declines in employment at NNS are projected to bottom out in fiscal years 1998, 2004, 2014, 2024, and 2033. The drastic decline beginning in fiscal year 2010 reduces the workforce by about 13,000, dropping total employment below the minimum level. Although DOD believes that this option is cost-effective, its outlays total over $4.2 billion in the short term (fiscal years 1995-99) and more than $56 billion over the long term (fiscal years 1995-2035). Only one option, which reduces the service life of nuclear carriers to 37 years, has larger outlays than the BUR baseline force model (see discussion of alternative 2A).
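The scheduling assumptions underlying these force structure options (a notional 50-year service life, refueling after approximately 23 years, and a replacement commissioned when a carrier retires) can be expressed as a small model. This sketch is illustrative only; the 7-year construction lead time is an assumption inferred from the CVN-76 dates cited elsewhere in this report (construction start in fiscal year 1999, fleet entry in fiscal year 2006), not a Navy planning factor:

```python
def carrier_schedule(commission_fy: int,
                     service_life: int = 50,  # notional nuclear-carrier life, per this report
                     refuel_after: int = 23,  # refueling/complex overhaul point, per this report
                     build_lead: int = 7):    # assumed construction lead time (inferred)
    """Key fiscal-year milestones for one carrier and its replacement."""
    retire_fy = commission_fy + service_life
    return {
        "refuel_fy": commission_fy + refuel_after,
        "retire_fy": retire_fy,
        # To hold force levels constant, the replacement must start construction
        # early enough to enter the fleet in the retirement year.
        "replacement_start_fy": retire_fy - build_lead,
    }

# Example: a carrier commissioned in fiscal year 1975 under these assumptions.
print(carrier_schedule(1975))
# {'refuel_fy': 1998, 'retire_fy': 2025, 'replacement_start_fy': 2018}
```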
The Navy’s Recapitalization Plan was developed to fulfill the requirements of the BUR. This plan calls for funding CVN-76 in fiscal year 1995 and building new nuclear carriers in 4-year intervals beginning in fiscal year 2002, as shown in figure 2.3. The plan requires that some assets be retired early to buy newer equipment. The U.S.S. Kitty Hawk (CV-63) will be retired 3 years before the end of its projected service life to maintain the 12-carrier force level when CVN-76 enters the fleet. To sustain the 4-year build interval, five other carriers will be retired early: the U.S.S. Enterprise (CVN-65) will be inactivated 2 years early, the U.S.S. Dwight D. Eisenhower (CVN-69) and the U.S.S. Carl Vinson (CVN-70) will be retired 3 years before the end of their projected service lives, and the U.S.S. Nimitz (CVN-68) and the U.S.S. Theodore Roosevelt (CVN-71) will be decommissioned 4 years early. The Navy will prematurely incur large inactivation costs, currently estimated at almost $1 billion each, for the early inactivations of these Nimitz-class carriers. The plan maintains approximately the same employment level at NNS as the BUR baseline force structure option through fiscal year 2001 (see fig. 2.4). Between fiscal years 2010 and 2034, the plan maintains an average total employment level above the projected level for the BUR option. Except for declines in total employment in fiscal years 2003-5, 2017-18, and 2029-31, this option maintains shipyard employment between 15,000 and 23,000 after fiscal year 2001 due to the consistent 4-year construction interval. Although the outlays are slightly lower than those in the BUR option in the near term (1995-99) due to a 1-year delay in CVN-77, the outlays for the mid-term (fiscal years 1995-2015) and long term (fiscal years 1995-2035) are higher than those in the BUR option due to the consistent 4-year new construction interval and the additional premature inactivations of Nimitz-class carriers. 
Total outlays for fiscal years 1995-2035 are almost $59 billion, about $2.5 billion higher than the cost in the BUR option. Using this force structure option, the Navy builds a new carrier only to replace a carrier that has to be inactivated at the end of its service life (see fig. 2.5). The U.S.S. Independence (CV-62) is the last carrier to be decommissioned before the end of its service life to maintain a 12-carrier force level when the U.S.S. United States (CVN-75) enters the force. All Nimitz-class carriers will use their entire projected 50-year service lives, which will require that each receive a nuclear refueling complex overhaul at 23 years. This option’s construction schedule leads to a variable build interval; construction starts may be anywhere from 3 to 10 years apart. Construction for CVN-76 begins in fiscal year 1999, and the ship will replace the U.S.S. Kitty Hawk (CV-63) in fiscal year 2006. Figures 2.5 and 2.6 show that although the Navy receives the full value of its carrier force investment, workforce management is complicated by several short-term surges in total employment and then large drop-offs because of the varying build intervals. Those changes in employment levels are similar to those in the BUR baseline force option, although the drop-off between fiscal years 1996 and 2000 under this option is much more drastic, with the employment level falling below 10,000. The workload gap could be filled by having the government direct other work to the shipyard or reschedule delivery of work under contract. Employment at the shipyard improves under this option in the mid- and long terms. Between fiscal years 2001 and 2015, the total employment level at NNS is generally at a higher level than in the BUR option. After fiscal year 2020, this option’s total employee level has fewer major shifts over the remaining 15 years of the period we analyzed than the BUR option.
Since new ship construction and inactivations occur only when needed under this option, procurement and major investment outlays are not incurred prematurely. Outlays are less than half of those incurred under the BUR option for fiscal years 1995-99 but are only $161 million less than those between fiscal years 1995 and 2035 because, in the long term, the BUR maintains a similar replacement rate new carrier construction strategy. Outlays for this option in the long term are higher than those in the options delaying CVN-76’s construction start to fiscal years 1998 and 2000; however, in the near term, this option requires over $530 million less in outlays than the option that builds CVN-76 in fiscal year 1998 due to the additional 1-year delay in CVN-76’s construction start. The government will receive the full value of its investment in aircraft carriers under this option because both conventional and nuclear carriers will remain in the active fleet until the end of their expected service lives (see fig. 2.7). Nimitz-class nuclear carriers receive nuclear refuelings and complex overhauls after 23 years and are inactivated at the end of their 50-year service lives. Conventional carriers remain active for 45 years, entering the service life extension program after 30 years of service. After fiscal year 1994, only the U.S.S. Independence (CV-62) is inactivated before the end of its projected service life so that the U.S.S. United States (CVN-75) can be commissioned into the fleet in fiscal year 1998. This early inactivation will allow the Navy to maintain the 12-carrier force level, and carriers will only be built to replace others. The next carrier, CVA-76, is programmed to begin construction in fiscal year 2000 at NNS, and new construction start intervals would fluctuate between 3 and 10 years, similar to the BUR baseline force structure option.
Figure 2.8 shows that this fluctuating new construction start rate results in a total employee level profile similar to that in the BUR option. During the near-term period of fiscal years 1995-99, the employment level under this option ranges from 7,500 to 10,000, compared with 11,000 to 15,000 under the BUR option. The decrease in the employment level could be mitigated by other shipyard work being directed by the government to NNS or by bidding for projects in the commercial shipbuilding market, such as liquefied natural gas tankers or cruise ships. Since this option requires new ship construction and decommissioning only when needed, major procurement and investment costs are not incurred prematurely. Therefore, this option has the lowest value of outlays in the long term. Outlays for this option are over $2 billion less between fiscal years 1995 and 2015 and $6.5 billion less between fiscal years 1995 and 2035 than the option that transitions to conventional carrier construction with CVA-77. Also, this option’s outlays are approximately one-third less than those for the BUR baseline force structure option for fiscal years 1995-2015 and approximately 37 percent less than those between fiscal years 1995 and 2035. This option emphasizes maximizing the stability of NNS’ employment level through a sustained rate of new carrier construction, regardless of cost (see fig. 2.9). New nuclear carrier construction starts begin in fiscal year 1995 at a historical rate of every 3 years. All nuclear carriers receive their nuclear refuelings and complex overhauls but are retired early, after approximately 37 years. Conventional carriers in the fleet, the U.S.S. Independence (CV-62), the U.S.S. Kitty Hawk (CV-63), and the U.S.S. Constellation (CV-64), are retired before the end of their expected service lives as well. The benefit of this option is that NNS could sustain a workforce averaging over 20,000 employees with very few shifts in the overall employment level (see fig.
2.10). Employment levels remain above those under the BUR option throughout the 1995 to 2035 time frame. Constructing new nuclear carriers every 3 years is extremely expensive, and the outlays are significantly greater than those in the BUR baseline force structure option in the near term (fiscal years 1995-99), mid-term (fiscal years 1995-2015), and long term (fiscal years 1995-2035). This option requires more outlays because maintaining a 12-carrier force level at this construction rate requires the Navy to retire all of its carriers early, most with 25 percent of their service life remaining. Therefore, the Navy will need to fund costly nuclear carrier inactivations prematurely. This option procures 14 carriers between fiscal years 1995 and 2035, compared with 10 carriers under the BUR plan. This investment strategy represents the long-term investment implications of building carriers at historical rates to protect the carrier shipbuilding industrial base and employee levels. To support a sustained-rate construction program, the Navy would need to inactivate eight Nimitz-class nuclear carriers prematurely with 20 percent of their useful service life remaining. The new conventional carrier construction start is programmed for fiscal year 2000, and the follow-on conventional carriers have construction starts every 3 years. (See fig. 2.11.) No nuclear carriers are built after the completion of the U.S.S. United States (CVN-75). The nuclear capabilities at NNS would be sustained through a series of nuclear refuelings and complex overhauls of the Nimitz-class carriers through fiscal year 2024, some or all of the decommissioning work of the nuclear carrier fleet, and other nuclear repair and maintenance work. None of the remaining conventionally powered carriers would be decommissioned early except for the U.S.S. Independence (CV-62) to maintain a 12-carrier force when the U.S.S. United States (CVN-75) is brought into service in fiscal year 1998. 
NNS will have a severe drop-off in its workload between fiscal years 1996 and 2000 (see fig. 2.12) unless other work is directed to the shipyard. Consolidating all Atlantic Coast-based nuclear shipbuilding and overhaul work at NNS would help maintain nuclear capabilities and help mitigate the severe drop-off in the workload. Between fiscal years 2000 and 2014, the employment level at the shipyard averages about 17,500 employees, and between fiscal years 2015 and 2025 the employment level averages about 22,000 employees. In fiscal year 2026, the shipyard’s workforce level drops below 15,000 employees and does not return to the 15,000-employee level until fiscal year 2027. Due to the frequent new construction starts and the earlier decommissioning of the Nimitz-class nuclear carriers, this option costs approximately $8 billion more in the long term (fiscal years 1995-2035) than the conventional replacement rate strategy. During the near-term period (fiscal years 1995-99), this option still costs less than the conventional carrier option that builds CVA-77 in fiscal year 2002 because this option delays the new construction start and cancels the construction of CVN-76. Maximizing NNS employment levels through a high production rate is a very costly approach to maintaining a carrier force level in the long term, and the value of the total outlays is higher during this period than in any other conventional option. However, this option is still $11.5 billion less than the BUR option over the long term. This option is consistent with DOD’s plan to request funding for CVN-76 in fiscal year 1995. The next ship, however, would be a new design conventional carrier, as shown in figure 2.13. The BUR report recommended deferring advance procurement funding for the carrier after CVN-76 beyond fiscal year 1999, pending completion of an evaluation of alternative aircraft carrier concepts for the next century, including the conventional carrier force option.
Under this option, the construction start for CVA-77 is in fiscal year 2002. New starts for follow-on conventional ships are at 4-year intervals, which would support a sustained rate production program at NNS. The employment level under this option is projected to have fewer extreme increases and drop-offs than in the BUR plan. Nuclear carriers currently in the fleet will have 45- to 48-year service lives, requiring all of them to undergo nuclear refuelings and complex overhauls. Both the U.S.S. Independence (CV-62) and the U.S.S. Kitty Hawk (CV-63) will be inactivated 6 and 3 years, respectively, before the end of their estimated service lives. The plan requires that the U.S.S. John F. Kennedy (CV-67) remain in the active fleet 5 years longer than currently planned. This longer service life may be feasible for the ship in its new role as the reserve/training carrier because it will have a reduced tempo of operations, resulting in a reduced amount of “wear and tear.” This option maintains the workforce at NNS above the 10,000-employee level throughout fiscal years 1995-2035. The shipyard maintains a very stable employment level after fiscal year 2006: the workforce fluctuates between approximately 15,000 and 20,000 employees in fiscal years 2006-27, with only one significant drop in employment in fiscal year 2015. After fiscal year 2027, the employment level ranges between 11,900 and 16,500. (See fig. 2.14.) Since this option requires building CVN-76 in fiscal year 1995, the near-term outlays are similar to those in the BUR baseline option. However, in the mid-term (fiscal years 1995-2015) and long term (fiscal years 1995-2035), the outlays are approximately 25 percent less than those in the BUR option. These savings could help reduce the Navy’s Recapitalization Plan projected annual funding shortfall of $3.5 billion in fiscal years 1999 and beyond.
If the construction start for the next nuclear carrier—CVN-76—is delayed 3 years to fiscal year 1998, the Navy could maintain a 12-carrier force and maximize the service lives of its nuclear carriers. (See fig. 2.15.) All nuclear carriers will be refueled and overhauled after approximately 23 years, extending each carrier to its full 50-year service life. This option creates fewer drastic shifts in the overall employment level than the BUR option because it has a new carrier construction start rate of every 4 to 5 years compared with the BUR rate of 3 to 7 years. Two conventional carriers, the U.S.S. Kitty Hawk (CV-63) and the U.S.S. Constellation (CV-64), are retained in the active fleet for several years longer than projected in the BUR option and are inactivated closer to or at the end of their projected useful lives. This alternative also retains the U.S.S. John F. Kennedy (CV-67) in the fleet 7 years past the BUR option’s plan. This ship, in its new role as the reserve/training carrier, will have a reduced tempo of operations and thus a reduced amount of wear and tear. Other carriers are replaced when required to meet force structure needs. Under this option, NNS’ employment level drops to around 7,500 employees and remains below the critical 10,000-employee level for about 3 years. As shown in figure 2.16, overall employment is more stable during fiscal years 2005 through 2034 than under the BUR option. Increased stability in shipyard employment requires fewer adjustments to the workforce over time. Compared with the BUR option, this option’s employment troughs are significantly smaller in fiscal years 2004, 2018, and 2025-26. The Navy could mitigate the employment decline in fiscal year 1998 by redirecting other shipbuilding and maintenance work to the yard or, as the BUR suggested, by rescheduling the delivery of carriers under contract, overhauls, and other work.
DOD’s financial investment requirement for this option is less than in the BUR option for the near term (fiscal years 1995-99), mid-term (fiscal years 1995-2015), and long term (fiscal years 1995-2035). Outlays for this option for fiscal years 1995-99 are approximately $1.6 billion less than those for the BUR option. Under this option, the Navy generally retains each nuclear carrier to the end of its useful 50-year service life and therefore will need to refuel each nuclear carrier after 23 years (see fig. 2.17). Two conventional carriers, the U.S.S. Kitty Hawk (CV-63) and U.S.S. Constellation (CV-64), are retained in the active fleet to the end of their expected service lives. Also, the U.S.S. John F. Kennedy (CV-67) will remain in the active fleet for a total of 50 years, 7 years longer than projected in the BUR option. This should be feasible, since the carrier will have a reduced tempo of operations as the reserve/training carrier. Only two nuclear carriers are retired before the end of their useful service lives—the U.S.S. Enterprise (CVN-65) 1 year early and the U.S.S. Nimitz (CVN-68) 2 years early. In addition, this option builds new carriers to replace carriers that are at the end of their service lives, which will lead to a stable new construction start rate of every 4 to 5 years. DOD considered delaying the construction of CVN-76 until fiscal year 2000. However, the BUR concluded that, as a result of the delay, existing contracts would be completed by the mid-1990s, and a lack of subsequent orders would threaten NNS’ viability by 1997. NNS will need to fill a large gap in workload between fiscal years 1996 and 2001. The shipyard does have the capability to construct nuclear submarines and other surface ships and therefore could complete other types of shipyard work to compensate for the drop-off in workload. The shipyard will begin the nuclear refueling complex overhaul of the U.S.S.
Nimitz (CVN-68) in fiscal year 1998 while it completes construction work on the U.S.S. United States (CVN-75), scheduled for commissioning in fiscal year 1998. This work will enable NNS to sustain a nuclear-capable workforce. Figure 2.18 shows that the overall employment level at NNS is at or below the critical 10,000-employee level in fiscal years 1996-2001. This option does not have as large a drop-off in the projected total workforce beginning in fiscal year 2014 as either the BUR option, in which the employment level drops below 10,000, or the option to start construction of CVN-76 in fiscal year 1998. The financial outlays required for this option are less than any of the nuclear carrier force structure options for the near term (fiscal years 1995-99) and long term (fiscal years 1995-2035). In the near term, the outlays are less than half of those required for the BUR option because of the delay in the construction start of CVN-76. Using this option, the Navy would not build a nuclear carrier before the transition to a conventional carrier construction program in fiscal year 2002, with the start of CVA-76. This option provides a 7-year design period, sustains a steady new carrier construction start interval of 3-1/2 years, and fully utilizes the service lives of almost all of the conventional carriers in the fleet. (See fig. 2.19.) The delay in the construction start enables several conventional carriers in the active force to remain in service longer than in the BUR plan. This option also provides for longer service lives for most carriers currently in the active fleet than under the Navy’s Recapitalization Plan. The U.S.S. Kitty Hawk (CV-63) and U.S.S. Constellation (CV-64) remain active slightly beyond their estimated notional lives, enabling these ships to complete a last deployment within their last maintenance cycle. The U.S.S. John F. Kennedy (CV-67) is programmed for a 50-year service life because of its reduced tempo of operations as the reserve/training carrier.
Nimitz-class nuclear carriers remain in the fleet for 47 to 50 years. This option requires all Nimitz-class nuclear carriers to undergo nuclear refuelings and complex overhauls. As shown in figure 2.20, deferring construction of the next carrier until fiscal year 2002 results in continuing near-term declines in employment levels at NNS. The only carrier program work expected in the shipyard during that time period is the completion of construction of the U.S.S. United States (CVN-75) and the nuclear refueling complex overhaul of the U.S.S. Nimitz (CVN-68), which begins in fiscal year 1998. NNS would need other work to bring levels above the critical 10,000-employee level between fiscal years 1996 and 2001. After this period, employment levels average from 15,000 to 20,000 persons through fiscal year 2024. This option requires fewer outlays than any other option we examined except for option 1B’s (conventional carrier replacement rate) long-term estimate. The reduction in outlays is a result of delaying the construction start of the next aircraft carrier until fiscal year 2002, building conventional carriers that have a much lower procurement cost, and retaining carriers longer in the active fleet. The near-term outlays (fiscal years 1995-99) are approximately 35 percent of the BUR option’s outlays for the same period. In the long term (fiscal years 1995-2035), this option will save almost $19 billion in outlays over the amount projected to be spent for the BUR option. This option costs approximately $4.5 billion less in the long term than the option that begins conventional carrier construction with CVA-77.
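The long-term (fiscal years 1995-2035) outlay figures quoted throughout this chapter can be cross-checked against one another. The sketch below uses only figures stated in this report, in billions of dollars; small rounding differences are expected:

```python
# Long-term (fiscal years 1995-2035) outlays, in $ billions, as quoted in this chapter.
bur = 56.0                             # BUR baseline: "more than $56 billion"
option_4c = bur - 19.0                 # delayed conventional start: saves "almost $19 billion"
cva77_option = option_4c + 4.5         # option 4C is "$4.5 billion less" than the CVA-77 option
conv_replacement = cva77_option - 6.5  # option 1B is "$6.5 billion less" than the CVA-77 option

# Option 1B is also described as "approximately 37 percent less" than the BUR option.
implied_1b = bur * (1 - 0.37)
print(conv_replacement, round(implied_1b, 1))  # 35.0 35.3 -- the two figures roughly agree
```

The two independently derived values for option 1B differ by well under $1 billion, so the chapter's quoted figures are internally consistent.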
GAO reviewed the Navy's aircraft carrier program, focusing on: (1) the budget implications of the options for meeting the Department of Defense's Bottom-Up Review (BUR) force structure requirement for 12 carriers; and (2) each option's effect on the shipbuilding contractor's employment levels. GAO found that: (1) there are several available options for maintaining the 12-carrier force at less cost than projected in the BUR and the Navy Recapitalization Plan; (2) the least expensive option maintains employment levels at or above minimum levels, authorizes building the proposed nuclear carrier in fiscal year 1995, switches to construction of conventionally powered carriers in later years, and would cost 25 percent less than the BUR and Navy Recapitalization Plan options; (3) options to build CVN-76 in fiscal year 1995 would stop the downward trend in employment at about the minimum sustaining employment level of 10,000 employees, require building carriers sooner than they are needed for force structure purposes, and incur expenses sooner than necessary; (4) options to delay building the carrier would result in a continuing decline in employment to about 7,500 employees; (5) in the long term the employment levels in the BUR and the Navy Recapitalization Plan also fall below 10,000 employees; (6) the option to build nuclear carriers at the historical rate of one every 3 years maintains stable employment levels but costs about 40 percent more than the options in the BUR and the Navy Recapitalization Plan; (7) options for using carriers for their full service lives are less expensive than those in the BUR and Navy Recapitalization Plan, but in the near term the employment levels fall below the minimum sustaining level; (8) a transition to constructing conventionally powered carriers would save the largest amount of investment resources; and (9) criteria for comparing nuclear and conventional carriers include affordability, operational effectiveness, potential 
utilization, availability of homeports and shipyards for maintenance, and supporting infrastructure such as training and disposal of nuclear materials.
SSA administers three major federal programs. OASI and DI, together commonly known as Social Security, provide benefits to retired and disabled workers and their dependents and survivors. In fiscal year 2001, SSA provided OASI retirement benefits totaling more than $369 billion to over 38 million individuals and DI benefits of more than $59 billion to 6.8 million individuals. These benefits are paid from trust funds that are financed through payroll taxes paid by workers and their employers and by the self-employed. The third program, SSI, provides income for aged, blind, or disabled individuals with limited income and resources. In fiscal year 2001, 6.7 million individuals received almost $28 billion in SSI benefits. SSI payments are financed from general tax revenues. To administer these programs, SSA must perform certain essential tasks. It must issue SSNs to individuals, maintain earnings records for individual workers by collecting wage reports from employers, use these records and other information to determine the amount of benefits an applicant may receive, and process benefit claims for all three programs. To meet its customer service responsibilities, SSA operates a vast network of offices distributed throughout the country. These offices include approximately 1,300 field offices, which, among other things, take applications for benefits; 138 Offices of Hearings and Appeals; and 36 teleservice centers responsible for SSA’s national 800 number operations. The agency’s policy is to provide customers with a choice in how they conduct business with SSA. Options include visiting or calling a field office, calling SSA’s toll-free number, or contacting SSA through the mail or the Internet. To conduct its work, SSA employs almost 62,000 staff. In addition, to make initial and ongoing disability determinations, SSA contracts with 54 state disability determination service (DDS) agencies under authority of the Social Security Act.
Although federally funded and guided by SSA in their decision making, these agencies hire their own staff and retain a degree of independence in how they manage their offices and conduct disability determinations. Overall, SSA relies extensively on information technology to support its large volumes of programmatic and administrative work. The process for obtaining SSA disability benefits under either DI or SSI is complex, and multiple organizations are involved in determining whether a claimant is eligible for benefits. As shown in figure 1, the current process consists of an initial decision and as many as three levels of administrative appeals if the claimant is dissatisfied with SSA’s decision. Each level of appeal involves multistep procedures for evidence collection, review, and decision making. Generally, a claimant applies for disability benefits at one of SSA’s 1,300 field offices across the country. If the claimant meets certain nonmedical program eligibility criteria, the field office staff forward the claim to the DDS. DDS staff then obtain medical evidence about the claimant’s impairment and determine whether the claimant is disabled. Claimants who are initially denied benefits can appeal by requesting the DDS to reconsider its initial denial. If the decision at the reconsideration level remains unfavorable, the claimant can request a hearing before a federal administrative law judge at an SSA hearings office and, if still dissatisfied, a review by SSA’s appeals council. After exhausting these administrative remedies, the individual may file a complaint in federal district court. The agency’s ability to continue providing Social Security benefits over the long term is strained by profound demographic changes. The baby boom generation is nearing retirement age. In addition, life expectancy has increased continually since the 1930s, and further increases are expected. 
This increase in life expectancy, combined with falling fertility rates, means that fewer workers will be contributing to Social Security for each aged, disabled, dependent, or surviving beneficiary. Beginning in 2017, Social Security’s expenditures are expected to exceed its tax income. By 2041, without corrective action, experts expect the combined OASI and DI trust funds to be depleted, leaving insufficient funds to pay the current level of benefits. Unless actions are taken to reform the Social Security system, the nation will face continuing difficulties in financing Social Security benefits in the long term. Over the past few years, a wide array of proposals has been put forth to restore Social Security’s long-term solvency, and in December 2001, a commission appointed by the president presented three alternative proposals for reform. This solvency problem is part of a larger and significant fiscal and economic challenge facing our aging society. The expected growth in the Social Security program (OASI and DI), combined with even faster expected growth in Medicare and Medicaid, will become increasingly unsustainable over time, compounding an ongoing decline in budget flexibility. Absent changes in the structure of Social Security and Medicare, there would be virtually no room for any other budget priorities in future decades. Ultimately, restoring our long-term fiscal flexibility will involve reforming existing federal entitlement programs and promoting the saving and investment necessary for robust long-term economic growth. The disability determination process is time-consuming, complex, and expensive. Individuals who are initially denied benefits by SSA and appeal their claim experience lengthy waits for a final decision on their eligibility, and questions have been raised about the quality and consistency of certain disability decisions.
Since 1994, SSA has introduced a wide range of initiatives intended to address long-standing problems with its disability claims process. However, the agency’s efforts, in general, have not achieved the intended result, and the problems persist. Because SSA’s DI and SSI programs are expected to grow significantly over the next decade, improving the disability determination process remains one of SSA’s most pressing and difficult challenges, requiring immediate and sustained attention from the new commissioner. Additionally, in redesigning its disability decision-making process, SSA still needs to build into its eligibility assessments an evaluation of what an individual needs to return to work. We have recommended developing a comprehensive return-to-work strategy that focuses on identifying and enhancing the work capacities of applicants and beneficiaries. SSA’s complex disability claims process has been plagued by a number of long-standing weaknesses that have resulted in lengthy waiting periods for claimants seeking disability benefits. For example, claimants who wish to appeal an initial denial of benefits frequently wait more than 1 year for a final decision. We have reported that these long waits result, in part, from complex and fragmented decision-making processes that are laden with many layers of review and multiple handoffs from one person to another. The cost of administering the DI and SSI programs reflects the demanding nature of the process. Although SSI and DI program benefits account for less than 20 percent of the total benefit payments made by SSA, they consume nearly 55 percent of annual administrative resources. In addition to its difficulties in processing claims, SSA has also had difficulty ensuring that decisions about a claimant’s eligibility for disability benefits are accurate and consistent across all levels of the decision-making process. 
For example, our work shows that in fiscal year 2000, about 40 percent of applicants whose cases were denied at the initial level appealed that decision, and about two-thirds of those who appealed were eventually awarded benefits. This happens in part because decision makers at the initial level use a different approach to evaluate claims and make decisions than those at the appellate level. The inconsistency of decisions at these two levels has raised questions about the fairness, integrity, and cost of SSA’s disability programs. In 1994, SSA laid out a plan to address these problems, yet that plan and three subsequent revisions in 1997, 1999, and 2001 have yielded only limited success. The agency’s initial plan entailed a massive effort to redesign the way it made disability decisions. Among other things, SSA planned to develop a streamlined decision-making and appeal process, more consistent guidance and training for decision makers at all levels of the process, and an improved process for reviewing the quality of eligibility decisions. In our reviews of SSA’s efforts after 2 years, 4 years, and again in 2001, we found that the agency had accomplished little. In some cases, the plans were too large and too complex to keep on track, and the results of many of the initiatives that were tested fell far short of expectations. Moreover, the agency was not able to garner consistent stakeholder support and cooperation for its proposed changes. Despite the overall disappointing progress, the agency did experience some successes. For example, it conducted a large training effort to improve the consistency of decisions, which agency officials believe resulted in 90,000 eligible individuals receiving benefits 500 days sooner over a 3-year period than otherwise might have been the case. In addition, the agency issued formal guidance in a number of areas intended to improve the consistency of decisions between the initial and appellate levels. 
Overall, however, significant problems persist and difficult decisions remain. For example, SSA is currently collecting final data on the results of an initiative known as the Prototype, which was implemented in 10 states in October 1999. Although interim data indicated that the Prototype resulted in more awards at the initial decision level without compromising accuracy, it also indicated that the number of appeals would increase. This, in turn, would both raise administrative and benefit costs and lengthen the wait for final decisions on claims. As a result, SSA decided that the Prototype would not continue in its current form. Recently, SSA announced its “short-term” decision to revise some features of the Prototype to improve disability claims processing time while it continues to develop longer-term improvements. It remains to be seen whether these revisions will retain the positive results of the Prototype while also controlling administrative and benefit costs. Even more pressing in the near term is the management and workload crisis that SSA faces in its hearings offices. The agency’s 1999 plan included an initiative to overhaul operations at its hearings offices to increase efficiency and significantly reduce processing times at that level; however, this nationwide effort not only has failed to achieve its goals but, in some cases, has made things worse. The initiative has suffered, in part, from problems associated with implementing large-scale changes too quickly without resolving known problems. As a result, average case-processing times have lengthened, and backlogs of cases waiting to be processed have approached crisis levels. We have recommended that the new commissioner act quickly to implement short-term strategies to reduce the backlog and develop a long-range strategy for a more permanent solution to the backlog and efficiency problems at the Office of Hearings and Appeals. 
According to SSA officials, they have recently made some decisions on short-term initiatives to reduce the backlogs and streamline the process, and they are preparing to negotiate with union officials regarding some of these planned changes. Finally, SSA’s 1994 plan to redesign the claims process called for the agency to revamp its existing quality assurance system. However, because of disagreement among stakeholders on how to accomplish this difficult objective, progress in this area has been limited. In March 2001, a contractor issued a report assessing SSA’s existing quality assurance practices and recommended a significant overhaul to encompass a more comprehensive view of quality management. We agreed with this assessment and recommended that SSA develop an action plan for implementing a more comprehensive and sophisticated quality assurance program. Since then, the commissioner has signaled the high priority she attaches to this effort by appointing to her staff a senior manager for quality who reports directly to her. The senior manager is responsible for developing a proposal to establish a quality-oriented approach to all SSA business processes. The manager is currently assembling a team to carry out this challenging undertaking. The disappointing results of some of these initiatives can be linked, in part, to slow progress in achieving technological improvements. As originally envisioned, SSA’s plan to redesign its disability determination process was heavily dependent upon these improvements. The agency spent a number of years designing and developing a new computer software application to automate the disability claims process. However, SSA decided to discontinue the initiative in July 1999, after about 7 years, citing software performance problems and delays in developing the software. In August 2000, SSA issued a new management plan for the development of the agency’s electronic disability system. 
SSA expects this effort to move the agency toward a totally paperless disability claims process. The strategy consists of several key components, including (1) an electronic claims intake process for the field offices, (2) enhanced state DDS claims processing systems, and (3) technology to support the Office of Hearings and Appeals’ business processes. The components are to be linked to one another through the use of an electronic folder that is being designed to transmit data from one processing location to another and to serve as a data repository, storing documents that are keyed in, scanned, or faxed. SSA began piloting certain components of its electronic disability system in one state in May 2000 and has since expanded the pilot to a second state. According to agency officials, SSA has taken various steps to increase the functionality of the system; however, the agency still has a number of remaining issues to address. For example, SSA’s system must comply with the privacy and data protection standards required under the Health Insurance Portability and Accountability Act, and the agency will need to effectively integrate its existing legacy information systems with new technologies, including interactive Web-based applications. SSA is optimistic that it will meet its scheduled date for achieving a paperless disability claims process—anticipated for the end of 2005—and has taken several actions to ensure that its efforts support the agency’s mission. For example, to better ensure that its business processes drive its information technology strategy, SSA has transferred management of the electronic disability strategy from the Office of Systems to the Office of Disability and Income Security Programs. In addition, SSA hired a contractor to independently evaluate the electronic disability strategy and recommend options for ensuring that the effort addresses all of the business and technical issues required to meet the agency’s mission. 
According to an agency official, SSA is currently implementing the contractor’s recommendations. As SSA proceeds with this new system, however, it is imperative that the agency effectively identify, track, and manage the costs, benefits, schedule, and risks associated with the system’s full development and implementation. Moreover, SSA must ensure that it has the right mix of skills and capabilities to support this initiative and that desired end results are achieved. Overall, SSA is at a crossroads in its efforts to redesign and improve its disability claims process. It has devoted significant time, energy, and resources to its redesign initiatives over the last 7 years, yet progress has been limited and often disappointing. SSA is not the only government agency to experience difficulty in overhauling or reengineering its operations. According to reengineering experts, many federal, state, and local agencies have failed in similar efforts. Frequent leadership turnover, constraints on flexibility posed by laws and regulations, and the fact that government agencies often must serve multiple stakeholders with competing interests all constrain progress. Yet it is vital that SSA address its claims process problems now, before the agency experiences another surge in workload as the baby boomers reach their disability-prone years. To date, the focus on changing the steps and procedures of the process or changing the duties of its decision makers has not been successful. Given this experience, it may be appropriate for the agency to undertake a new and comprehensive analysis of the fundamental issues impeding progress. Such an analysis might include reassessing the root causes of its problems and could encompass concerns raised by the Social Security Advisory Board, such as the fragmentation and structural problems in the agency’s overall disability service delivery system. The outcome of this analysis may, in some cases, require legislative changes. 
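The electronic folder described earlier — serving both as a transmission vehicle between processing locations and as a data repository for documents that are keyed in, scanned, or faxed — might be modeled roughly as follows. This is a sketch only; the class, field, and location names are illustrative assumptions, not SSA's actual design.

```python
# Rough sketch of the electronic disability architecture: three
# processing components (field office intake, state DDS systems, and
# the Office of Hearings and Appeals) share one electronic folder
# that both stores documents and moves a claim between locations.
# All names here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ElectronicFolder:
    claim_id: str
    location: str = "field office intake"          # component 1
    documents: list = field(default_factory=list)  # keyed, scanned, or faxed

    def add_document(self, doc: str) -> None:
        """Folder as data repository: accumulate claim documents."""
        self.documents.append(doc)

    def transmit(self, destination: str) -> None:
        """Folder as transmission vehicle: move the claim, keep the data."""
        self.location = destination

folder = ElectronicFolder("claim-001")
folder.add_document("scanned medical records")
folder.transmit("state DDS claims processing")     # component 2
folder.transmit("Office of Hearings and Appeals")  # component 3
```

The point the sketch illustrates is the one the strategy rests on: because the documents travel with (or rather, inside) the shared folder, each handoff between components no longer requires physically moving a paper file.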
The number of working-age beneficiaries of the DI and SSI programs has increased by 61 percent over the past 10 years. We have reported that as the beneficiary population has grown, numerous technological and medical advances, combined with changes in society and the nature of work, have increased the potential for some people with disabilities to return to, or remain in, the labor force. Also, legislative changes have focused on returning disabled beneficiaries to work. The Americans with Disabilities Act of 1990 supports the premise that people with disabilities can work and have the right to work, and the Ticket to Work and Work Incentives Improvement Act of 1999 increased beneficiaries’ access to vocational services. Indeed, many beneficiaries with disabilities indicate that they want to work, and many may be able to work in today’s labor market if they receive needed support. In 1996, we recommended that SSA place a greater priority on helping disabled beneficiaries work, and the agency has taken a number of actions to improve its return-to-work practices. But even with these actions, SSA has achieved poor results in this arena: fewer than 1 in 500 DI beneficiaries and few SSI beneficiaries leave the disability rolls to work. Even in light of the Ticket to Work Act, SSA will continue to face difficulties in returning beneficiaries to work, owing in part to statutory and policy weaknesses in the design of the DI program. As we have reported in the past, these weaknesses include an either/or disability decision-making process that characterizes individuals as either unable to work or having the capacity to work. This either/or process creates a strong incentive for applicants to establish their inability to work in order to qualify for benefits. Moreover, return-to-work services are offered only after a lengthy determination process. 
Because applicants are either unemployed or only marginally connected to the labor force at the time of application and remain so during the eligibility determination process, it is likely that their skills, work habits, and motivation to work deteriorate during this wait. Thus, individuals who have successfully established their disability may have little reason or desire to attempt rehabilitation and work. Unlike some private sector disability insurers and foreign social insurance systems, SSA does not incorporate into its initial or continuing eligibility assessment process an evaluation of what is needed for an individual to return to work. Instead of receiving assistance to stay in the workforce or return to work—and thus to stay off the long-term disability rolls—an individual can obtain assistance through DI or SSI only by proving his or her inability to work. And even in its efforts to redesign the decision-making process, SSA has yet to incorporate into these initiatives an evaluation of what an individual may need to return to work. Moreover, SSA has made limited strides in developing baseline data to measure progress in the return-to-work area. In June 2000, we reported that many of SSA’s fiscal year 2001 performance measures were not sufficiently results-oriented, making it difficult to track progress. SSA’s fiscal year 2002 performance plan shows that SSA has begun to incorporate more outcome-oriented performance indicators that could support its efforts in this area. Two new indicators, in particular, could help SSA gauge progress: the percentage increase in the number of DI beneficiaries whose benefits are suspended or terminated owing to employment and the percentage increase in the number of disabled SSI beneficiaries no longer receiving cash benefits. However, SSA has not yet set specific performance targets for these measures. Nevertheless, SSA has recently stepped up its return-to-work efforts. 
For example, it has (1) established an Office of Employment Support Programs to promote employment of disabled beneficiaries; (2) recruited 184 public or private entities to provide vocational rehabilitation, employment, and other support services to beneficiaries under the Ticket to Work Program; (3) raised the limit on the amount a DI beneficiary can earn from work and still receive benefits, to encourage people with disabilities to work; (4) funded 12 state partnership agreements that are intended to help the states develop services to increase beneficiary employment; and (5) completed a pilot study on the deployment of work incentive specialists to SSA field offices and is currently determining how best to implement the position nationally. While these efforts represent positive steps toward returning people with disabilities to work, much remains to be done. As we have recommended previously, SSA still needs to move forward in developing a comprehensive return-to-work strategy—one that, as appropriate, intervenes earlier, identifies work capacities earlier and more effectively, and expands those capacities by providing essential return-to-work assistance to applicants and beneficiaries. Adopting such a strategy is likely to require improvements in staff skill levels and areas of expertise, as well as changes to the disability determination process. It will also require fundamental changes to the underlying philosophy and direction of the DI and SSI programs, as well as legislative changes in some cases. Policymakers will need to weigh the implications of such changes carefully. Nevertheless, we remain concerned that the absence of such a strategy and accompanying performance plan goals may hinder SSA’s efforts to make significant strides in the return-to-work area. An improved return-to-work strategy could benefit both the beneficiaries who want to work and the American taxpayer. 
The SSI program is the nation’s largest cash assistance program for the poor. In fiscal year 2000, the program paid 6.6 million low-income aged, blind, and disabled recipients $31 billion in benefits. During that year, newly detected overpayments and outstanding SSI debt totaled more than $3.9 billion. In 1997, after several years of reporting on specific instances of abuse and mismanagement, increasing overpayments, and poor recovery of outstanding SSI debt, we designated SSI a high-risk program. The SSI program poses a special challenge for SSA because, unlike OASI and DI, it is a means-tested program; thus, SSA must collect and verify information on income, resources, and recipient living arrangements to determine initial and continuing eligibility for the program. Our prior work, however, shows that SSA has often placed a greater priority on quickly processing and paying SSI claims with insufficient attention to verifying recipient self-reported information, controlling program expenditures, and pursuing overpayment recoveries once they occur. In response to our high-risk designation, SSA has made progress in coordination with Congress to improve the financial integrity and management of SSI, including developing a major SSI legislative proposal with numerous overpayment deterrence and recovery provisions. Many of these provisions were incorporated into the Foster Care Independence Act, which was signed into law in December 1999. The act directly addresses a number of our prior recommendations and provides SSA with additional tools to obtain applicant income and resource information from financial institutions; imposes a period of ineligibility for applicants who transfer assets to qualify for SSI benefits; and authorizes the use of credit bureaus, private collection agencies, interest levies, and other means to recover delinquent debt. 
SSA also obtained separate legislative authority in 1998 to recover overpayments from former SSI recipients currently receiving OASI or DI benefits. The agency was previously excluded from using this cross-program recovery tool to recover SSI overpayments without first obtaining debtor consent. As a result of this new authority, SSA has recently begun the process of recovering overpayments from Social Security benefits of individuals no longer on the SSI rolls. The agency has also issued regulations on the use of credit bureaus and drafted regulations for wage garnishments. We have been told that the draft regulations are currently under review by the new commissioner and by the Office of Management and Budget. In addition to establishing the new legislative authorities, SSA has initiated a number of internal administrative actions to further strengthen SSI program integrity. These include using tax refund offsets for delinquent SSI debtors, an action that SSA said resulted in $61 million in additional overpayment recoveries last year. SSA also uses more frequent (monthly) automated matches to identify ineligible SSI recipients living in nursing homes and other institutions. As of January 2001, SSA’s field offices were also provided on-line access to wage, new-hire, and unemployment insurance data maintained by the Office of Child Support Enforcement. These data are key to field staff’s ability to more quickly verify employment and income information essential to determining SSI eligibility and benefit levels. SSA also increased the number of SSI financial redeterminations that it conducted, from about 1.8 million in fiscal year 1997 to about 2.2 million in fiscal year 2000. These reviews focus on income and resource factors affecting eligibility and payment amounts. 
SSA estimates that by conducting more redeterminations and refining its methodology for targeting the cases most likely to have payment errors, it prevented nearly $600 million in additional overpayments in fiscal year 1999. SSA’s Office of Inspector General (OIG) has also increased the level of resources and staff devoted to investigating SSI fraud and abuse; key among the OIG’s efforts is the formation of Cooperative Disability Investigation teams in 13 field locations. These teams are designed to identify fraud and abuse before SSI benefits are approved and paid. Finally, in response to our prior recommendation, SSA has revised its field office work credit and measurement system to better reward staff for time spent thoroughly verifying applicant eligibility information and developing fraud referrals. If properly implemented, such measures should give field staff much-needed incentives for preventing fraud and abuse and controlling overpayments. SSA’s current initiatives demonstrate a stronger management commitment to SSI integrity issues and have the potential to significantly improve program management; however, our work shows that SSA overpayments and outstanding debt owed to the program remain at high levels. A number of the agency’s initiatives—especially those associated with the Foster Care Independence Act—are still in the early planning or implementation stages and have yet to yield results. In addition, at this stage, it is not clear how great an effect SSA’s enhanced matching efforts, on-line access tools, and other internal initiatives have had on the agency’s ability to avoid and recover overpayments. The same is true for the agency’s efforts to improve the accuracy of SSI eligibility decisions. SSA also has not yet addressed a key program vulnerability—program complexity—that is associated with increased SSI overpayments. 
In prior work, we have reported that the SSI living arrangement and in-kind support and maintenance policies used by SSA to calculate eligibility and benefit amounts were complex, prone to error, and a major source of overpayments. We also recommended that SSA develop options for simplifying the program. Last year, SSA’s policy office issued a study that discussed various options for simplifying complex SSI policies. Although SSA is considering various options, it has not moved forward in recommending specific cost-neutral proposals for change. We believe that sustained management attention is necessary to improve SSI program integrity. Thus, it is important that SSA move forward in fully implementing the overpayment deterrence and recovery tools currently available to it and seek out additional ways to improve program management. Accordingly, we have a review under way that is aimed at documenting the range of SSI activities currently in place; their effects on program management and operations; and additional legislative or administrative actions, or both, necessary to further improve SSA’s ability to control and recover overpayments. A particular focus of this review will be to assess remaining weaknesses in SSA’s initial and ongoing eligibility verification procedures, its application of penalties for individuals who fail to report essential eligibility information, and its overpayment recovery policies. Among federal agencies, SSA has long been considered one of the leaders in service delivery. Indeed, for fiscal year 2001, SSA reported that 81 percent of its customers rated the agency’s services as “excellent,” “very good,” or “good.” SSA considers service delivery one of its top priorities, and its current performance plan includes specific goals and strategies to provide accurate, timely, and useful service to the public. However, the agency faces significant challenges that could hamper its ability to provide high-quality service over the next decade and beyond. 
Demand for services will grow rapidly as the baby boom generation ages and enters its disability-prone years. By 2010, SSA expects worker applications for DI to increase by as much as 32 percent over 2000 levels. Determining eligibility for disability benefits is a complex process that spans a number of offices and can take over a year to complete. As we have observed earlier in this statement, SSA already has trouble managing its disability determination workload; adding cases without rectifying serious case-processing problems will only make things worse. Furthermore, by 2010, SSA projects that applications for retirement benefits will also increase dramatically—by 31 percent over 2000 levels. SSA’s ability to provide high-quality service is also potentially weakened by challenges regarding its workforce. First, SSA’s workforce is aging, and SSA is predicting a retirement wave that will peak in the years 2007 through 2010, when it expects about 2,500 employees to retire each year. By 2010, SSA projects that about 37 percent of its almost 62,000 employees will retire. The percentage is higher for employees in SSA’s supervisory and managerial ranks. In particular, more than 70 percent of SSA’s upper-level managers and executives (GS-14, GS-15, and SES level) are expected to retire by 2010. Second, SSA will need to increase staff skills to deal with changing customer expectations and needs. SSA’s staff will need to obtain and continually update the skills needed to use the most current technology available to serve the public in a more convenient, cost-effective, and secure manner. At the same time, some aspects of SSA’s customer service workload will likely become more time-consuming and labor-intensive, owing primarily to the growing proportion of SSA’s non-English-speaking customers and the rising number of disability cases involving mental impairments. Both situations result in more complex cases that require diverse staff skills. 
SSA has a number of workforce initiatives under way to help it prepare for the future. For example, as we recommended in 1993, and as required by law, SSA developed a workforce transition plan to lay out actions to help ensure that its workforce will be able to handle future service delivery challenges. In addition, recognizing that it will shortly be facing the prospect of increasing retirements, SSA conducted a study that predicts staff retirements and attrition each year, from 1999 to 2020, by major job position and agency component. SSA also began to take steps to fill its expected leadership gap. We have long stressed the importance of succession planning and formal programs to develop and train managers at all levels of SSA. As early as 1993, we recommended that SSA make succession planning a permanent aspect of its human resource planning and evaluate the adequacy of its investments in management training and development. SSA created three new leadership development programs to help prepare selected staff to assume mid- and top-level leadership positions at the agency. Overall, many of the efforts being made today are consistent with principles of human capital management, and good human capital management is fundamental to the federal government’s ability to serve the American people. For this reason, we have designated strategic human capital management a high-risk area across the federal government. However, SSA is taking these human capital measures in the absence of a concrete service delivery plan to help guide its investments. We recommended as long ago as 1993 that SSA complete such a plan to ensure that its human capital and other key investments are put to the best use. In 1998, the agency took a first step by beginning a multiyear project to monitor and measure the needs, expectations, priorities, and satisfaction of customer groups, major stakeholders, and its workforce. 
In 2000, SSA completed a document that articulates how it envisions the agency functioning in the future. For example, SSA anticipates offering services in person, over the telephone, and via the Internet; its telephonic and electronic access services will be equipped with sophisticated voice recognition and language translation features, and work will be accomplished through a paperless process. In this service vision document, SSA also states that it will rely heavily on a workforce with diverse and updated skills to accomplish its mission. Although this new vision represents a positive step for the agency toward acknowledging and preparing for future service delivery challenges, it is too broad and general to be useful in making specific information technology and workforce decisions. We have stressed that this document should be followed by a more detailed service delivery plan that spells out who will provide what type of services in the future, where these services will be made available, and the steps and timetables for accomplishing needed changes. SSA officials told us that they are working on such a blueprint. Without this plan, SSA cannot ensure that its investments in its workforce and technology are consistent with and fully support its future approach to service delivery. SSA also plans to rely heavily on information technology to cope with growing workloads and to enhance its processing capabilities. To this end, the agency has devoted considerable time and effort to identifying strategies to meet its goal of providing world-class service. For example, SSA has begun expanding its electronic service delivery capability—offering retirees the option of applying for benefits on-line as well as pursuing other on-line or Internet options to facilitate customer access to the agency’s information and services. 
Yet, SSA’s overall success in meeting its service delivery challenge will depend on how effectively it manages its information technology initiatives. As SSA transitions to electronic processes, it will be challenged to think strategically about its information technology investments and to effectively link these investments to the agency’s service delivery goals and performance. Furthermore, its actions and decisions must effectively address dual modes of service delivery—its traditional services via telephone, face-to-face, and mail contacts that are supported primarily by its mainframe computer operations, as well as a more interactive, on-line, Web-based environment aimed at delivering more readily accessible services in response to increased customer demands. SSA has experienced mixed success in carrying out prior information technology initiatives. For example, the agency has made substantial progress in modernizing workstations and local area networks to support its work processes, and it has clearly defined its business needs and linked information technology projects to its strategic objectives. Moreover, our evaluation of its information technology policies, procedures, and practices in five key areas—investment management, enterprise architecture, software development and acquisition, human capital, and information security—found that SSA had many important information technology management policies and procedures in place. For instance, SSA had sound policies and procedures for software development that were consistent with best practices. However, SSA had not implemented its policies and procedures uniformly and had not established several key policies and procedures essential to ensuring that its information technology investments and human capital were effectively managed. We noted weaknesses in each of the five key areas and recommended actions to improve SSA’s information technology management practices in each area. 
In total, our report included 20 specific recommendations for more effectively managing the agency’s information technology. In responding to our report, SSA agreed with all of the recommendations. Let me illustrate some of the weaknesses that formed the basis for our recommendations. In making decisions on technology projects, SSA lacked key criteria and regular oversight for ensuring consistent investment management and decision-making practices. It also did not always consider costs, benefits, schedules, and risks when making project selections and as part of its ongoing management controls. Without such information, SSA cannot be assured that its investment proposals will provide the most cost-effective solutions and achieve measurable and specific program-related benefits (e.g., high-quality service delivered on time, within cost, and to the customer’s satisfaction). Furthermore, given competing priorities and funding needs, SSA will need such information to make essential tradeoffs among its information technology investment proposals and set priorities that can maximize the potential for both short- and longer-term improvements to services provided to the public. As SSA pursues Internet and Web-based applications to better serve its customers, it must ensure that these efforts are aligned with the agency’s information technology environment. A key element for achieving this transition is the successful implementation of SSA’s enterprise architecture. An enterprise architecture serves as a blueprint for systematically and completely defining an organization’s current (baseline) and desired (target) environment and is essential for evolving information systems, developing new systems, and inserting emerging technologies that optimize their mission value. It also provides a tool for assessing benefits, impacts, and capital investment measurements and supporting analyses of alternatives, risks, and trade-offs. 
Nonetheless, we found that SSA had not completed key elements of its enterprise architecture, including (1) finalizing its enterprise architecture framework, (2) updating and organizing its architectures and architecture definitions under the framework, and (3) reflecting its future service delivery vision and e-business goals. In addition, it had not ensured that enterprise architecture change management and legacy system integration policies, procedures, and processes were effectively implemented across the agency. As SSA moves forward in implementing electronic services and other technologies, its architecture will be critical to defining, managing, and enforcing adherence to the framework required to support its current and future information processing needs. Moreover, without effective enterprise architecture change management and legacy system integration processes, SSA will lack assurance that (1) it can successfully manage and document changes to its architecture as business functions evolve and new technologies are acquired and (2) new software and hardware technologies will interoperate with existing systems in a cost-effective manner. In surveying 116 agencies across the federal government, we found the use of enterprise architectures to be a work in progress, with much left to be accomplished. We assessed SSA at a relatively low level of maturity in enterprise architecture management. SSA plans to rely extensively on software-intensive systems to help achieve processing efficiencies and improved customer service. Because SSA is an agency in which software development continues to be predominantly an in-house effort, in 1997, its Office of Systems established the Software Process Improvement program, in which new policies and procedures were created to enhance the quality of the agency’s software development. However, our evaluation of these policies and procedures found that SSA was not consistently applying them to its software development projects. 
In particular, SSA had not applied sound management and technical practices in its development of the electronic disability system. This poses a significant risk given SSA’s history of problems in developing and delivering the critical software needed to support its redesigned work processes. The use of sound, disciplined software development processes is critical to ensuring that SSA delivers quality software on schedule and within established cost estimates. Until SSA consistently and effectively implements its software development policies and procedures, it will lack assurance that it can meet its goal of developing a technological infrastructure to support its service delivery vision. As SSA places increased emphasis on using information technology to support new ways of delivering service, it must ensure that it effectively manages its human capital to anticipate, plan for, and support its information technology requirements. However, SSA had not taken all of the necessary steps to ensure the adequacy of its future information technology workforce. For instance, we found that although SSA had begun evaluating its short- and longer-term information technology needs, these efforts were not complete. Specifically, SSA had not linked its information technology staff needs to the competencies it would require to meet mission goals. Doing so is necessary, however, to ensure that SSA’s plans project workforce needs far enough in advance to allow adequate time for staff recruitment and hiring, skills refreshment and training, or outsourcing considerations. Furthermore, SSA lacked an inventory identifying the knowledge and skills of current information technology staff, which is essential for uncovering gaps between current staff and future requirements. 
Without such an inventory, SSA has no assurance that its plans for hiring, training, and professionally developing information technology staff will effectively target short- and long-term skills needed to sustain its current and future operations. These shortcomings in SSA's information technology human capital management could have serious ramifications as the agency moves toward making larger investments in new electronic service delivery options, such as Internet applications. Developing Internet applications represents a new era for SSA—one in which the agency must ensure that it has enough of the right people and skills to bring its electronic service delivery plan to fruition. As SSA proceeds with the development and implementation of Internet and Web-based initiatives, the need for a strong program to address threats to the security and integrity of its operations will grow. Without proper safeguards, these initiatives pose enormous risks, making it easier for individuals and groups with malicious intentions to intrude into inadequately protected systems and use such access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other organizations' sites. SSA has made progress in addressing the information protection issues raised in prior years. Specifically, during fiscal year 2001, the agency conducted a risk assessment to identify critical assets and vulnerabilities as part of the Critical Infrastructure Protection project; issued a final security policy for the state Disability Determination Service sites in accordance with the information security requirements included in the National Institute of Standards and Technology Special Publication 800-18; established and published technical security configuration standards for operating systems and servers; completed updates for accreditation and certification of key systems; and further strengthened physical access controls over the National Computer Center. 
Nonetheless, weaknesses in SSA’s information security program continue to threaten its ability to effectively mitigate the risk of unauthorized access to, and disclosure of, sensitive information. For example, although the agency has made improvements to its entity-wide security program and standards, control weaknesses continue to expose key elements of its distributed systems and networks to unauthorized access to sensitive data. The general areas where exposures occurred included implementation, enforcement, and ongoing monitoring of compliance with technical security configuration standards and rules governing the operation of firewalls; monitoring controls over security violations and periodic reviews of user access; and physical access controls at nonheadquarters locations. These exposures exist primarily because SSA has not completed implementation of its enterprise-wide security program. Until a complete security framework is implemented and maintained, SSA’s ability to effectively mitigate the risk of unauthorized access to, and modification or disclosure of, sensitive SSA data will be impaired. Unauthorized access to sensitive data can result in the loss of data as well as trust fund assets, and compromised privacy of information associated with SSA’s enumeration, earnings, benefit payment processes, and programs. The need for a strong security framework to address threats to the security and integrity of SSA operations will grow as the agency continues to implement Internet and Web-based applications to serve the American public. In the past, we have reported that SSA has not undertaken the range of research, evaluation, and policy analysis necessary (1) to identify areas where legislative or other changes are needed to address program weaknesses and (2) to assist policymakers in exploring and developing options for change. The long-term solvency of the Social Security system is a critical issue facing the nation and SSA. 
As the debate on Social Security reform proceeds, policymakers and the general public need thoughtful, detailed, and timely analyses of the likely effect of different proposals on workers, beneficiaries, and the economy. SSA is well positioned to assess the programmatic impacts of economic and demographic trends and to identify areas where policy changes are needed to ensure that recipients' needs are met efficiently and cost effectively. At the same time, SSA needs to prepare for the implementation of whatever programmatic changes are eventually made. Many of the reform proposals currently under debate will likely affect not only SSA but other government agencies as well. As part of their debate, policymakers need to understand the administrative aspects of each proposal, including the amount of time and money necessary to implement the proposed changes. SSA has information that could be central to the implementation and administration of proposed Social Security reforms and should be providing this information in a timely and accurate manner. SSA also faces a wide range of pressing challenges with its disability programs, including how best to (1) ensure the quality and timeliness of its decisions, (2) integrate return-to-work strategies into all phases of its disability determination process, and (3) address program complexity problems that have contributed to vulnerability in the SSI program. To address these challenges, SSA will need to target its research and conduct analyses that will allow the agency to play a key role in proposing and analyzing major policy changes. However, in the past, we have noted SSA's reluctance to take the actions needed to fulfill its policy development and planning role in advance of major program crises, particularly when they require long-term solutions, legislative change, or both. In recent years, SSA has taken action to strengthen its research and policy development role in these and other areas. 
It has initiated several reorganizations of its policy component to strengthen its capacity. The agency has also significantly increased the level of staff and resources available to support research activities and has several analyses planned or under way to address key policy issues. Specific to the long-term solvency issue, SSA's Office of the Actuary has long provided key information on the financial outlook of Social Security and projections of the effects of different reform proposals on trust fund finances. In addition, SSA has expanded its ability to use modeling techniques to predict the effects of proposed program changes, and it has established a research consortium to conduct and advise on relevant research and policy activities. With respect to its disability programs, SSA has established a separate disability research institute and has submitted to the Congress its first major SSI legislative proposal aimed at improving program integrity. However, many of the agency's actions and studies are in the early stages, and it is not yet clear how the agency will use them and what their ultimate effect on SSA program policy will be. The Social Security Administration is responsible for issuing SSNs to most Americans. The agency relies on the SSN to record wage data, maintain earnings records, and efficiently administer its benefit programs. In addition, the SSN is used by other government agencies as well as the private sector. This widespread use offers many benefits; however, combined with an increase in reports of identity theft, it has raised public concern over how this and other personal information is being used and protected. Moreover, the growth of the Internet, which can make personal information contained in electronic records more readily accessible to the general public, has heightened this concern. 
Finally, the terrorist attacks of September 11th and the indication that some of the terrorists fraudulently obtained SSNs have added new urgency to the need to assess how SSNs are used and protected. We have recently testified on work we are completing at the request of Chairman Shaw and others to review the many uses of SSNs at all levels of government and to assess how these government entities safeguard the SSNs. We found that SSNs are widely used across multiple agencies and departments at all levels of government. They are used by agencies that deliver benefits and services to the public as a convenient and efficient means of managing records. More importantly, these agencies rely on SSNs when they share data with one another, for example, to make sure that only eligible individuals receive benefits and to collect outstanding debt individuals owe the government. Although these agencies are taking steps to safeguard the SSNs from improper disclosure, our work identified potential weaknesses in the security of information systems at all levels of government. In addition, SSNs are widely found in documents that are routinely made available to the public, that is, in public records. Although some government agencies and courts are trying innovative approaches to prevent the SSN from appearing on public records, not all agencies maintaining public records have adopted these approaches. Moreover, increasing numbers of departments are considering or planning to place documents that may contain SSNs on the Internet, which would make these numbers much more readily available to others and raise the risk of their misuse. We also found that SSNs are one of three personal identifiers most often sought by identity thieves and that SSNs are often used to generate additional false documents, which can be used to set up false identities. Harder to determine, however, is where identity thieves obtain the SSNs they misuse. 
Ultimately, in light of the recent terrorist events, the nation must grapple with the need to find the proper balance between the widespread and legitimate uses of personal information such as SSNs, by both government and the private sector, and the need to protect individual privacy. There are no easy answers to these questions, but SSA has an important role to play in protecting the integrity of the SSN. Given the widespread use of SSNs, the agency needs to ensure that it is taking all necessary precautions to prevent individuals who are not entitled to SSNs from obtaining them. Currently, the agency is reexamining its process of assigning SSNs to individuals. This may require the agency to find a new balance between two competing goals: the need to take time to verify documents submitted during the application process and the desire to serve the applicant as quickly as possible. In addition, the agency is studying ways to make sure it provides accurate and timely information to financial institutions on deceased SSN holders. However, once SSA has issued an SSN, it has little control over how the number is used by other government agencies and the private sector. In this light, we look forward to exploring with you additional options to better protect SSNs as we complete our ongoing work in this area. We have outlined a number of difficult challenges, most of them long-standing, that the SSA Commissioner faces. These are, in general, the same challenges we have been highlighting since SSA became an independent agency. In some cases, SSA has begun to take positive steps to address its challenges. Specifically, SSA's efforts to strengthen its research, evaluation, and policy development activities show promise. Likewise, SSA has made considerable progress in addressing weaknesses in the integrity of the SSI program. However, more can be done in these areas. 
As new pressures inevitably arise that will also demand attention from the commissioner and her team, it will be important for the commissioner to sustain and expand on the agency's actions to date. We are particularly concerned, however, about other challenges where SSA's efforts to date have fallen short and where the agency faces increasing pressures in the near future. The commissioner faces crucial decisions on how to proceed on several of these challenges. SSA has made disappointing progress on (1) its efforts to improve its disability claims process, (2) the need to better integrate return-to-work strategies into all phases of the disability process, and (3) the need to better plan for future service delivery pressures and changes. These challenges will be exacerbated by growing workload pressures as the baby boom generation ages. After almost a year without a long-term leadership structure in place, the commissioner and an SSA team have an opportunity to take a fresh look at these long-standing challenges and the fundamental issues impeding faster progress in these areas. Again, focused and sustained attention to these challenges is vital, as the agency is running out of time to make needed changes before the expected increases in workload overwhelm its operations. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the subcommittees may have. For further information regarding this testimony, please contact Barbara D. Bovbjerg, Director, or Kay E. Brown, Assistant Director, Education, Workforce, and Income Security at (202) 512-7215. Individuals making key contributions to this testimony include Michael Alexander, Yvette Banks, Daniel Bertoni, Alicia Puente Cackley, Ellen Habenicht, Carol Langelier, Valerie Melvin, Angela Miles, Carol Dawn Petersen, and William Thompson. Social Security: Issues in Evaluating Reform Proposals. GAO-02-288T. Washington, D.C.: December 10, 2001. 
Social Security: Program's Role in Helping Ensure Income Adequacy. GAO-02-62. Washington, D.C.: November 30, 2001. Social Security: Evaluating Reform Proposals. GAO/AIMD/HEHS-00-29. Washington, D.C.: November 4, 1999. SSA's Management Challenges: Strong Leadership Needed to Turn Plans Into Timely, Meaningful Action. GAO/T-HEHS-98-113. Washington, D.C.: March 12, 1998. Social Security Disability: Disappointing Results From SSA's Efforts to Improve the Disability Claims Process Warrant Immediate Attention. GAO-02-322. Washington, D.C.: February 27, 2002. SSA Disability: SGA Levels Appear to Affect the Work Behavior of Relatively Few Beneficiaries, but More Data Needed. GAO-02-224. Washington, D.C.: January 16, 2002. SSA Disability: Other Programs May Provide Lessons for Improving Return-to-Work Efforts. GAO-01-153. Washington, D.C.: January 12, 2001. Social Security Disability: SSA Has Had Mixed Success in Efforts to Improve Caseload Management. GAO/T-HEHS-00-22. Washington, D.C.: October 21, 1999. SSA Disability Redesign: Actions Needed to Enhance Future Progress. GAO/HEHS-99-25. Washington, D.C.: March 12, 1999. SSA Disability: SSA Return-to-Work Strategies From Other Systems May Improve Federal Programs. GAO/HEHS-96-133. Washington, D.C.: July 11, 1996. High Risk Series: An Update. GAO-01-273. Washington, D.C.: January 2001. Supplemental Security Income: Additional Actions Needed to Reduce Program Vulnerability to Fraud and Abuse. GAO/HEHS-99-151. Washington, D.C.: September 15, 1999. Supplemental Security Income: Long-Standing Issues Require More Active Management and Program Oversight. GAO/T-HEHS-99-51. Washington, D.C.: February 3, 1999. Supplemental Security Income: Action Needed on Long-Standing Problems Affecting Program Integrity. GAO/HEHS-98-158. Washington, D.C.: September 14, 1998. SSA Customer Service: Broad Service Delivery Plan Needed to Address Future Challenges. GAO/T-HEHS/AIMD-00-75. Washington, D.C.: February 10, 2000. 
Information Security: Additional Actions Needed to Fully Implement Reform Legislation. GAO-02-470T. Washington, D.C.: March 6, 2002. Information Technology: Enterprise Architecture Use Across the Federal Government Can Be Improved. GAO-02-6. Washington, D.C.: February 19, 2002. Information Technology Management: Social Security Administration Practices Can Be Improved. GAO-01-961. Washington, D.C.: August 21, 2001. Information Security: Serious and Widespread Weaknesses Persist at Federal Agencies. GAO/AIMD-00-295. Washington, D.C.: September 6, 2000. SSA Customer Service: Broad Service Delivery Plan Needed to Address Future Challenges. GAO/T-HEHS/AIMD-00-75. Washington, D.C.: February 10, 2000. Social Security Administration: Update on Year 2000 and Other Key Information Technology Initiatives. GAO/T-AIMD-99-259. Washington, D.C.: July 29, 1999. Information Security: Serious Weaknesses Place Critical Federal Operations and Assets at Risk. GAO/AIMD-98-92. Washington, D.C.: September 23, 1998.
The Social Security Administration (SSA) provided $450 billion in benefits to 50 million recipients in fiscal year 2001. Since 1995, when SSA became an independent agency, GAO has called for effective leadership and sustained management attention to several unresolved management challenges, including redesigning its disability claims process, addressing management and oversight problems in its SSI program, preparing for future service delivery demands, and strengthening its information technology and its research and policy development capacity. SSA has much more to do and will need to take bolder action or make more fundamental changes to existing programs.
The Magnuson-Stevens Act granted responsibility for managing marine resources to the Secretary of Commerce. The Secretary delegated this responsibility to NMFS, which is part of Commerce’s National Oceanic and Atmospheric Administration (NOAA). The act established eight regional fishery management councils, each with responsibility for making recommendations to the Secretary of Commerce about management plans for fisheries in federal waters. The eight councils—consisting of fishing industry participants, state and federal fishery managers, and other interested parties—and their areas of responsibility are New England covering waters off Maine, New Hampshire, Massachusetts, Rhode Island, and Connecticut; Mid-Atlantic covering waters off New York, New Jersey, Delaware, Maryland, Virginia, and North Carolina; South Atlantic covering waters off North Carolina, South Carolina, Georgia, and the east coast of Florida; Gulf of Mexico covering waters off Texas, Louisiana, Mississippi, Alabama, and the west coast of Florida; Caribbean covering waters off the U.S. Virgin Islands and the Commonwealth of Puerto Rico; Pacific covering waters off California, Oregon, and Washington; North Pacific covering waters off Alaska; and Western Pacific covering waters off Hawaii, American Samoa, Guam, the Commonwealth of the Northern Mariana Islands, and uninhabited U.S. territories in the Western Pacific. The Magnuson-Stevens Act also established national standards for fishery conservation and management. These standards deal with preventing overfishing, using scientific information, ensuring the equitable allocation of fishing privileges, preventing excessive accumulation of quota, using fishery resources efficiently, minimizing bycatch, minimizing administrative costs, promoting safety at sea, and considering the importance of fishery resources to fishing communities. 
The regional councils use these standards to guide their development of plans that are appropriate to the conservation and management of a fishery, including measures to prevent overfishing and rebuild overfished stocks and to protect, restore, and promote the long-term health and stability of the fishery. These measures may include, for example, requiring permits for fishery participants, designating fishing zones, establishing catch limits, prohibiting or limiting the use of fishing gear and fishing vessels, and establishing a limited access system. Under the Magnuson-Stevens Act, three regional councils (North Pacific, South Atlantic, and Mid-Atlantic) have developed IFQ programs to manage the halibut and sablefish, wreckfish, and surfclam/ocean quahog fisheries, respectively. Each IFQ program is designed individually, because the characteristics of each fishery differ. Pacific halibut (see fig. 1) and sablefish (see fig. 2) are bottom-dwelling species found off the coast of Alaska, among other areas. Halibut weigh about 40 pounds, on average, and are found at depths of about 50 to 650 feet. Sablefish weigh less than 11 pounds, on average, and are found at depths of about 325 to 4,925 feet. The halibut and sablefish fishing fleets are primarily owner-operated vessels of various lengths that use hook-and-line gear to fish for halibut and hook-and-line and pot gear for sablefish. Some vessels catch both halibut and sablefish, and, given the location of both species, they are often caught as bycatch of the other. Halibut are primarily sold domestically as a fresh or frozen product, and sablefish are primarily sold to the Asian market as a frozen product. In 2001, the total halibut and sablefish catch was 45.2 million pounds and 21.7 million pounds, respectively. Wreckfish (see fig. 3) are found in the deep waters far off the South Atlantic coast, primarily from Florida to South Carolina. 
They were first discovered in the southern Atlantic in the early 1980s by a fisherman recovering lost gear. Wreckfish are fished using specialized gear by vessels over 50 feet in length that are used primarily in other fisheries. The fishing fleet is small, with only three vessels reporting wreckfish landings totaling about 168,000 pounds—or about 8 percent of the total allowable catch—in 2000. Wreckfish are sold fresh or frozen as a market substitute for snapper and grouper. Surfclams (see fig. 4) and ocean quahogs (see fig. 5) are mollusks found along the East Coast, primarily from Maine to Virginia, with commercial concentrations found off the Mid-Atlantic states. While ocean quahogs are found farther offshore than surfclams, the same vessels are largely used in each fishery. These vessels pump water down to the ocean floor to raise the mollusks and then catch them in a dredge that runs over the bottom. Surfclams and ocean quahogs are processed into strips, juice, soup, chowder, and sauce. They generally must be processed within 24 hours of harvest, or they will spoil. In 2000, the surfclam/ocean quahog fishery harvested 2.6 million bushels of surfclams and 3.2 million bushels of ocean quahogs. When designing the IFQ programs, each regional council set out specific objectives for improving conservation and management in their respective fisheries. These objectives differed for each program, as shown in table 1, depending on the desired biological, social, and economic outcomes for the fishery. When designing the IFQ programs, each of the respective regional councils also set out who was eligible to receive quota under the initial allocation (see table 2). The regional councils based eligibility and amount of quota to be received on, among other things, ownership and catch history of the vessels that participated during a portion of a set of qualifying years. 
Some halibut, sablefish, surfclam, and ocean quahog processors owned fishing vessels with a catch history during the IFQ programs’ qualifying years, and therefore received quota under the initial allocation. Consolidation of quota holdings occurred in all three IFQ programs, with much of it occurring in the early years of each program. In addition, consolidation of surfclam and ocean quahog quota is greater than NMFS data indicate. The governing rules of each program may have affected the extent of consolidation and the information collected. However, without clear and accurate data on quota holders and fishery-specific limits on quota holdings, it is difficult to determine whether any quota holdings in a particular fishery would be viewed as excessive, as prohibited by the Magnuson-Stevens Act. According to our analysis of NMFS data, from 1995 through 2001, the number of halibut and sablefish quota holders decreased by about 27 and 15 percent, respectively. Over 46 percent of the halibut consolidation and 35 percent of the sablefish consolidation occurred by the end of the second year of the program. From 1992 to 2002, the number of wreckfish quota holders decreased by 49 percent, with all of the consolidation occurring by the end of the program’s third year. Finally, from 1990 to 2002, the number of surfclam and ocean quahog quota holders decreased by about 17 and 34 percent, respectively. About 58 percent of the surfclam quota consolidation and 36 percent of the ocean quahog quota consolidation occurred by the start of the second year of the program. (See app. II for additional data on changes in quota holdings.) Surfclam and ocean quahog quota consolidation is greater than NMFS data indicate. According to NMFS officials and others knowledgeable about the fishery, the quota holder of record (i.e., the individual or entity under whose name the quota is listed) is often not the entity that controls the use of the quota. 
Some families hold quota under the names of more than one family member; some parent corporations hold quota under the names of one or more subsidiaries; some entities hold quota under the name of one or more incorporated vessels; and some financial institutions serve as transfer agents and hold quota on behalf of others or in lieu of collateral for loans. After aggregating quota controlled by the same individual or entities, we determined that consolidation of surfclam quota holders was about twice that indicated by NMFS data. As shown in figure 6, no more than 59 and 42 individuals or entities controlled surfclam quota in 1990 and 2002, respectively. One entity controlled quota held in 12 different names, accounting for 27 percent of the 2002 total surfclam quota allocated. Similarly, consolidation of ocean quahog quota holders was about twice that indicated by NMFS data. As shown in figure 7, no more than 48 and 29 individuals or entities controlled ocean quahog quota in 1990 and 2002, respectively. One entity controlled quota held in two different names, representing 22 percent of the 2002 total ocean quahog quota allocated. (See app. III for information on consolidation in the surfclam and ocean quahog processing sector.) The consolidation of surfclam and ocean quahog quota may be even greater than our analysis indicates because we could not determine the individuals or entities for whom banks hold quota. According to NMFS data, banks hold about 21 percent of the 2002 surfclam quota and 27 percent of the 2002 ocean quahog quota. However, we could not determine for whom the banks hold the quota and thus who controls the use of the quota. NMFS officials stated that, in theory, they had the ability to identify the individuals or entities for whom the banks hold quota. They explained, however, that such an analysis would be extremely difficult and labor-intensive because their record system is not designed for this purpose. 
As such, NMFS did not provide us with this information. Each program’s governing rules may have affected the extent of consolidation and the information NMFS collects and monitors on quota holders. To help meet the Magnuson-Stevens Act’s prohibition of any individual or entity acquiring an excessive share of the fishery, the regional fishery management councils may establish limits on the amount of quota any individual or entity can hold. In the Alaskan halibut and sablefish program, for example, the council set specific limits on individual holdings by, among others, species and area. Limits on individual halibut quota holdings, for example, range from 0.5 percent to 1.5 percent, depending on the fishing area, and sablefish holdings are limited to 1 percent. NMFS collects the information needed to monitor and ensure adherence to these requirements. NMFS requires halibut and sablefish transfer applicants to identify whether they are individuals or business entities. Business entities must also report their ownership interests at least annually. NMFS uses this information to ensure that all potential transfers and all current quota holdings comply with program rules. NMFS conducts computer checks on each transfer request to ensure that the transfer will not result in any entity, whether individually or collectively, exceeding the limits for quota holdings. In contrast, the regional fishery councils for the surfclam/ocean quahog and wreckfish programs did not set specific and measurable limits on the individual accumulation of quota. Instead, the councils let federal antitrust laws determine whether any quota holdings are excessive. However, NMFS officials explained that the Department of Justice would most likely base a decision for taking an antitrust action on whether or not an individual or entity could fix the price of fish, rather than the amount of quota an individual or entity held. 
Further, NMFS officials said that they have never referred such a case to the Department of Justice. The National Research Council pointed out in its 1999 study that “lack of accumulation limits may unduly strengthen the market power of some quota holders and adversely affect wages and working conditions of labor in the fishing industry...” Establishing limits, however, is not an easy task. Program objectives and the political, economic, and social characteristics of each fishery may influence each council’s definition of what limits should be placed on an individual’s or entity’s quota holdings. In addition, fishery participants have different opinions on what these limits should be. Because the surfclam/ocean quahog and wreckfish programs have no specific limits on the amount of quota any one individual or entity can hold, NMFS does not routinely gather and assess information on the ownership interest of each quota holder. For example, NMFS requires transfer applicants in the surfclam/ocean quahog program to submit identifying information, including the name of the quota holder, the name of the related vessel, and the contact information for the quota holder. However, NMFS does not verify this information or require transfer applicants or quota holders to submit any information detailing ownership interest or eligibility. Further, NMFS does not conduct any assessment of the amount of quota held or controlled by an individual or entity, and NMFS records are not kept in a manner that would readily allow such an assessment. As such, it is difficult to determine how much quota any one individual or entity controls. Moreover, lacking specific limits on quota holdings, we could not determine if any individual’s or entity’s holdings in either the surfclam/ocean quahog or the wreckfish programs would be viewed as excessive for the fishery, as prohibited by the Magnuson-Stevens Act. 
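The two monitoring steps discussed above, aggregating quota to the individual or entity that actually controls it and then checking aggregated holdings against a fishery-specific limit, can be sketched in a few lines. All names, shares, and the 1.5 percent limit below are hypothetical; the sketch only illustrates the kind of automated check NMFS runs for halibut and sablefish transfers and cannot run for the surfclam/ocean quahog or wreckfish programs without control data and defined limits.

```python
# Minimal sketch, with hypothetical names and numbers, of the two checks
# discussed in this report: (1) aggregate quota listed under multiple names
# to the entity that actually controls it, and (2) flag any controlling
# entity whose aggregated total exceeds a fishery-specific holding limit.
from collections import defaultdict

# (holder of record, controlling entity, share of total quota) -- hypothetical
records = [
    ("Vessel Alpha Inc.", "Smith Family",  0.009),
    ("Vessel Beta Inc.",  "Smith Family",  0.008),
    ("Coastal Clams LLC", "Coastal Clams", 0.007),
    ("B. Jones",          "B. Jones",      0.004),
]

def consolidate(records):
    """Sum quota shares by controlling entity rather than holder of record."""
    totals = defaultdict(float)
    for _holder, controller, share in records:
        totals[controller] += share
    return dict(totals)

def over_limit(totals, limit):
    """Entities whose aggregated holdings exceed the fishery's limit."""
    return sorted(entity for entity, share in totals.items() if share > limit)

totals = consolidate(records)
# With a hypothetical 1.5 percent limit, only the aggregated view reveals
# that one entity's combined holdings (0.9% + 0.8% = 1.7%) are excessive;
# neither holder of record exceeds the limit on its own.
print(over_limit(totals, 0.015))
```

Looking only at holders of record, every individual share here is under the limit; it is the aggregation step that makes the excessive holding visible, which is the report's point about the surfclam/ocean quahog data.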
We found no evidence that foreign entities currently hold or control quota in the three IFQ programs. Furthermore, industry participants and NMFS officials said that they did not know of any cases in which a foreign entity has been able to acquire quota in either the halibut and sablefish or the wreckfish IFQ programs. However, some foreign-owned entities have held or controlled quota in the surfclam/ocean quahog program, as the following examples show.

- A U.S. member firm of a foreign business that provides financial services held about 6 percent of the surfclam quota in 2002 while acting as a transfer agent in the sale of the quota. According to a representative of the firm, only the buyer and the seller controlled the quota and the fishing of the quota. When the sale was finalized in the spring of 2002, the quota was released to the buyer. The firm no longer holds quota in the fishery.
- A foreign-owned processing company once controlled about 7 percent of the surfclam and ocean quahog quota through its U.S. subsidiary. Foreign control of the quota ended when a group of fishery participants bought out the foreign interest in the processing company.
- On the eve of the implementation of the IFQ program, a foreign-owned processing company sold its fishing vessels with qualifying catch histories to a U.S. citizen eligible to hold quota. This individual then received the quota for these vessels—nearly one-fourth of the quota allocated under the initial allocation. However, control of the quota remained with the foreign-owned processing company until the processing company was sold to a U.S.-owned firm.

The implementing regulations of each IFQ program, in effect, generally preclude foreign entities from holding quota. The Alaskan halibut and sablefish program explicitly prohibits foreign citizens and businesses from holding quota and requires all quota transfer applicants to declare themselves to be U.S. citizens or U.S. entities. 
In contrast, the surfclam/ocean quahog and wreckfish programs allocate quota to qualified “persons,” defined as U.S. citizens, and tie eligibility for holding quota to the requirements for owning a U.S.-documented vessel engaged in the fisheries of the United States, that is, being a U.S. citizen or, in the case of a corporate owner, being 75 percent owned and controlled by U.S. citizens. However, these two programs do not require quota holders or transfer applicants to declare that they are U.S. citizens or U.S. entities. In addition, NMFS officials overseeing the wreckfish program told us that they consider the U.S. Coast Guard’s approval of fishing vessel permits to be sufficient for determining eligibility to hold quota, because only vessels owned by U.S. citizens and U.S. companies are eligible for documentation as a U.S. fishing vessel. This procedure may be sufficient when a transfer applicant owns a permitted fishing vessel and applies for quota under the name used to document the vessel. However, an applicant who does not own such a vessel will never go through the Coast Guard verification process, because after the initial allocation, quota can be transferred to, and held by, nonvessel owners. Without information on the nationality or ownership of the quota holder, the potential exists for the transfer of surfclam, ocean quahog, and wreckfish quota to foreign entities. Some processors were adversely affected by the implementation of the halibut and sablefish IFQ program while others benefited. However, quantifying the economic effects of the IFQ program on processors is difficult because much of the data needed to measure changes in profitability are proprietary. Furthermore, other factors besides the IFQ program may lead to changes in processors’ economic situation. The IFQ program changed the environment in which traditional shore-based processors operated by extending the halibut and sablefish fishing seasons in some areas from several days to 8 months. 
Before the IFQ program was implemented, fishermen had just a few days to fish the total allowable catch for the year. Consequently, fishermen provided processors with large amounts of fish in a very short period of time, and processors organized their operations to process under these conditions. With the implementation of the IFQ program, the “race for fish” was eliminated because fishermen had more flexibility in choosing when to fish, and, as a result, processors received halibut and sablefish in smaller quantities over a longer period of time. This extended fishing season enabled more halibut to be processed and sold as a fresh product. Consequently, the fresh halibut market, as shown in figure 8, increased from 15 percent of the total halibut market in 1994 to 46 percent in 2001. Sablefish was not similarly affected, remaining primarily a frozen product that is shipped to and sold in the Asian market. To take advantage of the fresh market and its potential for higher wholesale prices, processors need ready access to highways and air transportation. As such, processors with access to transportation systems may have been competitively advantaged while those in more remote locations may have been competitively disadvantaged because their transportation costs were higher. For example, one processor estimated that the cost to transport fresh product from Kodiak Island, Alaska, to Seattle, Washington, was about 20 cents a pound higher than from Seward or Homer, Alaska, which have ready access to a major road system. (See app. IV for more information on Alaskan ports and major transportation networks.) Also, processors located near services, such as fuel, ice, stores, and entertainment, said that fishermen were more willing to deliver fish to them than if these services were not available. 
The shift toward fresh product in the halibut market resulting from the IFQ program led to the emergence of the buyer-broker, a middleman who buys fish at a port and ships it fresh to market. Processors told us that the emergence of buyer-brokers, generally one-person operations with lower overhead costs, resulted in increased competition for fish and contributed to the increase in ex-vessel halibut prices (prices paid to fishermen for raw product). As shown in table 3, the percentage of halibut purchased by buyer-brokers increased from 3.7 percent in 1995 to 17.4 percent in 1999. Along with an increase in buyer-broker halibut purchases, there was a decrease in the number of individual shore-based plants that processed halibut and sablefish. While some plants stopped processing halibut and sablefish, others decided it was beneficial to start. Between 1995 and 2001, as shown in table 4, 68 plants stopped processing halibut and 56 started, resulting in a net decrease of 12 plants. Similarly, 54 plants stopped processing sablefish and 40 started, resulting in a net decline of 14 plants. Most of the shore-based plants that stopped or started processing were relatively small in comparison to other processors in that they purchased less than 100,000 pounds of halibut or sablefish annually. About 80 percent of the shore-based plants that stopped processing halibut and 75 percent of those that started purchased less than 100,000 pounds of fish. Similarly, about 81 percent of the plants that stopped processing sablefish and 70 percent of those that started were also small plants. The IFQ program, however, did not necessarily cause a plant to stop processing halibut or sablefish. According to industry and government officials, some plants stopped processing halibut or sablefish because the plant was sold to another processor, the plant closed for personal reasons, plant management made poor business decisions that were unrelated to the IFQ program, or the plant burned down. 
For example:

- One processor with a freezing operation bought halibut and sablefish, but it primarily bought and sold salmon off trollers. When the supply of farmed salmon increased, contributing to price decreases, the owners decided to sell the plant.
- One company that owned several plants consolidated its halibut production under fewer plants.
- One plant went out of business because its owner paid too much for fish—10 to 15 cents a pound more than others—and then resold it for less than he paid.
- One plant burned down and the processor now uses the site to offload fish from vessels and then transport it to another site for processing.

In addition to changes in the number of plants processing halibut and sablefish, companies experienced changes in their market share: some processing companies lost market share, while others gained. Comparing market shares for 1995 and 2001, we found that of 28 companies that processed halibut in 1995, 15 experienced a decrease in market share and 13 experienced an increase. Similarly, of the 17 companies that processed sablefish in 1995, 7 experienced a decrease in market share while 10 experienced an increase. To determine the IFQ program’s effect on processors, Alaska’s Department of Fish and Game commissioned a study to examine how halibut and sablefish processors were affected economically. This was the only study we could find that attempted to quantify the economic effect the IFQ program had on halibut and sablefish processors. Using a sample of halibut and sablefish processors, the study assessed the change in processors’ gross operating margins (revenues minus variable costs of processing). The study used the periods 1992-1993 for pre-IFQ margins and 1999-2000 for post-IFQ margins. According to the study’s principal author, these years were chosen because they provided the longest possible length of time between the pre- and post-IFQ years for which data were available. 
The study estimated that halibut processors suffered a 56 percent, or $8.7 million, loss in gross operating margins because the IFQ program caused halibut prices to increase and processors’ market shares to change. While we could not validate or replicate the study’s results because the proprietary data used in the study were confidential, we identified a number of problems with the study’s methodology and scope that bring into question the reliability of the study’s estimates. These problems include (1) the pre- and post-IFQ time periods do not provide an accurate measure of processors’ economic welfare, (2) the study’s results may not be representative of the industry as a whole, and (3) the document requesting economic information from processors may have biased participant responses. Further, the study’s authors acknowledged that examining the pre- and post-IFQ impacts on the processing sector does not necessarily imply that the IFQ program alone caused these effects. The pre- and post-IFQ time periods used to assess changes in processors’ gross operating margins do not provide an accurate measure of changes in processors’ economic welfare over time. First, the study’s methodology makes the assumption that all costs, except labor and material inputs, remain fixed from 1992 through 2000. However, as pointed out in a critique of the study, assuming that all of these other costs remain the same would not be adequate for a period as short as a year, and is clearly unjustified for the 7-year period evaluated, because the longer the time period assessed, the more likely costs will change. Even if the study’s assumption about costs were valid, the pre- and post-IFQ periods examined identify a greater negative change in gross operating margins than may be identified if different or longer periods were used. 
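The sensitivity to the choice of base periods can be illustrated with a small sketch. The prices below are invented; only the pattern, a price dip in the 1992-1993 base period and near-peak prices in 1999-2000, mirrors the actual data, and the simplified margin formula is our assumption rather than the study's method.

```python
# Hypothetical sketch: how the choice of base years drives the measured
# change in a processor's price margin. Prices are invented; only the
# pattern (an ex-vessel price dip in 1992-1993, near-peak prices in
# 1999-2000) mirrors the report.

def margin_pct(wholesale, ex_vessel):
    """Simplified price margin: processor markup as a percent of wholesale."""
    return 100.0 * (wholesale - ex_vessel) / wholesale

# Hypothetical period-average prices, dollars per pound.
prices = {
    "1991-1992": (2.60, 1.60),  # (wholesale, ex-vessel)
    "1992-1993": (2.30, 1.20),  # ex-vessel prices dipped
    "1999-2000": (2.90, 2.20),  # ex-vessel prices near their peak
}

def margin_change(base, post):
    """Percentage-point change in margin between two periods."""
    return margin_pct(*prices[post]) - margin_pct(*prices[base])

# Using the dip years as the base overstates the decline in margins
# relative to a comparison that starts one period earlier.
print(round(margin_change("1992-1993", "1999-2000"), 1))
print(round(margin_change("1991-1992", "1999-2000"), 1))
```

With these invented prices, the measured decline shrinks considerably when the base period shifts by a single year, which is the same pattern table 5 shows with the actual margins.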
The changes in gross operating margins and the estimated economic effects are influenced by the fact that ex-vessel halibut prices dipped in the period 1992-1993 and were near their peak in 1999-2000 (see fig. 9). Real ex-vessel halibut prices in 1999-2000 were 44.5 percent higher than they were in 1992-1993. However, when different base years, such as 1991-1992, are compared with 1999-2000, the price increase is 22.7 percent. The influence of the choice of base years and the corresponding ex-vessel prices also can be demonstrated by looking at the difference between the price a processor pays for raw fish and the price a processor receives for the processed fish—the processor’s price margin. We calculated a simplified version of the price margin to demonstrate the sensitivity of the margin to the choice of the time period examined. As shown in table 5, comparing the study’s pre- and post-IFQ price margins of 47.3 percent and 24.1 percent, respectively, shows a 23.2 percentage point decrease in margins. However, comparing the price margins for 1991-1992 with 1999-2000 shows a 13.0 percentage point decrease and comparing 1993-1994 with 1998-1999 shows a 1.1 percentage point increase. Moreover, the study’s results may not be representative of the industry as a whole. In total, 53 halibut processors and 46 sablefish processors, representing 88 percent of all halibut purchased and 96 percent of all sablefish purchased in the study years, were asked to participate in the survey. Responses were used from processors representing only 52 percent of all halibut and 54 percent of all sablefish purchased in the pre-IFQ years and 61 percent of all halibut and 59 percent of all sablefish purchased in the post-IFQ years. The study does not provide the actual number of participants whose data were used. 
Without knowing the number of participants or the characteristics of the respondents whose data were used, we cannot determine whether the study’s estimates are representative of the industry as a whole. Finally, the document requesting economic information from processors may have biased participant responses. In the preamble to the survey document, participants were told, among other things, that the purpose of the study was to test the theory that a harvester-only quota allocation transfers wealth from processors to harvesters and that the survey’s results would be used to assist in designing future IFQ or other fishery rationalization programs. Such statements leave little doubt as to how responses could benefit or harm processors with economic interests in other fisheries. According to standard economic research practice, these types of statements are to be avoided when designing a survey as they can influence the results. Factors other than the IFQ program’s implementation could contribute to changes in the economic well-being of processors, such as changes in the markets for other species processed and changes in the total allowable catch. According to NMFS officials and industry experts, most processors handled other species of fish in addition to halibut and sablefish, and the relative proportion and value of these species will affect the economic condition of processors. According to our analysis of data from the Alaska Commercial Operators Annual Report, halibut and sablefish were relatively small portions of the fish processed by shore-based plants that processed halibut and/or sablefish. Specifically, from 1994 to 2001, halibut production ranged, on average, from 2.0 percent to 4.1 percent of all fish processed at a plant, while average sablefish production ranged from 1.4 percent to 2.3 percent. In terms of value, as shown in table 6, halibut was 4.4 percent of total plant product value in 1994 and 7.9 percent in 2001. 
Sablefish was 4.7 percent of total plant product value in 1994 and 5.3 percent in 2001. (These ranges are averages for all plants processing halibut and/or sablefish and a particular plant may process a higher percentage of these fish.) Another factor that affects processors economically is a change in the total allowable catch—limits on the amount of fish that can be caught annually. Such limits were used for halibut and sablefish long before the introduction of the IFQ program. Since the introduction of the IFQ program, the total allowable catch for halibut has increased by 56.4 percent and the total allowable catch for sablefish has decreased by 36.2 percent. In its 1999 report, Sharing the Fish, the National Research Council said that changes in the total allowable catch may affect the supply of fish available to processors and therefore the price they pay. Individual fishing quotas are one of many tools available for conserving and managing fishery resources on a sustainable basis. Concerns have been raised about the possibility of quota holdings becoming concentrated among a few individuals or entities, which, among other things, might lead to control of fish prices and/or might adversely affect wages and working conditions in the fishing industry. Moreover, there is a need to ensure that program rules on foreign holdings and quota concentration levels are complied with. NMFS collects the necessary data on halibut and sablefish quota holders and periodically monitors it to provide these assurances. However, NMFS does not gather sufficient information or periodically analyze the data it does collect on surfclam/ocean quahog and wreckfish quota holders to determine (1) who actually controls the use of the quota and (2) whether the holder is a foreign individual or entity. 
Furthermore, while each fishery is different, the regional councils have not defined the amount of quota that constitutes an excessive share in the surfclam/ocean quahog and wreckfish IFQ programs. Different program objectives and the political, economic, and social characteristics of each fishery make it difficult to define excessive share. However, without information on who controls quota and without defined limits on quota accumulation, NMFS cannot determine whether eligibility requirements are being met or whether any quota holdings are excessive. We recommend that the Secretary of Commerce take the following actions to improve the management of IFQ programs:

- To ensure that quota holders meet eligibility requirements, such as being a U.S. citizen or entity, direct the Director of NMFS to collect and analyze information on quota holders, including who actually holds and controls the use of the quota and for whom financial institutions hold quota.
- To help prevent an individual or entity from acquiring an excessive share of the quota in future IFQ programs, require regional fishery management councils to define what constitutes an excessive share for the fishery.
- To assist the regional fishery management councils in defining excessive share for a particular fishery, direct the Director of NMFS to provide guidance to the councils on the factors to consider when determining what constitutes an excessive share in future IFQ programs.

We provided a draft of this report to the Department of Commerce for review and comment. In the Secretary’s response, the Department’s National Oceanic and Atmospheric Administration (NOAA) provided written comments. NOAA’s comments and our detailed responses are presented in appendix V of this report. NOAA generally agreed with the accuracy and conclusions of our report. 
NOAA agreed in principle with our recommendation to collect and analyze information on quota holders, disagreed with our recommendation to set limits, and agreed with our recommendation to provide guidance for setting limits on quota holdings in future programs. NOAA also provided technical comments that we incorporated in the report as appropriate. NOAA agreed in principle with our first recommendation, to collect and analyze information on quota holders. While NOAA stated that it would place greater emphasis on collecting this information in its IFQ programs, it noted that its ability to collect economic information might be constrained by provisions of the Magnuson-Stevens Act that protect certain economic and proprietary data. NOAA believed that existing IFQ programs provide adequate information on quota holders, citing, for example, the Alaskan halibut and sablefish program, but stated it would be difficult to collect information on who actually controlled the quota. However, our recommendation is aimed at requiring all IFQ programs to collect information similar to the information collected in the Alaskan halibut and sablefish program. We do not believe that information on the identity of quota holders and their ownership interests involves economic data protected by the Magnuson-Stevens Act, and, in fact, the Alaskan program requires that such information be provided. We also believe that without a requirement to collect similar information in all IFQ programs, it will be difficult, if not impossible, to monitor for compliance with eligibility requirements. Such information is especially important where banks hold quota on behalf of others, such as in the surfclam/ocean quahog program. NOAA disagreed with our second recommendation, to set limits on the amount of quota an individual or entity may hold in future IFQ programs. 
NOAA acknowledged that avoiding excessive shares was a serious mandate and that fishery management councils should analyze the projected impacts of various levels of ownership on market performance, distributional issues, and equity considerations. NOAA stated, however, that councils should have flexibility to deal with preventing excessive shares according to the circumstances of each IFQ program and that limits on quota holdings might be warranted and necessary in certain cases, but not in all IFQ programs. NOAA cited the wreckfish program—a program where there has been little activity—as an example where limits should not be required. We agree with NOAA’s position that circumstances vary from fishery to fishery and that councils need to analyze the various issues when determining how to prevent excessive shares. We continue to believe, however, that fishery management councils need to define what constitutes excessive share for future IFQ programs. The Magnuson-Stevens Act clearly mandates that new IFQ programs prevent any person from acquiring an excessive share of quota. Without a specific and measurable definition, it would be difficult for councils and NMFS to know whether any quota holding could be viewed as excessive. A similar conclusion was reached by the National Research Council, which recommended the creation of fishery-specific limits on the accumulation of quota share by individuals or firms in each new IFQ program. We have revised our recommendation to reflect the full range of economic, social, and political considerations that need to be taken into account and the need for guidance to assist councils in determining excessive share. Finally, NOAA agreed with our third recommendation, to provide guidance to fishery management councils on factors to consider when setting limits on quota holdings in future IFQ programs. NOAA agreed that limits should be based on factors that are appropriate to the fishery. 
These factors include market effects, distributional issues, and equity considerations. We have revised this recommendation, however, from “providing guidance for setting limits” to “providing guidance for defining what constitutes an excessive share” to take into account NOAA’s comments and make it consistent with our second recommendation. We conducted our review from April 2002 through October 2002 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Commerce and the Director of the National Marine Fisheries Service. We will also provide copies to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841 or Keith Oleson at (415) 904-2218. Key contributors to this report are listed in appendix VI. To assist in deliberations on individual fishing quota (IFQ) programs, we reviewed the Alaskan halibut and sablefish, wreckfish, and surfclam/ocean quahog programs to determine (1) the extent of consolidation of quota holdings, (2) the extent of foreign holdings of quota, and (3) the economic effect of IFQ programs on seafood processors. For all three objectives, we interviewed agency officials at the Department of Commerce’s National Marine Fisheries Service’s (NMFS) headquarters office and the Northeast, Southeast, and Alaska regional offices; representatives of the Mid-Atlantic, South Atlantic, and North Pacific Fishery Management Councils; officials from the Alaska Department of Fish and Game; and fishery participants, researchers, and other industry experts. 
We visited Easton, Maryland; Cape May and Atlantic City, New Jersey; and Sitka, Petersburg, Juneau, Homer, Seward, and Kodiak, Alaska, where we interviewed quota holders, processors, and industry representatives and viewed processing plants. We selected these sites in accordance with suggestions from program managers and industry representatives to obtain IFQ program and geographic coverage. In addition, to determine the extent of consolidation of quota holdings, for each IFQ program, we reviewed pertinent laws, rules, and regulations; the fishery management plan; processes and procedures; and relevant program documents that NMFS used to track quota holdings. We analyzed NMFS data on quota allocations and transfers, searched public corporate ownership and U.S. Coast Guard vessel documentation records, and interviewed NMFS officials, industry experts, and fishery participants to identify who controlled the use of the quota. As agreed with the requesters, we did not review the Maine mahogany quahogs as part of the surfclam/ocean quahog IFQ program because of the fishery’s small size and unique characteristics. To determine the extent of foreign holdings of quota, we reviewed federal laws, regulations, and IFQ program rules pertaining to foreign individuals holding quota in U.S. fisheries. We also reviewed the U.S. Coast Guard’s requirements for documenting U.S. fishing vessels. We searched public records on corporate ownership for foreign interest in, and affiliation with, entities holding quota. To determine the economic effect of IFQ programs on seafood processors, we limited our assessment to the economic effects on Alaskan halibut and sablefish processors because few of these processors were eligible to hold quota under the IFQ program. In contrast, processors in the surfclam/ocean quahog and wreckfish programs were eligible to hold quota. 
We analyzed (1) NMFS data on registered buyers, landings by port, and total allowable catch; (2) Alaska Department of Fish and Game, Commercial Operators Annual Report data on fish production, ex-vessel prices, and processing at shore-based plants; and (3) public records on ownership of seafood processing companies. We interviewed fishery participants, including NMFS and regional management council officials, seafood processors, quota holders, researchers, and other experts on IFQ programs, to identify changes in the processing sector after the IFQ program’s implementation. We searched the economic literature on the Alaskan halibut and sablefish IFQ program and reviewed the only study that quantified the economic effect of the IFQ program on processors, interviewed the study’s principal author, and obtained the views of other economists who had reviewed the study. We could not verify the study’s results because the data used in the study were proprietary. NMFS data on quota holdings show that consolidation occurred in all three IFQ programs—Alaskan halibut and sablefish (see table 7), wreckfish (see table 8), and surfclam/ocean quahog (see table 9)—with much of the consolidation occurring in early program years. Major holders of surfclam and ocean quahog quota include seafood processors that are vertically integrated, owning both processing plants and fishing vessels. Processing companies that owned fishing vessels were eligible to receive quota under the initial quota allocation and some have held quota from the beginning of the IFQ program. The IFQ program also allows processing companies to purchase and transfer surfclam/ocean quahog quota. According to NMFS data, three-fourths of the companies that processed surfclams and all of the companies that processed ocean quahogs in 2000 were quota holders. 
In addition, our analysis of NMFS quota allocation data for 2000 showed that seafood processors held about one-third of the total surfclam quota and almost one-half of the total ocean quahog quota. Further, NMFS data indicate that fewer processors have processed surfclams and ocean quahogs since the IFQ program was implemented and that several small- and mid-sized processors have left the fishery. The number of surfclam processors decreased by more than 40 percent and the number of ocean quahog processors decreased by more than two-thirds from 1990 to 2000 (see table 10), with the same key companies processing both surfclams and ocean quahogs. The top 4 processors handled about 74 percent of the surfclam catch in 1990 and 86 percent in 2000. Ready access to highways and air transportation makes it easier for processors and buyer-brokers to take advantage of the fresh fish market and its potential for higher wholesale prices because they can get their products to market more quickly and at a lower cost than processors or other buyers in more remote locations. Figure 10 shows the location of major Alaskan halibut and sablefish ports in relation to major transportation networks leading to the lower 48 states and international destinations. The Alaskan port with the greatest amount of halibut landed changed between 1995 and 2001, as shown in table 11. NMFS and industry officials attribute much of the change in port rankings to the increase in the fresh halibut market and the need for ready access to transportation networks. While ports may have access to air and ferry service to the lower 48 states, the number of flights and ferries may be limited and subject to weather delays or cancellations. While sablefish remained primarily a frozen product, sablefish ports experienced a similar change in rankings (see table 12), because, according to processors, many fishermen sell their catch of both halibut and sablefish to the processor who pays the most for the halibut.
The following are GAO’s comments on NOAA’s written comments provided by the Secretary of Commerce’s letter dated November 21, 2002. 1. We revised the text to reflect that the domestic commercial fish catch remained relatively the same in 2001 as it was in 1990. 2. We revised the text to reflect that the IFQ program extended the halibut and sablefish fishing seasons in some areas. 3. We changed the text to make it clear that the cost differential was due to the fact that Homer and Seward had access to a road system. 4. We revised the legend for figure 10 to show that the airports are international airports. 5. NOAA commented that the report’s treatment of wreckfish would have benefited from consulting a 1994 wreckfish article. We reviewed the article and determined that generally only the article’s discussion of consolidation and control of quota holdings was pertinent to our objectives. The article explained that the wreckfish program did not set limits on quota holdings, in part, because it would be difficult to determine who actually controlled the use of the quota. We believe that our report had already adequately addressed this issue. Because limits are not defined, the information needed to determine who controls the use of quota is not collected or monitored. For this reason, we did not revise our report. 6. Our point was that banks hold quota for someone else who controls its use. As such, consolidation may be greater than NMFS data indicate. Nonetheless, we revised the text to make it clearer that financial institutions held, but did not control the use of, quota in IFQ programs. 7. Although our definition was technically correct, we revised footnote 2 by providing the definition of bycatch under the Magnuson-Stevens Act. 8. We revised the title of table 1 to make it clear that the table listed examples of objectives for the IFQ programs we reviewed. 9. We revised the text to remove some of the redundancy. 10.
We added a footnote to explain that the rules of the Alaskan halibut and sablefish program specify limits on quota holdings as quota share use caps. 11. We added a note to tables 3, 4, and 11 to indicate that 1995 was the earliest year for which NMFS data were available. In addition to the contact named above, Charles W. Bausell, Jr., Susan J. Malone, Mark R. Metcalfe, Rebecca A. Sandulli, and Tama R. Weinberg made key contributions to this report. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to GAO Mailing Lists” under the “Order GAO Products” heading.
To assist in deliberations on individual fishing quota (IFQ) programs, GAO determined (1) the extent of consolidation of quota holdings in three IFQ programs (Alaskan halibut and sablefish, wreckfish, and surfclam/ocean quahog); (2) the extent of foreign holdings of quota in these programs; and (3) the economic effect of the IFQ program on Alaskan halibut and sablefish processors. All three IFQ programs have experienced some consolidation of quota holdings. Further, consolidation of surfclam and ocean quahog quota is greater than National Marine Fisheries Service (NMFS) data indicate, because different quota holders of record are often part of a single corporation or family business that, in effect, controls many holdings. Program rules may affect the extent of consolidation in each IFQ program. While the Alaskan halibut and sablefish program set specific and measurable quota limits, the surfclam/ocean quahog and wreckfish programs did not, relying instead on federal antitrust laws to determine whether any quota holdings are excessive. Without defined limits on the amount of quota an individual or entity can hold, it is difficult to determine if any holdings would be viewed as excessive. GAO did not identify any instances where foreign entities currently hold or control quota. While NMFS requires transfer applicants in the halibut and sablefish program to declare themselves to be U.S. citizens or U.S. entities, there is no similar requirement for the surfclam/ocean quahog and wreckfish programs. As a result, in these programs, the potential exists for the transfer of quota to foreign entities. The economic effects of the halibut and sablefish IFQ program are not uniform. Some processors were adversely affected by the IFQ program, while others benefited; however, it is difficult to quantify the actual effects. The only estimate of the program's economic effect on processors is a 2002 study commissioned by the state of Alaska. 
This study estimated that halibut processors experienced a 56 percent loss in gross operating margins. While GAO could not validate or replicate the study's results, its analysis of public data and the study's methodology raised several concerns about the reliability of the study's estimates. Also, the study did not take into account other factors that may affect profits, such as the diversity and value of other species processed.
FDA is responsible for overseeing the safety and effectiveness of medical devices that are marketed in the United States, whether manufactured in domestic or foreign establishments. All establishments that manufacture medical devices for marketing in the United States are required to register annually with FDA. As part of its efforts to ensure the safety, effectiveness, and quality of medical devices, FDA is responsible for inspecting certain foreign and domestic establishments to ensure that, among other things, they meet manufacturing standards established in FDA’s quality system regulation. Within FDA, CDRH is responsible for assuring the safety and effectiveness of medical devices. Among other things, CDRH works with ORA, which conducts inspections of foreign establishments. FDA may conduct inspections before and after medical devices are approved or otherwise cleared to be marketed in the United States. Premarket inspections are conducted before FDA approves U.S. marketing of a new medical device that is not substantially equivalent to one that is already on the market. Premarket inspections primarily assess manufacturing facilities, methods, and controls and may verify pertinent records. Postmarket inspections are conducted after a medical device has been approved or otherwise cleared to be marketed in the United States and include several types of inspections: (1) Quality system inspections are conducted to assess compliance with applicable FDA regulations, including the quality system regulation to ensure good manufacturing practices and the regulation requiring reporting of adverse events. These inspections may be comprehensive or abbreviated; the two types differ in the scope of inspection activity. Comprehensive postmarket inspections assess multiple aspects of the manufacturer’s quality system, including management controls, design controls, corrective and preventive actions, and production and process controls.
Abbreviated postmarket inspections assess only some of these aspects, but always assess corrective and preventive actions. (2) For-cause and compliance follow-up inspections are initiated in response to specific information that raises questions or problems associated with a particular establishment. (3) Postmarket audit inspections are conducted within 8 to 12 months of a premarket application’s approval to examine any changes in the design, manufacturing process, or quality assurance systems. Requirements governing foreign and domestic inspections differ. Specifically, FDA is required to inspect domestic establishments that manufacture class II or III medical devices every 2 years. There is no comparable requirement to inspect foreign establishments. FDA does not have authority to require foreign establishments to allow the agency to inspect their facilities. However, if an FDA request to inspect is denied, FDA may prevent the importation of medical devices from that foreign establishment into the United States. In addition, FDA has the authority to conduct physical examinations of products offered for import and, if there is sufficient evidence of a violation, prevent their entry at the border. Unlike its oversight of food, which relies primarily on inspections at the border, FDA’s process for ensuring that medical devices are safe and effective and that manufacturers adhere to good manufacturing practices depends critically on physical inspection of manufacturing establishments. FDA determines which establishments to inspect using a risk-based strategy. High-priority inspections include premarket approval inspections for class III devices, for-cause inspections, inspections of establishments that have had a high frequency of device recalls, and inspections of other manufacturers and devices that FDA considers high risk. The establishment’s inspection history may also be considered.
A provision in FDAAA may assist FDA in making decisions about which establishments to inspect because this law authorizes the agency to accept voluntary submissions of audit reports addressing manufacturers’ conformance with internationally established standards for the purpose of setting risk-based inspectional priorities. FDA’s programs for foreign and domestic inspections by accredited third parties provide an alternative to the traditional FDA-conducted comprehensive postmarket quality system inspection for eligible manufacturers of class II and III medical devices. MDUFMA required FDA to accredit third persons—which are organizations—to conduct inspections of certain establishments. In describing this requirement, the House of Representatives Committee on Energy and Commerce noted that some manufacturers have faced an increase in the number of inspections required by foreign countries and that the number of inspections could be reduced if the manufacturers could contract with a third-party organization to conduct a single inspection that would satisfy the requirements of both FDA and foreign countries. Manufacturers that meet eligibility requirements may request a postmarket inspection by an FDA-accredited organization. The eligibility criteria for requesting an inspection of an establishment by an accredited organization include that the manufacturer markets a medical device in the United States and markets (or intends to market) a medical device in at least one other country and that the establishment to be inspected must not have received warnings for significant deviations from compliance requirements on its last inspection. MDUFMA also established minimum requirements for organizations to be accredited to conduct third-party inspections, including protections against financial conflicts of interest and assurances of the competence of the organization to conduct inspections. 
FDA developed a training program for inspectors from accredited organizations that involves both formal classroom training and completion of three joint training inspections with FDA. Each individual inspector from an accredited organization must complete all training requirements successfully before being cleared to conduct independent inspections. FDA relies on manufacturers to volunteer to host these joint inspections, which count as FDA postmarket quality system inspections. A manufacturer that is cleared to have an inspection by an accredited third party enters an agreement with the approved accredited organization and schedules an inspection. Once the accredited organization completes its inspection, it prepares a report and submits it to FDA, which makes the final assessment of compliance with applicable requirements. FDAAA added a requirement that accredited organizations notify FDA of any withdrawal, suspension, restriction, or expiration of certificate of conformance with quality systems standards (such as those established by the International Organization for Standardization) for establishments they inspected for FDA. In addition to the Accredited Persons Inspection Program, FDA has a second program for accredited third-party inspections of medical device establishments. On September 7, 2006, FDA and Health Canada announced the establishment of PMAP. This pilot program was designed to allow qualified third-party organizations to perform a single inspection that would meet the regulatory requirements of both the United States and Canada. The third-party organizations eligible to conduct inspections through PMAP are those that FDA accredited for its Accredited Persons Inspection Program (and that completed all required training for that program) and that are also authorized to conduct inspections of medical device establishments for Health Canada. 
To be eligible to have a third-party inspection through PMAP, manufacturers must meet all criteria established for the Accredited Persons Inspection Program. As with the Accredited Persons Inspection Program, manufacturers must apply to participate and be willing to pay an accredited organization to conduct the inspection. FDA relies on multiple databases to manage its program for inspecting medical device manufacturing establishments. FDA’s medical device registration and listing database contains information on domestic and foreign medical device establishments that have registered with FDA. Establishments that are involved in the manufacture of medical devices intended for commercial distribution in the United States are required to register annually with FDA. These establishments provide information to FDA, such as an establishment’s name and its address and the medical devices it manufactures. Prior to October 1, 2007, this information was maintained in DRLS. As of October 1, 2007, establishments are required to register electronically through FDA’s Unified Registration and Listing System and certain medical device establishments pay an annual establishment registration fee, which in fiscal year 2008 is $1,706. OASIS contains information on medical devices and other FDA-regulated products imported into the United States, including information on the establishment that manufactured the medical device. The information in OASIS is automatically generated from data managed by Customs and Border Protection (CBP). These data are originally entered by customs brokers based on the information available from the importer. CBP specifies an algorithm by which customs brokers generate a manufacturer identification number from information about an establishment’s name, address, and location. FACTS contains information on FDA’s inspections, including those of domestic and foreign medical device establishments.
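To illustrate why broker-generated identification numbers in OASIS can inflate establishment counts, the sketch below mimics, in a simplified and hypothetical form (this is not CBP's actual MID algorithm, and the function name and field choices are our own invention), an identifier built from free-text name and address fields. Two brokers keying the same plant slightly differently produce two distinct identifiers, and hence two records for one establishment:

```python
def mid_sketch(country, name, street, city):
    """Simplified, hypothetical manufacturer-ID construction.

    Illustration only -- not CBP's actual MID rule. The point is that
    the identifier is keyed to free-text fields entered by the broker.
    """
    words = name.upper().split()
    # First three letters of the first word, then of the second word (if any)
    name_part = words[0][:3] + (words[1][:3] if len(words) > 1 else "")
    # Up to four digits taken from the street address
    digits = "".join(ch for ch in street if ch.isdigit())[:4]
    return country.upper() + name_part + digits + city.upper()[:3]

# The same plant, keyed slightly differently by two brokers, yields two
# distinct identifiers -- and so appears as two "establishments":
id_a = mid_sketch("DE", "Acme Medical Devices GmbH", "12 Hauptstrasse", "Berlin")
id_b = mid_sketch("DE", "Acme-Medical Devices GmbH", "12 Hauptstrasse", "Berlin")
print(id_a)  # DEACMMED12BER
print(id_b)  # DEACMDEV12BER
```

Because the identifier is derived at entry time from whatever text the broker keys, a shared, commercially verified identifier of the kind envisioned for SEDS would remove this broker-dependent step entirely.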
FDA investigators enter information into FACTS following completion of an inspection. According to FDA data, there are more registered establishments in China and Germany reporting that they manufacture class II or III medical devices than in any other foreign countries. Canada and the United Kingdom also have a large number of registered establishments. FDA faces challenges in its program to inspect foreign establishments manufacturing medical devices. The databases that provide FDA with data about the number of foreign establishments manufacturing medical devices for the U.S. market have not provided it with an accurate count of foreign establishments for inspection. In addition, FDA conducted relatively few inspections of foreign establishments. Moreover, inspections of foreign medical device manufacturing establishments pose unique challenges to FDA—both in human resources and logistics. FDA’s databases on registration and imported medical devices have not provided an accurate count of establishments subject to inspection, although recent improvements to FDA’s medical device registration database may address some weaknesses. In January 2008, we testified that DRLS provided FDA with information about foreign medical device establishments and the products they manufacture for the U.S. market. According to DRLS, as of September 2007, 4,983 foreign establishments that reported manufacturing a class II or III medical device for the U.S. market had registered with FDA. However, these data contained inaccuracies because establishments may register with FDA but not actually manufacture a medical device or may manufacture a medical device that is not marketed in the United States. In addition, FDA did not routinely verify the data within this database. Recent changes to FDA’s medical device establishment registration process could improve the accuracy of its database. 
In fiscal year 2008, FDA implemented, in addition to its annual user fee, electronic registration and an active re-registration process for medical device establishments. According to FDA, about half of previously registered establishments had reregistered using the new system as of April 11, 2008. While FDA officials expect that additional establishments will reregister, they expect that the final result will be the elimination of establishments that do not manufacture medical devices for the U.S. market and thus a smaller, more accurate database of medical device establishments. FDA officials indicated that implementation of electronic registration and the annual user fee seemed to have improved the data so FDA can more accurately identify the type of establishment registered, the devices manufactured at an establishment, and whether or not an establishment should be registered. According to FDA officials, the revenue from device registration user fees is applied to the process for the review of device applications, including premarket inspections. FDA has also proposed, but not yet implemented, the Foreign Vendor Registration Verification Program, which could also help improve the accuracy of information FDA maintains on registered foreign establishments. Through this program, FDA plans to contract with an external organization to conduct on-site verification of the registration data and product listing information of foreign establishments shipping medical devices and other FDA-regulated products to the United States. FDA has solicited proposals for this contract, but it is still developing the specifics of the program. For example, as of April 2008, the agency had not yet established the criteria it would use to determine which establishments would be visited for verification purposes or determined how many establishments it would verify annually. FDA plans to award this contract in June 2008. 
Given the early stages of this process, it is too soon to determine whether this program will improve the accuracy of the data FDA maintains on foreign medical device establishments. FDA also obtains information on foreign establishments from OASIS, which tracks the importation of medical devices and other FDA-regulated products. While not intended to provide a count of establishments, OASIS does contain information about the medical devices actually being imported into the United States and the establishments manufacturing them. However, inaccuracies in OASIS prevent FDA from using it to develop a list of establishments subject to inspection. OASIS contains an inaccurate count of foreign establishments manufacturing medical devices imported into the United States as a result of unreliable identification numbers generated by customs brokers when the product is offered for entry. FDA officials told us that these errors result in the creation of multiple records for a single establishment, which results in inflated counts of establishments offering medical devices for entry into the U.S. market. According to OASIS, in fiscal year 2007, there were as many as 22,008 foreign establishments that manufactured class II medical devices for the U.S. market and 3,575 foreign establishments that manufactured class III medical devices for the U.S. market. FDA has supported a proposal with the potential to address weaknesses in OASIS, but FDA does not control the implementation of this proposed change. FDA is pursuing the creation of a governmentwide unique establishment identifier, as part of the Shared Establishment Data Service (SEDS), to address these inaccuracies. Rather than relying on the creation and entry of an identifier at the time of import, SEDS would provide a unique establishment identifier and a centralized service to provide commercially verified information about establishments. 
The standard identifier would be submitted as part of import entry data when required by FDA or other government agencies. SEDS could thus eliminate the problems that have resulted in multiple identifiers associated with an individual establishment. The implementation of SEDS is dependent on action from multiple federal agencies, including the integration of the concept into a CBP import and export system under development and scheduled for implementation in 2010. In addition, once implemented by CBP, participating federal agencies would be responsible for bearing the cost of integrating SEDS with their own operations and systems. FDA officials are not aware of a specific time line for the implementation of SEDS. Developing an implementation plan for SEDS was recommended by the Interagency Working Group on Import Safety. Although comparing information from its registration and import databases could help FDA determine the number of foreign establishments marketing medical devices in the United States, the databases do not exchange information to be compared electronically and any comparisons are done manually. FDA is in the process of implementing additional initiatives to improve the integration of its databases, and these changes could make it easier for the agency to establish an accurate count of foreign manufacturing establishments subject to inspection. The agency’s Mission Accomplishments and Regulatory Compliance Services (MARCS) is intended to help FDA electronically integrate data from multiple systems. It is specifically designed to give individual users more complete information about establishments. FDA officials estimated that MARCS, which is being implemented in stages, could be fully implemented by 2011 or 2012. However, FDA officials told us that implementation has been slow because the agency has been forced to shift resources away from MARCS and toward the maintenance of current systems that are still heavily used, such as FACTS and OASIS. 
Taken together, changes to FDA’s databases could provide the agency with more accurate information on the number of establishments subject to inspection. However, it is too early to tell whether this will improve FDA’s management of its inspection program. From fiscal year 2002 through fiscal year 2007, FDA inspected relatively few foreign medical device establishments and primarily inspected establishments located in the United States. During this period, FDA conducted an average of 247 foreign establishment inspections each year, compared to 1,494 inspections of domestic establishments. This average number of foreign inspections suggests that each year FDA inspects about 6 percent of registered foreign establishments that reported manufacturing class II or class III medical devices. FDA officials estimated that, at this pace, the agency inspects foreign class II manufacturers once every 27 years and foreign class III manufacturers once every 6 years. The inspected foreign establishments were located in 44 countries, and more than two-thirds were in 10 of those countries. Most of the countries with the highest number of inspections were also among those with the largest number of registered establishments that reported manufacturing class II or III medical devices. The lowest rate of inspections in these 10 countries was in China, where 64 inspections were conducted in this 6-year period and 568 establishments were registered as of May 6, 2008. (See table 1.) FDA’s inspections of foreign medical device establishments were primarily postmarket inspections. While premarket inspections were generally FDA’s highest priority, relatively few have had to be performed in any given year. Therefore, FDA focused its resources on postmarket inspections. From fiscal year 2002 through fiscal year 2007, 89 percent of the 1,481 foreign establishment inspections were for postmarket purposes. Inspections of foreign establishments pose unique challenges to FDA—both in human resources and logistics.
FDA does not have a dedicated cadre of investigators who conduct only foreign medical device establishment inspections; the staff who inspect foreign establishments also inspect domestic establishments. Among those qualified to inspect foreign establishments, FDA relies on staff to volunteer to conduct inspections. FDA officials told us that it has been difficult to recruit investigators to voluntarily travel to certain countries. However, they added that if the agency could not find an individual to volunteer for a foreign inspection trip, it would mandate the travel. Logistically, foreign medical device establishment inspections are difficult to extend even if problems are identified because the trips are scheduled in advance. Foreign medical device establishment inspections are also logistically challenging because investigators do not receive independent translation support from FDA or the State Department and may rely on English-speaking employees of the inspected establishment or the establishment’s U.S. agent to translate during an inspection. FDA recently announced proposals to address some of the challenges unique to conducting foreign inspections, but specific steps toward implementation and associated time frames are unclear. FDA noted in its report on revitalizing ORA that it was exploring the creation of a cadre of investigators who would be dedicated to conducting foreign inspections. However, the report did not provide any additional details or time frames about this proposal. In addition, FDA announced plans to establish a permanent presence overseas, although little information about these plans is available. FDA intends that its foreign offices will improve cooperation and information exchange with foreign regulatory bodies, improve procedures for expanded inspections, allow it to inspect facilities quickly in an emergency, and facilitate work with private and government agencies to assure standards for quality.
FDA’s proposed foreign offices are intended to expand the agency’s capacity for overseeing, among other things, medical devices, drugs, and food that may be imported into the United States. The extent to which the activities conducted by foreign offices are relevant to FDA’s foreign medical device inspection program is uncertain. Initially, FDA plans to establish a foreign office in China with three locations—Beijing, Shanghai, and Guangzhou—composed of a total of eight FDA employees and five Chinese nationals. The Beijing office, which the agency expects will be partially staffed by the end of 2008, will be responsible for coordination between FDA and Chinese regulatory agencies. FDA staff located in Shanghai and Guangzhou, who are to be hired in 2009, will be focused on conducting inspections and working with Chinese inspectors to provide training as necessary. FDA noted that the Chinese nationals will primarily provide support to FDA staff, including translation and interpretation. The agency is also considering setting up offices in other locations, such as India, the Middle East, Latin America, and Europe, but no dates have been specified. While the establishment of both a foreign inspection cadre and offices overseas has the potential to improve FDA’s oversight of foreign establishments, it is too early to tell whether these steps will be effective or will increase the number of foreign medical device establishment inspections. Few inspections of foreign medical device manufacturing establishments—a total of six—have been conducted through FDA’s two accredited third-party inspection programs, the Accredited Persons Inspection Program and PMAP. FDAAA specified several changes to the requirements for inspections by accredited third parties that could result in increased participation by manufacturers.
Few inspections have been conducted through FDA’s Accredited Persons Inspection Program since March 11, 2004—the date when FDA first cleared an accredited organization to conduct independent inspections. Through May 7, 2008, four inspections of foreign establishments had been conducted independently by accredited organizations. As of May 7, 2008, 16 third-party organizations were accredited, and individuals from 8 of these organizations had completed FDA’s training requirements and been cleared to conduct independent inspections. FDA and accredited organizations had conducted 44 joint training inspections. As we previously reported, fewer manufacturers volunteered to host training inspections than have been needed for all of the accredited organizations to complete their training, and scheduling these joint training inspections has been difficult. FDA officials told us that, when appropriate, staff are instructed to ask manufacturers to host a joint training inspection at the time they notify the manufacturers of a pending inspection. FDA schedules inspections a relatively short time prior to an actual inspection, and as we previously reported, some accredited organizations have not been able to participate because they had prior commitments. We previously reported that manufacturers’ decisions to request an inspection by an accredited organization might be influenced by both potential incentives and disincentives. According to FDA officials and representatives of affected entities, potential incentives to participation include the opportunity to reduce the number of inspections conducted to meet FDA and other countries’ requirements. For example, one inspection conducted by an accredited organization was a single inspection designed to meet the requirements of FDA, the European Union, and Canada. 
Another potential incentive mentioned by FDA officials and representatives of affected entities is the opportunity to control the scheduling of the inspection by working with the accredited organization. FDA officials and representatives of affected entities also mentioned potential disincentives to having an inspection by an accredited organization. These potential disincentives include bearing the cost for the inspection, doubts about whether accredited organizations can cover multiple requirements in a single inspection, and uncertainty about the potential consequences of an inspection that otherwise may not occur in the near future—consequences that could involve regulatory action. Changes specified by FDAAA have the potential to eliminate certain eligibility-related obstacles to manufacturers’ participation in FDA’s programs for inspections by accredited third parties. For example, a requirement that foreign establishments be periodically inspected by FDA before being eligible for third-party inspections was eliminated. Representatives of the two organizations that represent medical device manufacturers with whom we spoke about FDAAA told us that the changes in eligibility requirements could eliminate certain obstacles and therefore potentially increase manufacturers’ participation. These representatives also noted that key incentives and disincentives to manufacturers’ participation remain. FDA officials told us that they were revising their guidance to industry in light of FDAAA and expected to issue the revised guidance during fiscal year 2008. It is too soon to tell what impact these changes will have on manufacturers’ participation. FDA officials have acknowledged that manufacturers’ participation in the Accredited Persons Inspection Program has been limited. 
In December 2007, FDA established a working group to assess the successes and failures of this program and to identify ways to increase participation. Representatives of two organizations that represent medical device manufacturers told us that they believe manufacturers remain interested in the Accredited Persons Inspection Program. The representative of one large, global manufacturer of medical devices told us that it was in the process of arranging to have 20 of its domestic and foreign device manufacturing establishments inspected by accredited third parties. As of May 7, 2008, two inspections of foreign establishments had been conducted through PMAP, FDA’s second program for inspections by accredited third parties. Although it is too soon to tell what the benefits of PMAP will be, the program is more limited than the Accredited Persons Inspection Program and may pose additional disincentives to participation by both manufacturers and accredited organizations. Specifically, inspections through PMAP would be designed to meet the requirements of the United States and Canada, whereas inspections conducted through the Accredited Persons Inspection Program could be designed to meet the requirements of other countries. In addition, two of the five representatives of affected entities whom we spoke to for our January 2008 statement noted that in contrast to inspections conducted through the Accredited Persons Inspection Program, inspections conducted through PMAP could undergo additional review by Health Canada. Health Canada will review inspection reports submitted through this pilot program to ensure the inspections meet its standards. This extra review poses a greater risk of unexpected outcomes for the manufacturer and the accredited organization, which could be a disincentive to participation in PMAP that is not present with the Accredited Persons Inspection Program. 
Americans depend on FDA to ensure the safety and effectiveness of medical devices manufactured throughout the world. A variety of medical devices are manufactured in other countries, including high-risk devices designed to be implanted or used in invasive procedures. However, FDA faces challenges in inspecting foreign establishments. Weaknesses in its database prevent it from accurately identifying foreign establishments manufacturing medical devices for the United States and prioritizing those establishments for inspection. In addition, staffing and logistical difficulties associated with foreign inspections complicate FDA’s ability to conduct such inspections. The agency has recently taken some positive steps to improve its foreign inspection program, such as initiating changes to improve the accuracy of the data it uses to manage this program and announcing plans to increase its presence overseas. However, it is too early to tell whether these steps will ultimately enhance the agency’s ability to select establishments to inspect and increase the number of foreign establishments inspected. To date, FDA’s programs for inspections by accredited third parties have not assisted FDA in meeting its regulatory responsibilities nor have these programs provided a rapid or substantial increase in the number of inspections performed by these organizations, as originally intended. Recent statutory changes to the requirements for inspections by accredited third parties may encourage greater participation in these programs. However, the lack of meaningful progress in conducting inspections to this point raises questions about the practicality and effectiveness of these programs to help FDA conduct additional foreign inspections. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other Members of the subcommittee may have at this time. 
For further information about this statement, please contact Marcia Crosse at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Geraldine Redican-Bigott, Assistant Director; Kristen Joan Anderson; Katherine Clark; William Hadley; Cathleen Hamann; Julian Klazkin; and Lisa Motley made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As part of its oversight of the safety and effectiveness of medical devices marketed in the United States, the Food and Drug Administration (FDA) inspects certain foreign and domestic establishments where these devices are manufactured. To help FDA address shortcomings in its inspection program, the Medical Device User Fee and Modernization Act of 2002 required FDA to accredit third parties to inspect certain establishments. In response, FDA has implemented two voluntary programs for that purpose. This statement is based primarily on GAO testimonies from January 2008 (GAO-08-428T) and April 2008 (GAO-08-701T). In this statement, GAO assesses (1) FDA's program for inspecting foreign establishments that manufacture medical devices for the U.S. market and (2) FDA's programs for third-party inspections of those establishments. For GAO's January and April 2008 testimonies, GAO interviewed FDA officials, analyzed information from FDA, and updated GAO's previous work on FDA's programs for inspections by accredited third parties. GAO updated selected information for this statement in early May 2008. FDA faces challenges managing its program to inspect foreign establishments that manufacture medical devices. GAO testified in January 2008 that two databases that provide FDA with information about foreign medical device establishments and the products they manufacture for the U.S. market contained inaccurate information about establishments subject to FDA inspection. In addition, comparisons between these databases--which could help produce a more accurate count--had to be done manually. Recent changes FDA made to its registration database could improve the accuracy of the count of establishments, but it is too soon to tell whether these and other changes will improve FDA's management of its foreign inspection program. 
Another challenge is that FDA conducts relatively few inspections of foreign establishments; officials estimated that the agency inspects foreign manufacturers of high-risk devices (such as pacemakers) every 6 years and foreign manufacturers of medium-risk devices (such as hearing aids) every 27 years. Finally, inspections of foreign manufacturers pose unique challenges to FDA, such as difficulties in recruiting investigators to travel to certain countries and in extending trips if inspections uncover problems. FDA is pursuing initiatives that could address some of these unique challenges, but it is unclear whether FDA's proposals will increase the frequency with which the agency inspects foreign establishments. Few inspections of foreign medical device manufacturing establishments have been conducted through FDA's two accredited third-party inspection programs--the Accredited Persons Inspection Program and the Pilot Multi-purpose Audit Program (PMAP). Under FDA's Accredited Persons Inspection Program, from March 11, 2004--the date when FDA first cleared an accredited organization to conduct independent inspections--through May 7, 2008, four inspections of foreign establishments had been conducted by accredited organizations. An incentive to participation in the program is the opportunity to reduce the number of inspections conducted to meet FDA's and other countries' requirements. Disincentives include bearing the cost for the inspection, particularly when the consequences of an inspection that otherwise might not occur in the near future could involve regulatory action. The Food and Drug Administration Amendments Act of 2007 made several changes to program eligibility requirements that could result in increased participation by manufacturers. 
PMAP was established on September 7, 2006, as a partnership between FDA and Canada's medical device regulatory agency and allows accredited organizations to conduct a single inspection to meet the regulatory requirements of both countries. As of May 7, 2008, two inspections of foreign establishments had been conducted by accredited organizations through this program. The small number of inspections completed to date by accredited third-party organizations raises questions about the practicality and effectiveness of these programs to quickly help FDA increase the number of foreign establishments inspected.
Contracts and grants are two instruments the government may use to achieve its missions, with their selection principally governed by the nature of the activity. Contracts are procurement instruments and, as such, are governed by the Federal Acquisition Regulation (FAR) and agency procurement regulations. Contracts are to be used when the principal purpose of the project is the acquisition of goods and services for the direct benefit of the federal government. Grants, on the other hand, are to be used when the principal purpose of a project is to accomplish a public purpose of support or stimulation authorized by federal statute. Contract administration, as defined by the Office of Federal Procurement Policy, consists of those activities performed after a contract has been awarded to determine how well the government and the contractor performed to meet the requirements of the contract. Contract and grant administration include a number of similar functions, including monitoring contractor or grantee performance and reviewing contractor or grantee financial information. Contract administration functions are carried out under the direction of contracting officers, who are responsible for ensuring performance of all necessary actions for effective contracting, ensuring compliance with the terms of the contract, and safeguarding the government’s interests. Contracting officers have authority to enter into, administer, or terminate contracts. A contracting officer may designate another individual to provide oversight on his or her behalf. For the purposes of this report, we use “COR” to refer to such individuals, although in some cases agencies or offices use other terms. The COR functions as the “eyes and ears” of the contracting officer, monitoring technical performance and reporting any potential or actual problems to the contracting officer. 
Functions of the COR typically include informing the contracting officer of any technical or contractual difficulties encountered during performance, informing the contractor of failures to comply with technical requirements of the contract, performing inspection and acceptance of all final work required under the contract, and maintaining contract files. Similarly, grant administration functions are carried out under the direction of grant or agreement officers, who may be assisted by grants officers’ representatives or agreement officers’ technical representatives. Our prior work has identified risks related to agencies’ decisions to use contractors to support certain types of agency missions, including potential conflicts of interest. An organizational conflict of interest can occur when a contractor has present or currently planned interests (including business or relationships with other contractors) that either directly or indirectly relate to the work to be performed under a contract and (1) may diminish its capacity to give impartial, technically sound, objective assistance or advice or (2) may result in it having an unfair competitive advantage. For this report, a personal conflict of interest is one that can occur when an individual who is employed by a contractor, or who is contracted for directly by the government as a personal services contractor, is in a position to materially influence an agency’s recommendations or decisions and, because of his or her personal activities, relationships, or financial interests, may lack or appear to lack objectivity or may appear to be unduly influenced by personal financial interest. In addition, other risks to the agencies may occur when using contractors for services that closely support inherently governmental functions. 
Inherently governmental functions are so intimately related to the public interest as to require performance by government employees, and include functions that require discretion in applying government authority or value judgments in making decisions for the government. FAR section 7.503(c) provides 20 examples of functions considered to be inherently governmental, including determining agency policy or federal program budget request priorities; directing and controlling federal employees; and awarding, administering, or terminating federal contracts. Similarly, FAR section 7.503(d) provides examples of functions that while not inherently governmental, may approach the category because of the nature of the function, the manner in which a contractor performs the contract, or the manner in which the government administers performance under a contract. These functions closely support the performance of inherently governmental functions and generally include professional and management support activities, such as those that involve or relate to supporting budget preparation, evaluation of another contractor’s performance, acquisition planning, or technical evaluation of contract proposals. When contractors perform these functions, there is a risk of inappropriately influencing the government’s control over and accountability for decisions that may be based, in part, on contractor work. DOD, State, and USAID use both nonpersonal and personal services contractors to perform contract or grant administration functions. Nonpersonal services contracts are distinguished from personal services contracts in part by the nature of the government’s relationship with the contractor. Under a nonpersonal services contract, the personnel rendering the services are not subject either by the contract’s terms or by the manner of its administration to the relatively continuous supervision and control of government personnel. 
On the other hand, personal services contracts are characterized by an employer-employee relationship created between the government and the contractor. Personal services contracts involve close and continual supervision and control of contractor personnel by government employees rather than general oversight of contractor operations. In general, personal services contractors perform services that are comparable in scope and nature to those of civil service employees and often appear, in effect, to be government employees. Additionally, the risks of contracting for personal services are not always the same as the risks of contracting for nonpersonal services. For example, personal services contractors are not explicitly prohibited in the FAR from performing inherently governmental functions. Also, the level and type of oversight and management may differ between personal and nonpersonal services contracts. The government is normally required to obtain its employees by direct hire under competitive appointment or other procedures required by the civil service laws, and contracting for personal services is prohibited unless authorized by statute. DOD, State, and USAID are each authorized to hire personal services contractors under certain circumstances. For example, USAID and selected bureaus at State are permitted to hire personal services contractors to perform services outside of the United States. Similarly, DOD has specific authority to enter into personal services contracts to support operations outside of the United States in certain circumstances. Personal services contractors may be U.S. citizens, local nationals, or third-country nationals. State and USAID regulations state that personal services contractors generally cannot supervise government employees, serve as contracting officers, or otherwise obligate government funds. 
DOD regulations do not specifically address whether personal services contractors can supervise government employees or otherwise obligate government funds. DOD, State, and USAID relied on contractors to perform a wide range of administration functions for contracts and grants with performance in Iraq and Afghanistan, but did not know the full extent of their use of contractors to perform such functions. Our review found 223 contracts and task orders active during fiscal year 2008 or the first half of fiscal year 2009 that included the performance of administration functions for other contracts or grants in Iraq and Afghanistan. DOD, State, and USAID officials told us that there were no agencywide data sources that provided detailed information about the functions performed by contractors and that individual contracting offices would have to manually review their contracts to identify contracts within our scope. Of the 186 contracts or task orders reported to us by individual contracting offices, we determined that 161 were within our scope. Through our review of FPDS-NG data and agency data compiled for another purpose, we found an additional 62 contracts or task orders within our scope. Given limitations we have previously reported with FPDS-NG and agency contracting data, the 223 contracts and task orders we identified, including 119 contracts and task orders for personal services, represent the minimum number of contracts and task orders within our scope (see app. II for more information on the contracts and task orders we identified). According to FPDS-NG and agency data, the agencies had obligated approximately $990 million as of March 31, 2009, on the 223 contracts and task orders we identified, although we were unable to determine how much of this amount was specifically obligated for the performance of administration functions for contracts or grants with performance in Iraq or Afghanistan. 
For example, some of the contracts or task orders included the performance of functions besides contract or grant administration or the performance of administration functions for contracts or grants with performance outside of Iraq and Afghanistan. FPDS-NG and agency obligation data were not detailed enough to allow us to isolate the amount obligated for other functions or locations. The approximately $990 million obligated by the agencies on the contracts and task orders we identified also includes more than $116 million reported by USAID for grants that were awarded by USAID contractors in Iraq on behalf of USAID, as authorized in the terms of their contracts. USAID contractors also awarded grants on behalf of USAID in Afghanistan, but USAID officials told us that the Afghanistan mission does not track grants awarded by contractors. As illustrated in tables 1 through 4, contractors in our case studies performed a wide variety of services in support of DOD, State, and USAID’s administration and oversight of other contracts and grants collectively worth billions of dollars. Contract and grant administration functions performed by contractors included on-site monitoring of contractor activities, contracting office support, program office support on contract-related matters, and awarding or administering grants. For instance, Air Force Center for Engineering and the Environment officials told us that they used contractors to perform quality assurance functions for all of the center’s construction projects in Iraq and Afghanistan. Obligations for construction on these projects totaled over $790 million for approximately 200 task orders during fiscal year 2008 and the first half of fiscal year 2009. In another example, State had obligated just over $700,000 as of March 2009 for a Bureau of International Narcotics and Law Enforcement personal services contractor to provide oversight, such as performing inspections and accepting contractor work on behalf of the U.S. 
government, for two task orders that included support for an Iraq criminal justice development program and had combined obligations of $343 million as of March 2009. We found that the way DOD acquired personal services contractors and the functions performed by these contractors differed when compared to those of State and USAID. At DOD, we identified two contracts for personal services awarded by the U.S. Army Corps of Engineers to firms that would in turn hire individuals, including local nationals, to provide construction quality assurance. In these cases, contract personnel (up to an estimated 174 individuals, in one case) work under the direct supervision and control of agency officials while administrative aspects of their employment are managed by the contracted firm. In contrast, State and USAID awarded personal services contracts directly to individuals for a range of functions, including on-site monitoring of contractor activities, supporting contracting and program offices on contract-related matters, and awarding grants and monitoring grantee performance. The decisions to use contractors to support contract or grant administration functions are largely made by individual contracting or program offices within the agencies on a case-by-case basis. The offices cited the lack of a sufficient number of government staff, the lack of in-house expertise, or frequent rotations among government personnel as key factors contributing to the decision to use contractors to support their efforts. These individual decisions, however, are generally not informed by more strategic, agencywide workforce plans or guidance on the extent to which contractors should be used to support these functions. 
Individual contracting or program offices generally decided to use contractors to perform administration functions for other contracts or grants to address workforce challenges, including a shortage of government personnel and a lack of expertise among government personnel to perform specific functions, as well as a lack of continuity because of frequent rotations. While workforce-related challenges were cited most frequently as a reason for needing to acquire contractor support, contracting and program officials also noted that using contractors in contingency environments can be beneficial to meet unforeseen or changing needs, address safety concerns regarding the use of U.S. personnel in high-threat areas, and provide a means to overcome language barriers or help develop the local economy (see table 5). The examples in table 6 provide illustrations from our case studies of the reasons cited by the agencies for their reliance on contractors to perform contract or grant administration functions for other contracts or grants in Iraq or Afghanistan. Individual offices’ decisions to use contractors are generally not informed by more strategic, agencywide workforce plans or guidance on the extent to which contractors should be used to support contract or grant administration functions. Agencies’ current strategic human capital plans and guidance generally do not address the extent to which it is appropriate to use contractors, either in general or more specifically to perform contract or grant administration functions. Some DOD, State, and USAID officials noted that they would prefer to use government employees to perform some of the functions currently being performed by contractors. Our work indicated, however, that agencies intend to continue to rely on contractors to perform these functions in Iraq or Afghanistan on a longer-term basis. 
For example, in 15 of the 32 case studies we conducted, contracts or task orders were awarded in 2007 or earlier, and we found cases in which the contract or task order had recently been or was in the process of being recompeted. Our prior work has noted that to mitigate risks associated with using contractors, agencies have to understand when, where, and how contractors should be used given the risk of diminished institutional capacity, potentially greater costs, and mission risks. We have also reported that decisions regarding the use of contractors should be based on strategic planning regarding what types of work are best done by the agency or by contractors. DOD and the Office of Management and Budget have recently issued guidance that further emphasized the importance of this type of planning. Specifically, after recognizing in its 2006 update to the Quadrennial Defense Review that contractors are part of the total force, DOD issued guidance in May 2009 that encouraged DOD components to consider when to use contractors as part of a total force approach to workforce management and strategic human capital planning. Similarly, the Office of Management and Budget’s July 2009 Managing the Multi-Sector Workforce guidance required civilian agencies to take immediate steps to adopt a framework for planning for and managing the multisector workforce of federal employees and contractors, including principles for considering the appropriate mix of contractors and government employees. We reported in 2009 that while DOD had made good progress in developing a civilian workforce plan and had recognized contractors as a part of its total workforce, the department had yet to develop a strategy for determining the appropriate mix of contractor and government personnel. 
DOD Instruction 1100.22, which provides guidance for determining the appropriate military, civilian, and contractor mix needed to accomplish the department’s mission, focuses on individual decisions of whether to use contractors to provide specific capabilities and not the overarching question of what the appropriate role of contractors should be. For example, the guidance distinguishes between contract administration functions that contractors can and cannot perform based on which functions are considered to be inherently governmental and states that contractors may be used in certain circumstances to perform contract quality control and performance evaluation or inspection functions, but does not address the extent to which contractors should be used to perform these functions. We recommended in March 2009 that DOD revise its criteria and guidance to clarify under what circumstances and the extent to which it is appropriate to use contractors to perform acquisition-related functions. DOD concurred with our recommendation and, according to DOD officials, is in the process of finalizing revisions to its guidance as of March 2010. State’s departmentwide workforce plan also generally does not address the extent to which contractors should be used to perform specific functions. As part of State’s fiscal year 2011 budget process, State has asked its bureaus to focus on transitioning some activities performed by contractors to performance by government employees. State officials told us, however, that departmentwide workforce planning efforts generally have not addressed the extent to which the department should use contractors because those decisions are left up to individual bureaus. 
Officials at State’s Bureaus of Acquisition Management, Diplomatic Security, and International Narcotics and Law Enforcement Affairs told us that they do not have workforce plans that include consideration of the extent to which, or the circumstances under which, contractors should be used to perform contract or grant administration functions. These officials indicated that decisions about the use of contractors are generally made on a case-by-case basis and often reflect the necessity of using contractors because of a shortage of direct hire employees. USAID has taken steps to determine the extent to which personal services contractors should be used, but has not addressed the extent to which nonpersonal services contractors outside the United States should be used, either in general or to perform specific functions. USAID officials told us that personal services contractors are used across the agency’s overseas missions and that they consider these contractors to be part of their workforce. As such, personal services contractors have been included in the agency’s workforce planning model. For example, the model for USAID headquarters includes an estimate of the extent to which various functions should be performed by personal services contractors. Officials told us that future iterations of the model will address the extent to which personal services contractors should be used to staff contracting offices in Iraq and Afghanistan. USAID’s current workforce planning efforts, including its human capital and workforce plans, however, do not address the extent to which nonpersonal services contractors working outside of the United States should be used, as officials do not consider those contractors to be part of USAID’s workforce. 
DOD, State, and USAID will be challenged to fully address the appropriate role for contractors performing specific functions during workforce planning efforts because of the lack of complete and reliable data on the functions performed by contractors. We recently reported that all three agencies continue to struggle in implementing improvements to track data on contracts and contractor personnel in Iraq and Afghanistan. Our past work has shown that such data are important to enable agencies to conduct adequate workforce planning. DOD, State, and USAID took a number of actions to mitigate conflict of interest and oversight risks associated with contractors supporting contract and grant administration functions, but did not always fully address these risks. For example, the agencies generally complied with requirements related to organizational conflicts of interest. USAID, however, did not always include a contract clause, generally required by USAID policy, that is intended to protect the government’s interest regarding potential organizational conflicts of interest. Additionally, some State officials were uncertain as to whether or how federal ethics laws regarding personal conflicts of interest applied to personal services contractors. In almost all cases, the agencies had designated personnel to provide contract oversight, though they did not ensure enhanced oversight for contractors that closely supported inherently governmental functions in accordance with federal requirements. FAR subpart 9.5 requires contracting officers to identify and evaluate potential organizational conflicts of interest prior to contract award and take steps to address potential conflicts that they determine to be significant. 
If the contract may involve a significant potential conflict, before issuing a solicitation, the contracting officer must submit for approval to the head of the contracting activity a written analysis with courses of action for avoiding, mitigating, or neutralizing the conflict. Though not mandatory, the contracting officer may use solicitation provisions or a contract clause to restrict the contractor’s eligibility for other contract awards or require agreements about the use of other contractors’ proprietary information obtained during the course of contract performance. In six of the contracts we reviewed, agencies addressed potential organizational conflicts of interest by incorporating a clause into the contract that precluded the contractor from bidding on other related work that may result in a conflict of interest. For example, Air Force Center for Engineering and the Environment officials identified the potential for an organizational conflict of interest in a contract used in part to support the center’s CORs in Iraq and therefore restricted the contractor from participating in any of the center’s other contracts for the life of the contract plus 1 year. Similarly, a State contract to support the department’s management and oversight of security operations overseas, including in Iraq and Afghanistan, had a clause that precluded the contractor and its subcontractors from participating in directly related department contracts for 3 years after the completion of the contract. These six case studies also included a contract clause addressing the protection or nondisclosure of other contractors’ proprietary data. Agencies have broad discretion in how to address potential organizational conflicts of interest. Solicitation and contract clauses are one of many options contracting officers have, though they are not always used. 
For example, agency documents in two cases suggested that there had been consideration of the possible need to restrict contractors’ activities because of potential conflicts of interest. Clauses related to potential conflicts of interest, however, were not included in the contracts at the time of award. In one case, the Commander of the Joint Contracting Command – Iraq/Afghanistan’s (JCC-I/A) letter of justification for contract and property specialist support stated that the award of the contract may preclude the contractor from being eligible for or working on other contracts. The contract itself, though, did not contain any related organizational conflict of interest clauses. Additionally, the contract file of a Defense Energy Support Center contract to support the oversight of fuel delivery in Afghanistan included e-mails indicating that the oversight support contractor could not provide services to the companies providing fuel delivery services for the center. Contracting officials told us that the related discussions had been informal and therefore had not been documented. In addition to the FAR, USAID also has specific agency policy that addresses organizational conflicts of interest for certain contractors, including contractors that evaluate USAID program activities or other contractors. The policy requires that an organizational conflict of interest clause be included in the evaluation contract that precludes the contractor from providing certain related services within 18 months of USAID receiving an evaluation report from the contractor unless a waiver is authorized; restricts the use of information obtained from other parties during the course of the contract; and requires nondisclosure agreements with other contractors to protect proprietary data. This clause was not, however, incorporated in any of the three USAID contracts we reviewed that included the evaluation of program activities or contractors. 
In one of these contracts, the statement of work notes that the contractor may be precluded from performing work under the current task order or from award of other contracts if USAID determines the contractor has a conflict of interest, and that the contractor shall protect proprietary information. This statement is not, however, specific as to when those circumstances occur, nor does it specifically restrict the contractor’s use of information obtained from other parties during the course of the contract in future proposals. USAID officials told us that when this contract is recompeted in 2010, the clause required by USAID policy will be included. In another of these three cases, USAID’s response to a prospective bidder’s questions indicated that prior to the award of the contract, a determination was made that the contractor would be restricted from bidding for the award of other related contracts, but the restrictions were not addressed in the solicitation or contract despite the requirement to do so in FAR section 9.507-2. One case study illustrated the challenges of identifying potential organizational conflicts of interest prior to award and the potential effect if one is identified after award. In this case, JCC-I/A awarded a $1 million contract to support the Armed Contractor Oversight Directorate in Afghanistan. The contractor, which itself was a private security contractor, was assigned a number of responsibilities related to oversight of private security contractors, including monitoring private security contractor activity, documenting and analyzing security incidents, and assisting the government in conducting incident inspections. The contract files we reviewed did not include documentation that the contracting officer assessed the potential for a conflict of interest, though as previously noted, a written analysis would not be necessary unless the contracting officer decided that there was a significant potential conflict of interest. 
In addition, no clauses were included in the solicitation or contract that precluded the contractor from bidding on other contracts. After the support contract had been awarded and performance had begun, the support contractor competed for and won a separate contract to provide armed guard services in Afghanistan. Subsequent to the award of the second contract, however, a JCC-I/A attorney became aware of the two contracts and, according to JCC-I/A officials, alerted a JCC-I/A contracting official. JCC-I/A counsel concluded that the contractor’s objectivity in supporting the Armed Contractor Oversight Directorate could potentially be impaired by its performance of armed guard services. Ultimately, JCC-I/A counsel determined that no mitigation plan would adequately address the conflict. Therefore, JCC-I/A terminated the ongoing Armed Contractor Oversight Directorate support contract for the convenience of the government and awarded another support contract to a different contractor. Agencies are not required to have a formal process for monitoring potential organizational conflicts of interest after award, but in some cases, officials told us that they did so informally. For example, for a State task order to provide contract administration support, officials noted that it was possible to mitigate potential conflicts of interest because the small size of the office facilitates direct government oversight of contractor activities, and contractors that perform contract administration functions for State do not often perform other services that could be in conflict with their current responsibilities. In several other case studies we conducted, agency officials told us that contractors have responsibility to bring organizational conflicts of interest to the attention of contracting officials if they occur. 
Under USAID acquisition regulations, contracts that include restrictions on a contractor’s eligibility for future work should also include a standard clause stating that the contractor should disclose any postaward conflicts of interest it discovers. Of the three contracts that, as previously noted, should have included a clause restricting the contractor’s eligibility for future work under USAID policy because they included evaluation services, one contained the clause requiring disclosure of postaward conflicts of interest, while the other two did not. USAID officials in these cases, however, told us that they take steps to mitigate potential organizational conflicts of interest during the life of the contract. For example, for a USAID contract for monitoring and evaluation services in Iraq, the personal services contractor responsible for contract oversight told us that he addressed potential conflicts of interest by limiting contact between the contractors responsible for executing mission programs and the contractor evaluating their services. Although DOD and State regulations do not require contract clauses related to the disclosure of conflicts of interest by contractors, changes to governmentwide requirements on organizational conflicts of interest, including the establishment of standard contract clauses, are being considered. Most requirements governing personal conflicts of interest that apply to federal employees are generally not applicable to nonpersonal services contractors and their employees. Since December 2007, the FAR has required certain contractors to have a written code of business ethics and conduct, although this requirement did not apply in most of the nonpersonal services case studies we conducted. We have previously reported that this requirement will not ensure that the advice and assistance received from contractor employees is not tainted by personal conflicts of interest. 
We recommended in March 2008 that DOD develop and implement policy that requires personal conflict of interest safeguards for certain defense contractor employees that are similar to those required of DOD’s federal employees. In November 2009, DOD issued a memorandum providing additional information on risks related to personal conflicts of interest and how those risks should be addressed under current federal regulations, but DOD’s response to our recommendation is pending resolution of a proposed amendment to the FAR to address personal conflicts of interest by contractor employees performing acquisition functions. Several contracting officials told us that contractors have responsibility to bring personal conflicts of interest to the agency’s attention. In our case studies, we found that contractors managed personal conflicts of interest in a variety of ways. For example, in two USAID case studies that included the award of grants, the contractor included in its grant management plan criteria for identifying contractor personnel with conflicts of interest and the process for mitigating those conflicts. Representatives from a DOD contractor providing construction quality assurance services in Iraq and Afghanistan told us that they screen and interview all employees they hire to identify personal conflicts of interest and require employees to sign a form stating that they have no such conflicts. In this and three other DOD case studies we conducted, agency contracting and program officials stated that they attempt to identify and mitigate potential personal conflicts of interest by reviewing the résumés of proposed contractor employees. The agencies vary in how they address personal conflicts of interest among personal services contractors. DOD officials told us that the department does not have specific policies related to conflicts of interest among personal services contractors. 
USAID policy states that personal services contractors are covered by all federal ethics laws that apply to direct hire personnel, including requirements to file financial disclosure forms. USAID policy requires the contracting officer or executive officer who awards a personal services contract to make a determination at the time of contract award about the specific financial disclosure filing requirements that will apply to the personal services contractor and to include that determination as part of the contract. USAID officials complied with this requirement in each of the six USAID personal services case studies we conducted. Unlike USAID, neither State nor its bureaus that hired personal services contractors within our scope have guidance that specifically addresses the applicability of federal ethics laws to personal services contractors. According to the senior ethics counsel at State, understanding which financial disclosure requirements apply to personal services contractors is complicated and depends on the personal services contractor’s contract and the bureau’s statutory basis for hiring that personal services contractor. Our work at State identified some confusion among contracting personnel and supervisors of personal services contractors as to whether federal ethics laws, including those related to financial disclosure requirements, were applicable to personal services contractors. In the five personal services case studies we conducted at State’s Bureaus of Diplomatic Security and International Narcotics and Law Enforcement Affairs, contracting personnel and supervisors of personal services contractors either were uncertain of how requirements to file financial disclosure forms applied to personal services contractors, told us that the requirements did not apply, or told us that the requirements had only recently been applied at all or consistently to personal services contractors. 
The five personal services contractors in these case studies told us, however, that they were generally required to complete financial disclosure forms or that they had completed financial disclosure forms in the past year. In most case studies we conducted, the agencies had designated oversight personnel to monitor contractors performing administration functions for other contracts or grants in Iraq or Afghanistan. A primary characteristic of a personal services contract is the relatively continuous supervision and control of the personal services contractor by a government employee, and in the case studies we conducted, we generally found personal services contractors had designated government supervisors who worked within the same program. In 18 of 19 nonpersonal services case studies we conducted, agencies had identified individuals to provide contract oversight, though the extent of that oversight varied in part based on the functions performed by the contractor and whether the contractor performed at remote locations. For example, State officials told us that for a contract to provide program and acquisition support for the department’s oversight of overseas security operations, including those in Iraq and Afghanistan, government officials supported by the contractor are collocated with contractor employees, and government branch chiefs routinely meet with the contractor’s program manager to discuss contractor employee performance on assigned work. In contrast, CORs in some quality assurance case studies conducted oversight primarily of remote locations. For example, U.S. Army Corps of Engineers officials told us that CORs for a construction quality assurance contract conduct oversight primarily by reviewing contractor reports and photos of work sites and conducting meetings with quality assurance and construction contractor personnel. 
In several cases, agency officials indicated that they did not maintain or could not locate documentation of oversight activities. Agencies faced challenges providing sufficient oversight of contractors performing administration functions for other contracts in Iraq or Afghanistan in several case studies we conducted. For example, agency officials stated that when they cannot visit contractor work sites for security reasons—as with some sites for the Defense Energy Support Center’s fuel delivery inspection contract in Afghanistan and USAID’s Monitoring and Evaluation Performance Program contract in Iraq—their oversight is entirely remote. USAID officials told us that other U.S. government officials, such as representatives from provincial reconstruction teams, may be able to provide some insight into contractor activities during times when those officials are at contractor work sites. Defense Energy Support Center officials told us that the inability of government personnel to visit contractor work sites can make it difficult for them to verify the quality of work of the contractor that is supporting the oversight of work performed by other contractors. In addition, the COR for a U.S. Army Corps of Engineers quality assurance contract told us that some contractor personnel did not provide high-quality reports and that construction oversight personnel who reviewed the reports on a daily basis sometimes lacked the quality assurance expertise to direct the contractor’s quality assurance personnel. The COR told us that training efforts were under way to address this issue. Further, according to State officials, they had difficulty filling the government deputy program manager position for State’s aviation quality assurance contract, which affected the department’s plans to provide continuous oversight of the contractor’s technical operations in Iraq since the deputy program manager was intended to provide in-country oversight when the program manager was not in Iraq. 
In another State case, contracting officials told us that oversight was conducted entirely by the COR and program office staff but were unaware that no COR was currently designated for the contract. The officials later told us that staff turnover in Iraq had resulted in the lack of a COR, and they were taking steps to try to get a new COR appointed. In the 19 nonpersonal services case studies we conducted, we found that the contract or task order statements of work provided for the contractor to perform functions that closely support inherently governmental functions. For contractors administering other contracts, this includes evaluating another contractor’s performance, providing inspection services, and performing tasks that might allow access to confidential business or other sensitive information, among other functions; for contractors administering grants, awarding or recommending the award of grants closely supports the performance of inherently governmental functions. We have previously reported that when contractors provide services that closely support inherently governmental functions, there is the potential for loss of government control and accountability for mission-related policy and program decisions, and that risk increases the closer the services come to supporting inherently governmental functions. This loss of government control may result in decisions that are not in the best interest of the government and may increase vulnerability to waste, fraud, and abuse. To address this risk, the FAR and Office of Federal Procurement Policy guidance require that agencies provide greater scrutiny and an enhanced degree of management oversight of contractors performing services that tend to affect government decision making, support or influence policy development, or affect program management. 
This enhanced oversight would include assigning a sufficient number of qualified government employees to provide oversight and to ensure that agency officials retain control over and remain accountable for policy decisions that may be based in part on a contractor’s performance and work products. These requirements for enhanced oversight are not applicable to personal services contractors, including the 13 personal services case studies we conducted, because Office of Federal Procurement Policy guidance and FAR restrictions on contractors performing inherently governmental functions do not apply to these contractors. Although we found that statements of work for all of the 19 nonpersonal services case studies we conducted provided for the contractor to perform activities that closely supported inherently governmental functions, we did not find evidence that the agencies considered related requirements to provide greater scrutiny and an enhanced degree of management oversight in these 19 cases. In our prior work at DOD and the Department of Homeland Security, we found that program and contracting personnel were unaware of requirements related to providing enhanced oversight of services that closely support inherently governmental functions. In the case studies we conducted, we found that many contracting and program officials were unfamiliar with the concept of contractors closely supporting inherently governmental functions. Further, DOD, State, and USAID regulations generally do not require contracting or program officials to document an assessment of whether contractors closely support inherently governmental functions or any consideration given to enhanced oversight. According to DOD, State, and USAID officials, no specific guidance has been developed that defines how contracting and program officials should conduct enhanced oversight. 
In November 2009, we recommended that DOD require program and contracting officials to document risks and risk mitigation steps when awarding any contract or issuing any task order for services closely supporting inherently governmental functions and develop guidance to identify approaches to enhance management oversight for these contracts or task orders. DOD concurred with these recommendations and identified a number of actions that would be taken to address them. Contracting in contingency environments such as Iraq and Afghanistan presents unique security and logistical challenges, including difficulty traveling to dangerous or remote locations and frequent rotations among government personnel. Despite such challenges, effective oversight of contractors and grantees remains critical to help ensure that contractors are meeting contract requirements and grant funds are being used for their intended purposes. Using contractors to support the administration and oversight of other contracts and grants can facilitate the government’s ability to carry out this critical function. Our prior work and the Office of Management and Budget’s July 2009 guidance, however, have underscored the importance of strategic planning to guide decisions related to how contractors should be used to support agency missions. Until DOD, State, and USAID fully consider in their workforce planning efforts the extent to which contractors should perform contract and grant administration functions, the agencies will not be positioned to consider the potential implications of relying on contractors to perform these functions, such as a loss of institutional capacity to perform mission-critical functions or greater costs. The agencies did not fully address risks related to potential conflicts of interest and oversight for contractors performing contract or grant administration functions. 
For example, USAID did not always address potential organizational conflicts of interest in its contracts in accordance with agency policy, though ongoing efforts to revise federal organizational conflict of interest regulations could potentially improve USAID’s and other agencies’ ability to mitigate this risk in the future. Additionally, without management understanding of whether federal ethics laws related to personal conflicts of interest apply to the department’s personal services contractors, State runs the risk of inconsistent application of these laws, potentially limiting the department’s ability to ensure that contract and grant administration decisions are made in the best interest of the government. Further, DOD, State, and USAID’s lack of consideration of the need to provide greater scrutiny and an enhanced degree of management oversight when nonpersonal services contractors closely support inherently governmental functions may impair the agencies’ ability to ensure the appropriate level of oversight. The agencies will continue to face this challenge without an effective process to identify contracts that closely support inherently governmental functions and guidance to assist program and contracting officials. In 2009, we made recommendations to DOD with regard to improving the department’s ability to plan for the use of contractors supporting acquisition functions and mitigate the risks of contractors closely supporting the performance of inherently governmental functions. Since the department concurred with these recommendations and has identified steps it plans to take to address them, we are not making any additional recommendations to DOD. 
To improve State and USAID’s ability to plan effectively for the use of contractors to perform contract or grant administration functions and to improve oversight of contracts that closely support inherently governmental functions in Iraq, Afghanistan, and future contingency environments where the agencies rely heavily on contractors, we recommend that the Secretary of State and the Administrator of USAID take the following three actions:

- Determine the extent to which contractors should perform administration functions for other contracts and grants in accordance with strategic human capital planning principles outlined in the Office of Management and Budget’s July 2009 multisector workforce guidance.
- Develop guidance to identify approaches that contracting and program officials should take to enhance management oversight when nonpersonal services contractors provide services that closely support inherently governmental functions.
- Before the award of any nonpersonal services contract or task order for services closely supporting inherently governmental functions, require that program and contracting officials document their consideration of related risks and the steps that have been taken to mitigate such risks.

To improve State’s ability to mitigate risks related to potential personal conflicts of interest among personal services contractors, we recommend that the Secretary of State clarify the department’s policies regarding the application of federal ethics laws to personal services contractors. We provided DOD, State, and USAID with a draft of this report for their review and comment. DOD provided technical comments, which we incorporated as appropriate. State agreed with our recommendations and identified steps that the department plans to take to address each recommendation. State’s comments, along with our response, are reprinted in appendix III. 
USAID generally agreed with our recommendations and identified steps the agency is taking or plans to take to address them. With regard to our recommendation related to determining the extent to which contractors should perform contract or grant administration functions, USAID noted that it is already in the process of determining the extent to which nonpersonal services contractors, which USAID refers to as institutional support contractors, should perform such functions. As we noted in the report, however, USAID’s current efforts do not address the extent to which nonpersonal services contractors performing such functions outside of the United States, such as in Iraq or Afghanistan, should be used. We believe it is important for the agency to make such a determination to position itself to effectively mitigate the potential risks associated with reliance on contractors. USAID also provided some points of clarification related to the recommendations, and we incorporated the comments in the report as appropriate. USAID’s comments, along with our responses, are reprinted in appendix IV. We are sending copies of this report to the Secretary of Defense, the Secretary of State, the Administrator of the U.S. Agency for International Development, and interested congressional committees. The report also is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. The National Defense Authorization Act for Fiscal Year 2008 directed us to report annually on Department of Defense (DOD), Department of State (State), and U.S. 
Agency for International Development (USAID) contracts in Iraq and Afghanistan, including information on any specific contract or class of contracts the Comptroller General determines raises issues of significant concern. Pursuant to that mandate, we reviewed DOD, State, and USAID’s use of contractors, including personal services contractors, to perform administration functions for contracts or grants with performance in Iraq and Afghanistan for fiscal year 2008 and the first half of fiscal year 2009. Specifically, we analyzed (1) the extent to which DOD, State, and USAID rely on contractors to perform administration functions for other contracts and grants in Iraq and Afghanistan; (2) the reasons behind decisions to use contractors to perform these functions and whether the decisions are guided by strategic workforce planning; and (3) whether the agencies have considered and mitigated conflict of interest and oversight risks related to contractors performing contract or grant administration functions. To determine the extent to which DOD, State, and USAID rely on contractors to perform administration functions for other contracts and grants in Iraq and Afghanistan, we obtained data from the agencies on contracts and task orders with at least 1 day of performance in fiscal year 2008 or the first half of fiscal year 2009 for which duties included administration functions for other contracts or grants with performance in Iraq, Afghanistan, or both. The data we obtained from the agencies were intended to include all contracts with administration functions for other contracts and grants with performance in Iraq and Afghanistan, regardless of the place of performance of the contractor performing administration functions. For example, some contracts or task orders in our scope included performance in the United States in support of the administration of contracts or grants with performance in Iraq or Afghanistan. 
To assess whether the data obtained from the agencies were accurate and appropriately categorized as within the scope of this engagement, we reviewed contract documents for a selection of reported contracts. We reviewed contract documents for all 49 contracts or task orders reported for DOD and 37 of the 39 contracts or task orders reported by State. The 2 State contracts we did not review were those for which officials could not identify the task orders within our scope. For USAID, we reviewed contract documents for all 17 nonpersonal services contracts or task orders reported by the agency and selected 25 of 81 personal services contract files for review during our fieldwork in Iraq and Afghanistan. For the most part, we determined that the contracts or task orders we reviewed had been appropriately reported by the agencies as being within our scope. When we noted discrepancies, we gathered additional information about contracts or task orders reported by the agencies, and if we determined that a contract or task order was not within our scope, we removed it from our analysis. We attempted to identify additional contracts or task orders within our scope by reviewing data from the Federal Procurement Data System – Next Generation (FPDS-NG) and data provided to GAO by these agencies for a related engagement. Specifically, for both sources of data, we used a list of keywords related to contract and grant administration to search for contracts or task orders not reported by the agencies that might be within our scope. When we identified such contracts or task orders, we followed up with the agencies to obtain contract documents and additional information from knowledgeable officials as necessary to determine whether the contracts or task orders were within our scope. If we determined that the contracts or task orders were within our scope, we added them to our analysis. 
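The keyword screen described above can be pictured with a short illustrative script. This is a hypothetical sketch only; the record format, field names, and keyword list are assumptions made for illustration and do not reflect GAO's actual FPDS-NG query.

```python
# Hypothetical sketch of the keyword screen described above: flag contract
# records whose description mentions contract- or grant-administration work.
# Field names and the keyword list are illustrative assumptions.
KEYWORDS = ["contract administration", "grant administration",
            "quality assurance", "contracting officer's representative"]

def flag_candidates(records):
    """Return records whose description contains any screening keyword."""
    hits = []
    for rec in records:
        text = rec.get("description", "").lower()
        if any(kw in text for kw in KEYWORDS):
            hits.append(rec)
    return hits

# Illustrative records (identifiers and descriptions are invented).
sample = [
    {"piid": "W91-0001", "description": "Quality assurance support, Iraq"},
    {"piid": "W91-0002", "description": "Fuel delivery services"},
]
flagged = flag_candidates(sample)
print([r["piid"] for r in flagged])  # only the quality assurance record matches
```

Flagged records would still require follow-up with the agencies, as described above, before being added to the analysis.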
In total, we added 62 contracts or task orders as a result of our data reliability reviews. Although we found that the agencies’ data were incomplete based on these additional contracts and task orders we identified, we determined that taken collectively, data provided by the agencies and data on the contracts and task orders we identified and included in our scope were sufficiently reliable to establish the minimum number of contracts and task orders active in fiscal year 2008 or the first half of fiscal year 2009 awarded by DOD, State, and USAID to perform the functions within our scope. For the contracts and task orders within our scope, we also obtained data from FPDS-NG or the agencies on the total obligations for the contracts or task orders through March 31, 2009. To assess the reliability of the obligation data from FPDS-NG, we compared them with related data from our contract file reviews for the nonpersonal services contracts we selected as case studies. In two cases at USAID, we identified discrepancies. In these cases, we followed up with the agency to determine the reasons for the discrepancies and made corrections as necessary. We determined that the data were sufficiently reliable for the purposes of our review, although obligations for some USAID contracts in Afghanistan may be underreported in FPDS-NG because of discrepancies between USAID information systems and FPDS-NG. State and USAID obligations for personal services contracts were generally not included in FPDS-NG, so we obtained obligation data from the agencies for these contracts. We assessed the reliability of the data provided by the agencies by comparing them to related data we collected during our file reviews for the personal services contracts we selected as case studies. Based on this assessment, we determined that the data were sufficiently reliable for the purposes of our review. 
We were unable to determine how much of the amount obligated by the agencies on the contracts or task orders within our scope was specifically obligated for the performance of contract or grant administration functions in Iraq or Afghanistan. Some of the contracts or task orders included the performance of functions besides contract or grant administration or the performance of administration functions for contracts or grants with performance outside of Iraq and Afghanistan, but FPDS-NG and agency obligation data were not detailed enough to allow us to isolate the amount obligated for other functions or locations. To gather information about the reasons behind decisions to use contractors to perform functions within our scope, we purposefully selected 13 personal services contracts and 19 nonpersonal services contracts at DOD, State, and USAID for case studies to provide a cross section of types of contracts, locations, and functions performed. For these case studies, we conducted fieldwork in Iraq, Afghanistan, and the United States. We reviewed available documentation of agencies’ justifications for using contractors, such as acquisition strategies and relevant determinations and findings for the contracts we selected. We also interviewed agency officials, such as contracting officers, program managers, and contracting officers’ representatives (COR), about the reasons for using contractors to perform contract or grant administration functions. To determine the extent to which agencies had developed strategies to inform decisions about the use of contractors to perform these functions, we reviewed agency workforce planning documents, such as strategic human capital plans. We also analyzed relevant guidance, including DOD Instruction 1100.22, Guidance for Determining Workforce Mix, and Office of Management and Budget and DOD guidance related to insourcing, and reviewed our prior work on the inclusion of contractors in workforce planning. 
Further, we interviewed knowledgeable agency officials about steps taken to include contractors in agency workforce planning efforts. To assess the agencies’ consideration and mitigation of conflict of interest and oversight risks related to contractors performing contract or grant administration functions, we reviewed relevant federal regulations and agency policy and analyzed data collected through the case studies we conducted. Specifically, to determine the steps taken by the agencies to address risks related to potential organizational and personal conflicts of interest, we analyzed contract clauses and other contract documentation relevant to conflicts of interest. To determine the steps taken by the agencies to address risks related to oversight, we analyzed relevant contract documentation, such as COR appointment letters, surveillance or contract administration plans, and documentation of the consideration of whether contractors closely supported inherently governmental functions when available. To gain additional insight into how potential conflict of interest and oversight risks were addressed by the agencies, we interviewed agency officials responsible for contracting policy as well as officials such as contracting officers, CORs, and program officials responsible for the contracts we selected as case studies. Our review did not assess the effectiveness of contractors performing contract or grant administration functions for other contracts or grants. We conducted this performance audit from February 2009 through April 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
DOD, State, and USAID relied on contractors to perform a wide range of administration functions for contracts and grants with performance in Iraq and Afghanistan. Our review found 223 contracts and task orders active during fiscal year 2008 or the first half of fiscal year 2009 that included the performance of administration functions for other contracts or grants in Iraq and Afghanistan. Tables 7, 8, and 9 provide more details about these contracts and task orders.

The following are GAO’s supplemental comments on the Department of State’s letter dated April 2, 2010.

1. State’s comments raised an issue about whether there was a need for additional regulatory guidance to help determine the scope of ethics obligations applicable to all executive branch personal services contractors. As we focused on DOD, State, and USAID, whether there is a need for consistent guidance across the executive branch was beyond the scope of this work.

The following are GAO’s supplemental comments on the U.S. Agency for International Development’s letter dated April 8, 2010.

1. We have revised table 2 to reflect USAID’s comments.

2. As noted in the report, we consider contractors responsible for assisting in the award of grants as closely supporting an inherently governmental function. USAID’s Automated Directives System provides guidance on the use of contractors to award grants on behalf of USAID. We have reflected this guidance in the final report.

John P. Hutton, (202) 512-4841 or [email protected].

In addition to the contact named above, Timothy DiNapoli, Assistant Director; Johana R. Ayers; Shea Bader; Noah Bleicher; John C. Bumgarner; Morgan Delaney Ramaker; Kathryn M. Edelman; Walker Fullerton; Katherine Hamer; Art James, Jr.; Justin M. Jaynes; Julia Kennon; Anne McDonough-Hughes; Christopher J. Mulkins; Jason Pogacnik; Thomas P. Twambly; and Gwyneth B. Woolwine made key contributions to this report.
The Departments of Defense (DOD) and State and the U.S. Agency for International Development (USAID) have relied extensively on contractors in Iraq and Afghanistan, including using contractors to help administer other contracts or grants. Relying on contractors to perform such functions can provide benefits but also introduces potential risks, such as conflicts of interest, that should be considered and managed. Pursuant to the National Defense Authorization Act for Fiscal Year 2008, GAO reviewed (1) the extent to which DOD, State, and USAID rely on contractors to perform contract and grant administration in Iraq and Afghanistan; (2) the reasons behind decisions to use such contractors and whether the decisions are guided by strategic workforce planning; and (3) whether agencies considered and mitigated related risks. GAO analyzed relevant federal and agency policies and agency contract data, and conducted file reviews and interviews for 32 contracts selected for case studies.

DOD, State, and USAID’s use of contractors to help administer contracts and grants was substantial, although the agencies did not know the full extent of their use of such contractors. GAO found that the agencies had obligated nearly $1 billion through March 2009 on 223 contracts and task orders active during fiscal year 2008 or the first half of fiscal year 2009 that included the performance of administration functions for contracts and grants in Iraq and Afghanistan. The specific amount spent to help administer contracts or grants in Iraq and Afghanistan is uncertain because some contracts or task orders included multiple functions or performance in various locations and contract obligation data were not detailed enough to allow GAO to isolate the amount obligated for other functions or locations.
Overall, the agencies relied on contractors to provide a wide range of services, including on-site monitoring of other contractors' activities, supporting contracting or program offices on contract-related matters, and awarding or administering grants. For example, Air Force Center for Engineering and the Environment officials noted that contractors performed quality assurance for all of the center's construction projects in Iraq and Afghanistan. In another example, USAID contractors awarded and administered grants on USAID's behalf to support development efforts in Iraq and Afghanistan. Decisions to use contractors to help administer contracts or grants are largely made by individual contracting or program offices on a case-by-case basis. In doing so, the offices generally cited the lack of sufficient government staff, the lack of in-house expertise, or frequent rotations of government personnel as key factors contributing to the need to use contractors. Offices also noted that using contractors in contingency environments can be beneficial, for example, to meet changing needs or address safety concerns regarding the use of U.S. personnel in high-threat areas. GAO has found that to mitigate risks associated with using contractors, agencies have to understand when, where, and how contractors should be used, but offices' decisions were generally not guided by agencywide workforce planning efforts. DOD, State, and USAID took actions to mitigate conflict of interest and oversight risks associated with contractors helping to administer other contracts or grants, but did not always fully address these risks. For example, agencies generally complied with requirements related to organizational conflicts of interest, but USAID did not include a contract clause required by agency policy to address potential conflicts of interest in three cases. 
Also, some State officials were uncertain as to whether federal ethics laws regarding personal conflicts of interest applied to certain types of contractors. In almost all cases, the agencies had designated personnel to provide contract oversight. DOD, State, and USAID contracting officials generally did not, however, ensure enhanced oversight as required for situations in which contractors provided services closely supporting inherently governmental functions despite the potential for loss of government control and accountability for mission-related policy and program decisions.
DOD policy states that each military service must administer the PDHRA to active component, Reserve component, and separated servicemembers 90 to 180 days following deployment if they meet the following conditions: deployed for greater than 30 days, deployed to locations outside the continental United States, and deployed to locations without permanent military treatment facilities. The military services, using service-specific databases, identify active and Reserve component servicemembers to whom the PDHRA requirement applies and notify these servicemembers via various methods, such as postcards, e-mail, telephone, and face-to-face contact. The DOD policy that initiated the PDHRA program stated that the military services should contact servicemembers who separate before or during the reassessment period to offer them the opportunity to fill out the PDHRA questionnaire. However, there is no mechanism in place to offer separated servicemembers the opportunity to fill out the PDHRA questionnaire. Once these servicemembers have separated from military service, they have no further obligation to the military services and accordingly, cannot be required to fill out the PDHRA questionnaire. It can also be difficult to locate and contact servicemembers after they separate. Instead, the military services, with the exception of the Air Force, implemented policies to administer the PDHRA to active and Reserve component servicemembers as part of the separation process from the military. Although the Army formally implemented its policy to administer the PDHRA to servicemembers before they separate in April 2008, the practice had been in effect as early as January 2006. The Navy and Marine Corps implemented a similar policy in January 2009. According to Air Force officials, there are plans to implement a policy by December 2009. 
The PDHRA involves two steps—active or Reserve component servicemembers fill out a PDHRA questionnaire and then discuss their questionnaire health concern responses with health care providers. The PDHRA questionnaire consists of a demographic section, a medical section, and a third section for a health care provider to fill out and sign. The demographic section asks for servicemembers to provide information such as date of birth, gender, and marital status. The medical section of the questionnaire asks servicemembers to self-report information on their current physical and mental health condition and concerns. DOD considers a PDHRA questionnaire complete when a health care provider reviews and signs the questionnaire, regardless of whether servicemembers fill out the entire PDHRA questionnaire or only the demographic section. (See app. II for a sample PDHRA questionnaire.) The Army, Air Force, Navy, and Marine Corps require active and Reserve component servicemembers to whom the PDHRA requirement applies to fill out at least the demographic section of the PDHRA questionnaire. Although filling out the medical section is voluntary, DOD officials estimate that less than 1 percent of servicemembers who fill out the PDHRA questionnaire decline to fill out the medical section, with the exception of the Air Force, which reported a higher declination rate. Active component servicemembers typically fill out the PDHRA questionnaire online either prior to PDHRA on-site events or during such events, which are usually held at military installations. Similarly, Reserve component servicemembers may fill out the PDHRA questionnaire— administered by the contractor, LHI—online, prior to or during on-site events. In addition, LHI maintains call centers to administer the PDHRA to Reserve component servicemembers. 
Reserve component servicemembers who use LHI’s call centers may fill out the demographic and medical sections of the questionnaire online or, with the help of call center staff, may fill out both sections of the questionnaire through the call centers. DOD requires that a health care provider review and discuss active and Reserve component servicemembers’ health concern responses on the PDHRA questionnaire, including any physical and mental health concerns that servicemembers self-identify on their questionnaires. Health care providers use professional judgment to decide whether a further evaluation is needed, based on servicemembers’ responses and other information revealed from the discussions with servicemembers. If referrals for a further evaluation are recommended, health care providers offer servicemembers information on obtaining a referral appointment. For example, a health care provider may provide information on obtaining an appointment at a military treatment facility or a Vet Center. Once a referral for a further evaluation is made, DOD does not require the military services to follow up to determine if active and Reserve component servicemembers have made or attended an appointment generated as a result of the health care provider’s assessment. However, for Reserve component servicemembers, referral follow-up is part of LHI’s administration of the PDHRA. LHI staff follow up within 72 hours with Reserve component servicemembers who have been issued a referral for a further evaluation through the call centers to ensure that these servicemembers have the information needed to obtain an appointment and to encourage servicemembers to schedule an appointment. In addition, LHI staff attempt to contact Reserve component servicemembers 30 days after medical referrals are issued, whether issued through PDHRA on-site events or through the call centers, to ask whether they scheduled and attended an appointment. 
If servicemembers did not, LHI staff ask if assistance is needed in scheduling an appointment. Each of the military services is required to electronically submit PDHRA questionnaires for both active and Reserve component servicemembers to DOD’s central repository. The central repository is the single source of DOD-level health surveillance information. The central repository contains data on diseases and medical events and longitudinal data on personnel and deployments, including information from DOD’s various deployment health assessments, such as the PDHRA. For Reserve component servicemembers who fill out PDHRA questionnaires through LHI, LHI staff are responsible for verifying that the questionnaires are submitted to the appropriate military service’s database. The military services are then responsible for submitting the questionnaires from their respective databases to DOD’s central repository. DOD established a deployment health quality assurance program in January 2004 to assess whether DOD’s deployment health assessments, including the PDHRA, are conducted as required. The deployment health quality assurance program relies on data from DOD’s central repository, data from the military services, and site visits to military installations to monitor and report on the extent of compliance among the military services with DOD’s deployment health requirements, such as the number of active and Reserve component servicemembers that filled out the PDHRA questionnaire. According to DOD’s deployment health quality assurance program manager, the program performed a site visit to Reserve units to validate data provided by the military services for the first time in October 2008, as part of its oversight of deployment health assessments, including the PDHRA. Prior to that, the quality assurance program only performed site visits to active component sites. 
On a quarterly basis, the quality assurance program reports to the military services on each service’s compliance with deployment health requirements. The quality assurance program also reports annually to the Armed Services Committees of the House of Representatives and Senate on site visit findings and on deployment health assessment data, including the number and percentage of servicemembers with PDHRA questionnaires in DOD’s central repository. Through its monitoring and reporting, DOD’s quality assurance program helps ensure that DOD’s deployment health assessments are conducted for active and Reserve component servicemembers as required. DOD contracts with LHI to administer the PDHRA to Reserve component servicemembers. Although LHI administers the PDHRA to Reserve component servicemembers on behalf of DOD, the military services are responsible for identifying and notifying servicemembers to whom the PDHRA requirement applies and for submitting questionnaires to DOD’s central repository. DOD’s RHRP office is responsible for monitoring DOD’s contract with LHI, which includes the administration of the PDHRA to Reserve component servicemembers, as well as the provision of other health services for this population, such as immunizations, physical examinations, and dental examinations and X-rays. DOD’s contract with LHI is a performance-based contract and, as such, establishes performance standards that the RHRP office uses in monitoring and assessing LHI’s performance in providing services to Reserve component servicemembers, including the administration of the PDHRA. For example, under its contract with DOD, LHI call center staff are required to answer 80 percent of incoming calls within 120 seconds. 
In monitoring LHI’s performance, DOD’s RHRP office helps ensure that the objective of the PDHRA program—to identify and address servicemembers’ health concerns, including mental health concerns, that emerge over time following deployments—is achieved for Reserve component servicemembers. On the two occasions we queried DOD’s central repository, we did not find PDHRA questionnaires for a substantial percentage of the active and Reserve component servicemembers in our population of interest. DOD policy requires that the military services electronically submit questionnaires to DOD’s central repository, which DOD uses as a key source of health surveillance information. The first of our two queries of DOD’s central repository occurred on April 15, 2009. On this date, we found that the central repository contained PDHRA questionnaires for only 77 percent of the roughly 319,000 active component, Reserve, and National Guard servicemembers who, according to DOD DMDC deployment data, returned from deployment to Iraq or Afghanistan between January 1, 2007, and May 31, 2008. We could not identify PDHRA questionnaires in DOD’s central repository on April 15, 2009, for a large number of servicemembers in our population—about 74,000 servicemembers—which represents the remaining 23 percent of our population of interest. We made our query nearly 1 year after the last servicemembers in our population returned from deployment. The percentage of PDHRA questionnaires absent from DOD’s central repository for our population of interest varied by military service and component. For example, among military service components, this percentage ranged from a low of about 10 percent to a high of about 61 percent. (For more information on the extent to which servicemembers in our population of interest did not have PDHRA questionnaires in DOD’s central repository as of April 15, 2009, see app. III.) 
After determining that about 74,000 servicemembers in our population of interest did not have questionnaires in DOD’s central repository based on our first query, we asked the military services whether these servicemembers had PDHRA questionnaires that could be identified in the services’ own databases. With the help of the services, we found that approximately 7,000 servicemembers—or about 9 percent of the 74,000 servicemembers—had questionnaires in their respective military services’ databases, but not in DOD’s central repository. The number of questionnaires identified in the military services’ databases that were not in the central repository varied by military service—ranging from over 300 questionnaires for Air Force and Navy servicemembers to about 3,000 for Army and Marine Corps servicemembers. On September 4, 2009, we queried DOD’s central repository again to update our April 2009 data and determine whether any progress had been made in reporting questionnaires to the central repository. On this second query—15 months after the last servicemembers in our population of interest returned from deployment—we found that DOD’s central repository was missing the same percentage of PDHRA questionnaires as had been missing in April. As a result of our September 2009 query, we still found PDHRA questionnaires for only 77 percent of the approximately 319,000 servicemembers in our original population of interest. While we identified slightly more questionnaires than we identified in our April query, we were still unable to identify questionnaires in the central repository for about 72,000 servicemembers. (See table 1.) The absence of 72,000 PDHRA questionnaires from DOD’s central repository for servicemembers who should have filled them out hinders DOD’s deployment health quality assurance program from effectively assessing the military services’ compliance with PDHRA requirements. 
The program, which DOD established to assess whether DOD’s deployment health assessments are conducted as required, relies in part on the presence of PDHRA questionnaires in the central repository. These questionnaires document the extent to which servicemembers were given the opportunity to fill out the questionnaire, as required under DOD policy. DOD officials specifically cited the importance of this documentation for helping the quality assurance program ensure that servicemembers have the opportunity to have their health concerns identified and addressed. However, the absence of questionnaires from the central repository for servicemembers who should have filled them out suggests either that not all of these servicemembers filled out the questionnaire or that questionnaires were filled out, but were not incorporated into DOD’s central repository. When questionnaires for servicemembers from our population of interest are not in the central repository, DOD does not have reasonable assurance that all members of this vulnerable population of active component, Reserve, and National Guard servicemembers that deployed to Iraq or Afghanistan were administered the PDHRA questionnaire, which is intended to help identify deployment-related health concerns that emerge over time and facilitate the opportunity for servicemembers to address these concerns. DOD’s Reserve Health Readiness Program (RHRP) office uses four methods to monitor LHI’s administration of the PDHRA to Reserve component servicemembers. However, in using these methods, DOD’s RHRP office does not always clearly document its monitoring of the PDHRA program. The office’s documentation does not allow DOD to have reasonable assurance that potential problems that may relate to the welfare and safety of servicemembers have been addressed and resolved. DOD’s RHRP office uses four methods to monitor LHI’s performance in administering the PDHRA to Reserve component servicemembers. 
More broadly, the RHRP office also uses the four methods to monitor whether the objective of the PDHRA program—to identify and address servicemembers’ health concerns that emerge over time following deployments—is being met for Reserve component servicemembers. The four methods, which are identified in DOD’s contract with LHI, are the following: Reviews of periodic reports. The RHRP office receives several periodic reports that DOD requires from LHI on LHI’s administration of the PDHRA to Reserve component servicemembers. DOD requires LHI to report aggregate information on LHI’s administration of the PDHRA, including reports on its compliance with performance standards. For instance, the RHRP office receives a report on LHI’s compliance with the performance standard requiring that LHI call center staff answer 80 percent of incoming calls within 120 seconds. In addition, LHI provides descriptive information on the number of Reserve component servicemembers administered the PDHRA, referred for a further evaluation, and contacted by LHI staff 30 days after receiving referrals. An RHRP official told us that the office reviews LHI’s reports to examine data related to the administration of the PDHRA and identifies potential problems that could pose risks to servicemembers and the objective of the PDHRA program. For example, the number of servicemembers referred each month is compared against historical data to monitor any changes in the rate at which servicemembers receive referrals for physical and mental health concerns. Inspections of the administration of PDHRA. The RHRP office also conducts inspections related to LHI’s administration of the PDHRA to Reserve component servicemembers. The inspections have included observing PDHRA on-site events, at which servicemembers are administered the PDHRA, to assess the quality of LHI services delivered during the events. 
For instance, an RHRP official told us that he checks whether the events are staffed with a sufficient number of health care providers and administrative staff. In another instance, an RHRP official listened to LHI call center discussions between LHI staff and Reserve component servicemembers to examine how LHI staff administer the PDHRA. Feedback on the administration of the PDHRA from military service officials. The RHRP office obtains informal feedback about how the PDHRA is being administered to Reserve component servicemembers through e-mail correspondence, telephone conversations, and in-person discussions with military service officials who are responsible for managing the PDHRA for their respective services. These officials told us that the RHRP office generally maintains open, informal communication channels through which they can and do express their concerns. Weekly telephone discussions with LHI staff. An RHRP official said that weekly telephone discussions with LHI staff are held to obtain their feedback on the administration of the PDHRA to Reserve component servicemembers. In addition, during these discussions, an RHRP official and LHI staff discuss and address potential problems identified through the periodic reports, inspections, and military service feedback. The potential problems staff discuss may include those that could pose a risk to the PDHRA program objective and to the welfare and safety of Reserve component servicemembers. For example, some problems concerned how servicemembers were responding to questions on the PDHRA questionnaire. According to an RHRP official, these discussions serve as a forum to determine the actions DOD officials or LHI staff need to take to address identified problems and to verify that problems raised during previous discussions have been properly resolved. 
When monitoring the administration of the PDHRA to Reserve component servicemembers, DOD does not maintain clear documentation that is consistent with good management practices outlined in federal internal control standards. According to these standards, internal control activities such as monitoring should be clearly documented in a manner that is accurate, timely, and helps provide reasonable assurance that program objectives are being achieved. Further, such documentation should be properly managed and maintained so that it is readily accessible and should allow someone other than the assigned officials to understand the identified potential problem, the actions taken to address the problem, and whether these actions have resolved the problem. Instead of adopting an approach that generated documentation consistent with management practices outlined in federal internal control standards, the RHRP office created an unsystematic, improvised approach for documenting potential problems that were identified through review of periodic reports, inspections, and feedback from military service officials and LHI staff. These problems may pose a risk to the PDHRA program objective. The RHRP office’s approach relies solely on agendas and e-mail correspondence to document these potential problems and the actions taken to resolve them. Weekly agendas with related notes. Prior to the weekly discussions with LHI staff, a typed agenda is prepared that lists the potential problems identified through monitoring activities. During these discussions, brief handwritten notes are made on agenda items to indicate that certain actions need to be taken by DOD officials or LHI staff to address an identified problem. When it appears that problems have been resolved, instead of documenting how the problem was resolved, the problem is simply not included on the agenda for the next week’s discussion.
A hard copy of the agenda is retained and filed in the RHRP office after each weekly discussion with LHI staff.

E-mail correspondence. The RHRP office receives and generates e-mail correspondence with LHI staff and military service officials that discusses potential problems identified through monitoring the administration of the PDHRA to Reserve component servicemembers.

We found that the RHRP office's sole reliance on these agendas and e-mail correspondence did not always result in clear documentation—that is, documentation that is understandable and readily accessible to others outside the RHRP office. In particular, the agendas and e-mail correspondence we reviewed did not always clearly describe the decisions made and the actions taken to address identified problems in a manner that provides reasonable assurance that the problems have been resolved. An RHRP official acknowledged that he could not consistently rely on the agendas and e-mail correspondence to reconstruct information obtained through the office's internal control responsibility to monitor and address problems associated with the administration of the PDHRA to Reserve component servicemembers. Instead, this official relies on memory to recall such information. In our review of the documentation related to 15 potential problems we selected, we found that the RHRP office's documentation generally did not clearly describe the problem, the actions taken to address the problem, and whether these actions have resolved the problem. For example, on two September 2008 agendas for the weekly discussions with LHI staff, the agendas include a potential problem identified by DOD officials that 25 to 30 percent of servicemembers were not responding to questions on the PDHRA questionnaire about the number of alcoholic drinks they consumed on a typical day, and 18 percent were not responding to the questions about whether they were depressed. 
The RHRP office asked LHI staff if they were finding similar nonresponses to these questions from Reserve component servicemembers. However, the subsequent agendas— which the RHRP office relies on to track problems—do not contain any additional information about how this potential problem was resolved. There is no further information, including any related e-mail correspondence, on how this potential problem was addressed. This example raises questions about whether Reserve component servicemembers’ mental health concerns—specifically those related to alcohol use and depression—are being consistently identified or whether any follow-up actions are needed to address this problem. We also reviewed e-mail correspondence related to the 15 selected potential problems, dated between March 4, 2008, and May 19, 2009, and found that it often contained vague information about the identified problems and the actions taken to resolve them. Some of these problems relate to the welfare and safety of servicemembers and require more information than is present in the available documentation to understand whether or how the problems were resolved. For example, we asked about an instance in which a military service official reported that an LHI health care provider failed to document on a Reserve component servicemember’s PDHRA questionnaire why the servicemember with reported suicidal ideations did not get an immediate referral. The e-mail correspondence about this potential problem had gaps. For instance, while the e-mail correspondence indicates that the RHRP office asked LHI staff to look into the incident and contains a reply from LHI staff stating that they would investigate further, the e-mail correspondence does not document the final results of the LHI staff’s investigation, including whether or how this potential problem was resolved. 
Although this potential problem is listed on several agendas, the agendas do not provide any more information about the problem or how it was resolved. Instead, an RHRP official told us that he relied on his memory to explain to us what happened and how it was resolved. This official told us he requested that LHI staff remind its health care providers to fully document the results of their physical and mental health assessments on Reserve component servicemembers' PDHRA questionnaires in the future. However, there is no documentation of this request to LHI staff or any documentation of plans to follow up to ensure that LHI staff carried out the request. (For more information on the documentation related to the 15 potential problems, see app. IV.) In addition to being generally incomplete and unclear, the e-mail correspondence related to the potential problems is not readily accessible. E-mail correspondence about identified problems is placed into labeled folders within the office's e-mail system, but the labels are general and e-mail correspondence could appropriately be placed in a number of folders. An official told us that, as a result, he struggles to remember into which folder a specific e-mail was placed. For example, e-mail correspondence about an identified problem concerning an Army Reserve servicemember's PDHRA questionnaire may be placed in an Army PDHRA folder, a PDHRA complaints folder, or a standard operating procedures folder. Not having documentation that is readily accessible hinders the RHRP office's ability to promptly ensure that identified problems have been properly addressed. In addition, should LHI's performance diminish—for example, if LHI were not resolving identified problems—the lack of readily available documentation could compromise DOD's ability to take appropriate action. 
DOD established the PDHRA program in order to identify and address servicemembers’ health concerns—including mental health concerns— that emerge over time following deployments. The PDHRA questionnaire is a key tool in DOD’s efforts to assess the physical and mental health condition of servicemembers who have returned from deployments to Iraq and Afghanistan, where exposure to intense combat can place servicemembers at risk for developing conditions, such as post-traumatic stress disorder. DOD’s deployment health quality assurance program assesses the military services’ compliance with the requirement to administer the PDHRA to active and Reserve component servicemembers to help ensure that servicemembers have the opportunity to have their health concerns identified and addressed through the PDHRA. However, our current findings show that the concerns we previously raised about DOD’s quality assurance program remain. Specifically, the absence of PDHRA questionnaires in DOD’s central repository for thousands of eligible active and Reserve component servicemembers continues to hinder the program’s ability to assess the extent to which these servicemembers fill out the PDHRA questionnaire and have the opportunity to identify any health concerns that emerge over time following deployments. DOD officials have identified the presence of questionnaires in the central repository as important for the efforts of DOD’s quality assurance program. Without this information, the program may not be able to accurately determine completion rates among the military services and thus provide reasonable assurance to DOD or to Congress that one of DOD’s key health assessments is being administered as required. 
Ensuring that Reserve component servicemembers fill out the PDHRA questionnaire may be particularly important, as some evidence suggests that these servicemembers may be more likely to develop mental health conditions after returning from deployment when compared with their active component counterparts. Although DOD monitors the administration of the PDHRA to Reserve component servicemembers, documentation associated with this monitoring needs to be strengthened. The RHRP office has not always clearly documented—in a way that is consistent with federal internal control standards—information related to problems that may pose risks to the objective of the PDHRA program. Some of these potential problems also involve Reserve component servicemembers’ welfare and safety. The office’s improvised, unsystematic approach to documenting potential problems results in documentation that does not provide DOD with reasonable assurance that these problems have been resolved. To help DOD obtain reasonable assurance that all active and Reserve component servicemembers to whom the PDHRA requirement applies are provided the opportunity to have their health concerns identified, we recommend that the Assistant Secretary of Defense for Health Affairs and the military services take steps to ensure that PDHRA questionnaires are included in DOD’s central repository for each of these servicemembers. To ensure adequate documentation of problems that may pose risks to the objective of the PDHRA program for Reserve component servicemembers, we recommend that the Assistant Secretary of Defense for Health Affairs require the RHRP office to document the information obtained through monitoring the PDHRA program in a manner consistent with federal internal control standards. In written comments on a draft of this report, DOD concurred with our two recommendations. 
Specifically, DOD agreed with our recommendation that it take steps to ensure that PDHRA questionnaires are included in DOD's central repository for each servicemember to whom the PDHRA requirement applies. DOD stated that to implement this recommendation, it will take the following actions: work to correctly identify servicemembers who need to receive the PDHRA and work to identify and resolve any obstacles to transmission of data from the military services to AFHSC. Additionally, DOD stated that it sent a memorandum on October 15, 2009, to the military services' Surgeons General re-emphasizing the importance of deployment health requirements. DOD also agreed with our recommendation concerning documentation of problems that may pose risks to the objective of the PDHRA program for Reserve component servicemembers. DOD stated that during our engagement, the RHRP office recognized a need to improve documentation of its monitoring of the PDHRA program. DOD also stated in its response that the RHRP office established a more distinct and clear electronic filing system and began documenting not only potential problems with the PDHRA program, but also their resolution in a manner that DOD reports is sufficiently comprehensive, accessible, and understandable. We did not assess this new approach to documentation. DOD's written comments are included in their entirety in appendix V. DOD did not provide technical comments. We are sending copies of this report to the Secretary of Defense. The report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. 
To determine the extent to which post-deployment health reassessment (PDHRA) questionnaires are contained in the Department of Defense’s (DOD) central repository for active and Reserve component servicemembers who returned from deployment to Iraq or Afghanistan, we conducted a quantitative analysis using DOD data from two sources—the Defense Manpower Data Center’s (DMDC) Contingency Tracking System (CTS) and the Armed Forces Health Surveillance Center’s (AFHSC) Defense Medical Surveillance System (DMSS). DMDC’s CTS contains data on servicemembers deployed in support of the Overseas Contingency Operations—including data on servicemembers’ deployment dates and location of deployment. We used CTS data to identify a selected population of interest: active component, Reserve, and National Guard servicemembers who had returned from deployments of greater than 30 days to Iraq or Afghanistan between January 1, 2007, and May 31, 2008. DMDC officials identified this population for us using servicemembers’ deployment dates and locations. If a servicemember had multiple deployments during this period, we received data on a servicemember’s most recent return from deployment during this time period. Although DOD initiated the PDHRA program in March 2005, the military services implemented the program at different times, with full implementation across all services in late 2006. In addition, servicemembers are not eligible to fill out a PDHRA questionnaire until 90 to 180 days after they have returned from deployment. Thus, when we requested data from DMDC in late fall 2008, we needed to focus on a population of servicemembers who had returned from deployment at least 180 days prior to our data request. Therefore, we focused our analysis on servicemembers who returned from deployment on or after January 1, 2007, and on or before May 31, 2008. 
As a secondary check that this population of interest had, in fact, deployed, we sent our population of interest to AFHSC officials, who compared our population to two other deployment rosters. The results of this match identified our final population of interest of approximately 319,000 servicemembers. AFHSC’s DMSS is DOD’s central repository for PDHRA questionnaires and the military services are required to submit questionnaires to this central repository. To determine the extent to which servicemembers in our population of interest had PDHRA questionnaires in DOD’s central repository, we sent personal identifying information and beginning and end deployment dates from CTS for the servicemembers in our population of interest to AFHSC. AFHSC officials matched our population of interest to the PDHRA questionnaires in DMSS using a servicemember’s personal identifying information and the beginning and end deployment dates as recorded in CTS and as reported by the servicemember on the PDHRA questionnaire. AFHSC officials then sent us data from identified questionnaires that had been incorporated into DMSS as of April 15, 2009. As the military services collect PDHRA questionnaires in their own databases before transmitting them to AFHSC’s DMSS, we then asked officials from the military services to query their own databases to identify PDHRA questionnaires for servicemembers in our population of interest for which a PDHRA questionnaire could not be identified in DMSS. We provided the military services with a servicemember’s personal identifying information and deployment end date from CTS and asked them to query their databases for any questionnaires that they would consider to be ready to transmit to AFHSC that were filled out after the deployment end date listed in CTS. 
The military services then queried their own databases and, for each servicemember that we sent to them for which they could identify a PDHRA questionnaire, returned to us the end deployment date listed on the PDHRA questionnaire and date of questionnaire completion. If a servicemember filled out multiple PDHRA questionnaires after the deployment end date listed in CTS, we received information from each of these questionnaires. We received these data from the military services in late May and June 2009. We then matched our population of interest to the information from the PDHRA questionnaires received from the military services’ databases using a servicemember’s personal identifying information and the end deployment date as recorded in CTS and as reported by the servicemember on the PDHRA questionnaire. Finally, in September 2009, we obtained additional data from AFHSC to update our April 2009 data. AFHSC officials matched information from the servicemembers in our population of interest without a questionnaire in DMSS as of April 15, 2009, to the PDHRA questionnaires in DMSS as of September 4, 2009, using a servicemember’s personal identifying information and the beginning and end deployment dates as recorded in CTS and as reported by the servicemember on the PDHRA questionnaire. AFHSC officials then sent us data from identified questionnaires that had been incorporated into DMSS as of September 4, 2009. We conducted data reliability assessments for each DOD and military service data source we used by reviewing related documentation, interviewing knowledgeable agency officials, and performing electronic data testing for missing data, outliers, and obvious errors. We determined that these data sources were sufficiently reliable for our purposes. 
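The multi-step matching described above can be outlined in code. The sketch below is purely illustrative: the field names (`member_id`, `deployment_end`) and the function itself are hypothetical stand-ins, not the actual schemas of CTS, DMSS, or the services' databases.

```python
# Illustrative sketch of the record-matching approach described above.
# Field names and structures are hypothetical assumptions, not DOD schemas.

def match_population_to_repository(population, repository):
    """Match servicemembers (deployment records) to questionnaires in a
    repository by personal identifier and deployment end date."""
    repo_keys = {(q["member_id"], q["deployment_end"]) for q in repository}
    matched, unmatched = [], []
    for member in population:
        key = (member["member_id"], member["deployment_end"])
        (matched if key in repo_keys else unmatched).append(member)
    return matched, unmatched

# Step 1: match the population of interest against the central repository.
# Step 2: send the unmatched remainder to each service's own database
#         and repeat the same match against the questionnaires returned.
# Step 3: re-query the central repository months later to pick up
#         questionnaires transmitted in the interim.
```

Matching on both the identifier and the deployment end date, as the report describes, ensures a questionnaire is credited only to the specific deployment it followed, not to an earlier deployment by the same servicemember.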
We determined only the extent to which questionnaires could be identified in AFHSC's DMSS or the military services' databases for our population of interest, and not the extent to which the servicemembers actually filled out a PDHRA questionnaire. We interviewed officials from DMDC and AFHSC, the DOD deployment health quality assurance program, and Army, Air Force, Navy, and Marine Corps officials involved in the collection and transfer of PDHRA questionnaires from the military services' databases to AFHSC. We also reviewed DOD policies, as well as those of the Army, Air Force, Navy, and Marine Corps, for submitting PDHRA questionnaires to DOD's central repository. Finally, we reviewed our prior work on DOD's deployment health quality assurance program. To determine how DOD monitors the administration of the PDHRA to Reserve component servicemembers, we reviewed DOD's policies for monitoring its contract with Logistics Health, Inc. (LHI), the contractor that administers the PDHRA to Reserve component servicemembers. We also interviewed officials with DOD's Reserve Health Readiness Program (RHRP)—the DOD office responsible for monitoring LHI—along with military service officials responsible for managing the administration of the PDHRA to Reserve component servicemembers. We reviewed DOD's contract with LHI and obtained and analyzed contractually required reports and other documentation on LHI's performance related to the PDHRA portion of DOD's contract with LHI. We visited LHI's headquarters in La Crosse, Wisconsin, where we interviewed LHI staff to confirm our understanding of how LHI staff administer the PDHRA to Reserve component servicemembers. We additionally interviewed officials with the U.S. Army Medical Research Acquisition Activity, which provides support to DOD for the contract with LHI. 
We reviewed DOD’s monitoring of the administration of the PDHRA to Reserve component servicemembers to determine the extent to which these monitoring efforts met or were consistent with GAO’s Standards for Internal Control in the Federal Government. Internal controls include components of an organization’s management that provide reasonable assurance that program objectives are being achieved. The RHRP office receives contractually required reports from LHI on its administration of the PDHRA and we obtained and reviewed monthly reports from May 2008 through April 2009 for the following four required reports: (1) the PDHRA monthly activity report, (2) the call center access report, (3) the data entry report, and (4) the PDHRA customer satisfaction survey report. We selected the PDHRA monthly activity and the call center access reports to review because the RHRP office told us that these reports provide information that is needed to monitor LHI’s activities in administering the PDHRA to Reserve component servicemembers. The PDHRA monthly activity report provides aggregate information on LHI’s PDHRA administration, such as the number of Reserve component servicemembers administered the PDHRA, referred for a further evaluation, and contacted by LHI 30 days after receiving referrals. The call center access report provides information on LHI’s performance in operating its call center. In addition, we selected the data entry and PDHRA customer satisfaction survey reports to review because both reports provide information on performance standards associated with the PDHRA that LHI must meet to comply with the contract. The data entry report provides information on LHI’s performance in entering PDHRA data into military services’ databases, and the PDHRA customer satisfaction survey report provides information on whether LHI meets a performance standard associated with servicemember feedback on LHI’s administration of the PDHRA. 
We also reviewed the documentation associated with the RHRP office's inspections of LHI's administration of the PDHRA from October 1, 2007, through May 6, 2009. We reviewed a list, provided by the RHRP office, of the feedback received from military service officials between January 1, 2008, and April 30, 2009. To obtain more in-depth information on how DOD documents the information it obtains through its monitoring efforts, we examined documentation maintained by the RHRP office on its monitoring. Specifically, we reviewed the RHRP office's agendas from the weekly telephone discussions with LHI staff and the office's e-mail correspondence, which the RHRP office uses to document its monitoring. We reviewed approximately 70 of the RHRP office's agendas for the period between October 1, 2007, and May 11, 2009. From the agendas and the provided list of military service feedback, we judgmentally selected 15 potential problems to review in further detail and asked the RHRP office to provide us with all of the documentation available on these problems, including any e-mail correspondence that related to the problem. We selected these 15 potential problems because the subject of each of these may have involved welfare and safety concerns for Reserve component servicemembers. The RHRP office provided us with e-mail correspondence dated between March 4, 2008, and May 19, 2009, related to the 15 potential problems. We reviewed the agendas and the available e-mail correspondence for the 15 selected problems to determine the actions taken to resolve the problems and how this information is documented and maintained. Our findings related to the documentation of the 15 potential problems are for illustrative purposes only and are not generalizable to other RHRP office documentation. We conducted this performance audit from October 2008 to October 2009 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Servicemembers fill out the post-deployment health reassessment (PDHRA) questionnaire electronically on form DD 2900, which was originally issued in June 2005. The Department of Defense (DOD) issued a revised form DD 2900 in January 2008. Department of Defense (DOD) policy requires that the military services electronically submit post-deployment health reassessment (PDHRA) questionnaires to DOD’s central repository, which DOD uses as a key source of health surveillance information. We queried DOD’s central repository on two occasions—April 2009 and September 2009. On April 15, 2009, we found that for approximately 23 percent of the roughly 319,000 servicemembers who, according to DOD deployment data, returned from deployment to Iraq or Afghanistan between January 1, 2007, and May 31, 2008, we could not identify questionnaires in the central repository (see table 2). After determining that about 74,000 servicemembers in our population of interest did not have questionnaires in DOD’s central repository based on our first query, we asked the military services whether these servicemembers had PDHRA questionnaires that could be identified in the services’ own databases. With the help of the services, we found that approximately 7,000 servicemembers had questionnaires in their respective military services’ databases, but not in DOD’s central repository (see table 3). On our second query of the central repository, which occurred on September 4, 2009, questionnaires for about 1,000 of these 7,000 servicemembers were in the central repository. The Department of Defense (DOD) contracts with Logistics Health, Inc. 
(LHI) to administer the post-deployment health reassessment (PDHRA) to Reserve component servicemembers. DOD’s Reserve Health Readiness Program (RHRP) is responsible for monitoring LHI’s administration of the PDHRA. To obtain more in-depth information on how the RHRP office documents the information it obtains through its monitoring of the administration of the PDHRA to Reserve component servicemembers, we examined documentation maintained by the RHRP office on its monitoring efforts. Specifically, from the potential problems that the RHRP office had identified as possibly posing a risk to the objective of PDHRA program, we judgmentally selected 15 to review in detail and obtained all available RHRP documentation from the RHRP office on those problems. We selected these 15 potential problems because the subject of each of these may have involved welfare and safety concerns for Reserve component servicemembers. We reviewed RHRP’s documentation to determine the extent to which RHRP maintains documentation in a manner consistent with GAO’s federal internal control standards. In particular, we examined the extent to which RHRP’s documentation clearly documented any actions taken to address a problem and indicated whether the problem had been resolved. In general, the selected RHRP documentation we reviewed did not meet these standards. Nine of the 15 selected potential problems lacked documentation on the actions taken to address the problems and/or lacked documentation of the problems’ resolutions. Four of the 15 problems had documentation of the actions taken and their resolutions, however, RHRP’s documentation was not sufficiently clear to allow us to independently understand what actions had been taken to address the problem or the problems’ resolutions. Rather, an RHRP official had to explain to us what had occurred. Two of the 15 problems had documentation that allowed us to understand the actions taken to address the problems and the problems’ resolutions. 
Table 4 summarizes the results of our analysis of RHRP's documentation of the 15 selected potential problems. For nine of the potential problems—for example, a case in which a health care provider may not have documented the nature of a servicemember's suicidal ideations on the PDHRA questionnaire—documentation was lacking on the actions taken to address the problem and/or the problem's resolution. For four of the potential problems—including cases in which servicemembers scheduled through the LHI call center to be administered the PDHRA may have been denied LHI PDHRA services and not administered the PDHRA; servicemembers who should not have been called by LHI may have been inappropriately called by LHI; a PDHRA event may have lacked sufficient staff; and health care providers may not have been documenting needed referrals for further evaluations when servicemembers declined the referrals—the RHRP office had some available documentation, but this documentation was not sufficiently clear to allow us to independently determine the actions taken to address each problem and the ultimate resolution of each problem. For the remaining two potential problems—servicemembers administered the PDHRA through the LHI call center were not receiving the same informational brochures as servicemembers administered the PDHRA at on-site events, and Reserve component servicemembers did not know what to do to set up an appointment for a further evaluation—the RHRP office had documentation of the actions taken to address each problem and the problem's resolution.

In addition to the contact named above, Mary Ann Curran, Assistant Director; Katherine L. Amoroso; Helen T. Desaulniers; Michael Erhardt; Martha A. Fisher; Krister Friday; Martha Kelly; Carolyn Kirby; Carolina Morgan; Lisa A. Motley; Julie E. Pekowski; William Woods; and Suzanne Worth made key contributions to this report.

Federal Contractors: Better Performance Information Needed to Support Agency Contract Award Decisions. GAO-09-374. Washington, D.C.: April 23, 2009. Military Operations: DOD Needs to Address Contract Oversight and Quality Assurance Issues for Contracts Used to Support Contingency Operations. 
GAO-08-1087. Washington, D.C.: September 26, 2008. DOD Systems Modernization: Maintaining Effective Communication Is Needed to Help Ensure the Army’s Successful Deployment of the Defense Integrated Military Human Resources System. GAO-08-927R. Washington, D.C.: September 8, 2008. Defense Health Care: Oversight of Military Services’ Post-Deployment Health Reassessment Completion Rates Is Limited. GAO-08-1025R. Washington, D.C.: September 4, 2008. Electronic Health Records: DOD and VA Have Increased Their Sharing of Health Information, but More Work Remains. GAO-08-954. Washington, D.C.: July 28, 2008. VA and DOD Health Care: Administration of DOD’s Post-Deployment Health Reassessment to National Guard and Reserve Servicemembers and VA’s Interaction with DOD. GAO-08-181R. Washington, D.C.: January 25, 2008. Defense Health Care: Comprehensive Oversight Framework Needed to Help Ensure Effective Implementation of a Deployment Health Quality Assurance Program. GAO-07-831. Washington, D.C.: June 22, 2007. Military Personnel: DMDC Data on Officers’ Commissioning Programs is Insufficiently Reliable and Needs to be Corrected. GAO-07-372R. Washington, D.C.: March 8, 2007. Military Personnel: Actions Needed to Strengthen Management of Imminent Danger Pay and Combat Zone Tax Relief Benefits. GAO-06-1011. Washington, D.C.: September 28, 2006. Post-Traumatic Stress Disorder: DOD Needs to Identify the Factors its Providers Use to Make Mental Health Evaluation Referrals for Servicemembers. GAO-06-397. Washington, D.C.: May 11, 2006. Military Pay: Inadequate Controls for Stopping Overpayments of Hostile Fire and Hardship Duty Pay to Over 200 Sick or Injured Army National Guard and Army Reserve Soldiers Assigned to Fort Bragg. GAO-06-384R. Washington, D.C.: April 27, 2006. Contract Management: Opportunities to Improve Surveillance on Department of Defense Service Contracts. GAO-05-274. Washington, D.C.: March 17, 2005. 
VA and Defense Health Care: More Information Needed to Determine If VA Can Meet an Increase in Demand for Post-Traumatic Stress Disorder Services. GAO-04-1069. Washington, D.C.: September 20, 2004. Military Pay: Army Reserve Soldiers Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-911. Washington, D.C.: August 20, 2004. Defense Health Care: Quality Assurance Process Needed to Improve Force Health Protection and Surveillance. GAO-03-1041. Washington, D.C.: September 19, 2003.
The Department of Defense (DOD) implemented the post-deployment health reassessment (PDHRA), which is required to be administered to servicemembers 90 to 180 days after their return from deployment. DOD established the PDHRA program to identify and address servicemembers' health concerns that emerge over time following deployments. This report is the second in response to a Senate Armed Services Committee report directing the Government Accountability Office (GAO) to review DOD's administration of the PDHRA, and to additional congressional requests. In this report, GAO examined (1) the extent to which DOD's central repository contains PDHRA questionnaires for active and Reserve component servicemembers who returned from deployment to Iraq or Afghanistan and (2) how DOD monitors the administration of the PDHRA to Reserve component servicemembers. To conduct this review, GAO performed a quantitative analysis using DOD deployment and PDHRA data, reviewed relevant PDHRA policies, and interviewed DOD officials. DOD policy requires that the military services electronically submit PDHRA questionnaires to DOD's central repository. Based on two separate queries to this repository in 2009, GAO did not find PDHRA questionnaires for a substantial percentage of the 319,000 active and Reserve component servicemembers who returned from deployment to Iraq or Afghanistan between January 1, 2007, and May 31, 2008. GAO's first query on April 15, 2009, showed that only 77 percent of this population of interest had questionnaires in the central repository, leaving approximately 74,000 servicemembers without questionnaires in the repository. On September 4, 2009, GAO queried DOD's central repository again to update its April 2009 data and found that DOD's central repository was still missing PDHRA questionnaires for about 72,000 servicemembers, or 23 percent of the servicemembers in GAO's original population of interest. 
When PDHRA questionnaires are not in DOD's central repository, DOD does not have reasonable assurance that servicemembers to whom the PDHRA requirement applies were given the opportunity to fill out the questionnaire and identify and address health concerns that could emerge over time following deployment. DOD uses four methods to monitor the contractor, Logistics Health, Inc. (LHI), that administers the PDHRA to Reserve component servicemembers. The four monitoring methods are: (1) reviews of periodic reports from LHI; (2) inspections of LHI's administration of the PDHRA; (3) feedback on LHI's administration of the PDHRA from military service officials; and (4) weekly telephone discussions with LHI staff. These methods are used to help ensure that the objective of the PDHRA program is being met for Reserve component servicemembers. Through these methods, DOD identified a number of potential problems that may pose risks to the PDHRA program objective and to the welfare and safety of Reserve component servicemembers. However, GAO found that when monitoring the administration of the PDHRA to Reserve component servicemembers, DOD does not maintain clear documentation that is consistent with federal internal control standards. GAO found that the documentation generated by DOD generally did not clearly describe the potential problems, the actions taken to address the problems, and whether these actions had resolved the problems. Overall, this lack of clear documentation does not allow DOD to have reasonable assurance that potential problems related to the administration of the PDHRA to Reserve component servicemembers have been addressed and resolved.
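The missing-questionnaire figures above follow directly from the repository counts GAO reported. A short Python sketch reproduces the arithmetic using only the rounded numbers published in the report (the report's own counts were derived from unrounded repository data, so the figure implied by the rounded 77 percent is slightly below the "approximately 74,000" stated above):

```python
# Servicemembers returning from Iraq or Afghanistan,
# January 1, 2007, through May 31, 2008 (rounded figure from the report).
population = 319_000

# April 15, 2009 query: only 77 percent had questionnaires on file.
missing_apr = population - round(population * 0.77)
print(missing_apr)             # 73,370 -- close to the stated "approximately 74,000"

# September 4, 2009 query: about 72,000 questionnaires still missing.
share_missing = 72_000 / population
print(f"{share_missing:.0%}")  # 23%, matching the percentage in the report
```

The same arithmetic works in reverse: dividing the roughly 72,000 still-missing questionnaires by the 319,000-member population of interest yields the 23 percent figure GAO cited.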
Although aviation-related activities currently account for only 0.5 percent of total air pollution in the United States, the types of pollutants emitted by these activities are among the most prevalent and harmful in the atmosphere, and are expected to grow over time. The major sources of aviation-related emissions are aircraft, which emit pollutants at ground level as well as over a range of altitudes; the equipment (such as vehicles that transport baggage) that services them on the ground at airports; and vehicles transporting passengers to and from the airport. The amount of emissions attributable to each source varies by airport. A 1997 study of mobile source emissions at four airports found that ground access vehicles were the most significant source (accounting for 27 to 63 percent of total mobile source emissions), followed by aircraft (15 to 38 percent of the total) and ground service equipment (12 to 13 percent of the total). The emissions produced by these sources include carbon monoxide; sulfur dioxide; particulate matter; toxic substances (such as benzene and formaldehyde); and nitrogen oxides and volatile organic compounds, which contribute to the formation of ozone, a major pollutant in many metropolitan areas. In addition, aircraft emit carbon dioxide and other gases that have been found to contribute to climate change through warming. According to the United Nations' Intergovernmental Panel on Climate Change, global aircraft emissions accounted for approximately 3.5 percent of the warming generated by human activities. (The types, amounts, and impact of emissions from aviation-related sources are described in detail in appendix II.) Although only limited research has been done on the impact of projected growth in air travel on emissions, indications are that emissions are likely to continue increasing. FAA reported in June 2001 that the number of commercial flights is expected to increase about 23 percent by 2010 and about 60 percent by 2025.
Each flight represents a takeoff and landing cycle during which most aircraft emissions enter the local atmosphere. In addition, an EPA study of 19 airports projected that the proportion of mobile-source emissions of nitrogen oxides attributable to aircraft in the areas adjacent to these airports will triple from a range of 0.6 to 3.6 percent in 1990 to a range of 1.9 to 10.4 percent in 2010. Such projections, however, do not consider recent industry changes, such as airlines' increased use of smaller aircraft and the financial uncertainties in the aviation industry. A recent report by the Department of Transportation indicated that the September 11, 2001, terrorist attacks, combined with a cutback in business travel, had a major and perhaps long-lasting impact on air traffic demand. A number of federal, state, and international agencies are involved in controlling aviation-related emissions. The Clean Air Act mandates standards for mobile sources of emissions such as aircraft, ground service equipment, and automobiles. As mandated by the act, EPA promulgates emission standards for aircraft, and has chosen to adopt international emission standards for aircraft set by ICAO, which was chartered by the United Nations to regulate international aviation and includes the United States and 188 other nations. As the United States' representative to ICAO, FAA, in consultation with EPA, works with representatives from other member countries to formulate the standards. EPA and FAA work to ensure that the effective dates of emission standards permit the development and application of needed technology and give appropriate consideration to the cost of compliance, according to FAA officials. The officials also noted that EPA is responsible for consulting with FAA concerning aircraft safety and noise before promulgating emission standards.
In addition to issuing aircraft emission standards, ICAO has studied aviation-related emission issues and issued guidance to its members on ways to reduce these emissions. States can address airport emissions in the plans, known as state implementation plans, that they are required to submit to EPA for reducing emissions in areas that fail to meet the National Ambient Air Quality Standards. EPA sets these standards under the Clean Air Act for common air pollutants with health and environmental effects (known as criteria pollutants). Geographic areas that have levels of a criteria pollutant above those allowed by the standard are called nonattainment areas. Areas that did not meet the standard for a criteria pollutant in the past but have reached attainment and met certain procedural requirements are known as maintenance areas. The options available to states for controlling pollution from airports are limited because most emissions come from mobile sources, such as automobiles, which are already regulated by EPA, and states are generally preempted from issuing regulations on aircraft emissions because of EPA's federal responsibility in this area. FAA is responsible for enforcing the emission standards and for ensuring that emissions resulting from airport construction projects under its authority comply with the National Environmental Policy Act, which requires an environmental review of such projects, and with the Clean Air Act's requirement that the projects comply with state implementation plans for attaining air quality standards. (See appendix III for additional information on federal, state, and international responsibilities concerning aviation-related emissions.) Many of the nation's busiest airports and airlines that serve them have initiated voluntary emission reduction measures, such as converting shuttle buses and other vehicles from diesel or gasoline fuels to cleaner alternative fuels.
While the actual impact of these measures is unknown, some measures (such as shifting to new cleaner gas or diesel engines or alternative fuels) have the potential to significantly reduce emissions, such as nitrogen oxides, volatile organic compounds, particulate matter, and carbon monoxide. The airports and airlines have undertaken these efforts for a variety of reasons, including requirements by states imposed as part of their plans to ensure that severely polluted areas (i.e., nonattainment areas) achieve the air quality standards established by the Clean Air Act and to gain federal approval for airport construction projects. In late 2003, EPA will begin implementing stricter standards for ozone, which could make it more difficult for areas to achieve or maintain attainment status. Representatives from the aviation industry as well as federal and state officials told us that the new air quality standards, combined with the boost in emissions expected from increases in air travel, could cause airports to be subject to more emission control requirements in the future. In addition, according to FAA officials, approval of some projects in these areas may be less likely because of several factors, including increased focus on air quality by communities that oppose airport development. Many of the nation’s busiest airports, in conjunction with the air carriers that serve them, have implemented voluntary control measures to reduce emissions from major sources, including aircraft, ground support equipment, and passenger vehicles entering and exiting the airport, according to our review of FAA documents and interviews with airport and state environmental officials. Specific guidelines or regulations for airports to reduce emissions from these sources do not exist, but some airports have been proactive in developing programs and practices that reduce emissions. 
Although the actual impact of these measures is unknown, some initiatives have the potential to significantly reduce emissions from certain sources. For example, a number of carriers at Dallas/Fort Worth International and Houston airports have agreed to voluntarily reduce emissions associated with ground service equipment by up to 75 percent. Figure 1 provides examples of activities to reduce emissions that have been implemented at U.S. airports. Appendix V provides more information on some airports' voluntary efforts to reduce emissions. Only 3 of the 13 states with major commercial airports in nonattainment areas—California, Texas, and Massachusetts—have targeted airports for emission reductions. The remaining states have not included emission reductions at airports as part of their strategies for bringing nonattainment areas into compliance with the Clean Air Act's ambient air quality standards because they have attempted to achieve sufficient reductions from other pollution sources. Officials from these states noted that EPA has the authority to set emission standards for aircraft and nonroad vehicles, including ground support equipment at airports, which preempts the states' regulation of these sources. California and Texas face major ozone nonattainment problems—California in the Los Angeles metropolitan area and Texas in the Dallas-Fort Worth and Houston metropolitan areas. According to air quality officials from both states, even after imposing all of the traditional emission control measures available, such as vehicle emission inspections, the three metropolitan areas still may not be able to reach attainment status for ozone by the 2010 deadline for Los Angeles and by the 2005 and 2007 deadlines for Dallas-Fort Worth and Houston, respectively. Despite potential legal challenges from airlines, both California and Texas turned to airports for additional emission control measures.
Texas has negotiated an agreement with the Dallas/Fort Worth International and Houston airports and the airlines that serve them to reduce emissions attributable to ground support equipment by 90 percent. California has reached a similar agreement with the major airlines serving the five commercial airports in the Los Angeles nonattainment area to reduce emissions from ground support equipment. California's efforts to cut ground support equipment emissions in the Los Angeles area are part of a statewide campaign to reduce airport pollution. In addition to using its limited authority under the Clean Air Act to implement airport-related emission reductions, the state has also employed a certification process provided for in federal law. Under this provision, before FAA can approve a grant for any new airport, new runway, or major runway extension project, the governor must certify that the project complies with applicable air and water quality standards. California has developed criteria for determining whether a proposed airport expansion project would have an impact on the environment, including air quality. Unlike other states, California uses the criteria as a mandatory condition for project certification. If the project exceeds one of the criteria—by increasing the number of passengers, aircraft operations, or parking spaces and thereby producing an impact on the environment—the airport is required to implement emission mitigation measures in order to attain certification. Thus far, three airports—Sacramento International, San Jose International, and Ontario International—have initiated expansion projects that were required to comply with the certification standards. However, in a legal opinion issued in August 2000, FAA's Office of Chief Counsel stated that California has no legal authority to impose operational limitations on airports through the certification process. According to FAA, California has not publicly responded to the opinion.
A California air quality official told us that the state disagrees with the opinion and does not plan to change its certification process. In 1999, Boston Logan International Airport began building a new runway to reduce serious flight delays. As a condition for approving the project, the state required the airport to cap emissions at 1999 levels (referred to as a “benchmark”) because it has determined that the airport is a significant contributor to Boston’s serious ozone problem. To stay within the limit, the airport had considered reduction strategies that include charging higher landing fees during peak operating times to reduce congestion and the resulting emissions. Now that air traffic and emission levels have fallen off since the events of September 11, 2001, the operator of the Boston airport, the Massachusetts Port Authority, believes that peak pricing and other emission reduction strategies will not be needed for several years to keep emissions below 1999 levels. The Massachusetts Port Authority, however, continues to work with airport tenants to implement voluntary emission reduction strategies. More information on states’ efforts to reduce emissions appears in appendix IV. In addition to facing control measures as part of state strategies to attain the Clean Air Act’s ambient air quality standards, airports must also submit most major construction project proposals for federal environmental review, which includes an evaluation of the proposed project’s impacts on air quality. The National Environmental Policy Act and the Clean Air Act require that FAA perform environmental reviews of all airport projects that involve the federal government, such as the construction of federally subsidized runways. As part of this review process, FAA must determine that emissions from projects at airports in nonattainment and maintenance areas do not adversely interfere with states’ plans for the areas to reach attainment. 
We examined all environmental reviews conducted by FAA at major commercial airports in nonattainment areas during the 3-year period 1998 to 2001. These reviews include those required by the National Environmental Policy Act as well as those required under the Clean Air Act to ensure compliance with state implementation plans for achieving ambient air quality standards. During the period, FAA performed such reviews at 24 of the 26 major commercial airports in nonattainment areas. The projects reviewed included developing runways, expanding passenger terminals and air cargo and airline support facilities, and developing roadways and intersections on airport property. Our analysis of airport environmental review documents showed that while air quality issues are a significant consideration for airports planning major development projects, emissions have not been a major obstacle in gaining approval for projects; however, FAA is concerned that increasing emissions from operations could jeopardize the approval of future expansion projects. In 12 of the 24 cases we examined, the environmental reviews stated that the airport expansion projects would not affect air quality in the regions. The environmental reviews for 7 of these 12 projects estimated that emissions would decrease as a result of improvements in operational efficiency. For example, John F. Kennedy International Airport expected its proposed passenger terminal, air cargo, and airline support facilities expansion project to decrease the emission of nitrogen oxides by 207.2 tons per year by 2010 (about a 5-percent reduction in total airport nitrogen oxides emissions) because the project was expected to decrease the amount of time aircraft take to taxi from the runway to the terminal. 
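The JFK figure permits a rough inference about the airport's total emissions: if a 207.2 ton-per-year cut represents about a 5-percent reduction, total airport nitrogen oxides emissions must be on the order of 4,000 tons per year. This implied total is an inference from the two numbers above, not a figure stated in the review documents:

```python
# Back-of-the-envelope inference from the JFK environmental review figures.
reduction_tons = 207.2   # projected annual NOx reduction from the project
reduction_share = 0.05   # described as "about a 5-percent reduction"

# Implied total airport NOx emissions (tons per year); approximate, since
# "about 5 percent" is itself a rounded characterization.
implied_total = reduction_tons / reduction_share
print(round(implied_total))  # roughly 4,144 tons of NOx per year
```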
For 8 of the projects, significant project-related emission increases resulted from construction activities and, although the increases were temporary, the airports were required, under EPA's general conformity rules, to adopt mitigation measures to allow FAA to determine that the projects complied with state implementation plans. In only 3 cases was a significant permanent rise in emissions expected to result from the project. Five airports—Atlanta Hartsfield, Dallas/Fort Worth International, Los Angeles International, San Jose International, and Oakland International—were required to reduce emissions from other sources in order to mitigate the effects of the increased emissions expected from either project construction or operations related to a project. Atlanta Hartsfield, for example, committed to reduce emissions associated with construction by requiring construction equipment to be operated with catalytic converters that would reduce emissions and by using a massive conveyor system to haul fill material, thereby minimizing the use of trucks. Although most recent airport construction projects in nonattainment areas met the requirements of the Clean Air Act, FAA officials noted that in the future, approval of some projects in these areas could be in jeopardy if state implementation plans did not make adequate allowances for emissions that could result from growth in aviation-related activities or include provisions for airports to offset future increases. FAA noted that approval of projects is complicated by the fact that it is often difficult to determine if a development project complies with the state implementation plan because some plans do not contain an aviation emission component, while other plans use a model or methodology to calculate aviation emissions that is incompatible with FAA's model to determine a project's compliance with air quality requirements.
In addition, FAA noted that approval of some projects may be complicated by an increased focus on air quality by community groups that oppose airport projects, the insistence of EPA and/or state and local air quality agencies on mitigation measures when FAA has determined that proposed projects will reduce emissions, and the general need to better understand aviation emissions. According to FAA, approval of airport construction projects may be further complicated by differences among federal and state air quality standards, especially when state standards are more restrictive, and differences among EPA and state/local air quality agencies on the appropriate analysis and mitigation measures. Also, FAA officials have noted an increasing trend for communities to demand under the National Environmental Policy Act that FAA undertake air toxics and health effects studies and disclose their results. Finally, although emissions from construction activities are temporary, if they are above allowable levels, FAA is required to undertake and issue a full determination that the project/activity will conform to the state implementation plan. FAA, EPA, and some states have developed programs to reduce emissions from aviation-related activities and established jointly with the aviation industry a process that has tried to reach a voluntary consensus on how to further reduce emissions. For example, as part of its Inherently Low-Emission Airport Vehicle Pilot Program, required by Congress in 2000, FAA awarded federal grants of up to $2 million to each of 10 airports for alternative fuel vehicles and infrastructure. FAA is using the program to evaluate the vehicles' reliability, performance, and cost-effectiveness in the airport environment. FAA initially anticipated that the program would reduce emissions by 22,584 tons of ozone, 314,840 tons of carbon monoxide, 384 tons of particulates, and 924 tons of sulfur dioxide during the projected lifetime of the airport equipment.
To achieve this reduction, FAA expected the airports to purchase about 1,600 pieces of alternative fuel ground support equipment and 600 alternative fuel ground access vehicles, such as airport cars, buses, and shuttles. As of October 2002, FAA reported a slower-than-expected start-up of the program, with only five airports (Baltimore-Washington International, Dallas/Fort Worth International, Baton Rouge Metropolitan, Sacramento International, and Denver International) making notable progress on the program. According to FAA, the effects of the events of September 11, 2001, have caused unforeseen delays and acquisition deferrals for many low-emission vehicle projects, particularly those that rely on airline financing to convert ground support equipment to alternative fuels. Although FAA plans to provide $17.3 million for the Inherently Low-Emission Airport Vehicle Pilot Program, airports and air carriers expressed the need for more federal funding to reduce emissions. Some airports have said that they would like flexibility in how the Airport Improvement Program or passenger facility charge funds can be used to mitigate or offset emissions from expansion projects. For instance, Sacramento Airport officials stated that they would like the city's light rail system to be connected to the airport to reduce emissions from ground access vehicles. However, Airport Improvement Program or passenger facility charge funds cannot be used for emission mitigation projects located outside airport property. According to FAA, DOT's Congestion Mitigation and Air Quality grant program can be used to finance emission mitigation projects located outside of airport property. Some states also have emission reduction assistance programs that are available to airports.
The California Environmental Protection Agency developed the Carl Moyer Program, which is an incentive-based program that covers the incremental cost of purchasing airport vehicles with cleaner engines, including ground support equipment at airports. The program taps into available new environmental technologies to help the state advance clean air goals. It provides funds to private companies or public agencies to offset the incremental cost of purchasing the cleaner engines. The Texas Natural Resource Conservation Commission also established incentive funds for emission reduction efforts, similar to California’s program. As in California, the funds are not specifically designated for emission reductions at airports, but air carriers that are not participating in the agreement with the Commission to voluntarily reduce ground support equipment emissions can receive grants to convert their ground support equipment. Airlines that are part of the voluntary agreement would not be eligible for the incentive funds. Some airport operators we spoke with would like EPA to set up a process in which airports could obtain “credit” for the amount of emissions reduced by their voluntary efforts; the credits can be “banked” by the airport to use at a future date to offset expected increases in emissions or they can be sold to other nonairport entities in the region that are required to offset emissions. The airport operators also indicated that having such a program encourages airport sponsors to undertake efforts to reduce emissions. Such an emission credit program is available in Washington State. Airports there can implement emission reduction efforts and obtain emission credits, which they can save and use to offset increased emissions from future expansion projects. Thus far, such a system has been adopted at one location, Seattle–Tacoma International Airport, which worked with the local clean air agency to establish a credit program for voluntary emission reduction actions. 
If airports are not allowed to save emission credits, any voluntary reductions will lower their emission baseline, which is used to calculate the impact of future emissions, and limit their options for any emission reductions required to obtain approval for future projects. Because of this situation, some airport officials told us that they have waited to initiate emission reduction efforts until the efforts are needed to gain approval for an expansion project. EPA encourages airports to contact their state and local air quality agencies and negotiate emission credit agreements, as was done by Seattle-Tacoma International Airport. However, according to FAA officials, this localized case-by-case approach to issuing emission credit is inefficient. Instead, FAA supports a consistent national approach that it believes would lessen the burden on airports to obtain emission credits from their respective states. In 1998, FAA and EPA established a process—known as the stakeholders group—which includes representatives from state environmental agencies, airports, air carriers, and the aerospace industry to discuss voluntary efforts to lower nitrogen oxides and other emissions. They established the process because federal and industry officials told us that the current approach to reducing emissions—uncoordinated efforts by individual airports and states—was inefficient and possibly ineffective from a nationwide perspective. For example, some federal officials believe the current approach encourages airlines to move their more polluting equipment to airports that do not require cleaner vehicles, and the aviation industry is concerned about the impact that differing state requirements might have on their operations. According to EPA, another reason for establishing the process was concerns by EPA, state environmental agencies, and environmental groups about international emissions standards, particularly standards for nitrogen oxides. 
The stakeholders group decided to focus on achieving lower aircraft emissions through a voluntary program because this strategy offered the potential for achieving desired goals with less effort and time than a regulatory approach. Initially, the group's discussions focused on emission reduction retrofit kits, which could be applied to some existing aircraft engines, but this approach was found not to be technically feasible. However, as the process evolved, the stakeholders expanded the focus to evaluating various emission reduction strategies for aircraft and ground support equipment. According to participants, the group is currently working to establish a national voluntary agreement for reducing ground service equipment emissions in the nearer term, similar to the agreement in California. In the longer term, the group is considering reductions in aircraft emissions through an approach known as "environmental design space" that recognizes the need to balance such reductions with other competing goals, such as noise reduction, while assuring safety and reliability. FAA also noted that airport operators used the stakeholders group to highlight the need for more guidance on the process for ensuring that federal actions, such as the construction of new runways, conform to the appropriate state implementation plans. FAA and EPA issued guidance on the process in September 2002. The group had also commissioned a study to establish a baseline of aviation-related emissions and another study of options for reducing them. However, the study will not be completed because of resource constraints, according to participants. FAA noted that the progress of the stakeholders group has been impeded by the impact of the events of September 11, 2001, on the airlines and the complex nature of addressing all stakeholders' viewpoints to achieve consensus on a framework that can be applied nationally. The activities of the group were suspended after September 11, but resumed in May 2002.
According to one member of the group, many participants have been frustrated by the group's slow progress, but they hope to define a nationwide program to reduce emissions from ground service equipment in 2003 and continue discussion of aircraft emission reduction options. However, the group has not defined specific objectives or established time frames for achieving its goal of reducing aviation-related emissions. Furthermore, the group's activities may be limited by the financial situation of participating air carriers. In late 2003, EPA plans to begin implementing a more stringent standard for ozone emissions, which could require more sources, including airports, to tighten controls on nitrogen oxides and some types of volatile organic compound emissions, which contribute to ozone formation. The new standard calls for concentrations of ozone not to exceed .08 parts per million over 8-hour blocks of time; the current standard requires concentrations not to exceed .12 parts per million over 1-hour blocks of time. Some state air quality officials we spoke with believe that the continued growth of aviation-related ozone precursor emissions, coupled with such emissions from other sources, may affect their ability to meet the new standard. The implementation of the 8-hour standard for ozone could have significant implications for airports. Currently, 26 major commercial airports are located in nonattainment areas for ozone. EPA has yet to designate and classify which areas will not be in attainment with the 8-hour standard. However, the agency estimates that under the 8-hour standard, areas containing 12 additional airports could be designated as nonattainment areas. Airports in these areas could be constrained in their ability to initiate development projects if they did not comply with the state implementation plans.
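The 8-hour standard is more stringent than the 1-hour standard even though its threshold (.08 versus .12 parts per million) looks only modestly lower, because averaging over a longer window captures sustained elevated concentrations that brief peaks do not. A short Python sketch illustrates the averaging arithmetic with an invented day of hourly readings; the actual regulatory test uses multi-year "design values" and is more involved than this simple comparison.

```python
# Hypothetical hourly ozone readings (ppm) for one day; the values are
# invented for illustration and are not from the report.
hourly = [0.03, 0.04, 0.06, 0.07, 0.09, 0.10, 0.11, 0.10,
          0.09, 0.09, 0.08, 0.07, 0.06, 0.05, 0.04, 0.03]

# Old 1-hour metric: the single highest hourly reading.
max_1hr = max(hourly)

# New 8-hour metric: the highest average over any 8 consecutive hours.
max_8hr = max(sum(hourly[i:i + 8]) / 8 for i in range(len(hourly) - 7))

print(max_1hr <= 0.12)  # True: this day meets the 1-hour standard
print(max_8hr <= 0.08)  # False: the same day exceeds the 8-hour standard
```

A day like this one would pass the old standard but fail the new one, which is why areas that are currently in attainment could become nonattainment areas once the 8-hour standard is implemented.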
EPA, however, believes that the new 8-hour standard provides an opportunity for the airports and the states that have not addressed airport emissions in their state implementation plans to identify airport emission growth rates when new plans are developed under the 8-hour standard. Among the 13 state air quality officials we surveyed, 5 expect that aviation emissions will somewhat or moderately hinder their state's ability to demonstrate compliance with EPA's new 8-hour ozone emission standard, and 3 stated that aviation emissions will greatly hinder their ability to comply. Some of these officials also said they are uncertain how their state will meet the new standards. Because the new 8-hour standard is more stringent, the states will need to develop more rigorous and innovative control measures for all sources and may have to rely on the federal government to reduce emissions from sources over which the state does not have jurisdiction, such as aircraft engines. Other countries use many of the same measures to reduce emissions at airports as the United States and, in addition, two countries have imposed landing fees based on the amount of nitrogen oxides emissions produced by aircraft. ICAO is currently studying emission-based landing fees and other market-based methods; such fees have already been implemented in Switzerland and Sweden. Emission-based landing fees, although considered for Boston Logan International Airport, have not been implemented at any U.S. airports, and many in the U.S. aviation community question their effectiveness. ICAO established a working group to identify and evaluate the potential role of market-based options, including emission charges, fuel taxes, and emission-trading regimes, in reducing aviation-related emissions.
Thus far, the working group has concentrated on carbon dioxide emissions and has concluded that the aviation sector’s participation in an emission-trading system would be a cost-effective measure to reduce carbon dioxide in the long term. The ICAO Assembly, the organization’s highest body, has endorsed the development of an open emission-trading system for international aviation and has instructed its Committee on Aviation Environmental Protection to develop guidelines for open emission trading. The ICAO committee has also been studying emission charges or taxes as well as evaluating voluntary programs to reduce emissions. ICAO’s current policy, adopted in 1996, recommends that emission-based fees be in the form of charges rather than taxes and that the funds collected should be applied to mitigating the impact of aircraft engine emissions. Switzerland was the first country to implement a market-based system for reducing aviation-related nitrogen oxides and volatile organic compound emissions. In 1995, the Swiss federal government enacted legislation that allowed airports to impose emission charges on aircraft. In September 1997, the Zurich airport used this authority to establish emission-based landing fees as an incentive for air carriers to reduce emissions from aircraft using the airport. The use of emission-based landing fees has expanded to other airports in Switzerland and Sweden. The Geneva, Switzerland, airport implemented an emission-based landing fee similar to the fee scheme used at the Zurich airport in November 1998. Several Swedish airports also implemented emission fees after the Swedish Civil Aviation Administration approved such charges in January 1998. Similar to the system at the Zurich airport, the Swedish airports reduced their base landing charges so that income from emission charges is not considered an additional source of revenue. 
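The revenue-neutral structure described above can be illustrated with a small sketch. The emission classes, adjustment percentages, fee levels, and traffic mix below are hypothetical, not the actual Swiss or Swedish tariffs; the point is only that surcharges on higher-emitting engines can be offset by a lower base fee so that total income is unchanged:

```python
# Illustrative sketch of a revenue-neutral, emission-based landing fee scheme
# of the kind described above. Emission classes, adjustments, fee levels, and
# the traffic mix are hypothetical -- not the actual Swiss or Swedish tariffs.

# Surcharge or discount applied to the base landing fee, by emission class.
CLASS_ADJUSTMENT = {
    "low_nox": -0.10,   # 10 percent discount for the cleanest engines
    "average": 0.00,
    "high_nox": 0.20,   # 20 percent surcharge for the highest-emitting engines
}

def landing_fee(base_fee, emission_class):
    """Fee for one landing, given the engine's emission class."""
    return base_fee * (1 + CLASS_ADJUSTMENT[emission_class])

# A period's traffic: (emission class, number of landings) -- hypothetical.
landings = [("low_nox", 30), ("average", 50), ("high_nox", 20)]
flat_fee = 500.0  # the flat per-landing fee charged before the scheme

def total_revenue(base_fee):
    return sum(landing_fee(base_fee, cls) * n for cls, n in landings)

# Revenue is proportional to the base fee, so the revenue-neutral base fee is
# the old flat-fee revenue divided by revenue at a base fee of 1.
neutral_base = flat_fee * sum(n for _, n in landings) / total_revenue(1.0)
```

With this traffic mix the revenue-neutral base fee comes out slightly below the old flat fee, so operators of cleaner engines pay less than before and operators of higher-emitting engines pay more, while the airport's total income is unchanged, consistent with the report's description that emission charges are not treated as an additional source of revenue.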
The establishment of emission-based landing fees in Switzerland and Sweden has affected the operations of airlines with frequent flights to airports in these countries. According to a representative of a jet engine manufacturer, a Swiss airline purchased a number of new aircraft equipped with engines designed to emit lower amounts of nitrogen oxides. The representative said that the airline wanted the engines in order to reduce its landing fees at Swiss airports. However, the airline filed for bankruptcy in 2001 and has ceased operations. Only a few other airlines have expressed interest in equipping their new aircraft with engines that emit less nitrogen oxides because such engines are more expensive, are less fuel-efficient, and have higher operating costs. As of December 2002, no other airlines had purchased such engines. No conclusive studies on the effectiveness of these emission-based landing fees have been completed. According to the Zurich Airport Authority, results of the emission-based landing fee can be shown only in the long term, making it difficult to quantify whether emissions such as nitrogen oxides or volatile organic compounds have been reduced. (FAA officials stated that the effects of emission-based fees can be estimated using existing models. For example, a 2001 ICAO working paper on market-based options for reducing carbon dioxide emissions found that en route emissions charges would be insufficient to meet reduction targets.) Nevertheless, an aviation expert said that the emission-based landing fees have caused airlines to begin considering the cost of nitrogen oxides and volatile organic compound emissions as part of their business decisions. Emission-based landing fees have not been introduced at any U.S. airports. Boston Logan International Airport considered implementing such fees to reduce emissions, but a 2001 study commissioned by the Massachusetts Port Authority, which operates the airport, determined them to be ineffective. 
The study found that emission-based landing fees would be a small portion of commercial air carriers’ operating expenses and would be unlikely to affect their operational, purchasing, or leasing behavior substantially enough for them to consider using lower-emitting aircraft and engines. Thus, the study concluded, the emission-based landing fees would not significantly induce commercial airlines to use aircraft engines emitting lower levels of nitrogen oxides. Although research and development efforts by NASA and aircraft and engine manufacturers have led to engine and airframe improvements that have increased fuel efficiency and lowered carbon dioxide and hydrocarbon emissions, trade-offs among several factors, including engine performance, have also resulted in increased nitrogen oxides emissions. Our analysis of data on aircraft emissions during landings and takeoffs indicates that the newest generation of aircraft engines, while meeting international standards, can produce considerably more nitrogen oxides emissions than the older versions they are replacing. Engine options for some aircraft are now being introduced that reduce nitrogen oxides emissions. Additionally, NASA has ongoing research into technologies that could reduce nitrogen oxides emissions from jet engines to well below current standards. However, aviation industry representatives are unsure whether the technologies will ever be developed to the point where they can be incorporated into future production engines because of uncertainties about funding and other factors. Given the long lifespan of aircraft, even if the technologies are developed, it could be decades before enough airplanes are replaced to have a measurable effect on reducing nitrogen oxides. As a result, both the environmental and aviation communities have expressed concerns that emissions from aircraft, particularly nitrogen oxides, need to be further reduced. 
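Comparisons of landing/takeoff emissions between a new aircraft and the older model it replaces reduce to two ratios: the percent change in emissions per landing/takeoff cycle, and the same change normalized per seat. A minimal sketch with hypothetical placeholder figures, not the certification data underlying our analysis:

```python
# Sketch of the landing/takeoff (LTO) comparisons used in this analysis:
# percent change in NOx per LTO cycle, and the same comparison normalized
# per seat. The NOx and seat figures are hypothetical placeholders.

def pct_increase(new, old):
    return 100.0 * (new - old) / old

old_aircraft = {"nox_g_per_lto": 10_000.0, "seats": 420}  # hypothetical
new_aircraft = {"nox_g_per_lto": 13_400.0, "seats": 360}  # hypothetical

total_change = pct_increase(new_aircraft["nox_g_per_lto"],
                            old_aircraft["nox_g_per_lto"])

per_seat_change = pct_increase(
    new_aircraft["nox_g_per_lto"] / new_aircraft["seats"],
    old_aircraft["nox_g_per_lto"] / old_aircraft["seats"],
)
```

Because the newer aircraft in this sketch also has fewer seats, the per-seat increase exceeds the total increase; this is the effect that per-seat comparisons of aircraft with different capacities are intended to capture.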
Improvements in jet engine design have led to increases in fuel efficiency and reductions in most emissions, particularly emissions from aircraft flying at cruise altitudes. Historically, the improvements in fuel consumption for new aircraft designs have averaged about 1 percent per year. The aviation industry and NASA, which are developing fuel reduction technologies, expect this rate to continue for the next two decades. Air carriers’ desire to control fuel costs provided the impetus for these efforts. (Appendix VI provides a brief overview of fuel reduction technologies.) According to aircraft design experts, fuel consumption is the single biggest factor affecting the amount of most aircraft emissions. Table 1 shows the amount of emissions produced by a typical aircraft turbine engine during cruising operations for each 1,000 grams of fuel burned. According to aviation experts, new aircraft designs are reducing carbon dioxide emissions by about 1 percent per year—the same rate at which fuel consumption is being reduced. ICAO expects this carbon dioxide and fuel reduction trend to continue for the next 20 years. Carbon monoxide and hydrocarbon cruise emissions are declining even faster than the fuel reduction rates. These emissions, which are formed when a portion of the fuel is only partially combusted, are much easier to minimize with the hotter engine temperatures of the new more fuel-efficient engine designs. A byproduct of the improvements in jet engine design has been an increase in nitrogen oxides emissions during landings and takeoffs and while cruising, according to aviation industry experts. The new engine designs are capable of operating at higher temperatures and producing more power with greater fuel efficiency and lower carbon monoxide emissions. However, as engine-operating temperatures increase so do nitrogen oxides emissions. This phenomenon is most pronounced during landings and takeoffs, when engine power settings are at their highest. 
It is during the landing/takeoff cycle that nitrogen oxides emissions have the biggest impact on local air quality. Our analysis of aircraft landing/takeoff emissions shows that newer aircraft produce considerably more nitrogen oxides than older models. We identified examples of aircraft models and engines introduced in the last 5 years and compared their emissions with emissions from older aircraft they might replace. We found, for example, that although the newer Boeing 737 series aircraft are more fuel-efficient, are capable of flying longer distances (or with more weight), emit less carbon monoxide and hydrocarbons, and produce less takeoff noise than their predecessors, they also produce 47 percent more nitrogen oxides during landing/takeoff (see table 2). Significantly higher emissions of nitrogen oxides during landing/takeoff for the aircraft introduced in the last 5 years also occur in the largest aircraft. For example, the Boeing 777, the newest of the large jets, emits significantly more nitrogen oxides than comparable older aircraft. Table 3 compares a passenger model Boeing 747-400 with the Boeing 777 model and engines that it is most comparable to in seating capacity and range. Even before we adjusted for the greater seating capacity of the larger Boeing 747-400, we found that the most comparable Boeing 777—the 200ER model—produces 34 percent more nitrogen oxides emissions, even though ICAO data shows that the Boeing 777 is quieter and more fuel-efficient than the older aircraft it is replacing. For example, on a per seat basis, the Boeing 777 can be as much as 30 percent more fuel-efficient than older model Boeing 747s. As shown in table 4, the percentage increase in nitrogen oxides during landing/takeoff is 57 percent when the two aircraft are compared on a per seat basis (the amount of emissions divided by the number of seats on the aircraft). EPA and FAA regulate nitrogen oxides emissions and other emissions for U.S. 
commercial aircraft by requiring engine designs to meet ICAO standards for these emissions. Prior to production, all new engine designs are tested to determine their nitrogen oxides and other emission characteristics. Only engines that meet the standards are certified for production. ICAO standards for nitrogen oxides were first adopted in 1981, and more stringent standards were adopted in 1993 (20 percent more stringent, effective 1996) and again in 1998 (16 percent more stringent, effective 2004). ICAO working groups are assessing whether the standards for nitrogen oxides emissions should be made more stringent than the standards that will take effect in 2004. Options being considered could make the standards between 5 percent and 30 percent more stringent between 2008 and 2012. Under ICAO standards, newly designed engines and modified versions of older designs are allowed to produce significantly more nitrogen oxides than their predecessors. This is because the ICAO standards recognize that nitrogen oxides emissions are a function of engine power capability and operating pressure. Therefore, the standards allow for higher nitrogen oxides emissions for engines that (1) operate at higher pressure ratios, which increase their fuel efficiency, and (2) produce more power. For example, the most common updated Boeing 737-700 aircraft model and engine produces 41 percent more nitrogen oxides during landing/takeoff than the most common older version it is replacing (see table 5). Both engines will meet the new ICAO standard, which will go into effect in 2004 (the old engine betters the standard by about 15 percent, the new one by about 10 percent). A lower nitrogen oxides producing engine is available for the Boeing 737-700. This engine produces 18.5 percent more nitrogen oxides than the older Boeing 737-700 that it is most comparable to in power and versatility. 
However, this engine is less common in the fleet than the more powerful one, which offers more aircraft versatility. The database we used shows that in the U.S. fleet there were 8 Boeing 737-700s with the lower nitrogen oxides emitting engines and 118 with the more powerful engines. There is an ongoing debate between the aviation and environmental communities over the best method for developing nitrogen oxides certification standards. Some in the aviation community want to maintain the current system, under which the standards are made more stringent only when the engine manufacturers have produced engines that meet the new standards, and new standards apply only to newly certified engines. (An industry official identified only two older types of engines that would not meet the more stringent 2004 nitrogen oxides standards.) Officials for the aviation industry said that it would be inadvisable to force more aggressive nitrogen oxides standards because new engine development programs are already complex and have many business and schedule risks. These officials added that the environmental regulatory process lacks cost-benefit data to defend a more aggressive approach, which could result in extreme financial harm for engine and aircraft manufacturers if the approach delayed a new program. Further, some believe that if reductions in nitrogen oxides were to become a higher priority, it would be better to have market-based incentives that reward lower nitrogen oxides emissions than aggressive and rigid pass/fail regulatory barriers. Moreover, some federal, state, and local environmental officials believe more incentives are needed to reduce aircraft nitrogen oxides emissions beyond the ICAO certification standards that are to take effect in 2004. They say that the current system gives little value to reducing nitrogen oxides in the many trade-offs among emissions, fuel consumption, and other factors made during engine design. 
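The way the standards scale with pressure ratio, and the margin by which an engine betters its limit, can be sketched as follows. The linear form mirrors the structure of the ICAO limits, but the coefficients and engine figures are illustrative, not the actual CAEP regulatory levels, which also vary with rated thrust:

```python
# Sketch of a NOx certification limit that rises with engine pressure ratio,
# mirroring the structure of the ICAO standards described above. The
# coefficients and engine figures are illustrative only -- not the actual
# CAEP regulatory levels.

def nox_limit(opr, intercept=19.0, slope=1.6):
    """Permitted characteristic NOx (Dp/Foo, g/kN) as a line in overall
    pressure ratio (OPR); illustrative coefficients."""
    return intercept + slope * opr

def margin_to_standard(dp_foo, opr):
    """Percent by which an engine betters (positive) or exceeds (negative)
    its regulatory level."""
    limit = nox_limit(opr)
    return 100.0 * (limit - dp_foo) / limit

# Hypothetical engines: the newer, higher-pressure-ratio engine emits more
# NOx in absolute terms yet still complies, because its regulatory line is
# higher -- the pattern the report describes.
older_engine = {"dp_foo": 44.0, "opr": 25.0}
newer_engine = {"dp_foo": 58.0, "opr": 32.0}
```

In this sketch both engines comply, with the older one holding the larger margin even though the newer one emits roughly a third more NOx in absolute terms, which illustrates how a fleet can grow dirtier in nitrogen oxides while every engine meets the standard.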
These officials reason that if there were more incentives to reduce nitrogen oxides emissions beyond the certification requirements, these incentives would accelerate innovations that minimize degradations in other engine performance characteristics such as fuel efficiency. While NASA and engine manufacturers have made continuous improvements for decades in technologies that have improved fuel efficiency, decreased noise, and decreased all emissions including nitrogen oxides, the design of the newest generation of engines has resulted in trade-offs that favor fuel efficiency and increase nitrogen oxides. Two engine manufacturers have responded to this problem by developing options for several new engines that reduce nitrogen oxides. (General Electric has developed a “dual annular combustor” technology for one of its CFM56 engines, and Pratt & Whitney has developed a “Technology for Affordable Low NOx” for some of its engines. This TALON technology is being used on some aircraft in the U.S. fleet.) According to NASA, about 100 engines using one of these technology options are currently in service on passenger and cargo aircraft. According to industry officials, knowledge gained from developing these options is contributing to ongoing nitrogen oxides reduction research. NASA, in association with jet engine manufacturers and the academic community, is working on several technologies to reduce nitrogen oxides emissions, although it is unclear if they can be introduced on commercial aircraft in the foreseeable future. If successfully developed and implemented, these technologies could significantly lower the emission of nitrogen oxides during landing and takeoff in new aircraft in stages over the next 30 years. 
However, the development of more fuel-efficient engines by NASA and the engine manufacturers, which is resulting in higher nitrogen oxides emissions, and the lack of economic incentives for airlines to support efforts to reduce nitrogen oxides emissions make the possibility of reaching these goals uncertain. In the last several years, increases in nitrogen oxides emissions from the more fuel-efficient engines have outpaced improvements made to reduce these emissions. Appendix VI provides more information on research to reduce nitrogen oxides emissions. Adding to the uncertainty of introducing technologies to reduce nitrogen oxides is the limited federal funding for this research effort. NASA officials told us that in the past they developed their research to the full engine test level before engine manufacturers would take over responsibility for integrating the improvements into production-ready engines. However, budget cuts made in their emission research programs beginning in fiscal year 2000 have led them to end their research at the engine component level, below full engine testing. Figure 2 shows the funding for this program. Industry officials and aviation experts agree on the importance of NASA’s research and that NASA is focusing on the right mix of near-term and long-term technologies, but they are critical of the amount of funding dedicated to nitrogen oxides reduction research. NASA’s research to reduce nitrogen oxides is a component of its Ultra Efficient Engine Technology Program. The goal of this program is to develop technologies that will enable U.S. manufacturers to compete in the global marketplace for new commercial gas turbine engines. The current program is funded at $50 million per year. Industry representatives stated that shrinking budgets have made it difficult for NASA to maintain a level of effort at a critical mass for each project within the Ultra Efficient Engine Technology Program. 
Furthermore, they added that engine manufacturers could not afford to work with immature technology when engaged in new engine development projects. This is because new engine developments are tied into projects with the airlines, and the engines must meet tight cost, schedule, and performance goals if they are to win market share. The Ultra Efficient Engine Technology Program is a scaled-back version of a larger aeronautical research program that was terminated in fiscal year 2000. NASA officials said that budget cuts have reduced research in the current program by about 40 percent from the previous program. In the previous program, research was typically developed to the point where the technology was integrated into the full engine system. In the current program, funding is only available to incorporate the technology into engine components. The National Research Council has concluded that the current funding level jeopardizes achieving program results and does not carry the research far enough for the engine manufacturing industry to readily adopt it. As a result of the uncertainties surrounding emission reduction technology research, it is unclear when new production aircraft will, in the aggregate, start lowering nitrogen oxides emissions on a per seat basis during the landing/takeoff cycle. Because of the 30-year projected life of new commercial aircraft, it could take decades before future new aircraft can contribute to nitrogen oxides reductions. Both the environmental and aviation communities have voiced concerns about the need to better control the growth of aircraft emissions, particularly nitrogen oxides. Air quality officials from the 13 states that have airports in nonattainment areas told us that emission standards for aircraft should be made more stringent for a number of reasons. For example, several of those officials said that available control measures for other air pollution sources have been nearly exhausted. 
They noted that aircraft have not been as strictly regulated as other sources, such as automobiles, and that reductions from aircraft may be needed in the future for some areas to maintain attainment of the Clean Air Act’s standards. Likewise, in 2002, the National Academy of Sciences’ National Research Council reported that the advances that have led to increased efficiencies in individual airplanes are not sufficient to decrease the total emissions of the global fleet, which is increasing in response to accelerating demand. In the same vein, the Intergovernmental Panel on Climate Change reported in 1999 that “although improvements in aircraft and engine technology and in the efficiency of the air traffic control system will bring environmental benefits, these will not fully offset the effects of the increased emissions resulting from the projected growth in aviation.” Concerns about aircraft emissions have prompted calls for an improved approach for controlling them. For example, the National Research Council has recommended that the U.S. government carry out its responsibilities for mitigating the environmental effect of aircraft emissions and noise with a balanced approach that includes interagency cooperation in close collaboration with the private sector and university researchers. The Council emphasized that the success of this approach requires commitment and leadership at the highest level as well as a national strategy and plan that, among other things, coordinates research and technology goals, budgets, and expenditures with national environmental goals. Along the same lines, a recent industry article on the environmental effectiveness of ICAO emission standards suggested that a programmatic framework is required to guide the development of a consensus on policy options for further reducing aircraft emissions. 
Among the elements of the framework would be establishing the environmental need, the technical capability, the economic viability, and the regulatory consistency of each option. Aviation’s impact on local air quality is expected to grow as a result of projected increases in air travel. In addition, more attention will be focused on finding additional ways to reduce emissions from airports to enable localities to meet more stringent ozone standards, which go into effect in late 2003. In 1998, FAA, EPA, and industry officials established a stakeholders group to develop and implement a voluntary, nationwide program to reduce aviation-related nitrogen oxides emissions because they found the current approach—uncoordinated efforts by individual airports and states—inefficient for air carriers and potentially ineffective in reducing emissions nationwide. However, the stakeholders group has progressed slowly because of the complex nature of achieving consensus on all issues and, thus far, has not defined specific objectives or established time frames for achieving emissions reductions. Despite its participation in the stakeholders group, FAA has not developed a long-term strategic framework to deal with these challenges. Moreover, FAA lacks a thorough study on the extent and impact of aviation emissions on local air quality. Without such management tools, FAA cannot assess the status or the effectiveness of its efforts to improve air quality. The study on aviation emissions prepared by the Intergovernmental Panel on Climate Change on aviation’s effect on the global atmosphere provides a model for a study that FAA could perform to develop baseline information and lay a foundation for a strategic framework. Such a study could accomplish the goals of the study that the stakeholders group commissioned, but never completed, as well as create an opportunity for making public the substance of its deliberations and for incorporating this substance in a plan for reducing emissions. 
Once completed, such a study would provide baseline information for setting goals and time frames to measure progress in reducing aviation-related emissions. We recommend that the Secretary, DOT, direct the Administrator of FAA, in consultation with the Administrator of EPA and the Administrator of NASA, to develop a strategic framework for addressing emissions from aviation-related sources. In developing this framework, the Administrator should coordinate with the airline industry, aircraft and engine manufacturers, airports, and the states with airports in areas not in attainment of air quality standards. Among the issues that the framework should address are the need for baseline information on the extent and impact of aviation-related emissions, particularly nitrogen oxides emissions; the interrelationship among emissions and between emissions and noise; options for reducing aviation-related emissions, including the feasibility, cost, and emission reducing potential of these options; goals and time frames for achieving any needed emission reductions; the roles of NASA, other government agencies, and the aviation industry in developing and implementing programs for achieving needed emission reductions; and coordination of emission reduction proposals with members of ICAO. Upon its completion, the Administrator, FAA, should communicate the plan to the appropriate congressional committees and report to them on its implementation on a regular basis. We provided a draft of this report to the Department of Transportation, the Environmental Protection Agency, and the National Aeronautics and Space Administration for review and comment. FAA’s Director, Office of Environment and Energy, and senior managers in EPA’s Office of Air and Radiation provided oral comments and NASA’s Deputy Director provided written comments. (See appendix VIII.) 
The three agencies generally concurred with our findings and recommendation and provided technical corrections, which we incorporated as appropriate. In addition, FAA indicated that our report provides a helpful overview on the aviation emissions issue from the perspective of multiple stakeholders dealing with this important issue. FAA also indicated that it is providing heightened attention to aviation emissions through multiple efforts including improving data and modeling, working with the international community on improved standards, and considering alternative approaches to encourage reductions in aviation-related, ground-based and aircraft emissions. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 5 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Secretary of Transportation; the Administrator, FAA; the Administrator, EPA; and the Administrator, NASA. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please call me at (202) 512-3650 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IX. The Chairman of the Subcommittee on Aviation, House Committee on Transportation and Infrastructure asked us to provide information on the nature and scope of aviation’s impact on air quality and the opportunities that exist to reduce emissions from aviation activities. Specifically, our research focused on (1) what efforts are being undertaken to reduce emissions from airport activities and what the outcomes are of these efforts, (2) what additional efforts are being undertaken by other countries to reduce aviation-related emissions, and (3) how improvements in aircraft and engine design have affected aircraft emissions. 
To address the three questions, we interviewed and collected material from federal officials at the Federal Aviation Administration (FAA), Environmental Protection Agency (EPA), and National Aeronautics and Space Administration (NASA). We also interviewed and collected information from representatives of aviation associations, airlines, and aircraft manufacturers, as well as officials from airports, state and local governments, and nongovernmental organizations. In addition, we reviewed our previous studies and those of EPA, the Natural Resources Defense Council, the Intergovernmental Panel on Climate Change, and other aviation-related environmental studies. To address the first research question, we identified the nation’s 50 busiest commercial service airports and determined that 43 of these airports are located in areas designated as nonattainment or maintenance with respect to requirements of the Clean Air Act. We reviewed and summarized environmental review documents submitted from 1997 through 2001 for the 43 airports to identify the nature of emissions from aviation activities and efforts to mitigate them. We also reviewed applicable sections of state implementation plans for the 13 states in which the 43 airports are located to identify emission-related sources and determine the nature of mitigation measures being undertaken. We also conducted comprehensive computer literature searches to identify the environmental effects of airport operations. To also address the first research question and to provide information on the roles and responsibilities of states in relation to aviation-related emissions, we identified 13 states with airports located in air quality problem areas and conducted a telephone survey with state air quality authorities in these areas to obtain information on oversight/regulatory responsibilities for airport activities. 
We selected the states by first identifying the top 50 busiest commercial service airports on the basis of the number of air carrier landings and takeoffs in fiscal year 2001. In those states, 26 airports were identified as being located in areas designated as nonattainment for ozone. The 26 airports are located in the following 13 states: Arizona, California, Georgia, Kentucky, Maryland, Massachusetts, Missouri, New Jersey, New York, Pennsylvania, Texas, Illinois, and Virginia. We reviewed applicable sections of the Clean Air Act, the National Environmental Policy Act, states’ air quality laws, and International Civil Aviation Organization (ICAO) policies that defined air emissions standards applicable to aviation-related activities and agencies’ role and responsibilities for administering them. For the first research question, we also selected seven airports for case studies—Los Angeles International, Boston Logan International, Sacramento International, Dallas/Fort Worth International, Chicago O’Hare International, George Bush Intercontinental/Houston, and Atlanta Hartsfield airports. We selected these airports on the basis of passenger traffic, air quality status, and initiatives undertaken to deal with airport- related emissions. At each location, we interviewed and gathered data from officials representing FAA and EPA regional offices, airports, state and local governments, and nongovernmental organizations on efforts to reduce emissions. To address the second research question, we identified international efforts to reduce aviation-related emissions through our interviews with FAA, Department of State, ICAO, airport, airline, and nongovernmental agency officials. We conducted comprehensive computer literature searches to identify other international airports and to gather information on the efforts being undertaken by these airports to reduce aviation-related emissions. 
Our searches identified aviation emission reduction programs at European airports, including those in Switzerland and Sweden. We reviewed materials from Swiss and Swedish federal civil aviation officials on these efforts. We also reviewed proposed European Union policies on reducing aviation-related emissions. Finally, to address the third research question, we interviewed jet engine manufacturers, NASA researchers, and a university researcher to obtain information on efforts to reduce aircraft emissions. In addition, we calculated the landing and takeoff emissions for every aircraft model and engine combination in the U.S. 2001 commercial fleet for which data were available. Next, we looked for emission trends by identifying instances in which new model/engine combinations had been introduced in the last 5 years. We then compared the landing/takeoff emission characteristics of these newer aircraft with the emissions of the older aircraft they were most likely to replace. We identified examples of emissions trends for new aircraft. We did not perform a complete analysis of all trends. In performing this analysis, we obtained the following information on every aircraft in the U.S. commercial aircraft fleet: specific model and engine, year 2001 landing/takeoff counts, aircraft age, and seating capacity. This information came from AvSoft, a company that specializes in detailed data on commercial aircraft. We summarized this information for each specific model and engine combination. We then calculated the landing/takeoff emissions for each of these combinations using the Emissions and Dispersion Modeling System (EDMS), version 4.01 software developed by FAA for this purpose. EDMS software calculates landing/takeoff emissions for four major criteria pollutants: carbon monoxide, volatile organic compounds, nitrogen oxides, and sulfur dioxides. 
The calculations take into account characteristics of specific aircraft model/engine combinations as well as airport-specific variations in the landing/takeoff cycle. We calculated the emissions for a representative “generic” airport using EDMS default values. Key values used in our EDMS calculations were emission ceiling height: below 3,000 feet; taxi-time: 15 minutes; and takeoff weight: EDMS default value. To determine the reliability of the software and data we used, we reviewed FAA’s and AvSoft’s quality controls, customer feedback information, and self-assessments. A weakness AvSoft identified with the data we used was a tendency to undercount the landings/takeoffs for smaller aircraft (aircraft with 70 seats or less). In addition, the EDMS software does not have complete information on some of the less common aircraft models and engines. This weakness, however, did not affect the trends we identified because of the limited use of these models and engines. On the basis of our experience working with the data and the software, we determined that the vendors were providing reliable products for the purposes for which we used them and that additional data and software reliability assessments were not needed to support our conclusions. During the review, the following aviation experts reviewed our methods and report drafts for accuracy and balance: John Paul Clarke of the Massachusetts Institute of Technology; Mary Vigilante of Synergy Consulting, Inc.; and Ian Waitz of the Massachusetts Institute of Technology. Most emissions associated with aviation come from burning fossil fuels that power aircraft, the equipment that services them, and the vehicles that transport passengers to and from airports. 
The primary types of pollutants emitted by aircraft and airport-related sources are volatile organic compounds, carbon monoxide, nitrogen oxides, particulate matter, sulfur dioxide, toxic substances such as benzene and formaldehyde, and carbon dioxide, which in the upper atmosphere is a greenhouse gas that can contribute to climate change. When combined with some types of volatile organic compounds in the atmosphere, nitrogen oxides form ozone, which is the most significant air pollutant in many urban areas as well as a greenhouse gas in the upper atmosphere. Particulate matter emissions result from the incomplete combustion of fuel. High-power aircraft operations, such as takeoffs and climb outs, produce the highest rate of particulate matter emissions because of the high fuel consumption under those conditions. Sulfur dioxide is emitted when sulfur in the fuel combines with oxygen during the combustion process. Fuels with higher sulfur contents produce higher amounts of sulfur dioxide than low-sulfur fuels. Ozone and other air pollutants can cause a variety of adverse health and environmental effects. Aircraft emit pollutants both at ground level and over a range of altitudes. At most U.S. airports, aircraft can be a major source of air pollutants. The major air pollutants from aircraft engines are nitrogen oxides, carbon monoxide, sulfur dioxide, particulate matter, and volatile organic compounds. The burning of aviation fuel also produces carbon dioxide, which is not considered a pollutant in the lower atmosphere but is a primary greenhouse gas responsible for climate change. During the landing and takeoff cycle and at cruising altitudes, aircraft produce different levels of air pollutant emissions. Emission rates for volatile organic compounds and carbon monoxide are highest when aircraft engines are operating at low power, such as when idling or taxiing. Conversely, nitrogen oxides emissions rise with increasing power level and combustion temperature. 
Thus, the highest nitrogen oxides emissions occur during aircraft takeoff and climb out. In addition, aircraft are equipped with auxiliary power units that are sometimes used to provide electricity and air conditioning while aircraft are parked at terminal gates, and these units emit low levels of the same pollutants as aircraft engines. When flying at cruising altitudes, aircraft emissions, including carbon dioxide, nitrogen oxides, and aerosols that are involved in forming contrails and cirrus clouds, contribute to climate change. Ground support equipment—which provides aircraft with such services as aircraft towing, baggage handling, maintenance/repair, refueling, and food service—is also a source of emissions at airports. This equipment is usually owned and operated by airlines, airports, or their contractors. According to EPA, the average age of ground support equipment is about 10 years, although some of the equipment can last more than 30 years with periodic engine replacement. Most ground support equipment is powered by either diesel or gasoline engines, and older engines pollute more than newer engines. Emissions from ground support equipment include volatile organic compounds, carbon monoxide, nitrogen oxides, and particulate matter. At some airports, airlines and airport operators are introducing electric and alternative-fuel powered ground support equipment. Emissions from passenger vehicles and trucks, referred to as ground access vehicles, are an important consideration at airports. Heavy traffic and congestion in and around airports result from the influx of personal vehicles, taxis and shuttles discharging and picking up passengers, and trucks hauling airfreight and airport supplies. Such traffic generates significant amounts of emissions, including carbon monoxide, volatile organic compounds, and nitrogen oxides. 
Several states that we surveyed indicated that automobiles are the major source of volatile organic compounds, carbon monoxide, particulate matter, and nitrogen oxides in areas with air quality problems at airports. This situation has occurred despite the fact that automobile emissions have been reduced on a per vehicle basis by 98 percent in the past 25 years. Other sources of emissions at airports include construction activities, electric power generating plants, and maintenance operations. The air pollutants emitted by these activities can include particulate matter, nitrogen oxides, carbon monoxide, and sulfur dioxide. The information available on the relative contribution of aviation-related activities to total emissions in an area is limited, but it indicates that these activities account for a small amount of air pollution and the proportion attributed to airports is likely to grow over time. According to EPA, aircraft, which are the only source of emissions unique to airports, currently account for about 0.6 percent of nitrogen oxides, 0.5 percent of carbon monoxide, and 0.4 percent of the volatile organic compounds emitted in the United States from mobile sources. In cities with major airports, aircraft-related emissions could be higher or lower. In a 1999 study of 19 airports located in 10 cities, EPA found that the proportion of nitrogen oxides emissions from mobile sources attributed to aircraft ranged from 0.6 percent to 3.6 percent in 1990. EPA also found that aircraft accounted for 0.2 percent to 2.8 percent of volatile organic compound emissions from mobile sources in the 10 cities during the period. From information contained in a recent study of emissions at Dallas/Fort Worth International Airport we estimated that aircraft produced about 3 percent of the nitrogen oxides and about 5 percent of the carbon monoxide present in the metropolitan area. 
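Estimates like the Dallas/Fort Worth figures cited above are derived by summing source-level entries in an area's emissions inventory and dividing by the area total. A minimal sketch of that arithmetic, using illustrative round numbers rather than the actual inventory data:

```python
# Sketch of the share-of-inventory arithmetic behind airport-contribution
# estimates. The tonnages below are illustrative round numbers (tons/year
# of NOx), not the actual Dallas/Fort Worth inventory data.
area_inventory_nox = 100_000  # total area NOx emissions from all sources

airport_sources_nox = {
    "aircraft": 3_000,                  # ~3 percent of the area total
    "ground_support_equipment": 2_900,  # ~3 percent
    "other_airport_sources": 100,
}

# The airport's share is the sum of its sources over the area total.
airport_total = sum(airport_sources_nox.values())
airport_share = airport_total / area_inventory_nox
print(f"airport share of area NOx: {airport_share:.0%}")
```

With these placeholder figures the airport share works out to 6 percent, mirroring how the individual aircraft and ground support equipment shares in the text combine into the airport-wide estimate.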
A 1999 study of emissions at Chicago O'Hare International Airport found that aircraft and the airport as a whole emitted about 1.6 percent and 2.6 percent, respectively, of the total volatile organic compound emissions within a 10-mile radius of the airport's terminal area and that nonairport sources were considerably more important to local air quality than aircraft. In addition, a 2001 report on an air quality initiative for Boston Logan International Airport stated that the airport contributed less than 1 percent of the ozone-forming nitrogen oxides and volatile organic compound emissions in the Boston area. Little research has been done on how much of an area's total emissions (called an emissions inventory) is attributable to ground support equipment and airport-related road traffic, because they are categorized as nonroad and onroad mobile sources, respectively, both of which are already accounted for in emissions inventories. However, our analysis of the Dallas/Fort Worth International Airport emissions inventory indicated that ground support equipment contributed almost 3 percent of the nitrogen oxides emissions for the area. When all airport-related emissions are added together, we estimated that Dallas/Fort Worth International Airport was responsible for 6 percent of the nitrogen oxides in the metropolitan area. The amount of emissions attributable to each source varies by airport. According to a 1997 study of four airports, ground access vehicles were the most significant source of mobile emissions, responsible for 45 to 68 percent of the airports' volatile organic compounds and 27 to 63 percent of the nitrogen oxides emitted from mobile sources. Aircraft operations were found responsible for the next largest share of emissions from mobile sources, with total contributions of 15 to 38 percent and 26 to 37 percent for volatile organic compounds and nitrogen oxides, respectively. 
Ground support equipment accounted for 12 to 13 percent of total emissions of volatile organic compounds and 14 to 20 percent of total nitrogen oxides from mobile sources at the airports. The report also found that auxiliary power units for aircraft contributed a small amount of the emissions of volatile organic compounds and 9 to 20 percent of total nitrogen oxides emissions from mobile sources. According to the report, data on particulate matter emissions is not available for aircraft and auxiliary power units, but ground access vehicles contribute one type of particulate matter at 1.3 to 2.7 times the rate emitted by ground support equipment. Some pollutants associated with aviation activities can increase the risk of a variety of health and environmental impacts. However, attributing these impacts to any particular source is extremely difficult because of the multiplicity of pollution sources in urban areas and the complexities involved in determining the exact causes of disease and environmental damage. The limited amount of research available indicates that the impact of the pollutants associated with airport activities is no more pronounced in the areas near airports than it is in other urban areas. Nevertheless, the cumulative impact of pollution from all sources can affect health and the environment. The pollutant of most concern in the United States and other industrial countries is ozone, which is formed when nitrogen oxides, some types of volatile organic compounds, and other chemicals are combined and heated in the presence of sunlight in the atmosphere. Ozone has been shown to aggravate respiratory ailments, such as bronchitis and asthma. Research has indicated that certain levels of ozone affect not only people with impaired respiratory systems but healthy adults and children as well. 
Exposure to ozone for several hours at relatively low concentrations has been found to significantly reduce lung function and induce respiratory inflammation in normal, healthy people during exercise. In addition, according to EPA, there is growing public concern over emissions of air toxics, which include benzene, formaldehyde, and particulate matter, because of their potential adverse effects on health. Some of these emissions are associated with aviation activities. EPA’s 1996 National Toxics Inventory indicates that amounts of hazardous air pollutants produced by aircraft are small relative to other sources such as on-road vehicles. However, EPA’s national estimates are based on limited data, and very little data is available on toxic and particulate matter emissions in the vicinity of airports. A study of emissions at Los Angeles International Airport is expected to shed some light on the subject. In addition, FAA is involved in a study on identifying methods to measure aircraft particulate matter emissions. In the upper atmosphere, aircraft emissions of carbon dioxide and other greenhouse gases can contribute to climate change. Greenhouse gases can trap heat, potentially increasing the temperature of the earth’s surface and leading to changes in climate that could result in such harmful effects as coastal flooding and the melting of glaciers and ice sheets. According to a 1999 report by the Intergovernmental Panel on Climate Change, conducted under the auspices of the United Nations, global aircraft emissions in general accounted for approximately 3.5 percent of the warming generated by human activities. Jet aircraft are also the largest source of emissions generated by human activity that are deposited directly into the upper atmosphere. Carbon dioxide is the primary aircraft emission; it survives in the atmosphere for over 100 years and contributes to climate change. 
In addition, other gases and particles emitted by jet aircraft, including water vapor, nitrogen oxides, soot, contrails, and sulfate, combined with carbon dioxide can have two to four times as great an effect on the atmosphere as carbon dioxide alone, although some scientists believe that this effect requires further study. The Intergovernmental Panel on Climate Change concluded that aircraft emissions are likely to grow at 3 percent per year and that the growing demand for air travel will continue to outpace emission reductions achieved through technological improvements, such as lower emitting jet engines. Table 6 summarizes the possible effects of the major pollutants associated with aviation-related activities on human health and the environment. The federal government and the states have responsibility for regulating sources of aviation emissions under the Clean Air Act, which was established to improve and protect air quality for human health and the environment. In addition, a United Nations entity, the International Civil Aviation Organization (ICAO), establishes international aircraft emissions standards, studies aviation emissions-related issues, and provides guidance for controlling these emissions. ICAO includes 188 member countries, which have agreed to adopt, to the extent possible, standards set by ICAO. For aircraft or aircraft engine emissions, the Clean Air Act gives EPA the authority to establish emission standards. EPA, in consultation with FAA, has chosen to adopt the international emissions standards established by ICAO. FAA serves as the United States' representative to ICAO's Committee on Aviation Environmental Protection, which is responsible for assessing aviation's impact on the environment and establishing the scientific and technological basis for new gaseous emissions standards for aircraft engines. 
The committee has established several working groups to identify and evaluate emissions-reduction technology, operational measures, and market-based options to reduce emissions. Both FAA and EPA participate in these working groups. In addition, FAA is responsible for monitoring and enforcing U.S. manufacturers' compliance with aircraft emissions standards, which it does in part through its process for certifying new aircraft engines. The federal government also plays a role in developing technologies to reduce aircraft emissions. NASA, in partnership with the aviation industry and universities, conducts research into improving the capabilities and efficiency of commercial aircraft. Part of this effort includes developing more fuel-efficient and lower-emitting engines. Over the years, NASA has been credited with contributing to technologies that have significantly lowered the amount of fuel consumed by jet engines; this in turn has reduced some emissions, particularly the greenhouse gas carbon dioxide. Under the Clean Air Act, EPA has jurisdiction for establishing national standards for all other mobile sources of emissions, including those associated with airport operations—such as ground support equipment and ground access vehicles (automobiles, trucks, and buses) operating on airport property. In establishing these emissions standards, EPA is to take into consideration the time it takes to develop the necessary technology and the cost of compliance. The Clean Air Act also directs EPA to establish national standards for ambient air quality, and these standards can affect airport operations and expansion plans. EPA has set National Ambient Air Quality Standards for carbon monoxide, lead, nitrogen dioxide, particulate matter, ozone, and sulfur dioxide. 
EPA has labeled these pollutants criteria pollutants because the permissible levels established for them are based on "criteria," or information on the effects on public health or welfare that may be expected from their presence. The criteria pollutants are directly or indirectly generated by multiple sources, including airport activities. Local areas not meeting the standards for criteria pollutants are referred to as nonattainment areas. The act groups nonattainment areas into classifications based on the extent to which the standards for each criteria pollutant are exceeded and establishes specific pollution controls and attainment dates for each classification. The act has set 2010 as the deadline for extreme ozone nonattainment areas to meet the standards. (California is currently the only state with such an area.) Although the Clean Air Act authorizes EPA to set ambient air quality standards, the states, which can adopt EPA's standards or their own more stringent ones, are responsible for establishing procedures to attain and maintain the standards. Under the act, states that have areas in nonattainment must adopt plans—known as state implementation plans—for attaining and maintaining air quality standards and submit the plans to EPA for approval. State implementation plans are based on analyses of emissions from all sources in the area and computer models to determine whether air quality violations will occur. If data from these analyses indicate that air quality standards would be exceeded, the states are required to impose controls on existing emission sources to ensure that emissions do not exceed the standards. States can require control measures on airport emissions sources that they are not preempted from regulating, such as power plants and ground access vehicles and, to a limited extent, ground support equipment. However, states cannot control emissions from sources they are preempted from regulating, including aircraft, marine vessels, and locomotives. 
If a state fails to submit or implement an adequate implementation plan, EPA can impose an implementation plan. FAA is responsible for ensuring that its actions supporting airport development projects—such as providing funding for those projects—comply with federal environmental requirements, including those pertaining to air quality. The National Environmental Policy Act of 1969 sets forth a broad national policy intended to protect the quality of the environment. The act requires that federal actions receive an environmental review, which includes the impact on air quality, before federal decisions are made and actions are taken. For example, federally funded proposals to construct airport runways require action by FAA. For airport projects, FAA is the lead agency responsible for the environmental reviews and for the approval of the airports' proposed design. EPA examines the environmental review documents prepared by FAA and other federal agencies. The "general conformity rule" of the Clean Air Act directs federal agencies, such as FAA, to ensure that federal actions at airports do not delay the attainment or maintenance of ambient air quality standards. Therefore, FAA must determine, usually as part of the environmental review, that the estimated amount of emissions caused by a proposed federal action at an airport complies with the state implementation plan for meeting the standards. FAA cannot approve an action unless it complies with the plan. In order to demonstrate compliance, the airport could be required to implement emission control measures, such as converting airport vehicles to alternative lower emitting fuels. To help carry out its responsibilities under the Clean Air Act and the National Environmental Policy Act, FAA developed the Emissions and Dispersion Modeling System, which is a computer model that estimates the amount and type of emissions from airport activities. 
FAA, airports, and others use the model to assess the local air quality impacts of airport development projects. Typically, the model is used to estimate the amount of emissions produced by aircraft, ground support equipment, and other sources operating at the airport or in the nearby vicinity. The model also reflects the way these airport emissions are dispersed in the atmosphere by wind and other factors. The dispersion analysis is intended to assess the concentrations of the emissions at or near the airport and, thereby, help to indicate the effect of the emissions on local air quality. FAA is also engaged in several research projects to improve the understanding of aircraft emissions and methods for quantifying them. For example, FAA is working with the Society of Automotive Engineers to develop a protocol for measuring particulate matter emissions from aircraft. FAA is also studying ways to increase the accuracy of aircraft emission dispersion models and is analyzing the air quality impact of aircraft operations at or above 3,000 feet. Three states with major commercial airports in nonattainment areas—California, Texas, and Massachusetts—have targeted airports for emissions reductions. California has more major commercial airports—seven—than any other state, and all of them are located in nonattainment areas for ozone. Although none of these airports is a major source of ozone precursors such as nitrogen oxides and volatile organic compounds, California air quality authorities have turned their attention to airports as a source of the reductions needed to reach and maintain attainment of ozone standards, because they believe they have exhausted other sources, including large sources such as power plants and small sources like lawn mowers. The Los Angeles region is the only one in the country classified as an extreme nonattainment area for ozone. 
According to state environmental officials, emissions from all airport activities contributed about 1 to 2 percent of the pollution in the Los Angeles region in 2000, and this share is projected to increase to nearly 4 percent by 2020. State environmental officials attribute this projected increase in the airports' ozone contribution to an expected doubling of aircraft emissions coupled with a 50 percent decrease in emissions from other sources. These projections do not take into account the reductions in aircraft activity resulting from the events of September 11, 2001, and the financial uncertainties of the airline industry. Because of the severity of the nonattainment level in the Los Angeles area, the state requires reductions from all sources, including airports, by 2010. Along with Los Angeles' local air quality agency, the California Air Resources Board has negotiated with EPA and airlines for a memorandum of understanding for voluntary emission reductions from ground support equipment. According to California Air Resources Board officials, emission reductions would be achieved by replacing older, high-polluting ground support equipment with new, cleaner gas- and diesel-fueled equipment or equipment operating with alternative energy sources, such as electricity. In doing so, the officials expect an 80 percent reduction by 2010 in emissions from the ground support equipment used at five airports in the Los Angeles region—Los Angeles International, Burbank, Ontario International, Long Beach, and John Wayne. California's efforts to cut emissions from ground support equipment in the Los Angeles area are part of an aggressive statewide campaign to reduce airport pollution. In addition to using its limited authority under the Clean Air Act to implement airport-related emissions reductions, the state has also established criteria for issuing the air quality certifications provided for in federal law. 
Under this law, before federal funds are allocated for projects involving a new airport, a new runway, or a major runway extension, the state governor must certify that there is reasonable assurance that the project will be "located, designed, constructed, and operated in compliance with applicable air and water quality standards." The state has developed a unique set of criteria for determining whether a proposed airport expansion project would have an impact on the environment. If the project exceeds one of the criteria, the airport is required to implement emissions mitigation measures in order to attain certification. The criteria include annual increases of more than 7 million passengers or 139,000 aircraft operations (i.e., landings and takeoffs) or a permanent increase of more than 4,200 parking spaces. For example, the certification requirement for a runway project was invoked when Sacramento International Airport planned to increase the number of parking spaces. The airport's plans exceeded the parking-space criterion, and, as a result, the airport was required to implement emission mitigation measures in order to build the parking spaces. According to state officials, California is the only state to develop such criteria for certifying airport expansion projects. As of December 2002, three airports in California—Sacramento International, San Jose International, and Ontario International—had initiated expansion projects that required state certification. Texas has four regions in nonattainment of national air quality standards for ozone, but the Houston and Dallas/Fort Worth regions have required the most extensive emission control measures for reaching attainment. These two regions contain the state's four largest airports—Dallas/Fort Worth International, Dallas Love Field, George Bush International/Houston, and Houston Hobby—all of which are among the nation's 50 busiest airports. 
The Houston area has one of the worst ozone problems in the country and has been designated as a severe nonattainment area, requiring substantial control measures in order to comply with the Clean Air Act. Dallas-Fort Worth, on the other hand, has a much less serious ozone problem but has been penalized by EPA for not meeting its attainment schedule. EPA classified the Dallas/Fort Worth region as a moderate ozone nonattainment area in the early 1990s, which meant that the region was required to demonstrate attainment of the 1-hour ozone standard by November 1996. However, air quality data from the region showed that the area failed to meet the attainment goal in 1996, and EPA reclassified the severity level of the region from moderate to serious. The downgrading of the Dallas region's classification forced state and local authorities to develop a new state implementation plan with more extensive control measures. The state's environmental agency, the Texas Natural Resource Conservation Commission, ranked emissions from airport activities among the top 10 sources of nitrogen oxides emissions from nonroad mobile sources in both the Dallas-Fort Worth and Houston areas. Because nitrogen oxides contribute to ozone formation, the commission determined that control measures for each area were warranted. For Dallas-Fort Worth, the commission revised the state implementation plan for the area to include reductions of nitrogen oxides emissions from ground support equipment at both major commercial airports in the area—Dallas/Fort Worth International and Dallas Love Field. The plan called for a 90 percent reduction of nitrogen oxides emissions from ground support equipment by 2005. 
The airline industry challenged the state rule by filing a lawsuit, citing the Clean Air Act's preemption rule, which it argued prohibited states and local authorities from regulating ground support equipment. The lawsuit was dropped in October 2000 when the commission, the cities of Dallas and Fort Worth (which operate the major airports), and the affected airlines—American, Delta, and Southwest—reached a voluntary agreement to achieve a 90 percent reduction in nitrogen oxides emissions attributable to ground support equipment or other equipment by 2005. The commission brokered a similar agreement with the city of Houston, as the operator of its airports, and the affected airlines. Under both the Dallas/Fort Worth and Houston agreements, the affected carriers voluntarily agreed to reductions equivalent to 75 percent of the nitrogen oxides emitted from ground support equipment, and the cities—Dallas, Fort Worth, and Houston—as the operators of the airports, agreed to be responsible for the remaining 15 percent needed to achieve the 90 percent reduction. The Boston area is classified as a serious ozone nonattainment area, and state environmental officials are under increasing pressure from citizens, community groups, and industry to control emissions from Boston's Logan International Airport. State environmental officials have estimated that while only a small amount of the total nitrogen oxides emissions in the area is attributable to aircraft, these emissions will continue to increase. They estimate that other emission sources at the airport, such as ground support equipment, will eventually begin to decrease as they are replaced by lower polluting equipment. The Boston airport is also consistently ranked as the airport with the second highest number of air travel delays in the nation. These air travel delays add to regional air quality problems because idling aircraft contribute to pollution. 
To meet growing travel demand, Boston airport officials have proposed building a new runway to allow the airport to improve operating efficiency, thereby reducing emissions from idling aircraft. As part of this proposal, the airport also agreed that emissions would not exceed 1999 levels. To address airport operation delays and reduce emissions, airport officials have considered three strategies—peak period pricing, emissions credit trading, and reducing emissions from ground support equipment. Peak period pricing is a demand management strategy that raises landing fees during designated air traffic peak hours, which is expected to induce some air carriers to discontinue or reduce operations during peak periods. With fewer aircraft waiting to taxi and land during peak periods, emissions from aircraft would be reduced and regional air quality would be improved. An emissions credit trading program is designed to allow facilities to meet emission reduction goals by trading and transferring air emission credits with emission sources that have surpassed their allotted targets. Used by EPA to reduce pollutants that contribute to acid rain, emission credit trading allows sources, such as industry, the flexibility to meet their reduction obligations in a more cost-effective manner. Because emission credits are considered "additional" or "surplus" to the reductions otherwise required under federal and state laws, they aid in achieving an overall decline in emissions regionwide, according to Boston airport officials. As in the situations at the major airports in both California and Texas, state and airport officials have also focused on reducing emissions from ground support equipment. 
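The credit-trading mechanism described above can be illustrated with a minimal ledger: a source that emits below its allotment banks surplus credits, a source that exceeds its allotment buys them, and the region still complies as long as aggregate emissions stay under the aggregate cap. The source names and tonnages below are hypothetical, chosen only to show the bookkeeping.

```python
# Minimal sketch of emissions credit trading: surplus credits from
# over-complying sources cover shortfalls at under-complying ones.
# Source names and quantities are hypothetical.

sources = {
    # name: (allotted cap, actual emissions), in tons of NOx per year
    "power_plant_A": (1_000, 850),  # over-complies: 150 tons of surplus credits
    "airport_B": (400, 480),        # falls 80 tons short of its allotment
}

def settle(sources):
    """Check regional compliance after trading surplus credits."""
    surplus = sum(cap - actual for cap, actual in sources.values() if actual < cap)
    shortfall = sum(actual - cap for cap, actual in sources.values() if actual > cap)
    total_cap = sum(cap for cap, _ in sources.values())
    total_emitted = sum(actual for _, actual in sources.values())
    # The region complies if traded surplus covers every shortfall,
    # i.e., aggregate emissions stay at or below the aggregate cap.
    return surplus >= shortfall and total_emitted <= total_cap

print(settle(sources))
```

Here the 150-ton surplus more than covers the 80-ton shortfall, so the region as a whole stays under its cap even though one source exceeded its individual allotment; that flexibility is the cost-effectiveness argument made in the text.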
In the wake of the events of September 11, 2001, which resulted in a reduction of flights and emissions at the Boston airport, the airport's operator—the Massachusetts Port Authority—believes that peak pricing and emissions trading will not be needed to keep emissions below 1999 levels for several years. The Port Authority, however, continues to work with airport tenants to implement voluntary emission reduction strategies. In addition, in an August 2002 Record of Decision approving plans for a new runway and taxiways, FAA directed the Port Authority to develop and submit, before initiating construction, a plan for peak period pricing or other demand management strategies to reduce delays; the Port Authority had committed to complete this plan as part of the state environmental review process. In the Record of Decision, FAA pointed out that the program would have to comply with applicable federal constitutional and other requirements. Many of the nation's busiest airports, in conjunction with air carriers, have voluntarily implemented control measures to reduce emissions, including modifying the operating procedures of aircraft, using alternative fuels to run ground support equipment, and reducing the number of passenger vehicles entering and exiting the airport. Although airports have no control over emissions from aircraft, they can encourage air carriers to reduce emissions as much as possible through modified operating procedures. For example, limiting the number of running engines while aircraft taxi can reduce emissions of nitrogen oxides and volatile organic compounds. According to officials at Boston Logan International Airport, some pilots use single-engine taxiing with some aircraft to reduce emissions. Another example is reducing the use of engine reverse thrust to slow an aircraft to taxi speed after it lands.
This procedure reduces nitrogen oxides emissions, but it may come at the expense of slightly higher emissions of volatile organic compounds if the taxi time is increased because a runway turnoff is missed. Many factors are involved in the decision to use reverse thrust, including runway length and width, runway surface and taxiway conditions, weather conditions, and aircraft type. Modifying the operating procedures of aircraft does not require additional equipment or aircraft modifications, but it is done at the discretion of the pilot. Under federal regulations, the commanding pilot of the aircraft is responsible for the safety of the passengers, crewmembers, cargo, and the airplane, and any procedure that modifies aircraft operation is at the pilot's discretion. In addition, modifications to operating procedures may not be feasible in all weather conditions, with all aircraft, or at all airports. Most ground support equipment used by air carriers at airports is fueled by gasoline or diesel. Replacing that equipment with cleaner-burning gas or diesel engines or equipment powered by alternative fuels—such as electricity, liquefied petroleum gas, and compressed natural gas—could result in reduced emissions. A reliable and comprehensive database of the ground support equipment in use does not exist; however, according to FAA, there are about 72,000 pieces of such equipment in operation. The Air Transport Association estimated that of the pieces of ground support equipment in use in 1999, about 30 to 40 percent operated on diesel fuel, 50 to 60 percent operated on gasoline, and about 10 percent used alternative fuels.
Several airports we visited, including Los Angeles International, Sacramento International, Dallas/Fort Worth International, Boston Logan International, and Atlanta Hartsfield, provided air carriers with the infrastructure necessary to operate alternatively fueled ground support equipment, and some carriers have begun converting their fleets of ground support equipment to alternative fuels. Los Angeles International, for instance, provided a varied alternative fuel infrastructure, including both compressed and liquefied natural gas refueling stations and electric charging stations, which offered air carriers different options for using alternatively fueled equipment. Airport officials told us that air carriers have been using the alternative fuel stations to refuel their ground support equipment. FAA reported that replacing conventionally fueled ground support equipment with alternatively fueled equipment is the most cost-effective way to reduce emissions at airports. Additionally, equipment originally designed to use alternative fuels has less impact on the environment than equipment that is converted from conventional to alternative fuel; however, it is also more costly up front, and alternative fuel technology does not currently exist for some types of ground support equipment. Airports and air carriers use about 24 different types of ground support equipment, such as cargo loaders, aircraft pushback tractors, baggage tugs, and service trucks; and according to aviation industry officials, converting equipment from conventional to alternative fuel has produced mixed results in operation. According to airline officials, liquefied petroleum and compressed natural gas vehicles require larger fuel tanks and are harder to operate, and the alternative fuel engines for ground support equipment, along with the supporting fuel infrastructure, are also very expensive.
Air carriers and airports commonly have had to use a mixed fleet of liquefied petroleum, compressed natural gas, and electric ground support equipment because of limitations of the various types of alternative fuel sources. For example, electricity has not been sufficiently powerful to run some of the ground service equipment that bears significant loads. In addition, some types of electric equipment do not work well in cold weather conditions. According to the Air Transport Association, for these and other reasons, no one equipment size or type fits all airlines' needs. A trend at airports is to provide electricity and air conditioning service for aircraft at the gates, which can permit a reduction in the use of aircraft auxiliary power units and thereby reduce emissions, according to FAA. Airports are not required to install boarding gates that provide electricity to parked aircraft, but an FAA report notes that some airports have been proactive in reducing emissions and have invested in these electric gates. The report explains that electric gates operate at greater energy efficiency than auxiliary power units, which supply aircraft with power and ventilation when they are parked at the gates, and can substantially reduce emissions. Many airports, including Los Angeles International, Sacramento International, Dallas/Fort Worth International, and Boston Logan International, provide electric power for parked aircraft, which allows aircraft to turn off their auxiliary power units while maintenance and cleaning crews prepare the aircraft for the next flight. However, air carriers are not required to use the electric gates, and some choose not to use them because they hinder the efficiency of their operations.
For instance, one airline that specializes in getting its aircraft into and out of airports quickly—in 20 minutes or less—rarely uses the electricity provided by the airport, instead running the auxiliary power unit the entire time its aircraft are at the gate, according to officials of that airline. These officials note that electric gates are useful only for aircraft that are parked for 30 to 45 minutes or longer before takeoff, because of the time it takes to hook the aircraft up to the system. Although EPA already regulates emissions from most passenger vehicles and trucks, options are available to further reduce emissions from these sources at airports. Vehicles making trips to and from airports include employee and private passenger vehicles, airport and tenant-owned fleet vehicles, public transport vehicles and shuttles, and cargo vehicles making deliveries. All the airports we visited have implemented or are in the process of implementing emission reduction efforts for this emissions source. Some emission reduction measures that airports have applied to such ground access vehicles include the following: Dallas/Fort Worth International airport has consolidated its rental car facilities and, according to airport officials, the consolidation effort has reduced rental car related emissions by 95 percent. In addition, the single shuttle service that resulted from consolidating the rental car facilities uses alternative fuel shuttles. George Bush Intercontinental/Houston plans to consolidate its rental car facilities; and Los Angeles International, Atlanta Hartsfield, and Boston Logan International are also considering the option. Dallas/Fort Worth International, Los Angeles International, and Sacramento International have all promoted some kind of employee/tenant commuter rideshare program. According to Los Angeles International Airport officials, about 25 percent of airport employees participate in a commuter rideshare program.
Los Angeles International restructured its airport shuttle-van program in 1999 by reducing the number of shuttle vans authorized to make passenger pickups at the airport and requiring operators to phase alternative fuel vehicles into their fleets. The airport expects all of the authorized operators to use alternative fuel vehicles by 2003. The airport is also considering requiring taxicabs serving the airport to operate on natural gas. Both Chicago O'Hare International and Dallas/Fort Worth International airports have built an electric automated transport system, also known as a "people mover," within the airport property to transport passengers between terminals. Chicago O'Hare International airport also offers direct rail service to the city center, providing alternative transportation to passengers and airport employees entering and exiting the airport. Los Angeles International provides alternative public transportation with a bus service that travels between the airport and the park-and-ride station at the Van Nuys Airport. Airports have also reduced emissions from other sources, such as their on-site utilities plants. Los Angeles International airport's central utilities plant operates under a cogeneration energy saving system, which simultaneously generates electrical power and steam. Some electrical power is sold to the local electric company, and the steam provides heating and air conditioning (by powering steam refrigeration chillers) for the airport's buildings and central terminal area. According to airport officials, Los Angeles International receives more than $3 million in emissions credits each year for the emission controls achieved with its central utilities plant. Dallas/Fort Worth International airport also generates electricity with its solar power generators, which produce lower emissions than traditionally powered generators.
Airport officials stated that they have the capacity to build cogeneration plants using solar power and sell the surplus electricity to the state as well. The airport is trying to negotiate with federal agencies to receive credits for the emission reductions achieved by using solar power and selling surplus electricity to the state. If successful, the airport could use these credits to gain approval of future expansion projects that increase emissions. Fuel efficiency improvements involve every aspect of an aircraft's design. Traditionally, about 40 percent of the improvements have come from the airframe and 60 percent from propulsive and engine improvements. Airframe improvements include improving the aerodynamic shape and structural efficiency (for example, reducing aircraft weight). Propulsive improvements have primarily resulted from increasing the size of the bypass fan and improving the shape of the bypass fan blades. Engine improvements have centered on increasing the pressure of the air that goes through the engine core (the engine operating pressure). Increased engine operating pressures allow more work to be extracted from a unit of fuel, thereby improving fuel consumption. One of the first major technology breakthroughs with commercial jet engines occurred in the mid-1960s with the introduction of the turbofan jet engine (see figure 3). This design uses a bypass fan in front of the jet engine core to move much of the propulsive air past the core of the jet, which contains the compressor, combustor, and turbine. The primary motivation for this advancement was increased fuel efficiency. However, the reduced noise of this new design was an additional benefit. Noise was reduced because the bypass air moves at a slower speed than the air going through the core. Further noise reductions have evolved over time through larger bypass fans and improved bypass fan blade shapes.
Researchers at NASA have indicated they are facing diminishing returns as they seek to reduce noise by further improving bypass fans and aircraft surfaces. They are also exploring more advanced technologies, such as using electronics to actively control noise. NASA, in association with jet engine manufacturers and the academic community, is working on several technologies to reduce nitrogen oxides emissions. NASA's research to reduce nitrogen oxides emissions is a component of its Ultra Efficient Engine Technology Program. The goal of this program is to develop technologies that will enable U.S. manufacturers to compete in the global marketplace for new commercial gas turbine engines. An important aspect of this program is reducing jet engine emissions of nitrogen oxides. NASA has set what it considers ambitious goals for its nitrogen oxides reduction research. These goals include the following:

- Demonstrate combustion technology, in a NASA test laboratory, that will reduce nitrogen oxides by 70 percent relative to today's standard. This equates to a 20-50 percent reduction compared with the best engines being produced today.
- Demonstrate these technologies in engine combustor components by 2005.
- Hand off the technologies to manufacturers in a timely fashion so they can be incorporated in new engines in the 2007-2010 time frame.
- Study long-term concepts that could greatly reduce or eliminate nitrogen oxides emissions in the 2025-2050 time frame.

According to representatives from jet engine manufacturers, nitrogen oxides reduction research is complex and time consuming and requires specialized and expensive test equipment. They also said that the basic research needed to understand the formation of nitrogen oxides in jet engines and to make significant changes to current engine designs is so expensive and so lacking in marketplace investment rewards that no significant or sustained basic research in this area would take place without NASA taking the lead.
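The relationship between NASA's two reduction figures follows from simple arithmetic: an engine meeting the 70 percent goal would emit 30 percent of the standard's allowable nitrogen oxides, and how large a further cut that represents depends on how far below the standard today's best engines already are. The sketch below is an illustrative calculation only, not part of NASA's program; the current-engine levels used are assumptions chosen to show how the 20-50 percent range could arise.

```python
# NASA laboratory goal: combustors emitting 30% of the allowable NOx standard,
# i.e., a 70% reduction relative to the standard.
TARGET_FRACTION_OF_STANDARD = 0.30

def further_reduction(current_fraction_of_standard: float) -> float:
    """Additional cut needed to move an engine from its current NOx level
    (expressed as a fraction of the standard) down to the NASA target."""
    return 1.0 - TARGET_FRACTION_OF_STANDARD / current_fraction_of_standard

# Assumed (hypothetical) levels for today's best production engines:
for current in (0.375, 0.60):
    print(f"engine at {current:.1%} of standard -> "
          f"{further_reduction(current):.0%} further reduction needed")
# An engine already at 37.5% of the standard would need only a 20% further
# cut, while one at 60% of the standard would need 50% -- spanning the
# 20-50 percent range the report cites.
```

The calculation shows why a single laboratory goal translates into a range of real-world reductions: the best engines in production sit at different points below the regulatory standard.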
Adding to the complexities of this research is the extreme variation in jet engine designs. Other research and development by NASA and engine manufacturers is constantly raising engine operating pressures as a way of improving fuel consumption and reducing greenhouse gas emissions. However, these developments tend to increase nitrogen oxides emissions, and further modifying engine designs to reduce nitrogen oxides has a direct impact on every other aspect of engine design: safety, operability, service life, operating costs, maintenance costs, and production costs. Jet engine manufacturers are taking divergent design approaches as they research how to maintain these other high-priority design characteristics while reducing nitrogen oxides emissions. As a result, NASA divides its resources over numerous projects. NASA's Ultra Efficient Engine Technology Program is scheduled to complete research and technology on aircraft engine combustor refinements that reduce the formation of nitrogen oxides so that the refinements can be introduced on aircraft by 2010. Because of the 30-year projected life of commercial aircraft, it could take decades before enough lower emitting aircraft are introduced into the commercial fleet to contribute to significant reductions in nitrogen oxides. NASA's nitrogen oxides research under the Ultra Efficient Engine Technology Program is centered on developing lean-burning combustors rather than the rich-burning combustors that are in commercial service today. These lean-burning combustors will increase fuel/air mixing rates, which, when combined with lean fuel/air ratios, will reduce temperatures locally in the combustor and thus reduce the nitrogen oxides emissions generated. Because of funding constraints, NASA does not plan to implement the next phase of development, which is to examine the combustor improvements in a full engine test environment. NASA is relying on the engine manufacturers to implement this full engine development.
Both NASA and aviation industry engineers said that this full engine development phase will be far more complex and involve many more design trade-offs than the combustor development phase. Additionally, they acknowledged that some of the nitrogen oxides reductions achieved during the combustor development phase will be lost during the full engine development phase. NASA researchers indicated these losses could be particularly severe because engine manufacturers are concurrently making other design changes to their engines to minimize fuel consumption, and these changes will increase nitrogen oxides emissions. Consequently, NASA researchers are not sure how many of the improvements they expect to achieve by 2005 will survive as the engine manufacturers take over responsibility for completing the development of these improvements in a full engine test environment and then integrate them into production-ready engines. NASA is also working on a long-term revolutionary jet engine design that could significantly reduce all emissions, including nitrogen oxides, while also reducing fuel consumption. Under its “intelligent propulsion controls” design concept, engine functions are more precisely controlled using computers. For example, with this design, the number of ports delivering fuel to the engine combustion chamber would be greatly increased, and each port would be computer controlled. NASA officials are optimistic about the potential of this concept, but they added that research is in the early stages and that it will probably take 20 years or more to develop. NASA's overall long-term research plan calls for spending about $20 million per year over the next 5 years to explore improved fuel burn and nitrogen oxides emission reduction technologies. NASA researchers are also studying the possibility of developing zero emissions (except water) hydrogen-fueled aircraft with an electric propulsion system.
While they note that there would have to be many breakthroughs in hydrogen storage and fuel cell technologies and high-powered lightweight electric motors before a hydrogen-fueled commercial airliner is feasible, they believe many of the needed breakthroughs could occur in the next 50 years. NASA is also researching nonengine methods that will indirectly reduce nitrogen oxides (and all other emissions) by reducing fuel consumption. This work includes more efficient airframes through aerodynamic improvements, structural improvements (i.e., reducing aircraft weight), and operational efficiencies (i.e., more fuel-efficient flight routes and reduced taxi time). Historically, 40 percent of aviation fuel improvements have come from such efficiency improvements. Aviation emission experts emphasize that it is important that research into these types of improvements continue along with the engine research. The advantage of these improvements is that all emissions are reduced simultaneously, without having to make emission trade-offs. Using the Emissions and Dispersion Modeling System (version 4.01) computer model developed by FAA and fleet data obtained from AvSoft, we calculated the landing/takeoff emissions for every aircraft model and engine combination in the U.S. commercial aircraft fleet during 2001. (See appendix I for additional information on our methodology.) Tables 7 and 8 provide additional information on our comparison of older and newest model Boeing 737s. As shown below, older model Boeing 737s, produced in 1969-1998, averaged 12.1 pounds of nitrogen oxides per landing/takeoff (see table 7), while the newest model Boeing 737s, produced in 1997-2001, averaged 17.9 pounds of nitrogen oxides per landing/takeoff (see table 8). Tables 9, 10, and 11 provide additional information about the calculations and commercial fleet for data presented earlier in this report.
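The difference between the two fleet averages can be checked with simple arithmetic. The sketch below (a standalone illustration in Python, not part of FAA's modeling system) reproduces the comparison from the per-cycle averages reported in tables 7 and 8.

```python
# Average NOx per landing/takeoff cycle, as reported in tables 7 and 8
# (values computed with FAA's Emissions and Dispersion Modeling System).
OLDER_737_NOX_LBS = 12.1   # Boeing 737s produced 1969-1998
NEWEST_737_NOX_LBS = 17.9  # Boeing 737s produced 1997-2001

def percent_increase(old: float, new: float) -> float:
    """Percentage increase of `new` relative to `old`."""
    return (new - old) / old * 100.0

increase = percent_increase(OLDER_737_NOX_LBS, NEWEST_737_NOX_LBS)
print(f"Newest-model 737s emit {increase:.0f}% more NOx per landing/takeoff")
# -> about 48 percent, consistent with the "over 40 percent" figure
#    cited elsewhere in this report.
```

This simple per-cycle comparison understates nothing about fleet composition; a fleetwide estimate would also weight each model by its share of operations.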
In addition to the individuals named above, Carolyn Boyce, Joyce Evans, David Hooper, David Ireland, Art James, Jennifer Kim, Eileen Larence, Edward Laughlin, Donna Leiss, Jena Sinkfield, Larry Thomas, and Gail Traynham made key contributions to this report.
Although noise has long been a problem around airports, the anticipated growth in demand for air travel has also raised questions about the effect of airport operations on air quality. Aviation-related emissions of nitrogen oxides, which contribute to the formation of ozone, have been of particular concern to many airport operators. A federal study at 19 airports estimated that, by 2010, aircraft emissions have the potential to significantly contribute to air pollution in the areas around these airports. GAO agreed to review efforts in the United States and other countries to reduce emissions at airports and the effect of improvements in aircraft and engine design on emissions. Many airports have taken measures to reduce emissions, such as converting airport ground vehicles from diesel or gasoline to cleaner alternative fuels. While the actual impact of these measures is unknown, some measures (such as shifting to cleaner alternative fuels) have the potential to significantly reduce emissions, such as nitrogen oxides. In some cases--such as at Los Angeles and Dallas/Fort Worth airports--the emission reduction measures have been imposed by federal or state agencies to bring severely polluted areas into attainment with the Clean Air Act's air quality standards or to offset expected increases in emissions from airport expansion projects. Many industry and government officials that GAO contacted said that new, stricter federal air quality standards that will go into effect in 2003, combined with a boost in emissions due to an expected increase in air travel, could cause airports to be subject to more federal emission control requirements. In 1998, a group of government and industry stakeholders was established to develop a voluntary nationwide program to reduce aviation-related emissions; however, thus far, the group has not agreed to specific objectives or elements of a program. 
Other countries use many of the same measures as the United States to reduce emissions at airports. Two countries have imposed landing fees based on the amount of emissions produced by aircraft. However, U.S. officials question the effectiveness of these fees. Research and development efforts by the federal government and the aircraft industry have improved fuel efficiency and reduced many emissions from aircraft, including hydrocarbons and carbon monoxide, but have increased emissions of nitrogen oxides, which are a precursor to ozone formation. As a result, many new aircraft are emitting more nitrogen oxides than the older aircraft they are replacing. For example, GAO's analysis of aircraft emission data shows that the engines employed on the newest models of a widely used jet aircraft, while meeting current standards for nitrogen oxide emissions, average over 40 percent more nitrogen oxides during landings and takeoffs than the engines used on the older models. Technologies are available to limit nitrogen oxide emissions from some other newer aircraft models. Many state and federal officials GAO contacted said that, in the long term, nitrogen oxide emissions from aircraft will need to be reduced as part of broader emission reduction efforts in order for some areas to meet federal ozone standards.
History is a good teacher, and to solve the problems of today, it is instructive to look to the past. The problems with the department’s financial management operations date back decades, and previous attempts at reform have largely proven to be unsuccessful. These problems adversely affect DOD’s ability to control costs, ensure basic accountability, anticipate future costs and claims on the budget, such as for health care, weapon systems, and environmental liabilities, measure performance, maintain funds control, prevent fraud, and address pressing management issues. Problems with the department’s financial management operations go far beyond its accounting and finance systems and processes. The department continues to rely on a far-flung, complex network of finance, logistics, personnel, acquisition, and other management information systems— 80 percent of which are not under the control of the DOD Comptroller— to gather the financial data needed to support day-to-day management decision-making. This network was not designed, but rather has evolved into the overly complex and error-prone operation that exists today, including (1) little standardization across DOD components, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, (4) manual data entry into multiple systems, and (5) a large number of data translations and interfaces which combine to exacerbate problems with data integrity. DOD determined, for example, that efforts to reconcile a single contract involving 162 payments resulted in an estimated 15,000 adjustments. Many of the department’s business processes in operation today are mired in old, inefficient processes and legacy systems, some of which go back to the 1950s and 1960s. 
For example, the department relies on the Mechanization of Contract Administration Services (MOCAS) system to process a substantial portion of DOD contract payment transactions for all DOD organizations, which totaled about $78 billion in fiscal year 2001. When MOCAS was first implemented in 1968, “mechanization” was a high-tech word. Past efforts to replace MOCAS have failed. Most recently, in 1994, DOD began acquiring the Standard Procurement System (SPS) to replace the contract administration functions currently performed by MOCAS. However, our July 2001 and February 2002 reporting on DOD's $3.7 billion investment in SPS showed that this substantial investment was not economically justified and raised questions as to whether further investment in SPS was warranted. For the foreseeable future, DOD will continue to be saddled with MOCAS. Moving to the 1970s, we, the Defense Inspector General, and the military service audit organizations issued numerous reports detailing serious problems with the department's financial management operations. For example, between 1975 and 1981, we issued more than 75 reports documenting serious problems with DOD's existing cost, property, fund control, and payroll accounting systems. In the 1980s, we found that despite the billions of dollars invested in individual systems, these efforts too fell far short of the mark, with extensive schedule delays and cost overruns. For example, in 1989, our report on eight major DOD system development efforts—including two major accounting systems—under way at that time showed that system development cost estimates doubled, two of the eight efforts were abandoned, and the remaining six experienced delays of 3 to 7 years. Beginning in the 1990s, following passage of the Chief Financial Officers (CFO) Act of 1990, there was a recognition in DOD that broad-based financial management reform was needed.
Over the past 12 years, the department has initiated several departmentwide reform initiatives intended to fundamentally reform its financial operations as well as other key business support processes, including the Corporate Information Management initiative, the Defense Business Operations Fund, and the Defense Reform Initiative. These efforts, which I will highlight today, have proven to be unsuccessful despite good intentions and significant effort. The conditions that led to these previous attempts at reform remain largely unchanged today. Corporate Information Management. The Corporate Information Management (CIM) initiative, begun in 1989, was expected to save billions of dollars by streamlining operations and implementing standard information systems. CIM was expected to reform all of DOD's functional areas, including finance, procurement, material management, and human resources, through consolidating, standardizing, and integrating information systems. DOD also expected CIM to replace approximately 2,000 duplicative systems. Over the years, we made numerous recommendations to improve CIM's management, but these recommendations were largely not addressed. Instead, DOD spent billions of dollars with little sound analytical justification. We reported in 1997 that 8 years after beginning CIM, and after spending about $20 billion on the initiative, expected savings had yet to materialize. The initiative was eventually abandoned. Defense Business Operations Fund. In October 1991, DOD established a new entity, the Defense Business Operations Fund, by consolidating nine existing industrial and stock funds and five other activities operated throughout DOD. Through this consolidation, the fund was intended to bring greater visibility and management attention to the overall cost of carrying out certain critical DOD business operations. However, from its inception, the fund was plagued by management problems. In 1996, DOD announced the fund's elimination.
In its place, DOD established four working capital funds. These new working capital funds inherited their predecessor's operational and financial reporting problems. Defense Reform Initiative (DRI). In announcing the DRI program in November 1997, the then Secretary of Defense stated that his goal was “to ignite a revolution in business affairs.” DRI represented a set of proposed actions aimed at improving the effectiveness and efficiency of DOD's business operations, particularly in areas that have been long-standing problems—including financial management. In July 2000, we reported that while DRI got off to a good start and made progress in implementing many of the component initiatives, it did not meet expected time frames and goals, and the extent to which savings from these initiatives will be realized is yet to be determined. GAO is currently examining the extent to which DRI efforts begun under the previous administration are continuing. The past has clearly taught us that addressing the department's serious financial management problems will not be easy. Early in his tenure, Secretary Rumsfeld commissioned a new study of the department's financial management operations. The report on the results of the study, Transforming Department of Defense Financial Management: A Strategy for Change, was issued on April 13, 2001. The report recognized that the department will have to undergo “a radical financial management transformation” and that this would take more than a decade to achieve. The report concluded that many studies and interviews with current and former leaders in DOD point to the same problems and frustrations, and that repetitive audit reports verify systemic problems, illustrating the need for radical transformation in order to achieve success. Secretary Rumsfeld further confirmed the need for a fundamental transformation of DOD in his “top-down” Quadrennial Defense Review.
Specifically, his September 30, 2001, Quadrennial Defense Review Report concluded that the department must transform its outdated support structure, including decades-old financial systems that are not well interconnected. The report summed up the challenge well in stating: “While America's businesses have streamlined and adopted new business models to react to fast-moving changes in markets and technologies, the Defense Department has lagged behind without an overarching strategy to improve its business practices.” As part of our constructive engagement approach with DOD, I met with Secretary Rumsfeld last summer to provide our perspectives on the underlying causes of the problems that have impeded past reform efforts at the department and to discuss options for addressing these challenges. There are four underlying causes: a lack of sustained top-level leadership and management accountability for correcting problems; deeply embedded cultural resistance to change, including military service parochialism and stovepiped operations; a lack of results-oriented goals and performance measures and monitoring; and inadequate incentives for seeking change. Historically, DOD has not routinely assigned accountability for performance to specific organizations or individuals that have sufficient authority to accomplish desired goals. For example, under the CFO Act, it is the responsibility of agency CFOs to establish the mission and vision for the agency's future financial management. However, at DOD, the Comptroller—who is by statute the department's CFO—has direct responsibility for only an estimated 20 percent of the data relied on to carry out the department's financial management operations. The department learned through its efforts to meet the Year 2000 computing challenge that, to be successful, major improvement initiatives must have the direct, active support and involvement of the Secretary and Deputy Secretary of Defense.
In the Year 2000 case, the then Deputy Secretary of Defense was personally and substantially involved and played a major role in the department’s success. Such top-level support and attention helps ensure that daily activities throughout the department remain focused on achieving shared, agency-wide outcomes. A central finding from our report on our survey of best practices of world-class financial management organizations—Boeing, Chase Manhattan Bank, General Electric, Pfizer, Hewlett-Packard, Owens Corning, and the states of Massachusetts, Texas, and Virginia—was that clear, strong executive leadership was essential to (1) making financial management an entitywide priority, (2) redefining the role of finance, (3) providing meaningful information to decision-makers, and (4) building a team of people who deliver results. DOD’s past experience suggests that top management has not had a proactive, consistent, and continuing role in building capacity, integrating daily operations for achieving performance goals, and creating incentives. Sustaining top management commitment to performance goals is a particular challenge for DOD. In the past, the average 1.7-year tenure of the department’s top political appointees has served to hinder long-term planning and follow-through. Cultural resistance to change and military service parochialism have also played a significant role in impeding previous attempts to implement broad-based management reforms at DOD. The department has acknowledged that it confronts decades-old problems deeply grounded in the bureaucratic history and operating practices of a complex, multifaceted organization, and that many of these practices were developed piecemeal and evolved to accommodate different organizations, each with its own policies and procedures. For example, as discussed in our July 2000 report, the department encountered resistance to developing departmentwide solutions under the then Secretary’s broad-based DRI.
In 1997, the department established a Defense Management Council—including high-level representatives from each of the military services and other senior executives in the Office of the Secretary of Defense—which was intended to serve as the “board of directors” to help break down organizational stovepipes and overcome cultural resistance to changes called for under DRI. However, we found that the council’s effectiveness was impaired because members were not able to put their individual military services’ or DOD agencies’ interests aside to focus on department-wide approaches to long-standing problems. We have also seen an inability to put aside parochial views. Cultural resistance to change has impeded reforms not only in financial management but also in other business areas, such as weapon system acquisition and inventory management. For example, as we reported last year, while the individual military services conduct considerable analyses justifying major acquisitions, these analyses can be narrowly focused and do not consider joint acquisitions with the other services. In the inventory management area, DOD’s culture has supported buying and storing multiple layers of inventory rather than managing with just the amount of stock needed. Further, DOD’s past reform efforts have been handicapped by the lack of clear, linked goals and performance measures. As a result, DOD managers lack straightforward road maps showing how their work contributes to attaining the department’s strategic goals, and they risk operating autonomously rather than collectively. In some cases, DOD had not yet developed appropriate strategic goals, and in other cases, its strategic goals and objectives were not linked to those of the military services and defense agencies.
As part of our assessment of DOD’s Fiscal Year 2000 Financial Management Improvement Plan, we reported that, for the most part, the plan represented the military services’ and Defense components’ stovepiped approaches to reforming financial management, and did not clearly articulate how these various efforts will collectively result in an integrated DOD-wide approach to financial management improvement. In addition, we reported the department’s plan did not have performance measures that could be used to assess DOD’s progress in resolving its financial management problems. DOD officials have informed us that they are now working to revise the department’s approach to this plan so that in future years’ updates it will reflect a more strategic, department-wide vision and tool for financial management reform. The department faces a formidable challenge in responding to technological advances that are changing traditional approaches to business management as it moves to modernize its systems. For fiscal year 2001, DOD reported total information technology investments of almost $23 billion, supporting a wide range of military operations as well as its business functions. As we have reported, while DOD plans to invest billions of dollars in modernizing its financial management and other business support systems, it does not yet have an overall blueprint—or enterprise architecture—in place to guide and direct these investments. As we recently testified, our review of practices at leading organizations showed they were able to make sure their business systems addressed corporate—rather than individual business unit—objectives by using enterprise architectures to guide and constrain investments. Consistent with our recommendation, DOD is now working to develop a financial management enterprise architecture, which is a very positive development.
The final underlying cause of the department’s long-standing inability to carry out needed fundamental reform has been the lack of incentives for making more than incremental change to existing “business as usual” processes, systems, and structures. Traditionally, DOD has focused on justifying its need for more funding rather than on the outcomes its programs have produced. DOD generally measures its performance by the amount of money spent, people employed, or number of tasks completed. Incentives for DOD decisionmakers to implement changed behavior have been minimal or nonexistent. Secretary Rumsfeld perhaps said it best in announcing his planned transformation at DOD: “…there will be real consequences from, and real resistance to, fundamental change.” This underlying problem has perhaps been most evident in the department’s acquisition area. In DOD’s culture, the success of a manager’s career has depended more on moving programs and operations through the DOD process than on achieving better program outcomes. The fact that a given program may have cost more than estimated, taken longer to complete, and failed to generate results or perform as promised was secondary to fielding a new program. To effect real change, actions are needed to (1) break down parochialism and reward behaviors that meet DOD-wide and congressional goals, (2) develop incentives that motivate decisionmakers to initiate and implement efforts that are consistent with better program outcomes, including saying “no” or “pulling the plug” on a system or program that is failing, and (3) facilitate a congressional focus on results-oriented management, particularly with respect to resource allocation decisions. As we testified in May 2001, our experience has shown there are several key elements that, collectively, will enable the department to effectively address the underlying causes of DOD’s inability to resolve its long-standing financial management problems.
These elements, which will be key to any successful approach to financial management reform, include: addressing the department’s financial management challenges as part of a comprehensive, integrated, DOD-wide business process reform; providing for sustained leadership by the Secretary of Defense and resource control to implement needed financial management reforms; establishing clear lines of responsibility, authority, and accountability for such reform tied to the Secretary; incorporating results-oriented performance measures and monitoring tied to financial management reforms; providing appropriate incentives or consequences for action or inaction; establishing an enterprisewide system architecture to guide and direct financial management modernization investments; and ensuring effective oversight and monitoring. Actions on many of the key areas central to successfully achieving desired financial management and related business process transformation goals—particularly those that rely on longer-term systems improvements—will take a number of years to fully implement. Secretary Rumsfeld has estimated that his envisioned transformation may take 8 or more years to complete. Consequently, both long-term actions focused on the Secretary’s envisioned business transformation and short-term actions focused on improvements within existing systems and processes will be critical going forward. Short-term actions in particular will be critical if the department is to achieve the greatest possible accountability over existing resources and more reliable data for day-to-day decision-making while longer-term systems and business process reengineering efforts are under way. Beginning with the Secretary’s recognition of a need for a fundamental transformation of the department’s business processes, and building on some of the work begun under past administrations, DOD has taken a number of positive steps in many of these key areas.
At the same time, the challenges remaining in each of these key areas are somewhat daunting. As we have reported in the past, establishing the right goal is essential for success. Central to effectively addressing DOD’s financial management problems will be the recognition that they cannot be addressed in an isolated, stovepiped, or piecemeal fashion separate from the other high-risk areas facing the department. Successfully reengineering the department’s processes supporting its financial management and other business support operations will be critical if DOD is to effectively address deep-rooted organizational emphasis on maintaining “business as usual” across the department. Financial management is a crosscutting issue that affects virtually all of DOD’s business areas. For example, improving its financial management operations so that they can produce timely, reliable, and useful cost information will be essential if the department is to effectively measure its progress toward achieving many key outcomes and goals across virtually the entire spectrum of DOD’s business operations. At the same time, the department’s financial management problems—and, most importantly, the keys to their resolution—are deeply rooted in and dependent upon developing solutions to a wide variety of management problems across DOD’s various organizations and business areas. For example, we have reported that many of DOD’s financial management shortcomings were attributable in part to human capital issues. The department does not yet have a strategy in place for improving its financial management human capital. This is especially critical in connection with DOD’s civilian workforce, since DOD has generally done a much better job of human capital planning for its military personnel. In addition, DOD’s civilian personnel face a variety of size, shape, skills, and succession planning challenges that need to be addressed.
As I mentioned earlier, and it bears repetition, the department has reported that an estimated 80 percent of the data needed for sound financial management comes from its other business operations, such as its acquisition and logistics communities. DOD’s vast array of costly, nonintegrated, duplicative, and inefficient financial management systems is reflective of the lack of an enterprisewide, integrated approach to addressing its management challenges. DOD has acknowledged that one of the reasons for the lack of clarity in its reporting under the Government Performance and Results Act has been that most of the program outcomes the department is striving to achieve are interrelated, while its management systems are not integrated. As I mentioned earlier, the Secretary of Defense has made the fundamental transformation of business practices throughout the department a top priority. In this context, the Secretary established a number of top-level committees, councils, and boards, including the Senior Executive Committee, the Business Initiative Council, and the Defense Business Practices Implementation Board. The Senior Executive Committee was established to help guide efforts across the department to improve its business practices. This committee, chaired by the Secretary of Defense, with membership including the Deputy Secretary, the military service secretaries, and the Under Secretary of Defense for Acquisition, Technology and Logistics, was established to function as the board of directors for the department. The Business Initiative Council, composed of the military service secretaries and headed by the Under Secretary of Defense for Acquisition, Technology and Logistics, was established to encourage the military services to explore new money-saving business practices to help offset funding requirements for transformation and other initiatives.
The Secretary also established the Defense Business Practices Implementation Board, composed of business leaders from the private sector. The board is intended to tap outside expertise to advise the department on its efforts to improve business practices. Both the department’s successful Year 2000 effort and our survey of leading financial management organizations demonstrated the importance of strong leadership from top management. As we have stated many times before, strong, sustained executive leadership is critical to changing a deeply rooted corporate culture—such as the existing “business as usual” culture at DOD—and successfully implementing financial management reform. As I mentioned earlier, the personal, active involvement of the Deputy Secretary of Defense played a key role in building entitywide support and focus for the department’s Year 2000 initiatives. Given the long-standing and deeply entrenched nature of the department’s financial management problems, combined with the numerous competing DOD organizations, each operating with varying and often parochial views and incentives, such visible, sustained top-level leadership will be critical. In discussing their April 2001 report to the Secretary of Defense on transforming financial management, the authors stated that, “unlike previous failed attempts to improve DOD’s financial practices, there is a new push by DOD leadership to make this issue a priority.” With respect to the key area of investment control, the Secretary took action to set aside $100 million for financial modernization. Strong, sustained executive leadership—over a number of years and administrations—will be key to changing a deeply rooted culture. In addition, given that significant investments in information systems and related processes have historically occurred in a largely decentralized manner throughout the department, additional actions will likely be required to implement a centralized IT investment control strategy.
For example, in our May 2001 report, we recommended DOD take action to establish centralized control over transformation investments to ensure that funding is provided for only those proposed investments in systems and business processes that are consistent with the department’s overall business process transformation strategy. Last summer, when I met with Secretary Rumsfeld, I stressed the importance of establishing clear lines of responsibility, decision-making authority, and resource control for actions across the department tied to the Secretary as a key to reform. As we previously reported, such an accountability structure should emanate from the highest levels and include the secretaries of each of the military services as well as heads of the department’s various major business areas. The Secretary of Defense has taken action to vest responsibility and accountability for financial management modernization with the DOD Comptroller. In October 2001, the DOD Comptroller established the Financial Management Modernization Executive and Steering Committees as the governing bodies to oversee the activities related to this modernization effort and also established a supporting working group to provide day-to-day guidance and direction on these efforts. DOD reports that the executive and steering committees met for the first time in January 2002. It is clear to us that the Comptroller has the full support of the Secretary and that the Secretary is committed to making meaningful change. To make this work, it will be important that the Comptroller has sufficient authority to bring about the full, effective participation of the military services and business process owners across the department. The Comptroller has direct control of 20 percent of the data needed for sound financial management and has historically had limited ability to control information technology investments across the department. 
Addressing issues such as centralization of authority for information systems investments and continuity of leadership will be critical to successful business process transformation. In addition to DOD, a number of other federal departments and agencies are facing an array of interrelated business system management challenges for which resolution is likely to require a number of years and could span administrations. One option that may have merit would be the establishment of chief operating officers, who could be appointed for a set term of 5 to 7 years, with the potential for reappointment. These individuals should have a proven track record as a business process change agent for a large, diverse organization and would spearhead business process transformation across the department or agency. As discussed in our January 2001 report on DOD’s major performance and accountability challenges, establishing a results orientation will be another key element of any approach to reform. Such an orientation should draw upon results that could be achieved through commercial best practices, including outsourcing and shared servicing concepts. Personnel throughout the department must share the common goal of establishing financial management operations that not only produce financial statements that can withstand the test of an audit but, more importantly, routinely generate useful, reliable, and timely financial information for day-to-day management purposes. In addition, we have previously testified that DOD’s financial management improvement efforts should be measured against an overall goal of effectively supporting DOD’s basic business processes, including appropriately considering related business process system interrelationships, rather than determining system-by-system compliance. Such a results-oriented focus is also consistent with an important lesson learned from the department’s Year 2000 experience. 
DOD’s initial Year 2000 focus was geared toward ensuring compliance on a system-by-system basis and did not appropriately consider the interrelationship of systems and business areas across the department. It was not until the department, under the direction of the then Deputy Secretary, shifted to a core mission and function review approach that it was able to achieve the desired result of greatly reducing its Year 2000 risk. Since the Secretary has established an overall business process transformation goal that will require a number of years to achieve, going forward, it will be especially critical for managers throughout the department to focus on specific, measurable metrics that, over time, collectively will translate to achieving this overall goal. It will be important for the department to refocus its annual accountability reporting on this overall goal of fundamentally transforming the department’s financial management systems and related business processes, to include appropriate interim annual measures to track progress toward this goal. In the short term, it will be important to focus on actions that can be taken using existing systems and processes. Establishing interim measures to both track performance against the department’s overall transformation goals and facilitate near-term successes using existing systems and processes will be critical. The department has established an initial set of metrics intended to evaluate financial performance, and reports that it has seen improvements. For example, DOD reported that during the first 4 months of fiscal year 2002, the dollar value of adjustments to closed appropriation accounts fell about 51 percent from the same 4-month period in fiscal year 2001. Other existing metrics concern cash and funds management, contract and vendor payments, and disbursement accounting.
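The closed-account measure described above is, at bottom, a simple year-over-year percent-reduction metric. As a minimal sketch of how such a metric might be computed (the dollar amounts below are invented for illustration and are not DOD’s actual figures):

```python
def percent_reduction(prior: float, current: float) -> float:
    """Year-over-year percent reduction in a tracked dollar amount."""
    if prior == 0:
        raise ValueError("prior-period amount must be nonzero")
    return (prior - current) / prior * 100

# Hypothetical adjustment totals for the same 4-month window in two fiscal years.
fy2001_adjustments = 510_000_000  # assumed figure, for illustration only
fy2002_adjustments = 250_000_000  # assumed figure, for illustration only

reduction = percent_reduction(fy2001_adjustments, fy2002_adjustments)
print(f"Reduction in closed-account adjustments: {reduction:.0f}%")
```

The same comparison could be run for any of the other metrics mentioned, such as contract and vendor payment backlogs, provided the two periods being compared are defined consistently.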
DOD also reported that it is working to develop these metrics into higher level measures more appropriate for senior management. We agree with the department’s efforts to expand the use of appropriate metrics to guide its financial management reform efforts. Another key to breaking down the parochial interests and stovepiped approaches that have plagued previous reform efforts will be establishing mechanisms to reward organizations and individuals for behaviors that comply with DOD-wide and congressional goals. Such mechanisms should be geared to providing appropriate incentives and penalties to motivate decisionmakers to initiate and implement efforts that result in fundamentally reformed financial management and other business support operations. In addition, such incentives and consequences will be essential if DOD is to break down the parochial interests that have plagued previous reform efforts. Incentives driving traditional ways of doing business, for example, must be changed, and cultural resistance to new approaches must be overcome. Simply put, DOD must convince people throughout the department that they must change from “business as usual” systems and practices or they are likely to face serious consequences, organizationally and personally. Establishing and implementing an enterprisewide financial management architecture will be essential for the department to effectively manage its large, complex system modernization effort now underway. The Clinger-Cohen Act requires agencies to develop, implement, and maintain an integrated system architecture. As we previously reported, such an architecture can help ensure that the department invests only in integrated, enterprisewide business system solutions and, conversely, will help move resources away from non-value-added legacy business systems and nonintegrated business system development efforts.
In addition, without an architecture, DOD runs the serious risk that its system efforts will result in perpetuating the existing system environment that suffers from systems duplication, limited interoperability, and unnecessarily costly operations and maintenance. In our May 2001 report, we pointed out that DOD lacks a financial management enterprise architecture to guide and constrain the billions of dollars it plans to spend to modernize its financial management operations and systems. DOD has reported that it is in the process of contracting for the development of a DOD-wide financial management enterprise architecture to “achieve the Secretary’s vision of relevant, reliable and timely financial information needed to support informed decision-making.” Consistent with our previous recommendations in this area, DOD has begun an extensive effort to document the department’s current “as-is” financial management architecture by inventorying systems now relied on to carry out financial management operations throughout the department. DOD has identified 674 top-level systems and at least 997 associated interfaces thus far and estimates that this inventory could include up to 1,000 systems when completed. While DOD’s beginning efforts at developing a financial management enterprise architecture are off to a good start, the challenges yet confronting the department in its efforts to fully develop, implement, and maintain a DOD-wide financial management enterprise architecture are unprecedented. Our May 2001 report details a series of recommended actions directed at ensuring DOD employs recognized best practices for enterprise architecture management. This effort will be further complicated as the department strives to develop multiple enterprise architectures across its various business areas. For example, in June 2001, we recommended that DOD develop an enterprise architecture for its logistics operations.
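One value of an “as-is” inventory like the one described above is that it exposes exactly the duplication an enterprise architecture is meant to eliminate: multiple systems performing the same business function. A simplified sketch of that analysis (the system names and function tags here are invented for illustration; the real inventory has hundreds of systems):

```python
from collections import defaultdict

# Hypothetical inventory entries: (system name, business function it performs).
inventory = [
    ("PAY-SYS-A", "civilian payroll"),
    ("PAY-SYS-B", "civilian payroll"),
    ("DISB-1", "disbursement accounting"),
    ("DISB-2", "disbursement accounting"),
    ("DISB-3", "disbursement accounting"),
    ("GL-CORE", "general ledger"),
]

# Group systems by the function they serve.
by_function = defaultdict(list)
for system, function in inventory:
    by_function[function].append(system)

# Functions served by more than one system are candidates for consolidation.
duplicated = {f: s for f, s in by_function.items() if len(s) > 1}
for function, systems in sorted(duplicated.items()):
    print(f"{function}: {len(systems)} overlapping systems -> {', '.join(systems)}")
```

Applied to a full inventory, the same grouping would give planners a first-order consolidation target list before any interface analysis begins.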
As I discussed previously, an integrated reform strategy will be critical. In this context, it is essential that DOD closely coordinate and integrate the development and implementation of these, as well as other, architectures. By following this integrated approach and our previous recommendations, DOD will be in the best position to avoid the serious risk that after spending billions of dollars on systems modernization, it will continue to perpetuate the existing systems environment that suffers from duplication of systems, limited interoperability, and unnecessarily costly operations and maintenance. Ensuring effective monitoring and oversight of progress will also be a key to bringing about effective implementation of the department’s financial management and related business process reform. We have previously testified that periodic reporting of status information to department top management, the Office of Management and Budget (OMB), the Congress, and the audit community was another key lesson learned from the department’s successful effort to address its Year 2000 challenge. Previous Financial Management Improvement Plans DOD submitted to the Congress have simply been compilations of data call information on the stovepiped approaches to financial management improvements received from the various DOD components. It is our understanding that DOD plans to change its approach and anchor its plans in an enterprise system architecture. If the department’s future plans are upgraded to provide a department-wide strategic view of the financial management challenges facing DOD along with planned corrective actions, these plans can serve as an effective tool not only to help guide and direct the department’s financial management reform efforts but also as a tool for oversight.
Going forward, this Subcommittee’s annual oversight hearings, as well as the active interest and involvement of other cognizant defense and oversight committees in the Congress, will continue to be key to effectively achieving and sustaining DOD’s financial management and related business process reform milestones and goals. Given the size, complexity, and deeply ingrained nature of the financial management problems facing DOD, heroic end-of-the-year efforts relied on by some agencies to develop auditable financial statement balances are not feasible at DOD. Instead, a sustained focus on the underlying problems impeding the development of reliable financial data throughout the department will be necessary and is the best course of action. I applaud the proposals spearheaded by the Senate Armed Services Committee, and subsequently enacted as part of the fiscal year 2002 National Defense Authorization Act, to provide a framework for redirecting the department’s resources from the preparation and audit of financial statements that are acknowledged by DOD leadership to be unauditable to the improvement of DOD’s financial management systems and financial management policies, procedures, and internal controls. Under this new legislation, the department will also be required to report to the Congress on how resources have been redirected and the progress that has been achieved. This reporting will provide an important vehicle for the Congress to use in assessing whether DOD is using its available resources to best bring about the development of timely and reliable financial information for daily decisionmaking and transform its financial management as envisioned by the Secretary of Defense.
Financial management problems at the Department of Defense (DOD) are complex, long-standing, and deeply rooted throughout its business operations. DOD's financial management deficiencies represent the single largest obstacle to achieving an unqualified opinion on the U.S. government's consolidated financial statements. So far, none of the military services or major DOD components have passed the test of an independent financial audit because of pervasive weaknesses in financial management systems, operations, and controls. These problems go back decades, and earlier attempts at reform have been unsuccessful. DOD continues to rely on a far-flung, complex network of finance, logistics, personnel, acquisition, and other management information systems for financial data to support day-to-day management and decision-making. This network has evolved into an overly complex and error-prone operation with (1) little standardization across DOD components, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, (4) manual data entry into multiple systems, and (5) a large number of data translations and interfaces, which combine to exacerbate problems with data integrity. Many of the elements that are crucial to financial management reform and business process transformation--particularly those that rely on long-term systems improvements--will take years to fully implement.
Combined Medicare and Medicaid payments to nursing homes for care provided to vulnerable elderly and disabled beneficiaries were expected to total about $63 billion in 2002, with a federal share of approximately $42 billion. Oversight of nursing homes is a shared federal-state responsibility. Based on statutory requirements, CMS defines standards that nursing homes must meet to participate in the Medicare and Medicaid programs and contracts with states to assess whether homes meet these standards through annual surveys and complaint investigations. A range of statutorily defined sanctions is available to help ensure that homes maintain compliance with federal quality requirements. CMS is also responsible for monitoring the adequacy of state survey activities. Every nursing home receiving Medicare or Medicaid payment must undergo a standard survey not less than once every 15 months, and the statewide average interval for these surveys must not exceed 12 months. A standard survey entails a team of state surveyors, including registered nurses (RN), spending several days in the nursing home to assess compliance with federal long-term care facility requirements, particularly whether care and services provided meet the assessed needs of the residents and whether the home is providing adequate quality care, such as preventing avoidable pressure sores, weight loss, or accidents. Based on our earlier work indicating that facilities could mask certain deficiencies, such as routinely having too few staff to care for residents, if they could predict the survey timing, HCFA directed states in 1999 to (1) avoid scheduling a home’s survey for the same month of the year as the home’s previous standard survey and (2) begin at least 10 percent of standard surveys outside the normal workday (either on weekends, early in the morning, or late in the evening). 
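The survey-timing rules just described are mechanical enough to check programmatically: each home must be surveyed at least once every 15 months, the statewide average interval must not exceed 12 months, a home’s survey should avoid the calendar month of its previous survey, and at least 10 percent of standard surveys should begin outside the normal workday. A minimal sketch of such a check (all figures are invented; comparing calendar months only is a simplification of the month-avoidance directive):

```python
def check_schedule(intervals_months, prev_months, curr_months, off_hours_fraction):
    """Flag violations of the survey-scheduling rules described in the text.

    intervals_months   -- months elapsed between consecutive surveys, one per home
    prev/curr_months   -- calendar month (1-12) of each home's prior and current survey
    off_hours_fraction -- share of standard surveys begun outside the normal workday
    """
    findings = []
    if any(i > 15 for i in intervals_months):
        findings.append("a home exceeded the 15-month maximum interval")
    if sum(intervals_months) / len(intervals_months) > 12:
        findings.append("statewide average interval exceeds 12 months")
    if any(p == c for p, c in zip(prev_months, curr_months)):
        findings.append("a survey repeats the month of the home's previous survey")
    if off_hours_fraction < 0.10:
        findings.append("fewer than 10 percent of surveys begun off-hours")
    return findings

# Hypothetical statewide data for three homes: one overdue survey, one
# same-month repeat, and too few off-hours starts.
print(check_schedule([11, 13, 16], [3, 7, 10], [5, 9, 10], 0.08))
```

A compliant schedule returns an empty list; each string in the result corresponds to one of the four rules above.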
State surveyors’ assessment of the quality of care provided to a sample of residents during the standard survey serves as the basis for evaluating nursing homes’ compliance with federal requirements. CMS establishes specific investigative protocols for state surveyors to use in conducting these comprehensive surveys. These procedural instructions are intended to make the on-site surveys thorough and consistent across states. In response to our earlier recommendations concerning the need to better ensure that surveyors do not miss significant care problems, HCFA planned a two-phase revision of the survey process. In phase one, HCFA instructed states in 1999 to (1) begin using a series of new investigative protocols covering pressure sores, weight loss, dehydration, and other key quality areas, (2) increase the sample of residents reviewed with conditions related to these areas, and (3) review “quality indicator” information on the care provided to a home’s residents, before actually visiting the home, to help guide survey activities. Quality indicators are essentially numeric warning signs of the prevalence of care problems such as greater-than-expected instances of weight loss, dehydration, or pressure sores. They are derived from nursing homes’ assessments of residents and rank a facility in 24 areas compared with other nursing homes in the state. By using the quality indicators to select a preliminary sample of residents before the on-site review, surveyors are better prepared to identify potential care problems. Surveyors augment this preliminary sample with additional resident cases once they arrive in the home. To address remaining problems with sampling and the investigative protocols, CMS is planning a second set of revisions to its survey methodology.
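Because quality indicators rank a facility against the other homes in its state, the underlying computation is essentially a percentile rank per indicator area. A sketch of that ranking, with a single invented indicator and an illustrative flagging cutoff (the 0.90 threshold and all rates are assumptions, not CMS parameters):

```python
def percentile_rank(value: float, state_values: list) -> float:
    """Fraction of homes in the state with a rate at or below this home's rate."""
    return sum(v <= value for v in state_values) / len(state_values)

# Hypothetical indicator: fraction of residents with pressure sores, for ten
# homes in a state, plus the rate at the home being scheduled for survey.
state_rates = [0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.12, 0.15, 0.20]
home_rate = 0.15

rank = percentile_rank(home_rate, state_rates)
# A home near the top of the state distribution for a problem indicator would
# be flagged so surveyors include affected residents in the preliminary sample.
flagged = rank >= 0.90
print(f"percentile rank: {rank:.2f}, flagged: {flagged}")
```

Running the same computation across all 24 indicator areas would yield the kind of pre-visit profile the text describes surveyors using to build the preliminary resident sample.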
The focus of phase two is (1) improving the on-site augmentation of the preliminary sample selected off-site using the quality indicators and (2) strengthening the protocols used by surveyors to ensure more rigor in their on-site investigations. Complaint investigations provide an opportunity for state surveyors to intervene promptly if quality-of-care problems arise between standard surveys. Within certain federal guidelines and time frames, surveyors generally follow state procedures when investigating complaints filed against a home by a resident, the resident’s family, or nursing home employees, and typically target a single area in response to the complaint. Historically, HCFA had played a minimal role in providing states with guidance and oversight of complaint investigations. Until 1999, federal guidelines were limited to requiring the investigation of complaints alleging immediate jeopardy conditions within 2 workdays. In March 1999, HCFA acted to strengthen state complaint procedures by instructing states to investigate any complaint alleging harm to a nursing home resident within 10 workdays. Additional guidance provided to states in late 1999 specified that, as with immediate jeopardy complaints, investigations should generally be conducted on-site at the nursing home. This guidance also identified techniques to help states identify complaints having a higher level of actual harm. As part of a complaint improvement project, also initiated in late 1999, HCFA plans to issue more detailed guidance to states, such as identifying model programs or practices to increase the effectiveness of complaint investigations. Quality-of-care deficiencies identified during either standard surveys or complaint investigations are classified in 1 of 12 categories according to their scope (i.e., the number of residents potentially or actually affected) and their severity. 
An A-level deficiency is the least serious and is isolated in scope, while an L-level deficiency is the most serious and is considered to be widespread in the nursing home (see table 1). States are required to enter information about surveys and complaint investigations, including the scope and severity of deficiencies identified, in CMS’s OSCAR database. The importance of accurate and timely reporting of nursing home deficiency data has increased with the public reporting of survey deficiencies, which HCFA initiated in 1998 on its Nursing Home Compare Web site. The public reporting of deficiency data is intended to assist individuals in differentiating among nursing homes. In November 2002, CMS augmented the deficiency data available on its Web site with 10 clinical indicators of quality, such as the percentage of residents with pressure sores, in nursing homes nationwide. While the intent of this new initiative is worthwhile, CMS had not resolved several important issues that we raised prior to moving from a six-state pilot to nationwide implementation. These issues included: (1) the ability of the new information to accurately identify differences in nursing home quality, (2) the accuracy of the underlying data used to calculate the quality indicators, and (3) the potential for public confusion over the available data. Ensuring that documented deficiencies are corrected is a shared federal-state responsibility. CMS imposes sanctions on homes with Medicare or dual Medicare and Medicaid certification on the basis of state referrals. CMS normally accepts a state’s recommendation for sanctions but can modify it. The scope and severity of a deficiency determine the applicable sanctions that can involve, among other things, requiring training for staff providing care to residents, imposing monetary fines, denying the home Medicare and Medicaid payments for new admissions, and terminating the home from participation in these programs. 
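The scope and severity classification described above is essentially a 4-by-3 lookup grid. The sketch below is a hypothetical illustration in Python, not CMS software; the label for the lowest severity row (A-C) is an assumption inferred from the deficiency levels the report names (D and E as potential for more than minimal harm, G as actual harm, and immediate jeopardy at the top), and the function name is invented for illustration.

```python
# Illustrative sketch (not CMS code) of the 12-category scope/severity grid:
# four severity rows crossed with three scope columns yield letters A-L,
# with A the least serious (isolated) and L the most serious (widespread).
SEVERITY_LEVELS = [
    "potential for minimal harm",            # rows A-C (label assumed)
    "potential for more than minimal harm",  # rows D-F
    "actual harm",                           # rows G-I
    "immediate jeopardy",                    # rows J-L
]
SCOPES = ["isolated", "pattern", "widespread"]

def deficiency_category(severity: str, scope: str) -> str:
    """Return the letter category (A-L) for a severity/scope combination."""
    index = SEVERITY_LEVELS.index(severity) * len(SCOPES) + SCOPES.index(scope)
    return "ABCDEFGHIJKL"[index]
```

Under this layout, an isolated actual harm deficiency maps to G and a widespread immediate jeopardy deficiency maps to L, matching the levels the report discusses.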
Before a sanction is imposed, federal policy generally gives nursing homes a grace period of 30 to 60 days to correct the deficiency. We earlier reported, however, that the threat of federal sanctions did not prevent nursing homes from cycling in and out of compliance because they were able to avoid sanctions by returning to compliance within the grace period, even when they had been cited for actual harm on successive surveys. In 1998, HCFA began a two-stage phase-in of a new enforcement policy. In the first stage, effective September 1998, HCFA required states to refer for immediate sanction homes found to have a pattern of harming residents or exposing them to actual or potential death or serious injury (H-level deficiencies and above on CMS’s scope and severity grid). Effective January 14, 2000, HCFA expanded this policy to also require referral of homes found to have harmed one or a small number of residents (G-level deficiencies) on successive standard surveys. CMS is responsible for overseeing each state survey agency’s performance in ensuring quality of care in state nursing homes. Its primary oversight tools are statutorily required federal monitoring surveys conducted annually in 5 percent of the nation’s certified Medicare and Medicaid nursing homes, on-site annual state performance reviews instituted during fiscal year 2001, and analysis of periodic oversight reports that have been produced since 2000. Federal monitoring surveys can be either comparative or observational. A comparative survey involves a federal survey team conducting a complete, independent survey of a home within 2 months of the completion of a state’s survey in order to compare and contrast the findings. In an observational survey, one or more federal surveyors accompany a state survey team to a nursing home to observe the team’s performance. Roughly 85 percent of federal surveys are observational. 
State performance reviews, implemented in October 2000, measure state performance against seven standards, including statutory requirements regarding survey frequency, requirements for documenting deficiencies, timeliness of complaint investigations, and timely and accurate entry of deficiencies into OSCAR. These reviews replaced state self-reporting of their compliance with federal requirements. In October 2000, HCFA also began to produce 19 periodic reports to monitor both state and regional office performance. The reports are based on OSCAR and other CMS databases. Examples of reports that track state activities include pending nursing home terminations (weekly), data entry timeliness (quarterly), tallies of state surveys that find homes deficiency free (semiannually), and analyses of the most frequently cited deficiencies by states (annually). These reports, in a standard format, enable comparisons within and across states and regions and are intended to help identify problems and the need for intervention. Certain reports—such as the timeliness of state survey activities—are used to monitor compliance with state performance standards. The magnitude of the problems uncovered during standard nursing home surveys remains a cause for concern even though OSCAR deficiency data indicate that state surveyors are finding fewer serious quality problems. Compared to an earlier period, the percentage of homes nationwide cited since mid-2000 for actual harm or immediate jeopardy has decreased in over three-quarters of states—with seven states reporting a drop of 20 percentage points or more. State surveys conducted since about mid-2000 showed less variance from federal comparative surveys, suggesting that (1) state surveyors’ performance in documenting serious deficiencies has improved and (2) the decline in serious nursing home quality problems is potentially real. 
However, federal comparative surveys, as well as our review of a sample of survey reports from homes with a history of quality-of-care problems, continued to find understatement of actual harm deficiencies. Compared to the preceding 18-month period, the proportion of nursing homes cited for actual harm or immediate jeopardy has declined nationally from 29 percent to 20 percent since mid-2000. In contrast, from early 1997 through mid-2000, the percentage of homes cited for such serious deficiencies was either relatively stable or increased in 31 states. From July 2000 through January 2002, 40 states cited a smaller percentage of homes with such serious deficiencies, while only 9 states and the District of Columbia cited a larger proportion of homes with such deficiencies. Despite these changes, there is still considerable variation in the proportion of homes cited for serious deficiencies, ranging from about 7 percent in Wisconsin to about 50 percent in Connecticut. Appendix II provides trend data on the percentage of nursing homes cited for serious deficiencies for all 50 states and the District of Columbia. Table 2 shows the recent change in actual harm and immediate jeopardy deficiencies for states that surveyed at least 100 nursing homes. Specifically: Twenty-five states had a 5 percentage point or greater decrease in the proportion of homes identified with actual harm or immediate jeopardy. For over two-thirds of these states, the decrease in serious deficiencies was greater than 10 percentage points. Seven states—Arizona, Alabama, California, Michigan, Indiana, Pennsylvania, and Washington—experienced declines of 15 percentage points or more. Two states, South Dakota and Colorado, experienced an increase of 5 percentage points or greater in the proportion of homes with actual harm or immediate jeopardy deficiencies (6.6 and 10.8, respectively). The remaining 11 states were relatively stable—experiencing approximately a 4 percentage point change or less. 
States offered several explanations for the declines in actual harm and immediate jeopardy deficiencies, including (1) changing guidance from CMS regional offices as to what constitutes actual harm, (2) hiring additional staff, and (3) surveyors failing to properly identify actual harm deficiencies. Our analysis of federal comparative surveys conducted nationwide prior to and since June 2000 showed a decreased variance between federal and state survey findings (see app. I for a description of our scope and methodology). For comparative surveys completed from October 1998 through May 2000, federal surveyors found actual harm or higher-level deficiencies in 34 percent of homes where state surveyors had found no such deficiencies, compared to 22 percent for comparative surveys completed from June 2000 through February 2002. In addition, while federal surveyors found more serious care problems than state surveyors on 70 percent of the earlier comparative surveys, this percentage declined to 60 percent for the more recent surveys. Despite the decline in understatement of actual harm deficiencies from 34 percent to 22 percent, the magnitude of the state surveyors’ understatement of quality problems remains an issue. For example, from June 2000 through February 2002, federal surveyors found at least one actual harm or immediate jeopardy quality-of-care deficiency in 16 of the 85 homes (19 percent) that the states had found to be free of deficiencies. In one of these 16 homes, federal surveyors found that the facility failed to prevent pressure sores, failed to consistently monitor pressure sores when they did develop, and failed to notify the physician promptly so that proper treatment could be started. The federal surveyors who conducted the comparative survey of this nursing home noted in the file that a lack of consistent monitoring of pressure sores existed at the home during the time of the state’s survey and that the state surveyors should have found the deficiency. 
Several states that reviewed a draft of this report questioned the value of federal comparative surveys because of their timing. Arizona noted that comparative surveys do not have to begin until up to 2 months after the state’s survey, and Iowa and Virginia officials said they might occur so long after the state’s survey that conditions in the home may have significantly changed. Although legislation requires comparative surveys to begin within 2 months of the state’s survey, CMS is continuing to make progress in reducing the time frame between the state survey and the comparative survey. Based on our earlier recommendation that comparative surveys begin as soon after the state’s survey as possible, CMS instructed the regions to begin these surveys no later than 1 month following the state’s survey, and the average time between surveys nationally has decreased from 33 calendar days in 1999 to about 26 calendar days for surveys conducted from June 2000 through February 2002. Even with the reported decline in serious deficiencies, an unacceptably high number of nursing homes—one in five nationwide—still had actual harm or immediate jeopardy deficiencies. Moreover, we found widespread understatement of actual harm deficiencies in a sample of surveys we reviewed that were conducted since July 2000 at homes with a history of harming residents (see app. I for a description of our methodology in selecting this sample). In 39 percent of the 76 survey reports we reviewed, we found sufficient evidence to conclude that deficiencies cited at a lower level (generally, potential for more than minimal harm, D or E) should have been cited at the level of actual harm or higher (G level or higher on CMS’s scope and severity grid). We were unable to assess whether the scope and severity of other deficiencies in our sample of surveys were also understated because of weaknesses in the investigations conducted by surveyors and in the adequacy with which they documented those deficiencies. 
Of the surveys we reviewed, 30 (39 percent) contained sufficient evidence for us to conclude that deficiencies cited at the D and E level should have been cited as at least actual harm because a deficient practice was identified and linked to documented actual harm involving at least one resident (see table 3). These 30 survey reports depicted examples of actual harm, including serious, avoidable pressure sores; severe weight loss; and multiple falls resulting in broken bones and other injuries (see app. III for abstracts of these 30 survey reports). The following example illustrates understated actual harm involving the failure to provide necessary care and services. A nurse at one facility noted a large area of bruising and swelling on an 89-year-old resident’s chest. Nothing further was done to explore this injury until 11 days later when the resident began to experience shortness of breath and diminished breath sounds. Only then was a chest x-ray taken, revealing that the resident had sustained two fractured ribs and that fluid had accumulated in the resident’s left lung. A facility investigation determined that the resident had been injured by a lift used to transfer the resident to and from the bed. It was clear from the surveyor’s information that the facility failed to take appropriate action to assess and provide the necessary care until the resident developed serious symptoms of chest trauma. Nevertheless, the surveyor concluded that there was no actual harm and cited a D-level deficiency—potential for more than minimal harm. State survey agency officials in Alabama, California, Iowa, and Nebraska told us that surveyors had originally cited G-level deficiencies in 10 of the surveys we reviewed, but that the deficiencies had been reduced to the D level during the states’ reviews because of inadequate surveyor documentation. 
We concluded that 5 of the 10 surveys did contain adequate documentation to support actual harm because there was a clear link between the deficient facility practice and the documented harm to a resident. For example, the survey managers in one state changed a G- to a D-level deficiency because the surveyor only cited one source of evidence to support the deficiency—nurses’ notes in the residents’ medical records. According to the surveyor, a resident with dementia, experiencing long- and short-term memory problems, fell 11 times and sustained a fractured wrist, three fractured ribs, and numerous bruises, abrasions, and skin tears. According to the notes of facility nurses, a personal alarm unit was in place as a safety device to prevent falls. The surveyor found that the facility had (1) failed to provide adequate interventions to prevent accidents and (2) continued to use the alarm unit even though it did not prevent any of the falls. The medical record documentation of these events was extensive and, in our judgment, was sufficient evidence of a deficiency that resulted in actual harm to the resident. In many of the 76 surveys we reviewed, including surveys in which we found no D- or E-level deficiencies that would appear to meet the criteria for actual harm deficiencies, we identified serious investigation or documentation weaknesses that could further contribute to the understatement of serious deficiencies in nursing homes. In some cases, the survey did not clearly describe the elements of the deficient practice, such as whether the resident developed a pressure sore in the facility or what the facility did to prevent the development of a facility-acquired pressure sore. In other cases, the survey omitted critical facts, such as whether a pressure sore had worsened or the size of the pressure sore. Widespread weaknesses persist in state survey, complaint investigation, and enforcement activities despite increased attention to these issues in recent years. 
Several factors at the state level contribute to the understatement of serious quality-of-care problems, including poor investigation and documentation of deficiencies, the absence of adequate quality assurance processes, and a large number of inexperienced surveyors in some states due to high attrition or hiring limitations. In addition, our analysis of OSCAR data indicated that the timing of a significant proportion of state surveys remained predictable, allowing homes to conceal problems if they choose to do so. Many states’ complaint investigation policies and procedures were still inadequate to provide intended protections. For example, many states do not investigate all complaints identified as alleging actual harm in a timely manner, a problem some states attributed to insufficient staff and an increase in the number of complaints. Although HCFA strengthened its enforcement policy by requiring state survey agencies, beginning in January 2000, to refer for immediate sanction homes that had a pattern of harming residents, we found that many states did not fully comply with this new requirement. States failed to refer a substantial number of homes for sanction, significantly undermining the policy’s intended deterrent effect. CMS and state officials identified several factors that they believe contribute to state surveys continuing to miss significant care problems. These weaknesses persist, in part, because many states lack adequate quality assurance processes to ensure that deficiencies identified by surveyors are appropriately classified. According to officials we interviewed, the large number of inexperienced surveyors in some states due to high attrition has also had a negative impact on the quality of state surveys and investigations. Our analysis of OSCAR data also indicated that nursing homes could conceal problems if they choose to do so because a significant proportion of current state surveys remain predictable. 
Consistent with the investigation and documentation weaknesses we found in our review of a sample of survey reports from homes with a history of actual harm deficiencies, CMS officials told us that their own activities had identified similar problems that could contribute to an understatement of serious deficiencies at nursing homes. CMS reviews of state survey reports during fiscal year 2001 demonstrated weaknesses in a majority of states, including (1) inadequate investigation and documentation of a poor outcome, such as failure to review available records to help identify when a pressure sore was first observed and how it changed over time, (2) failure to specifically identify the deficient practice that contributed to a poor outcome, and (3) understatement of the seriousness of a deficiency, such as citing a deficiency at the D level (potential for more than minimal harm) when there was sufficient evidence in the survey report to cite the deficiency at the G level (actual harm). State survey agency officials expressed confusion about the definition of “actual harm” and “immediate jeopardy,” suggesting that such confusion contributes to the variability in state deficiency trends. For example, officials in one state told us that, in their view, residents must experience functional impairment for state surveyors to cite an actual harm deficiency, an interpretation that CMS officials told us was incorrect. Under such a definition, repeated falls that resulted in bruises, cuts, and painful skin tears would not be cited as actual harm, even if the facility failed to assess the resident for measures to prevent falls. CMS officials also told us that, contrary to federal guidance, state surveyors in at least one state did not cite all identified deficiencies but rather brought them to the homes’ attention with the expectation that the deficiencies would be corrected. 
CMS officials told us that they identified the problem by asking state officials about the unusually high number of homes with no deficiencies on their standard surveys. Some state officials told us that considerable staff resources are devoted to scrutinizing the support for actual harm and higher-level deficiencies that could lead to the imposition of a sanction. While most of the 16 states we contacted had quality assurance processes to review deficiencies cited at the actual harm level and higher, half did not have such processes to help ensure that the scope and severity of less serious deficiencies were not understated. State officials generally told us that they lacked the staff and time to review deficiencies that did not involve actual harm or immediate jeopardy, but some states have established such programs. For example, Maryland established a technical assistance unit in early 2001 to review a sample of survey reports; the review looks at all deficiencies—not just those involving actual harm or immediate jeopardy. A Maryland official told us that she had the resources to do so because the state legislature authorized a substantial increase in the number of surveyors in 1999. However, staff cutbacks in late 2002 due to the state’s budget crisis have resulted in the reviews being less systematic than originally planned. In Colorado, two long-term-care supervisors reviewed all 1,351 deficiencies cited in fiscal year 2001. Maryland and Colorado officials told us that the reviews have identified shortcomings in the investigation and documentation of deficiencies, such as the failure to interview residents or the classification of deficiencies as process issues when they actually involved quality of care. The reviews, we were told, provide an opportunity for surveyor feedback or training that improves the quality and consistency of future surveys. 
State officials cited the limited experience level of state surveyors as a factor contributing to the variability in citing actual harm or higher-level deficiencies and the understatement of such deficiencies. Data we obtained from 42 state survey agencies in July 2002 revealed the magnitude of the problem: in 11 states, 50 percent or more of surveyors had 2 years’ experience or less; in another 13 states, from 30 percent to 48 percent of surveyors had similarly limited experience (see app. IV). For example, Alabama’s and Louisiana’s recent annual attrition rates were 29 percent and 18 percent, respectively, and, as a result, almost half of the surveyors in both states had been on the job for 2 years or less. In California and Maryland—states that hired a significant number of new surveyors since 2000—52 percent and 70 percent of surveyors, respectively, had less than 2 years of on-the-job experience. According to CMS regional office and state officials, the first year for a new surveyor is essentially a period of training and low productivity, and it takes as long as 3 years for a surveyor to gain sufficient knowledge, experience, and confidence to perform the job well. High staff turnover was attributed, in part, to low salaries for RN surveyors—salaries that may not be competitive with other employment opportunities for nurses. Overall, 29 of the 42 states that responded to our inquiry indicated that they believed nurse surveyor salaries were not competitive (see app. IV). Officials in several states also told us that the combination of low starting salaries and a highly competitive market forced them to hire less qualified candidates with less breadth of experience. Even though HCFA directed states, beginning January 1, 1999, to avoid scheduling a nursing home’s survey for the same month of the year as its previous survey, over one-third of state surveys remain predictable. 
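HCFA's 1999 scheduling instruction implies a simple month-of-year test for predictability. As a rough illustration only (a simplified proxy with a hypothetical function name, not GAO's exact analytical methodology), a standard survey could be flagged as predictable when it falls in the same calendar month as the home's previous standard survey:

```python
from datetime import date

def is_predictable(current_survey: date, previous_survey: date) -> bool:
    """Flag a standard survey as 'predictable' if it falls in the same
    month of the year as the home's previous standard survey, the pattern
    HCFA's 1999 instruction told states to avoid. Simplified proxy only;
    the statutory 15-month maximum interval is not modeled here."""
    return current_survey.month == previous_survey.month

# A home surveyed in June two years running could anticipate the visit:
# is_predictable(date(2001, 6, 12), date(2000, 6, 3)) evaluates to True.
```

A home that can anticipate the survey month in this way has the opportunity, as the report notes, to temporarily mask problems such as short staffing.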
Our analysis demonstrated little change in the proportion of predictable nursing home surveys. Predictable surveys can allow quality-of-care problems to go undetected because homes, if they choose to do so, may conceal problems. We recommended in 1998 that HCFA segment the standard survey into more than one review throughout the year, simultaneously increasing state surveyor presence in nursing homes and decreasing survey predictability. Although HCFA disagreed with segmenting the survey, it did recognize the need to reduce predictability. Our analysis of OSCAR data demonstrated that, on average, the timing of 34 percent of current surveys nationwide could have been predicted by nursing homes, a slight reduction from the prior surveys when about 38 percent of all surveys were predictable. The predictability of current surveys ranged from 83 percent in Alabama to 10 percent in Michigan (see app. V for data on all 50 states and the District of Columbia). In 34 states, 25 percent to 50 percent of current surveys were predictable, as shown in table 4. In 9 states, more than 50 percent of current surveys were predictable. Most state agencies did not investigate serious complaints filed against nursing homes within required time frames, and practices for investigating complaints in many states may not be as effective as they could be. A CMS review of states’ timeliness in investigating complaints alleging harm to residents revealed that most states did not investigate all such complaints within 10 days, as CMS requires. Additionally, a CMS-sponsored study of complaint practices in 47 states raised concerns about state approaches to accepting and investigating complaints. Until March 1999, states could set their own complaint investigation time frames, except that they were required to investigate within 2 workdays all complaints alleging immediate jeopardy conditions. 
In March 1999, we reported that inadequate complaint intake and investigation practices in states we reviewed had too often resulted in extensive delays in investigating serious complaints. As a result of our findings, HCFA began requiring states to investigate complaints that allege actual harm, but do not rise to the level of immediate jeopardy, within 10 workdays. CMS’s 2001 review of a sample of complaints in all states demonstrated that many states were not complying with these requirements. Specifically, 12 states were not investigating all immediate jeopardy complaints within the required 2 workdays, and 42 states were not complying with the requirement to investigate actual harm complaints within 10 days. The agency also found that the triaging of complaints to determine how quickly each complaint should be investigated was inadequate in many states. The extent to which states did not meet the 2-day and 10-day investigation requirements varied considerably. Officials from 12 of the 16 states we contacted indicated that they were unable to investigate complaints on time because of staff shortages. Oklahoma investigated only 3 of the 21 immediate jeopardy complaints that CMS sampled within the required 2-day period and none of the 14 sampled actual harm complaints within the required 10 days. Oklahoma officials attributed this timeliness problem to staff shortages and a surge in the number of complaints received in 2000, from about 5 per day to about 35. The rising volume of complaints is a particular problem for California, which receives about 10,000 complaints annually, and had a 20 percent increase in complaints from January 2001 through July 2002. State officials told us that California law requires all complaints alleging immediate jeopardy to a resident to be investigated within 24 hours and all others to be investigated within 10 days, and that the increase in the number of complaints requires an additional 32 surveyor positions. 
CMS regional officials told us that the vast majority of California complaints were investigated within 10 days. However, the 2001 review also showed that about 9 percent of the state’s standard surveys were conducted late. Both CMS and California officials indicated that the priority the state attaches to investigating complaints affected survey timeliness. Officials from Washington told us that their practice of investigating facility self-reported incidents led to their not meeting the 10-day requirement on all complaints that CMS reviewed. Washington investigated 18 of 20 sampled actual harm complaints on time—missing the 10-day requirement for the other two by 2 days and 4 days, respectively. Washington officials pointed out that the two complaints not investigated within 10 days were facility self-reported incidents and commented that many other states do not even require investigation of such incidents. Thus, in these other states, such incidents would not even have been included in CMS’s review. In its review of state complaint files, CMS also evaluated whether states had appropriately triaged complaints—that is, determined how quickly each complaint should be investigated. Most of the regions told us that one or more of their states had difficulty determining the investigation priority for complaints. In an extreme case, a regional office discovered that one of its states was prioritizing its complaints on the basis of staff availability rather than on the seriousness of the complaints. Several regions indicated that some states improperly assigned complaints to categories that permitted longer investigation time frames, and one region indicated that triaging difficulties involved state personnel not collecting enough information from the complainant to make a proper decision. 
Officials from some of the 16 state survey agencies we contacted indicated that HCFA’s 1999 guidance to states on what constitutes an actual harm complaint was unclear and confusing. In an effort to improve state responsiveness to complaints, HCFA hired a contractor in 1999 to assess and recommend improvements to state complaint practices. The study identified significant problems with states’ complaint processes, including complaint intake activities, investigation procedures, and complaint substantiation practices. For example, the report noted that 15 states did not have toll-free hotlines for the public to file complaints. In our earlier reports, we noted that the process of filing a complaint should not place an unnecessary burden on a complainant and that an easy-to-use complaint process should include a toll-free number that permits the complainant to leave a recorded message when state staff are unavailable. Table 5 summarizes major findings from the contractor’s report to CMS. State survey agencies failed to refer to CMS for immediate sanction, as CMS policy requires, 711 cases in which nursing homes were found to have a pattern of harming residents. Our earlier work found that nursing homes tended to “yo-yo” in and out of compliance, in part because HCFA rarely imposed sanctions on homes with a pattern of deficiencies that harmed residents. In response, the agency required that homes found to have harmed residents on successive standard surveys be referred to it for immediate sanction. Most states failed to refer at least some cases that should have been referred under this policy. Figure 1 shows the results of our analysis for the four states—Massachusetts, New York, Pennsylvania, and Texas—with the greatest numbers of cases that should have been referred and for the nation (see app. VII for information on all states). These four states accounted for 55 percent of the 711 cases. 
State and CMS officials identified several reasons why state agencies failed to forward cases to CMS for immediate sanction, including (1) an initial misunderstanding of the policy on the part of some states and regions, (2) poor state systems for monitoring the survey history of homes to identify those meeting the criteria for referral for immediate sanction, and (3) actions, by two states, that were at variance with CMS policy. First, officials from some states—and some CMS regional officials as well—told us that they did not initially fully understand the criteria for referring homes for immediate sanction. For example, several states and CMS regional offices reported that they did not understand that CMS required states to look back before the January 2000 policy implementation date to determine if there was an earlier survey with an actual-harm-level deficiency. The look-back requirement was specifically addressed in a February 10, 2000, CMS policy clarification specifying that state agencies were to consider the home’s survey history before the January 14, 2000, implementation date in determining if a home met the criteria for immediate referral for sanction. However, officials in one region told us that they had instructed three of four states not to look back before the January 2000 implementation date of the policy. Two other regional offices told us that CMS policy did not require the state to look back before January 2000 for earlier surveys. Officials at another regional office did not recall the look-back policy at the time we talked to them in mid-2002, and were not sure what advice they had given their states when the policy was first implemented. Second, some state survey agencies told us that their managers responsible for enforcement did not have an adequate methodology for checking the survey history of homes to identify those meeting the criteria. 
Some states said that their managers relied on manual systems, which are less accurate and sometimes failed to identify cases that should have been referred. Officials in one state told us that its district offices had no consistent procedure for checking the survey history of homes. An official in another state told us that some cases were not referred because time lags in reporting some surveys meant that an earlier survey—such as a complaint survey—with an actual harm deficiency might not have been entered in the state’s tracking system until after a later survey that also found harm-level deficiencies. Third, two states did not implement CMS’s expanded policy on immediate sanctions. New York’s practice was in direct conflict with CMS policy. Although CMS policy calls for state referrals to CMS regardless of the type of deficiency, a state agency official told us that the state only referred a home to CMS for immediate sanction if both actual harm citations were for the exact same deficiency. A CMS official indicated that New York began complying with the policy in September 2002. Texas, the second state, did not implement the CMS policy statewide until July 2002, when it received our inquiry about the cases not referred for immediate sanction. In the interim, from January 2000 through July 2002, three of Texas’s 11 district offices specifically requested, and were granted, permission from state survey agency officials to implement the policy. While CMS has increased its oversight of state survey and complaint activities and instituted a more systematic oversight process by initiating annual state performance reviews, CMS officials acknowledged that the effectiveness of the reviews could be improved. In particular, CMS officials told us that for the initial state performance review in fiscal year 2001, they lacked the capability to systematically distinguish between minor lapses identified during the reviews and more serious problems that require intervention. 
CMS oversight is also hampered by continuing limitations in OSCAR data, the inability or reluctance of some CMS regions to use such data to monitor state activities, and inadequate oversight of certain areas, such as survey predictability and state referral of homes for immediate enforcement actions. CMS has restructured regional office responsibilities to improve the consistency of federal oversight and plans to further strengthen oversight by increasing the number of federal comparative surveys. However, three federal initiatives critical to reducing the subjectivity evident in the current survey process and the investigation of complaints have been delayed. In the first of what is planned as an annual process, CMS’s 10 regional offices reviewed states’ fiscal year 2001 performance for seven standards to determine how well states met their nursing home survey responsibilities (see app. VIII for a description of the seven standards). This enhanced oversight of state survey agency performance responds to our prior recommendations. In 1999, we reported that HCFA’s oversight of state efforts had limitations preventing it from developing accurate and reliable assessments of state performance. HCFA regional office policies, practices, and oversight had been inconsistent, a reflection of coordination problems between HCFA’s central office and its regional staffs. In important areas, such as the adequacy of surveyors’ findings and complaint investigations, HCFA relied on states to evaluate their own performance and report their findings to HCFA. Although OSCAR data were available to HCFA for monitoring state performance, they were infrequently used, and neither the states nor HCFA’s regional offices were held accountable for failing to meet or enforce established performance standards. 
To promote consistent application of the standards across the 10 regions, the agency developed detailed guidance for measuring each standard, including the method of evaluation, the data sources to be used, and the criteria for determining whether a state met a standard. Only two states met all five standards we reviewed, and many did not meet several standards. Appendix IX identifies the standards we analyzed and the results of CMS’s review of these standards. During the 2001 review, CMS elected not to impose the most serious sanctions available for inadequate state performance, including reducing federal payments to the state or initiating action to terminate the state’s agreement, but advised the states that annual reviews in subsequent years will serve as the basis for such actions. While imposing no sanctions during the 2001 review, CMS did require several states to prepare corrective action plans. Each year, CMS plans to update and improve the standards based on experience gained in prior years. Characterizing its fiscal year 2001 state performance review as a “shakedown cruise,” CMS is working to address several weaknesses identified during the reviews, including difficulty in determining if identified problems were isolated incidents or systemic problems, flawed criteria for evaluating a critical standard, and inconsistencies in how regional offices conducted the reviews. In our discussions of the results of the performance reviews with officials of CMS’s regional offices, it was evident that some regions had a much better appreciation of the strengths and weaknesses of survey activities in their respective states than was reflected in the state performance reports. However, this information was not readily available to CMS’s central office. In addition, CMS has not released a summary of the review to permit easy comparison of the results. 
For subsequent reviews, CMS plans to more centrally manage the process to improve consistency and help ensure that future reviews distinguish serious from minor problems. CMS officials acknowledged that the first performance review did not provide adequate information regarding the seriousness of identified problems. The agency indicated that it had since revised the performance standards to enable it to determine the seriousness of the problems identified. Some regional office summary reports provided insufficient information to determine whether a state did not meet a particular standard by a wide or a narrow margin. For example, although California did not meet the standard to investigate all complaints alleging actual harm within 10 days, the regional office summary provided no details about the results. Regional officials told us that they found very few California complaints that were not investigated within the 10-day deadline and those that were not were generally investigated by the 13th day. Conversely, although the report for Oregon shows that the state met the 10-day requirement, our discussions with regional officials revealed that serious shortcomings nevertheless existed in the state’s complaint investigation practices. Officials in the Seattle region told us that for many years Oregon had contracted out investigations of complaints to local government entities not under the control of the state agency and, as a result, exercised little control over the roughly 2,000 complaints the state receives against nursing homes each year. For instance, under this arrangement, information about complaint investigations, including deficiencies identified, was not entered into CMS’s database. Regional officials told us that the Oregon state agency recently assumed responsibility for investigating complaints filed by the public, but that the local government entities continue to investigate facility self-reported incidents. 
CMS’s standard for measuring how well states document deficiencies identified during standard surveys was flawed because it mixed major and minor issues, blurring the significance of findings. CMS’s protocol required assessment of 33 items, ranging from the important issue of whether state surveyors cited deficiencies at the correct scope and severity level to the less significant issue of whether they used active voice when writing deficiencies. Because of the complexity of the criteria and concerns about the consistency of regional office reviews of states’ documentation practices, CMS decided not to report the results for this standard for 2001. For the 2002 review, CMS reduced the number of criteria to be assessed from 33 to 7. Based on the available evidence of the understatement of actual harm deficiencies, we believe that successful implementation of the documentation standard in 2002 and future years is critical to help ensure that deficiencies are cited at the appropriate scope and severity level. CMS’s regional offices were sometimes inconsistent in how they conducted their reviews, raising questions about the validity and fairness of the results. For example: Although the guidelines for the review indicated that the regional offices were to assess the timeliness of complaint investigations based on the state’s prioritization of the complaint, officials from one region told us that they judged timeliness based on their opinion of how the complaint should have been prioritized. Two regional offices acknowledged that they did not use clinicians to review complaint triaging. Officials from two states questioned the credibility of reviews not conducted by clinicians. Although one objective of the reviews was to review some immediate jeopardy complaints in every state, the random samples selected in some states did not yield such complaints. 
In such cases, one region indicated that it specifically selected a few immediate jeopardy complaints outside the sample while another region did not. To eliminate this inconsistency in future years, CMS has instructed the regions to expand their sample to ensure that at least two immediate jeopardy complaints are reviewed in each state. While some regions examined more than the required number of complaints to assess overall timeliness, one region felt that additional reviews were unnecessary. For instance, surveyors reviewing California, which receives thousands of complaints per year, expanded the number of complaints reviewed beyond the minimum number required because they felt that the required random sample of 40 complaints did not provide sufficient information about overall timeliness in the state. To assess overall timeliness, they visited all but 1 of the state’s 17 district offices to review complaints. However, surveyors from another CMS region reviewed only 3 or 4 of the roughly 18 complaints a state received and told us that additional reviews were unnecessary because the state had already failed the timeliness criterion based on the few complaints reviewed. Although the review of 3 or 4 complaints technically met CMS’s sampling requirement, we believe examination of most or all of the relatively few remaining complaints would have provided a more complete picture of the state’s overall timeliness. While CMS has addressed some of the weaknesses in its 2001 state performance review by revising the standards and guidance for the 2002 review, including simplifying the criteria for assessing documentation and requiring regions to assess states’ complaint prioritization efforts separately from the timeliness issue, the performance standards do not yet address certain issues that are important for assessing state performance and that would further strengthen CMS oversight of state survey activities. 
These issues include:

Assessing the predictability of state surveys. Although CMS monitored compliance with its requirement for state survey agencies to initiate at least 10 percent of their standard surveys outside normal working hours to reduce predictability, it did not examine compliance with its 1999 instructions for states to avoid scheduling a home’s survey during the same month each year. As shown in app. V, our analysis of CMS data found that from 10 percent to 31 percent of surveys in 31 states were predictable because they were initiated within 15 days of the 1-year anniversary of the prior survey.

Evaluating states’ compliance with the requirement to refer nursing homes that have a pattern of harming residents for immediate sanctions. CMS officials confirmed that there was no consistent oversight of state agencies’ implementation of this policy. Several CMS regional offices generally did not know, for example, how their states were monitoring homes’ survey history to detect cases that should be referred for immediate sanction. CMS could have used the enforcement database to determine that New York was not adhering to the agency’s immediate sanctions policy. During calendar years 2000 and 2001, New York cited actual harm at a relatively high proportion of its nursing homes but only referred 19 cases for immediate sanction. Over a comparable period, New Jersey, a state with far fewer homes and citations, referred almost three times as many cases.

Developing better measures of the quality of state performance, in addition to process measures. Several CMS regional officials believed that the scope of the state performance standards should address additional areas of performance, including assessing the adequacy of nursing homes’ plans of correction submitted in response to deficiencies and the appropriateness of states’ recommended enforcement remedies. 
In particular, several regions noted that rather than focusing only on the timeliness of complaint investigations, regions should also assess the adequacy of the investigation itself, including whether the complaint should have been substantiated. The introduction of a new CMS complaint tracking database, discussed below, should enable regions to automate the review of complaint timeliness, thereby allowing them to focus more attention on such issues. CMS’s oversight of state survey activities is further hampered by limitations in the data used to develop the 19 periodic reports intended to assist the regions in monitoring state performance and by the regions’ inconsistent use of the reports. For instance, CMS’s current complaint database does not provide key information about the number of complaints each state receives (including facility self-reported incidents) or the time frame in which each complaint is investigated. In addition, officials from one region emphasized to us that information about complaints provided in the reports did not correspond with CMS’s required complaint investigation time frames. The reports identify the number of state on-site complaint investigations that took place in three different time periods—3 days or less, from 4 to 14 days, and 15 days or more; however, required time frames for complaint investigations are 2 days for complaints alleging immediate jeopardy and 10 days for those alleging harm. Additionally, a regional official pointed out that investigations shown in one of the reports as taking place within 3 days do not necessarily represent complaints that the state prioritized as immediate jeopardy. Despite the problems with these data, however, several regional offices indicated that the reports could at least serve as a starting point for discussions with states about their complaint programs and often lead to a better understanding of state complaint activities. 
CMS indicated that the deficiencies in complaint data should be addressed by the new automated complaint tracking system that it is developing for use by all states as part of the redesign of OSCAR. Officials from several regions also told us that the value of some of the 19 periodic reports was unclear, and officials in three regions said they either lacked the staff expertise or the time to use the reports routinely to oversee state activities. For example, officials in one region told us that they used one of the reports about complaints to ask states questions about their prioritization practices. But a different region appeared unaware that the reports showed that two of its states might be outliers in terms of the percentage of complaints they prioritized as actual harm or immediate jeopardy. Additionally, because the periodic reports do not include trend data, many regional offices were unaware of the trends in the percentage of homes cited in their states for actual harm or immediate jeopardy. We believe that such data could be useful to CMS’s regions in identifying significant trends in their states. CMS indicated that it is continuing to make progress in redesigning the OSCAR reporting system. In 1999, we recommended that the agency develop an improved management information system that would help it track the status and history of deficiencies, integrate the results of complaint investigations, and monitor enforcement actions. Another objective of the OSCAR redesign is to make it easier to analyze the data it contains, addressing the problem that generating analytical reports from OSCAR was difficult and most regions lacked the expertise to do so. The redesigned system, called the Quality Improvement and Evaluation System, would also eliminate the need for duplicate data entry, which should reduce the potential for data entry errors to which OSCAR is susceptible. 
CMS has faced some problems in the implementation of the new system, such as inadvertent modifications of survey data results when data are transferred from the old OSCAR database into the new system, but the agency indicated that its target date for completing the redesign is 2005. CMS has taken, or is undertaking, several other efforts to improve federal oversight and survey procedures, including making structural changes to the regional offices to improve coordination, expanding the number of comparative surveys conducted each year, improving the survey methodology, developing clearer guidance for surveyors, and developing additional guidance to states for investigating complaints. As of April 2003, only the effort to restructure the regional offices had been completed. The other efforts critical to reducing the subjectivity evident in the current survey process and the investigation of complaints have been delayed. In December 2002, CMS reduced the number of regional managers in charge of survey activities from 10 (1 per region) to 5, a change intended to provide more management attention to survey matters and to improve accountability, direction, and leadership. Our prior and current work found that regional offices’ policies, practices, and oversight were often inconsistent. For example, in 1999 we reported that regional offices used different criteria for selecting and conducting comparative surveys. The 5 regional managers will be responsible only for survey and certification activities, while in the past many of the 10 were also responsible for managing their regions’ Medicaid programs. In response to our prior recommendations, CMS plans to more than double the number of federal comparative surveys in which federal surveyors resurvey a nursing home within 2 months of the state survey to assess state performance. 
We noted in 1999 that, although insufficient in number, comparative surveys were the most effective technique for assessing state agencies’ abilities to identify serious deficiencies in nursing homes because they constitute an independent evaluation of the state survey. CMS plans to hire a contractor to perform approximately 170 additional comparative surveys per year, bringing the annual total of comparative surveys performed by both CMS surveyors and the contractor to about 330. Although CMS had intended to award a contract and begin surveys by spring 2003, as of July 2003, it was still in the process of identifying qualified contractors. CMS officials stated that using a contractor would provide CMS flexibility because if it suspects that a state or region is having problems with surveys, it can quickly have the contractor conduct several comparative surveys there. Being able to direct the contractor to quickly focus on states or regions where state surveys may be problematic could represent a significant improvement in CMS’s oversight of state survey agencies. CMS’s implementation schedules have slipped for three critical initiatives intended to enhance the consistency and accuracy of state surveys and complaint investigations, delaying the introduction of improved methodologies or guidance until 2003 or 2004. Because surveyors often missed significant care problems due to weaknesses in the survey process, HCFA took some initial steps to strengthen the survey methodology, with the goal of introducing an improved survey process in 2000. In July 1999, the agency introduced quality indicators to help surveyors do a better job of selecting a resident sample, instructed states to increase the sample size in areas of particular concern, and required the use of investigative protocols in certain areas, such as pressure sores and nutrition, to help make the survey process more systematic. 
However, HCFA recognized that additional steps were required to ensure that surveyors thoroughly and systematically identify and assess care problems. To address remaining problems with sampling and the investigative protocols, CMS contracted for the development of a revised survey methodology. The contractor has proposed a two-phase survey process. In the first phase, surveyors would initially identify potential care problems using quality indicators generated off-site prior to the start of the survey and additional, standardized information collected on-site, from a sample of as many as 70 residents. During the second phase, surveyors would conduct an investigation to confirm and document the care deficiencies initially identified. According to CMS officials, this process differs from the current methodology because it would more systematically target potential problems at a home and give surveyors new tools to more adequately document care outcomes and conduct on-site investigations. Use of the new methodology could result in survey findings that more accurately identify the quality of care provided by a nursing home to all of its residents. Initial testing to evaluate the proposed methodology focused primarily on the first phase and was completed in three states during 2002. As of April 2003, a CMS official told us that the agency lacked adequate funding to conduct further testing that more fully incorporates phase two. As a result, it is not clear when changes to survey methodology will be implemented. We continue to believe that redesign of the survey methodology, under way since 1998, is necessary for CMS to fully respond to our past recommendation to improve the ability of surveys to effectively identify the existence and extent of deficiencies. 
While CMS’s goal of not adding time to surveys is an important consideration, it should not take priority over the goal of ensuring that surveys are as effective as possible in identifying the quality of care provided to residents. Recognizing inconsistencies in how the scope and severity of deficiencies are cited across states, in October 2000, HCFA began developing more structured guidance for surveyors, including survey investigative protocols for assessing specific deficiencies. The intent of this initiative is to enable surveyors to better (1) identify specific deficiencies, (2) investigate whether a deficiency is the result of poor care, and (3) document the level of harm resulting from a home’s identified deficient care practices. The areas originally targeted for this initiative included deficiencies related to pressure sores, urinary catheters and incontinence, activities programming, safe food handling, and nutrition. Delays have occurred because CMS is committed to incorporating the work of multiple expert panels and two rounds of public comments for each deficiency. The project has been further delayed because the approach used to identify resident harm shifted during the course of work. The process should proceed more quickly, however, now that CMS has developed its approach. CMS expected to release the first new guidance, addressing pressure sores, in early 2003, but officials were unable to tell us how many of the 190 federal nursing home requirements will ultimately receive new guidance or a specific time line for when this initiative will be completed. As discussed earlier, CMS’s state performance reviews include an assessment of state surveyors’ documentation of the scope and severity of a sample of deficiencies cited, which should provide CMS with an opportunity to assess the effectiveness of the new guidance. 
Finally, despite initiation of a complaint improvement project in 1999, CMS has not yet developed detailed guidance for states to help improve their complaint systems. Effective complaint procedures are critical because complaints offer an opportunity to assess nursing home care between standard surveys, which can be as long as 15 months apart. In 1999, HCFA commissioned a contractor to assess and recommend improvements to state complaint practices. CMS received the contractor’s final report in June 2002, and indicated agreement with the contractor that reforming the complaint system is urgently needed to achieve a more standardized, consistent, and effective process. The study identified serious weaknesses in state complaint processes (see table 5) and made numerous recommendations to CMS for strengthening them. Key recommendations were that CMS increase direction and oversight of states’ complaint processes and establish mechanisms to monitor states’ performance. CMS indicated that it has already taken steps to address these recommendations by initiating annual performance reviews that include evaluating the timeliness of state complaint investigations and the accuracy of states’ complaint triaging decisions, and by developing the new ASPEN complaint tracking system, which should provide more complete data about complaint activities than the current system. 
The contractor also recommended that CMS (1) expand outreach for the initiation of complaints, such as use of billboards or media advertising, (2) enhance complaint intake processes by using professional intake staff, (3) improve investigation and resolution processes by using available data about the home being investigated and establishing uniform definitions and criteria for substantiating complaints, (4) make the process more responsive by conducting timely investigations and allowing the complainant to track the progress of the investigation, and (5) establish a higher priority for complaint investigations in the state survey agency. CMS noted that some of these recommendations are beyond the agency’s purview and will require the support of all stakeholders to accomplish. CMS told us that it plans to issue new guidance to the states in late fiscal year 2003—about 4 years after the complaint improvement project initiative was launched. As we reported in September 2000, continued federal and state attention is required to ensure necessary improvements in the quality of care provided to the nation’s vulnerable nursing home residents. The reported decline in the percentage of homes cited for serious deficiencies that harm residents is consistent with the concerted congressional, federal, and state attention focused on addressing quality-of-care problems. More active and data- driven oversight is increasing CMS’s understanding of the nature and extent of weaknesses in state survey activities. Despite these efforts, however, the proportion of homes reported to have harmed residents is still unacceptably high. It is therefore essential that CMS fully implement key initiatives to improve the rigor and consistency of state survey, complaint investigation, and enforcement processes. The seriousness of the challenge confronting CMS in ensuring consistency in state survey activities is also becoming more apparent. 
Our work, as well as that of CMS, demonstrates the persistence of several long-standing problems and also provides insights on factors that may be contributing to these shortcomings:

state surveyors continue to understate serious deficiencies that caused actual harm or placed residents in immediate jeopardy;

deficiencies are often poorly investigated and documented, making it difficult to determine the appropriate severity category;

states focus considerable effort on reviewing proposed actual harm deficiencies, but many have no quality assurance processes in place to determine if less serious deficiencies are understated or have investigation and documentation problems;

the timing of too many surveys remains predictable, allowing problems to go undetected if a home chooses to conceal deficiencies;

numerous weaknesses persist in many states’ complaint processes, including the lack of consumer toll-free hotlines in many states, confusion over prioritization of complaints, inconsistent complaint investigation procedures, and the failure of most states to investigate all complaints alleging actual harm within 10 days, as required; and

states did not refer a substantial number of homes that had a pattern of harming residents to CMS for immediate sanctions.

Over the past several years, CMS has taken numerous steps to improve its oversight of state survey agencies, but needs to continue its efforts to help better ensure consistent compliance with federal requirements. 
Several areas that require CMS’s ongoing attention include (1) the newly established annual state performance reviews, to ensure that critical elements of the review, such as assessing states’ ability to properly document deficiencies, are successfully implemented, (2) the successful modernization of CMS’s data system by 2005 to support the survey process and provide key information for monitoring state survey activities, (3) the planned expansion of comparative surveys to improve federal oversight of the state survey process, (4) the survey methodology redesign intended to make the survey process more systematic, (5) the development of more structured guidance for surveyors to address inconsistencies in how the scope and severity of deficiencies are cited across states, and (6) the provision of detailed guidance to states to ensure thorough and consistent complaint investigations. Some of these efforts have been under way for several years, and CMS has consistently extended their estimated completion and implementation dates. We believe that effective implementation of planned improvements in each of these six areas is critical to ensuring better quality care for the nation’s 1.7 million nursing home residents. To strengthen the ability of the nursing home survey process to identify and address problems that affect the quality of care, we recommend that the Administrator of CMS finalize the development, testing, and implementation of a more rigorous survey methodology, including guidance for surveyors in documenting deficiencies at the appropriate level of scope and severity. 
To better ensure that state survey and complaint activities adequately address quality-of-care problems, we recommend that the Administrator (1) require states to have a quality assurance process that includes, at a minimum, a review of a sample of survey reports below the level of actual harm (less than G level) to assess the appropriateness of the scope and severity cited and to help reduce instances of understated quality-of-care problems, and (2) finalize the development of guidance to states for their complaint investigation processes and ensure that it addresses key weaknesses, including the prioritization of complaints for investigation, particularly those alleging harm to residents; the handling of facility self-reported incidents; and the use of appropriate complaint investigation practices. To better ensure that states comply with statutory, regulatory, and other CMS nursing home requirements designed to protect resident health and safety, we recommend that the Administrator further refine annual state performance reviews so that they (1) consistently distinguish between systemic problems and less serious issues regarding state performance, (2) analyze trends in the proportion of homes that harm residents, (3) assess state compliance with the immediate sanctions policy for homes with a pattern of harming residents, and (4) analyze the predictability of state surveys. We provided a draft of this report to CMS and the 22 states we contacted during the course of our review. (CMS's comments are reproduced in app. X.) CMS concurred with our findings and recommendations, stating that it already had initiatives under way to improve the effectiveness of the survey process, address the understatement of serious deficiencies, provide better data on state complaint activities, and improve the annual federal performance reviews of state survey activities. 
Although CMS concurred with our recommendations, its comments on intended actions did not fully address our concerns about the status of the initiative to improve the effectiveness of the survey process or the recommendation regarding state quality assurance systems. Eleven of the 22 states also commented on our draft report. CMS and state comments generally covered five areas: survey methodology, state quality assurance systems, definition of actual harm, survey predictability, and resource constraints. In response to our recommendation that the agency finalize the development, testing, and implementation of a more rigorous nursing home survey methodology, under way since 1998, CMS commented that it had already taken steps to improve the effectiveness of the survey process, such as the development of surveyor guidance on a series of clinical issues. However, the agency did not specifically comment on any actions it would take to finalize and implement its new survey methodology, which is broader than the actions CMS described. Our draft report noted that, earlier this year, CMS said it lacked adequate funding for the additional field testing needed to implement the new survey methodology. Through September 2003, CMS will have committed $4.7 million to this effort. While CMS did not address the lack of adequate funding in its comments on our draft report, a CMS official subsequently told us that about $508,000 has now been slated for additional field testing. This amount, however, has not yet been approved. Not funding additional field testing could jeopardize the entire initiative, in which a substantial investment has already been made. We continue to believe that CMS should implement a revised survey methodology to address our 1998 finding that state surveyors often missed significant care problems due to weaknesses in the survey process. 
We recommended that CMS require states to have a quality assurance process that includes, at a minimum, a review of a sample of survey reports below the level of actual harm to help reduce instances of understated quality-of-care problems. CMS commented on the importance of this concept and noted it had already incorporated such reviews into CMS regional offices’ reviews of the state performance standards. However, the agency did not indicate whether it would require states to initiate an ongoing process that would evaluate the appropriateness of the scope and severity of documented deficiencies, as we recommended. While federal oversight is critical, the annual performance reviews conducted by federal surveyors examine only a small, random sample of state survey reports and should not be considered a substitute for appropriate and ongoing state quality assurance mechanisms. In its comments, New York stated that, in April 2003, it had implemented a process consistent with our recommendation and it had already realized positive results. New York is using the results of these reviews to provide surveyor feedback and expects that instances where deficiencies may be understated will decrease. California also commented that it fully supports this recommendation but indicated that a new requirement could not be implemented without additional resources. Officials from five states indicated that resource shortages are a challenge in meeting federal standards for oversight of nursing homes. Alabama commented that there is a relationship among (1) the scheduling of nursing home standard surveys, (2) the number and timing of complaint surveys, (3) the tasks that must be accomplished during each survey, and (4) the resources that are available to state agencies. 
According to Alabama, the funding provided by CMS is insufficient to meet all of the CMS workload demands, and many of the serious problems identified in our draft report were attributable to insufficient funding for state agencies to hire and retain the staff necessary to do the required surveys. For example, Alabama indicated that the inability of some states to meet survey time frames—maintaining a 12-month average between standard surveys and investigating complaints alleging actual harm within 10 days— is almost always the result of states not having enough surveyors to accomplish the required workload. Comments from other states echoed Alabama’s concerns about the adequacy of funding provided by CMS. Arizona said that, in order to hire and retain qualified surveyors, it increased surveyor salaries in 2001. Because CMS did not increase the state’s survey and certification budget to accommodate these increases, the state left surveyor positions unfilled and curtailed training to make up for the funding shortfall. Arizona also observed that CMS’s priorities sometimes conflict, further complicating effective resource use. CMS’s performance standards require states to investigate all complaints alleging immediate jeopardy or actual harm in 2 and 10 days, respectively. For budgeting purposes, however, CMS ranks complaint investigations as a lower priority than annual surveys and instructs states to ensure that annual surveys will be completed before beginning work on complaints. California and Connecticut officials said that the growing volume of complaints in their states, combined with limited resources, is a concern. California officials observed that the growth in the number of complaints, coupled with the lack of significant funding increase from CMS, has made it impossible to meet all federal and state standards. 
They added that they received a 3-percent increase in survey funding from fiscal years 2000 through 2003, but documented the need for a 24-percent increase over this period. As noted in our draft report, the higher priority California attaches to investigating complaints affected survey timeliness—about 12 percent of the state’s homes were not surveyed within the required 15 months. Connecticut indicated that 90 percent of the complaints it receives allege actual harm and require investigation within 10 days, but that with fairly stagnant budget allocations from CMS, its ability to initiate investigations of so many complaints within 10 days was limited. CMS’s fiscal year 2001 state performance review found that Connecticut did not investigate about 30 percent of the sampled actual harm complaints in a timely manner. Although not specifically mentioning resources, New York noted that the increasing volume of complaints was a concern and indicated that any assistance CMS could provide would be welcome. Comments from four states on our analysis of a sample of survey deficiencies from homes with a history of harming residents revealed state confusion about CMS’s definition of actual harm and immediate jeopardy, a situation that contributes to the variability in state deficiency trends shown in table 2. CMS’s written comments did not address our review of these deficiencies; however, during an interview to follow up on state comments, CMS officials told us that they agreed with our determinations of actual harm as detailed in appendix III. Arizona and California agreed that some of the deficiencies we reviewed for nursing homes in their states should have been cited at the level of actual harm. However, their disagreement regarding others stemmed from differing interpretations of CMS guidance, particularly the language on the extent of the consequences to a resident resulting from a deficiency. 
For example, Arizona stated that one of the two deficiencies we reviewed could not be supported at the actual harm level because the injuries from multiple falls—including skin tears and lacerations of the extremities and head requiring suturing—did not compromise the residents’ ability to function at their highest optimal level (table 8, Arizona 3). In these cases, it was documented that nursing home staff had failed to implement plans of care intended to prevent such falls. In contrast, California agreed with us that state surveyors should have cited actual harm for similar injuries resulting from falls—head lacerations and a minimal impaction fracture of the hip—due to the inappropriate use of bed side rails (table 8, California 9). CMS officials noted that the definition of actual harm uses the term “well-being” rather than function because harm can be psychological as well as physical. Moreover, they indicated that whether the consequence was small or large was irrelevant to determining harm. CMS central office officials acknowledged that the language linking actual harm to practices that have “limited consequences” for a resident has created confusion for state surveyors and that this reference will be eliminated in an upcoming revision of the guidance. Regarding preventable stage II pressure sores, California stated that guidance received from CMS’s San Francisco regional office in November 2000 precluded citing actual harm unless the pressure sores had an impact on residents’ ability to function. According to a California official, this and similar guidance on weight loss was the CMS regional office’s reaction to the growing volume of appeals by nursing homes of actual harm citations as well as a reaction to administrative law hearing decisions. Prior to this written guidance, which California received in late 2000, it routinely cited preventable stage II pressure sores as actual harm. 
The guidance noted that small stage II pressure sores seldom cause actual harm because they have the potential to heal relatively quickly and are usually of limited consequence to the resident’s ability to function. We discussed the San Francisco regional office guidance with another regional office as well as with CMS central office officials, who agreed that the San Francisco region’s pressure sore guidance was inconsistent with CMS’s definition of harm, which judges the impact of a deficiency on a resident’s “well-being” rather than functioning. Moreover, central office officials indicated that the regional office’s guidance should have been submitted to CMS’s Policy Clearinghouse for approval. This entity was created in June 2000 to ensure that regional directives to states are consistent with national policy. San Francisco regional office officials indicated that the individual responsible for the guidance provided to California had since left the agency. California also disagreed with our assessment that state surveyors should have cited immediate jeopardy for a resident who repeatedly wandered (eloped) outside the facility near a busy intersection. According to state officials, California’s policy on immediate jeopardy requires the surveyor to witness the incident. A San Francisco regional office official told us that surveyors did not have to witness an elopement to cite immediate jeopardy. An official from a different regional office agreed and noted that repeated elopements suggested the existence of a systemic problem that warranted citation of immediate jeopardy. Although Iowa and Nebraska did not comment specifically on the deficiencies in their surveys that we determined to be actual harm, they did address the definition of harm and the role of surveyor judgment in classifying deficiencies. 
Iowa officials indicated that a more precise definition of harm is needed because of varying emphasis over the last several years on the degree of harm—harm that has a small consequence for the resident or serious harm. Nebraska commented that we may have based our conclusion that two deficiencies in its surveys should have been cited at the actual harm level on insufficient information because citing actual harm is a judgment call that varies among state and federal surveyors based on experience and expertise. As noted in our draft report, we found sufficient evidence in the surveys we reviewed to conclude that some deficiencies should have been cited as actual harm because a deficient practice was identified and linked to documented actual harm. CMS, Arizona, and Iowa commented that nursing home surveys, as currently structured, are inherently predictable because of the statutory requirement to survey nursing homes on average every 12 months with a maximum interval of 15 months between each home’s survey. We agree but believe that survey predictability could be further mitigated by segmenting the surveys into more than one visit, a recommendation we made in 1998 but that CMS has not implemented. Currently, surveys are comprehensive reviews that can last several days and entail examining not only a home’s compliance with resident care standards but also with administrative and housekeeping standards. Dividing the survey into segments performed over several visits, particularly for those homes with a history of serious deficiencies, would increase the presence of surveyors in these homes and provide an opportunity for surveyors to initiate broader reviews when warranted. With a segmented set of inspections, homes would be less able to predict their next scheduled visit and adjust the care they provide in anticipation of such visits. 
CMS also commented that our report captures only the number of days since the prior survey and does not take into account other predictors, for example, the time of day or day of the week. Rather than segmenting standard surveys as we earlier recommended, the agency instructed states to reduce survey predictability by starting at least 10 percent of surveys outside the normal workday—either on weekends, in the early morning, or in the evening. It also instructed states to avoid, if possible, scheduling a home's survey for the same month as its previous standard survey. Though varying the starting time of surveys may be beneficial, this initiative is too limited to meaningfully reduce survey predictability, as evidenced by our finding that 34 percent of current surveys were predictable. Arizona commented that it was unaware of any CMS guidance to avoid scheduling a home's survey for the same month of the year as the home's previous standard survey and indicated that the state will now incorporate the requirement into its scheduling process. Comments from CMS and Arizona stated that the window of time for a survey to be unpredictable was limited and, as a result, little could be done to reduce predictability. CMS's technical comments noted that many states have annual state licensing inspection requirements that would limit the window available to conduct surveys to 9 to 12 months after the prior survey, particularly since most inspections are done in conjunction with the federal survey to maximize available resources. CMS, however, was unable to provide a list of such states. None of the 10 states we subsequently contacted had state licensure inspection requirements that would explain their high levels of survey predictability. Arizona commented that the state's licensing inspections are triggered by facilities applying to renew their licenses 60 to 120 days before their annual license expires. 
Due to budgetary constraints, Arizona conducts both this state and the federal survey at the same time. While not a requirement, the state strives to complete surveys during this 60-120 day period of time. Thus, nursing homes in Arizona may have some level of control over when federal surveys are conducted, particularly when the state begins complying with CMS guidance to avoid scheduling a home’s survey for the same month as its previous survey. As we reported in September 2000, Tennessee also had an annual licensing inspection requirement that contributed to survey predictability, but the state modified its law to permit homes to be surveyed at a maximum interval of 15 months. Since then, the proportion of predictable surveys in Tennessee decreased from about 56 percent to 29 percent. Arizona also stated that surveys had to be conducted within a 45-day window after the 1-year anniversary of the prior survey to be considered unpredictable. Arizona’s comments erroneously assume that a survey cannot take place before the 1-year anniversary of the prior survey. There is no prohibition on resurveying a home prior to the 1-year anniversary of its last survey, and many states do so. In fact, from October 1, 2000 through September 30, 2001, Arizona conducted 23 percent of its surveys before the 1-year anniversary. CMS provided several technical comments that we incorporated as appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Administrator of the Centers for Medicare & Medicaid Services and appropriate congressional committees. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
Please contact me at (202) 512-7118 or Walter Ochinko, Assistant Director at (202) 512-7157 if you or your staffs have any questions. GAO staff acknowledgments are listed in appendix XI. This appendix describes our scope and methodology following the order that findings appear in the report. Nursing home deficiency trends. To identify trends in the proportion of nursing homes cited for actual harm or immediate jeopardy, we analyzed data from CMS’s OSCAR system. We compared standard survey results for three approximately 18-month periods: (1) January 1, 1997, through June 30, 1998, (2) January 1, 1999, through July 10, 2000, and (3) July 11, 2000, through January 31, 2002. Because surveys are to be conducted at least once every 15 months (with a required 12-month state average), it is possible that a facility was surveyed more than once in a time period. To avoid double counting of facilities, we included only the most recent survey of a facility from each of the time periods. The data from the two earliest time periods were included in our September 2000 report. We updated our earlier analysis of surveys conducted from January 1, 1999, through July 10, 2000, because it excluded approximately 300 surveys that had been conducted but not entered into OSCAR at the time we conducted our analysis in July 2000. Sample of state survey reports. To assess the trends in actual harm and immediate jeopardy deficiencies discussed above, we (1) identified 14 states in which the percentage of homes cited for actual harm had declined to below the national average since mid-2000 or was consistently below that average and (2) reviewed 76 survey reports from homes that had G-level or higher quality-of-care deficiencies on prior surveys but whose current survey had quality-of-care deficiencies at the D or E level, suggesting that the homes had improved. All the surveys we reviewed were conducted from July 2000 through April 2002. 
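To illustrate the de-duplication step in the OSCAR trend analysis described above—keeping only each facility's most recent survey within a period to avoid double counting—the logic can be sketched as follows. This is a simplified illustration, not GAO's actual analysis tooling; the record layout and facility identifiers are assumptions made for the example.

```python
from datetime import date

# Hypothetical extract of OSCAR standard-survey records: (facility_id, survey_date).
# Because surveys occur at least every 15 months, a facility can appear twice
# within an approximately 18-month analysis period.
surveys = [
    ("F001", date(2000, 9, 15)),
    ("F001", date(2001, 10, 2)),   # same facility surveyed twice in the period
    ("F002", date(2001, 3, 20)),
]

def latest_survey_per_facility(records):
    """Keep only the most recent survey for each facility so that no
    facility is counted more than once in a given time period."""
    latest = {}
    for facility_id, survey_date in records:
        if facility_id not in latest or survey_date > latest[facility_id]:
            latest[facility_id] = survey_date
    return latest

deduplicated = latest_survey_per_facility(surveys)
# F001 retains only its most recent (2001) survey; F002 keeps its single survey.
```

Any per-facility trend statistics (such as the proportion of homes cited for actual harm) would then be computed over the de-duplicated records.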
Our review focused on four quality-of-care requirements that are the most frequently cited nursing home deficiencies nationwide (see table 6). According to OSCAR data, 99 surveys in the 14 states conducted on or after July 2000 documented a D- or E-level deficiency in at least one of these four quality-of-care requirements. We reviewed all such deficiencies in surveys from 13 states but randomly selected 22 surveys from California, which cited the majority (45) of these deficiencies. In reviewing the surveys, we looked for a description of the resident’s diagnoses, any assessment of special problems, and a description of the care plan and physician orders connected with the deficiency identified. We also looked for a clear statement of the home’s deficient practice and the relationship between the deficiency and the care outcome. Federal comparative surveys. In September 2000, we reported on the results of 157 comparative surveys completed from October 1998 through May 2000. To update our analysis, we asked each CMS region to provide the results of more recent comparative surveys, including data on the corresponding state survey. The regions identified and provided information on the deficiencies identified in 277 comparative surveys that were completed from June 2000 through February 2002. Survey predictability. In order to determine the predictability of nursing home surveys, we analyzed data from CMS’s OSCAR database. We considered surveys to be predictable if (1) homes were surveyed within 15 days of the 1-year anniversary of their prior survey or (2) homes were surveyed within 1 month of the maximum 15-month interval between standard surveys. Consistent with CMS’s interpretation, we used 15.9 months as the maximum allowable interval between surveys. Because homes know the maximum allowable interval between surveys, those whose prior surveys were conducted 14 or 15 months earlier are aware that they are likely to be surveyed soon. Complaints. 
We analyzed the results of CMS’s state performance review for fiscal year 2001 to determine states’ success in investigating both immediate jeopardy complaints and actual harm complaints within time frames required either by statute or by CMS instructions. To better understand the results of state performance as determined by CMS’s review, we interviewed officials from CMS’s 10 regional offices and 16 state survey agencies (see state performance standards below for a description of how these states were chosen). We also reviewed the report submitted to CMS by its contractor, which was intended to assess and recommend ways to strengthen state complaint practices. Finally, to assess the implementation of CMS’s new automated system for tracking information about complaints, we reviewed CMS guidance materials and interviewed CMS officials and state survey agency officials from our 16 sample states. Enforcement. To determine if states had consistently applied the expanded immediate sanction policy, we analyzed state surveys in OSCAR that were conducted before April 9, 2002, and identified homes that met the criteria for referral for immediate sanction. We included surveys conducted prior to the implementation of the expanded immediate sanction policy because actual harm deficiencies identified in such surveys were to be considered by states in recommending a home for immediate sanction beginning in January 2000. To be affected by CMS’s expanded policy, a home with actual harm on two surveys must have an intervening period of compliance between the two surveys. Because OSCAR is not structured to consistently record the date a home with deficiencies returned to compliance, we had to estimate compliance dates using revisit dates as a proxy. We compared the results of our analysis to CMS’s enforcement database to determine if CMS had opened enforcement cases for the homes we identified. 
Our analysis compared the survey date in OSCAR to the survey date in CMS’s enforcement database. We considered any survey date in the enforcement database within 30 days of the OSCAR survey date to be a match. CMS officials reviewed and concurred with our methodology. We then asked CMS to analyze the resulting 1,334 unmatched cases to determine if a referral should have been made. State performance standards. To assess state survey activities as well as federal oversight of state performance, we analyzed the conduct and results of fiscal year 2001 state survey agency performance reviews during which the CMS regional offices determined compliance with seven federal standards; we focused on the five standards related to statutory survey intervals, deficiency documentation, complaint activities, enforcement requirements, and OSCAR data entry. Because some regional office summary reports on the results of their reviews for each state did not provide detailed information about the results, we also obtained and reviewed regions’ worksheets on which the summary reports were based. In addition, we conducted structured interviews with officials from CMS, CMS’s 10 regional offices, and 16 state survey agencies to discuss nursing home deficiency trends, the underlying causes of problems identified during the performance reviews, and state and federal efforts to address these problems. We also discussed these issues with officials from 10 additional states during a governing board meeting of the Association of Health Facility Survey Agencies. We selected the 16 states with the goal of including states that (1) were from diverse geographic areas, (2) had shown either an increase or a decrease in the percentage of homes cited for actual harm, (3) had been contacted in our prior work, and (4) represented a mixture of results from federal performance reviews of state survey activities. 
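The survey-date matching rule described above—treating an enforcement-database survey date within 30 days of the OSCAR survey date as a match—can be sketched as below. This is an illustrative restatement of the rule as described in the text, not the actual matching program.

```python
from datetime import date, timedelta

MATCH_WINDOW = timedelta(days=30)

def is_match(oscar_date, enforcement_date):
    """A survey date in the enforcement database counts as a match if it
    falls within 30 days of the OSCAR survey date, in either direction."""
    return abs(oscar_date - enforcement_date) <= MATCH_WINDOW

# Example: dates 24 days apart match; dates 61 days apart do not.
close_pair = is_match(date(2001, 6, 1), date(2001, 6, 25))
distant_pair = is_match(date(2001, 6, 1), date(2001, 8, 1))
```

Surveys with no enforcement-database date inside the window would fall into the unmatched set that CMS was then asked to analyze for missed referrals.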
We also obtained data from 42 state survey agencies on surveyor experience, vacancies, and related staffing issues. Nationwide, the proportion of nursing homes cited for actual harm or immediate jeopardy during state standard surveys declined from 29 percent in mid-2000 to 20 percent in January 2002. From July 2000 through January 2002, 40 states cited a smaller percentage of homes with such serious deficiencies while only 9 states and the District of Columbia cited a larger proportion of homes with such deficiencies. In contrast, from early 1997 through mid-2000, the percentage of homes cited for such serious deficiencies was either relatively stable or increased in 31 states. To identify these trends, we analyzed data from CMS’s OSCAR system. We compared results for three approximately 18-month periods: (1) January 1, 1997, through June 30, 1998, (2) January 1, 1999, through July 10, 2000, and (3) July 11, 2000, through January 31, 2002 (see table 7). Because surveys are to be conducted at least once every 15 months (with a required 12- month state average), it is possible that a facility was surveyed more than once in a time period. To avoid double counting of facilities, we included only the most recent survey from each of the time periods. Some of the data in table 7 were included in our September 2000 report. However, we updated our analysis of surveys conducted from January 1, 1999, through July 10, 2000, because it excluded approximately 300 surveys that had been conducted but not entered into OSCAR at the time we conducted our analysis in July 2000. Our analysis of a sample of 76 nursing home survey reports demonstrated a substantial understatement of quality-of-care problems. Our sample was selected from 14 states in which the percentage of homes cited for actual harm had declined to below the national average since mid-2000 or was consistently below that average. 
We identified survey reports in these states from homes that had G-level or higher quality-of-care deficiencies (see table 1) on prior surveys but whose current survey had quality-of-care deficiencies at the D or E level, suggesting that the homes had improved. All the surveys we reviewed were conducted from July 2000 through April 2002. Our review focused on four quality-of-care requirements that are the most frequently cited nursing home deficiencies nationwide (see table 6). In our judgment, 30 of the 76 surveys (39 percent) from 9 of the 14 states had one or more deficiencies that documented actual harm to residents— G-level deficiencies—and 1 survey contained a deficiency that could have been cited at the immediate jeopardy level. While state surveyors classified these deficiencies as less severe, we believe that the survey reports document that poor care provided to and injuries sustained by these residents constituted at least actual harm. Table 8 provides abstracts of the 39 deficiencies that understated quality problems. Table 9 summarizes state survey agencies’ responses to our July 2002 questions about nursing home surveyor experience, vacancies, hiring freezes, competitiveness of salaries, and minimum required experience. Our analysis found that 34 percent of current nursing home surveys were predictable, allowing nursing homes to conceal deficiencies if they choose to do so. In order to determine the predictability of nursing home surveys, we analyzed data from CMS’s OSCAR database (see table 10). We considered surveys to be predictable if (1) homes were surveyed within 15 days of the 1-year anniversary of their prior survey or (2) homes were surveyed within 1 month of the maximum 15-month interval between standard surveys. Consistent with CMS’s interpretation, we used 15.9 months as the maximum allowable interval between surveys. 
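The two-part predictability test described above can be sketched as follows. This is a simplified illustration under stated assumptions: intervals are measured in days since the prior survey, a month is approximated by its average length, and the 15.9-month maximum is the interval consistent with CMS's interpretation cited in the text.

```python
DAYS_PER_MONTH = 30.44          # average month length, an approximation
MAX_INTERVAL_MONTHS = 15.9      # maximum allowable interval, per CMS's interpretation

def is_predictable(days_since_prior_survey):
    """A survey is treated as predictable if it occurs (1) within 15 days
    of the 1-year anniversary of the prior survey, or (2) within 1 month
    of the maximum allowable interval between standard surveys."""
    near_anniversary = abs(days_since_prior_survey - 365) <= 15
    interval_months = days_since_prior_survey / DAYS_PER_MONTH
    near_maximum = (MAX_INTERVAL_MONTHS - 1) <= interval_months <= MAX_INTERVAL_MONTHS
    return near_anniversary or near_maximum

# Examples: 370 days is near the 1-year anniversary; 460 days (about 15.1
# months) is near the maximum interval; 400 days falls in neither window.
```

A home whose interval falls in either window can anticipate the timing of its next survey, which is what allows deficiencies to be concealed in advance.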
Because homes know the maximum allowable interval between surveys, those whose prior surveys were conducted 14 or 15 months earlier are aware that they are likely to be surveyed soon. From January 2000 through March 2002, states referred 4,310 cases to CMS under its expanded immediate sanctions policy when nursing homes were found to have a pattern of harming residents. Because some homes had more than one sanction or may have had multiple referrals for sanctions, 4,860 sanctions were implemented (see table 11). Table 12 summarizes the amounts of federal civil money penalties (CMP) implemented against nursing homes referred for immediate sanction. Although these monetary sanctions were implemented, CMS’s enforcement database does not track collections. In addition, states may have imposed other sanctions under their own licensure authority, such as state monetary sanctions, in addition to or in lieu of federal sanctions. Such state sanctions are not recorded in CMS’s enforcement database. State survey agencies did not refer to CMS for immediate sanction a substantial number of nursing homes found to have a pattern of harming residents. Most states failed to refer at least some cases and a few states did not refer a significant number of cases. While seven states appropriately referred all cases, the number of cases that should have been but were not referred ranged from 1 to 169. Four states accounted for about 55 percent of cases that should have been referred. Table 13 shows the number of cases that states should have but did not refer for immediate sanction (711) as well as the number of cases that states appropriately referred (4,310) from January 2000 through March 2002. Table 14 summarizes HCFA’s state performance standards for fiscal year 2001, describes the source of the information CMS used to assess compliance with each standard, and identifies the criteria the agency used to determine whether states met or did not meet each standard. 
Table 15 summarizes the results of CMS’s fiscal year 2001 state performance review for each of the five standards we analyzed. We focused on five of CMS’s seven performance standards: statutory survey intervals, the supportability of survey findings, enforcement requirements, the adequacy of complaint activities, and OSCAR data entry. Because several standards included multiple requirements, the table shows the results of each of these specific requirements separately. The following staff made important contributions to this work: Jack Brennan, Patricia A. Jones, Dan Lee, Dean Mohs, and Peter Schmidt. Nursing Homes: Public Reporting of Quality Indicators Has Merit, but National Implementation Is Premature. GAO-03-187. Washington, D.C.: October 31, 2002. Nursing Homes: Quality of Care More Related to Staffing than Spending. GAO-02-431R. Washington, D.C.: June 13, 2002. Nursing Homes: More Can Be Done to Protect Residents from Abuse. GAO-02-312. Washington, D.C.: March 1, 2002. Nursing Homes: Federal Efforts to Monitor Resident Assessment Data Should Complement State Activities. GAO-02-279. Washington, D.C.: February 15, 2002. Nursing Homes: Success of Quality Initiatives Requires Sustained Federal and State Commitment. GAO/T-HEHS-00-209. Washington, D.C.: September 28, 2000. Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives. GAO/HEHS-00-197. Washington, D.C.: September 28, 2000. Nursing Home Care: Enhanced HCFA Oversight of State Programs Would Better Ensure Quality. GAO/HEHS-00-6. Washington, D.C.: November 4, 1999. Nursing Homes: HCFA Should Strengthen Its Oversight of State Agencies to Better Ensure Quality of Care. GAO/T-HEHS-00-27. Washington, D.C.: November 4, 1999. Nursing Home Oversight: Industry Examples Do Not Demonstrate That Regulatory Actions Were Unreasonable. GAO/HEHS-99-154R. Washington, D.C.: August 13, 1999. 
Nursing Homes: HCFA Initiatives to Improve Care Are Under Way but Will Require Continued Commitment. GAO/T-HEHS-99-155. Washington, D.C.: June 30, 1999. Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit. GAO/HEHS-99-157. Washington, D.C.: June 30, 1999. Nursing Homes: Complaint Investigation Processes in Maryland. GAO/T-HEHS-99-146. Washington, D.C.: June 15, 1999. Nursing Homes: Complaint Investigation Processes Often Inadequate to Protect Residents. GAO/HEHS-99-80. Washington, D.C.: March 22, 1999. Nursing Homes: Stronger Complaint and Enforcement Practices Needed to Better Ensure Adequate Care. GAO/T-HEHS-99-89. Washington, D.C.: March 22, 1999. Nursing Homes: Additional Steps Needed to Strengthen Enforcement of Federal Quality Standards. GAO/HEHS-99-46. Washington, D.C.: March 18, 1999. California Nursing Homes: Federal and State Oversight Inadequate to Protect Residents in Homes with Serious Care Problems. GAO/T-HEHS- 98-219. Washington, D.C.: July 28, 1998. California Nursing Homes: Care Problems Persist Despite Federal and State Oversight. GAO/HEHS-98-202. Washington, D.C.: July 27, 1998.
Since July 1998, GAO has reported numerous times on nursing home quality-of-care issues and identified significant weaknesses in federal and state oversight. GAO was asked to assess the extent of progress made in improving the quality of care provided by nursing homes to vulnerable elderly and disabled individuals, including (1) trends in measured nursing home quality, (2) state responses to previously identified weaknesses in their survey, complaint, and enforcement activities, and (3) the status of oversight and quality improvement efforts by the Centers for Medicare & Medicaid Services (CMS). The proportion of nursing homes with serious quality problems remains unacceptably high, despite a decline in the incidence of such reported problems. Actual harm or more serious deficiencies were cited for 20 percent of nursing homes (about 3,500) during an 18-month period ending January 2002, compared with 29 percent in an earlier period. Fewer discrepancies between federal and state surveys of the same homes suggest that state surveyors are doing a better job of documenting serious deficiencies and that the decline in serious quality problems is potentially real. Despite these improvements, the continuing prevalence of actual harm deficiencies, and their understatement by state surveyors, is disturbing. For example, 39 percent of 76 state surveys from homes with a history of quality-of-care problems--but whose current survey found no actual harm deficiencies--had documented problems that should have been classified as actual harm or higher, such as serious, avoidable pressure sores. Weaknesses persist in state survey, complaint, and enforcement activities. According to CMS and states, several factors contribute to the understatement of serious quality problems, including poor investigation and documentation of deficiencies, limited quality assurance systems, and a large number of inexperienced surveyors in some states. 
In addition, GAO found that about one-third of the most recent state surveys nationwide remained predictable in their timing, allowing homes to conceal problems if they chose to do so. Considerable state variation remains regarding the ease of filing a complaint, the appropriateness of the investigation priorities, and the timeliness of investigations. Some states attributed timeliness problems to inadequate staff and an increase in the number of complaints. Although the agency strengthened enforcement policy by requiring states to refer for immediate sanction homes that had repeatedly harmed residents, GAO found that states failed to refer a substantial number of such homes, significantly undermining the policy's intended deterrent effect. CMS oversight of state survey activities has improved but requires continued attention to help ensure compliance with federal requirements. While CMS strengthened oversight by initiating annual state performance reviews, officials acknowledged that the reviews' effectiveness could be improved. For the initial fiscal year 2001 review, officials said they lacked the capability to systematically distinguish between minor lapses and more serious problems that required intervention. CMS oversight is also hampered by continuing database limitations, the inability of some CMS regions to use available data to monitor state activities, and inadequate oversight in areas such as survey predictability and state referral of homes for enforcement. Three key CMS initiatives have been significantly delayed--strengthening the survey methodology, improving surveyor guidance for determining the scope and severity of deficiencies, and producing greater standardization in state complaint processes. These initiatives are critical to reducing the subjectivity evident in current state survey and complaint activities.
The private sector, driven by today’s globally competitive business environment, is faced with the challenge of maintaining and improving quality service at lower costs. As a result, many firms have radically changed, or reengineered, their ways of doing business to meet customer needs. Since the Department of Defense’s (DOD) environment is also changing, it needs to do the same. With the end of the Cold War, DOD’s logistics system must now support a smaller, highly mobile, high-technology force. Also, due to the pressures of budgetary limits, DOD must seek ways to make logistics processes as efficient as possible. To provide reparable parts for its aircraft, the Air Force uses an extensive logistics system that was based on management processes, procedures, and concepts largely developed decades ago. As of September 1994, the Air Force had invested $33 billion in reparable parts for its fleet of more than 6,800 aircraft. Reparable parts are items that can be fixed and used again, such as hydraulic pumps, navigational computers, landing gear, and wing sections. The Air Force’s logistics system, often referred to as a logistics pipeline, consists of a number of activities, including the purchase, storage, distribution, and repair of parts. The Air Force’s reparable parts pipeline primarily exists to ensure that aircraft stationed around the world at Air Force installations can get the parts they need to keep them operational. It also exists to support aircraft overhaul activities, when aircraft are periodically taken out of service for structural repairs and parts replacements. The Air Force Materiel Command (AFMC) is the organization that has primary responsibility for carrying out pipeline operations. Its tasks include determining how much inventory the Air Force needs to support its fleet, purchasing parts when necessary, and operating the facilities where major parts and aircraft repair are done. 
To carry out many of these tasks, AFMC has five air logistics centers (ALC) that are located in different regions throughout the United States. Each center is responsible for managing a portion of the reparable parts inventory, repairing certain parts, and overhauling specific types of aircraft. For fiscal year 1996, the Air Force estimates it will cost about $4.6 billion for maintenance of equipment and aircraft at the depot level. Other organizations also play a role in pipeline operations, including Air Force bases around the world, where Air Force aircraft are stationed. Although base maintenance personnel handle minor repairs, they send parts and aircraft to the ALCs for the heavier, more involved repairs. The bases, in turn, order replacement parts through the ALCs, where the bulk of Air Force inventory is stored. Another of these organizations is the Defense Logistics Agency (DLA), which handles the warehousing and distribution operations at each of the five ALCs. In general, new and repaired parts are stored at each center in DLA warehouses until they are needed. When an order is placed for a part, DLA retrieves the part from warehouse shelves and ships it accordingly. DLA also receives the broken items being shipped from the bases and stores them until the ALC repair shops are ready to fix them. Figure 1.1 shows how the Air Force’s inventory was distributed among Air Force bases and the ALCs (including DLA warehouses) as of September 1994. It also shows the amount of inventory in transit between the various locations. DLA plays another important role in pipeline operations; it provides expendable parts needed by the various Air Force repair activities. Expendable parts—also known as consumables—include items such as nuts, bolts, and rivets that are used extensively to fix reparable parts and aircraft. If these items are not readily available, repair operations can stall and lead to large quantities of unrepaired inventory. 
We have issued a series of reports on private sector practices that could be applied to DOD’s expendable inventories. Each report recommended new techniques that would minimize DLA’s role in storing and distributing expendable inventory. Although not as large as the Air Force, commercial airlines’ operations resemble the Air Force’s in several ways. First, airlines operate out of a number of different airports, and they must provide the aircraft at these locations with the parts they need. Second, airlines must periodically overhaul their aircraft and ensure that repair activities get the necessary parts. Third, the reparable parts pipeline that exists to fulfill these needs involves the purchase, storage, distribution, and repair of parts. In addition, for both the Air Force and commercial airlines, time plays a crucial role in the reparable parts pipeline. The amount of time involved in the various pipeline activities directly affects the responsiveness of logistics operations. For example, the longer it takes to deliver parts to a mechanic, the longer it will be before the aircraft can be repaired and ready for takeoff. Time also has a significant impact on cost. For example, the longer it takes to repair a part, the more inventory an organization must carry to ensure coverage while that part is out of service. Condensing pipeline times, therefore, simultaneously improves responsiveness and drives down costs. Complexity also plays an important role; it adds to costly overhead and pipeline time. For example, if an organization holds multiple layers of inventory at different locations, it must provide the space, equipment, and personnel to accommodate this inventory at each location, all of which contribute to overhead costs. Moreover, if a part must filter through each of these levels before finally reaching the end user, such as a mechanic, each stop along the way adds to pipeline time. 
As part of our continuing effort to help improve DOD’s inventory management practices, the Ranking Minority Member, Subcommittee on Oversight of Government Management and the District of Columbia, Senate Committee on Governmental Affairs, requested that we compare the Air Force’s management of its $33 billion reparable parts inventory with the operations of leading-edge private sector firms. This report focuses on (1) best management practices used in the commercial airline industry to streamline logistics operations and improve customer service, (2) Air Force reengineering efforts to improve the responsiveness of its logistics system and reduce costs, and (3) barriers that may stop the Air Force from achieving the full benefits of its reengineering efforts. To obtain DOD’s overall perspective on the Air Force’s logistics system and the potential application of private sector practices to its operations, we interviewed officials at the Office of the Under Secretary of Defense for Logistics and Air Force Headquarters, Washington, D.C., and DLA Headquarters, Alexandria, Virginia. We also discussed specific Air Force logistics policies and operations and reviewed inventory records at AFMC, Dayton, Ohio. To examine Air Force repair facilities, other logistics operations, and the new logistics practices being tested in the Air Force, we visited the Sacramento ALC, McClellan AFB, California; San Antonio ALC, Kelly AFB, Texas; Oklahoma City ALC, Tinker AFB, Oklahoma; and Dyess AFB, Texas. At these locations, we discussed maintenance and repair activities and processes, inventory management practices, “Lean Logistics” and reengineering program initiatives, and the potential application of additional private sector practices. We also contacted officials at the Warner Robins and Ogden ALCs to discuss and document the new business practices being tested and planned at those locations. 
Except where noted, our analysis reflects inventory valued at the last acquisition cost, as of September 1994. As highlighted in this report, the accuracy of Air Force inventory information is questionable. We did not test or otherwise validate the Air Force inventory data. During this review, we selected and physically examined a sample of items from the Air Force inventory that we believe highlighted the effect of the current and past DOD inventory management practices. This judgmental sample was drawn from E-3 and C-135 unique parts. Because we selected these items based on high dollar value, high levels of inventory on hand, and/or low demand rates, the results of our sample analysis cannot be projected to the total Air Force inventory. To identify best management practices being used by the private sector, we reviewed over 200 articles from various management and distribution publications, identified companies that were highlighted as developing innovative management practices, and visited the following organizations in the airline industry: American Airlines Maintenance Center, Tulsa, Oklahoma; British Airways Engineering, Heathrow Airport, United Kingdom; British Airways Avionics Engineering, Llantrissant, South Wales, United Kingdom; British Airways Maintenance Cardiff, South Wales, United Kingdom; United Airlines, San Francisco, California; United Airlines Maintenance Center, Indianapolis, Indiana; Boeing Commercial Airplane Group, Seattle, Washington; Federal Express, Memphis, Tennessee; and Tri-Star Aerospace Corporation, Deerfield Beach, Florida. At each company, we discussed and examined documentation related to the company’s reengineering efforts associated with management, employees, information technology, maintenance and repair processes, and facilities. 
We also contacted Southwest Airlines to obtain information on its maintenance and material management operations and visited the Northrop-Grumman Corporation aircraft production facility in Stuart, Florida, to examine its integrated supplier operations. To obtain additional information on supplier partnerships and implementation strategies, we participated in an International Quality and Productivity Center symposium on supplier partnerships in Nashville, Tennessee. Representatives from John Deere Waterloo Works; Bethlehem Steel; Federal Express; BP Exploration (Alaska), Inc.; E.I. DuPont; Salem Tools; Volvo GM-Heavy Trucks; Berry Bearing Company; The Torrington Company; Procard, Inc.; Lone Star Gas Company; Coors Brewing Company; Texas Instruments, Inc.; Allied Signal; Oryx Energy Company; Timken; Sun Microsystem, Inc.; Dixie Industrial Supply; Darter, Inc.; Mighty Mill Supply, Inc.; Alloy Sling Chain Industries; Columbia Pipe and Supply Company; Strong Tool Company, Inc.; Id One, Inc.; and Magid Glove and Safety Manufacturing Company, discussed their supplier partnership concepts, implementation strategies, and results. To gain a better understanding of how companies are applying integrated approaches to their logistics operations, we attended an integrated supply chain round table, hosted by Procter and Gamble. Attending this round table were representatives from Chrysler Corporation, Digital Equipment Corporation, E.I. Dupont Corporation, Levi Strauss, Massachusetts Institute of Technology, Siemens Corporation, 3M Corporation, and Xerox Corporation. To determine the ongoing problems of the current Air Force logistics system, we reviewed related reports issued since 1990 by us, the Air Force Audit Agency, and Air Force Logistics Management Agency. We conducted our review from August 1993 to August 1995 in accordance with generally accepted government auditing standards. 
Commercial airlines have cut costs and improved customer service by streamlining their logistics operations. The most successful improvements include using highly accurate information systems to track and control inventory, employing various methods to speed the flow of parts through the pipeline, shifting certain inventory management tasks to suppliers, and letting third parties handle parts repair and other functions. One of the airlines we studied, British Airways, has substantially reengineered its logistics operations over the last 14 years. These improvements have helped transform British Airways from a financially troubled, state-owned airline into a successful private sector enterprise. British Airways today is considered among the most profitable airlines in the world and has posted profits every year since 1983. British Airways has approached the process of change as a long-term effort that requires a steady vision and a focus on continual improvement. Although the airline has reaped significant gains from improvements to date, it continues to reexamine operations and is making continuous improvements to its logistics system. British Airways has used an integrated approach to reengineer its logistics system. It laid out a clear corporate strategy, determined how logistics operations fit within that strategy, and tied organizationwide improvements directly to those overarching goals. With this approach, the various activities encompassed by the logistics pipeline were viewed as a series of interrelated processes rather than isolated functional areas. For example, when British Airways began changing the way parts were purchased from suppliers, it considered how those changes would affect mechanics in repair workshops. British Airways takes a significantly shorter time than the Air Force to move parts through the logistics pipeline. 
Figure 2.1 compares British Airways’ condensed pipeline times with the Air Force’s current process by showing how long it takes a landing gear component to move through each organization’s system. British Airways officials described how an integrated approach could lead to a continuous cycle of improvement. For example, culture changes, improved data accuracy, and more efficient processes all lead to a reduction in inventories and complexity of operations. These reductions, in turn, improve an organization’s ability to maintain accurate data, and they stimulate continued change in culture and processes, both of which fuel further reductions in inventory and complexity. Despite this integrated approach, British Airways’ transformation did not follow a precise plan or occur in a rigid sequence of events. Rather, according to one manager, airline officials took the position that doing nothing was the worst option. After setting overall goals, airline officials gave managers and employees the flexibility to continually test new ideas to meet those goals. The five general areas in which British Airways has reengineered its practices are corporate focus and culture, information technologies, material management, repair processes, and facilities. These efforts are summarized in table 2.1 and are discussed briefly after the table and in more detail in appendix I. British Airways officials said changing the corporate mind-set was the single most important aspect of change, as well as the most difficult. Before reforms got underway in 1981, British Airways was an inefficient, over-staffed government organization on the brink of bankruptcy. By 1987, when privatization occurred, British Airways had substantially changed the culture that gave rise to these problems. 
Converting this culture has entailed appointing new top management from private industry to bring a better business focus to the organization and serve as champions of change; undertaking an initial round of drastic cost cuts, which included a 35-percent reduction in the workforce to eliminate redundant and unnecessary positions; adopting a new corporate focus and strategy in which improving customer service became the driving force behind all improvements; setting new performance measures that reflected customer service goals and corporate financial targets; instituting ongoing training and education programs to familiarize managers and employees with the new corporate philosophy; adopting total quality management principles to promote continual improvement; replacing managers who were unwilling or unable to adapt to the new culture; and negotiating agreements with employee unions to allow for a more flexible workforce. British Airways officials said the airline could not have successfully reengineered its practices without having the right technological tools to plan, control, and measure operations. As a result, the airline developed three key systems, the most important of which was an inventory tracking system that provides real-time, highly accurate visibility of parts and processes. The three systems have enabled managers and workers to know what parts are on hand, where they are, what condition they are in, when they will be needed, and how well operations are meeting corporate goals. The airline did not delay initiatives to streamline specific processes until changes in corporate culture and upgrades in data systems had been made; it began reexamining its processes concurrently. Two of the areas targeted were the way parts flow in from suppliers as well as how they are stored and distributed internally. 
Initiatives to streamline these areas have included shifting from in-house personnel to a third-party logistics company the task of arranging, tracking, and ensuring delivery of parts from its primarily North American suppliers and to third-party repair vendors; reducing the number of suppliers from 6,000 to 1,800 and working toward more cooperative relationships with the remaining suppliers; working with key expendable parts suppliers to establish more than 30 local distribution centers near British Airways’ main repair depot, such as the one shown in figure 2.2, to provide 24-hour delivery of such parts; establishing an integrated supplier program in which a key expendable parts vendor has taken on responsibility for monitoring parts usage and determining when to replenish inventory levels; consolidating internal stocking points into strategic locations to reduce inventory layers and improve responsiveness to end users; and installing automated storage, retrieval, and delivery systems to help ensure quick delivery of parts to end users. British Airways also targeted its component repair and aircraft overhaul operations for change because it wanted to speed up the repair process. It has converted a number of workshops to a “cellular” arrangement, which involves bringing the resources needed to repair an item or range of items into one location, or “cell” (see fig. 2.3). These resources include not only the mechanics and the equipment directly involved in the repairs, but also support personnel and inventory. In the past, all of these resources may have been scattered among several different sites. The cellular approach has reduced repair times by simplifying the flow of parts through repair workshops and ensuring that mechanics have the support they need to complete work quickly. While reengineering its processes, British Airways decided to renovate existing structures or build entirely new facilities to accommodate the new practices. 
Converting to cellular operations, for example, required moving widely scattered workshops under one roof and providing additional space for inventory and support staff. The renovations occurred primarily at British Airways’ main repair depot at London’s Heathrow Airport. Two new facilities were constructed in South Wales to house avionics component repair and Boeing 747 aircraft overhaul activities. British Airways was able to implement the most aggressive changes through the new facilities, called “green field sites” (see fig. 2.4). British Airways, which undertook this new construction after determining that it needed additional capacity, used the new facilities as an opportunity to start with a clean slate. It was able to fully implement state-of-the-art practices in workforce management philosophies, information systems, material management, and repair processes without being hindered by preexisting conditions. For example, one of the most valuable aspects of the green field sites has been British Airways’ ability to establish an entirely new corporate culture. Most employees are new hires, and all had to pass through a rigorous screening process to ensure that they possessed the skills and personal characteristics conducive to the flexible, team-oriented environment envisioned. British Airways’ initiatives have helped improve the responsiveness of logistics operations and reduced associated costs. Table 2.2 shows key performance measures that illustrate the result of British Airways’ efforts. Other airlines have pursued improvements similar to the steps taken by British Airways and have likewise seen dramatic results. For example, United Airlines adopted cellular repair in its engine blade overhaul workshop. As a result, United Airlines has reduced repair time by 50 to 60 percent and decreased work-in-process inventory by 60 percent. Table 2.3 highlights examples of some of the approaches other companies have used. 
Southwest Airlines differs from other airlines; it contracts out almost all component repair and aircraft overhaul. In selecting repair vendors, Southwest emphasizes the quality of repairs because fewer breakdowns enable it to carry less inventory and keep repair costs down. Southwest also emphasizes the speed of repairs. It stipulates specific repair turnaround times, and it applies penalties whenever these times are exceeded. Manufacturers, suppliers, and third-party logistics providers are also playing a role in streamlining operations and improving the effectiveness of logistics activities. In many cases, these vendors enter partnership-type arrangements with customers that involve longer term relationships and more open sharing of information. The following are examples of vendors that are helping companies better meet logistics needs. Boeing, one of the world’s leading aircraft manufacturers, has adopted a policy in which it promises next-day shipment for all standard part orders unless the customer specifies otherwise. Through its main distribution center in Seattle, Washington, and a network of smaller distribution centers worldwide, Boeing is providing quick order-to-delivery times and making it possible for customers to move from just-in-case toward just-in-time stocking policies. Tri-Star, a distributor of aerospace hardware and fittings, offers an integrated supplier program in which it works closely with customers to manage expendable parts inventories. Its services, which can be tailored to customer requirements, include placing a Tri-Star representative in customer facilities to monitor inventory bins at end-user locations, place orders, manage receipts, and restock bins. Tri-Star also maintains data on usage, determines what to order and when, and provides replenishment on a just-in-time basis. 
The integrated supplier programs entail other services as well, such as 24-hour order-to-delivery times, quality inspection, parts kits, establishment of electronic data interchange links and inventory bar coding, and vendor selection management. Tri-Star operates integrated supplier programs with nine aerospace companies, including British Airways, the first airline to enter such an arrangement with Tri-Star, and United Airlines, a recent addition. Table 2.4 shows the types of services, reductions, and improvements achieved by Tri-Star for some of its customers (designated as A through E) under the integrated supplier program. FedEx Logistics Services (FLS), a division of express delivery pioneer Federal Express, enables companies to shed certain logistics functions while boosting their capabilities to respond to operational or customer needs. Among its services is PartsBank, in which FLS stores a company’s spare parts at FLS warehouses; takes orders; and retrieves, packs, and ships needed parts. Once a replacement part is received, the customer can place the broken item in the package, and Federal Express will pick up the item and deliver it to the source of repair within 48 hours. FLS provides coverage 24 hours a day, 365 days a year. It also maintains the data associated with these activities and can provide real-time visibility of assets in the warehouse or in transit. In addition to PartsBank, FLS will develop customized services, which involves examining a client’s distribution practices and finding ways to eliminate wasteful steps. In recognition of increasing budgetary pressures, the changing global threat, and the need for radical improvements to its logistics system, the Air Force has begun a reengineering program aimed at redesigning its logistics operations. This program, called Lean Logistics, is testing many of the same leading-edge concepts found in the private sector that have worked successfully in reducing cost and improving service. 
The Air Force, however, could expand and improve Lean Logistics, where feasible, by including closer “partnerships” with suppliers and third-party logistics services, testing the cellular concept in the repair process, and modifying its facilities. Incorporating some of these practices will require the collaboration of DLA and other DOD components. Also, to adopt these concepts Air Force-wide, the Air Force must improve its information system capabilities. Certain issues must be resolved before the Air Force achieves a fully reengineered logistics system that substantially reduces cost and improves service. For example, (1) the basic DOD culture must become receptive to radical new concepts of operations, (2) the traditional role of DLA as a supplier of expendable parts and as a storage and distribution service will be significantly altered, and (3) improvements to outdated and unreliable inventory data systems require management actions and funding decisions that must be made outside the responsibility of both Lean Logistics managers and the entire Air Force. The current Air Force logistics system is slow and cumbersome. Under the current process, the Air Force can spend several months or even years contracting for an item or its piece parts and having it delivered, or several months repairing parts and then distributing them to the end user. The complexity of the repair and distribution process creates many different stopping points and layers of inventory as parts move through the system. Parts can accumulate at each step in the process, which increases the total number of parts in the pipeline. The Air Force has developed both a three-level and a two-level maintenance concept to repair component parts. Under the three-level concept (organizational, intermediate, and depot), a broken part must pass through a number of base-level and depot-level steps in the pipeline (see fig. 3.1). 
After a broken part is removed from the aircraft by a mechanic, it is routed through the base repair process. If the part cannot be repaired at the base, it is sent to an ALC and enters the depot repair system. After it is repaired, the part is either sent back to the base or returned to the DLA warehouse, where it is stored as serviceable inventory. When DLA receives a request for a part, it ships the part to the base, where it is stored until needed for installation on an aircraft. Currently, the Air Force estimates that this repair cycle takes an average of 63 days to complete. This estimate, however, is largely based on engineering estimates that do not provide an accurate measure of repair cycle time. The actual repair time may be significantly longer because the Air Force does not include in its estimate the time a part sits in the repair shop or in storage awaiting repair. Under the two-level maintenance concept (organizational and depot), items that were previously repaired at the intermediate base maintenance level will be repaired at the depot level, thus significantly reducing the logistics pipeline, inventory levels, and maintenance personnel and equipment at the base level. In part because of the length of its pipeline, the Air Force has invested $33 billion in reparable aircraft parts and $3.7 billion in expendable parts, totaling $36.7 billion as of September 1994. The Air Force estimates that $20.4 billion of its total inventory is needed to support daily operations and war reserves. The Air Force allocates the remaining 44 percent to other types of reserves to ensure that it will not run out of parts if they are needed. 
The reserve inventory, valued at $16.3 billion, consists of the following categories: $1.7 billion for safety stocks, which are stocks purchased to ensure the Air Force will not run out of routinely needed parts; $2.8 billion for numeric stockage objective items, which are parts that are not routinely needed but are considered critical to keep an aircraft in operational status, so they are purchased and stored just in case an item fails; and $11.8 billion for items considered in “long supply,” which is a term denoting that more stock is on hand than what is needed to meet current demands, safety, and numeric stockage objective levels, but this stock is not currently being considered for disposal. Figure 3.2 details the Air Force’s allocation of its inventory to daily operations, war reserves, and other categories of stock. Air Force officials have said the Air Force can no longer continue its current logistics practices if it is to effectively carry out its mission in today’s environment. Budgetary constraints in recent years have led to substantial reductions in personnel, leaving the remaining workforce to deal with a logistics operation that has traditionally relied on large numbers of personnel to make it work. At AFMC, the organization primarily responsible for supporting the Air Force fleet, the workforce was reduced by 18.5 percent between 1990 and 1994. Moreover, in June 1995, the Defense Base Realignment and Closure Commission recommended that two of AFMC’s five ALCs be closed. As these ALCs are eventually closed, AFMC will have to find ways to accommodate their workload with the resources that remain. In addition, the end of the Cold War has led to an evolution of the military services’ roles and missions. DOD’s emphasis today is on sustaining a military force that can respond quickly to regional conflicts, humanitarian efforts, and other nontraditional missions. 
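The inventory figures above lend themselves to a quick arithmetic check. The following sketch is purely illustrative (the dollar figures come from the report; the variable names are ours); it confirms that the reserve categories sum to the $16.3 billion reserve total and that the reserve amounts to roughly 44 percent of the $36.7 billion inventory.

```python
# Air Force inventory allocation as of September 1994, dollars in billions,
# using the figures cited in the report.
reparable = 33.0
expendable = 3.7
total = reparable + expendable          # $36.7 billion total inventory

operations_and_war_reserves = 20.4      # needed for daily operations and war reserves
reserve_categories = {
    "safety stock": 1.7,
    "numeric stockage objective": 2.8,
    "long supply": 11.8,
}
reserve_total = sum(reserve_categories.values())   # $16.3 billion

# The reserve categories should account for everything beyond daily
# operations and war reserves, i.e., the "remaining 44 percent."
assert abs(total - 36.7) < 1e-9
assert abs(reserve_total - 16.3) < 1e-9
assert abs((total - operations_and_war_reserves) - reserve_total) < 1e-9
print(f"reserve share of inventory: {reserve_total / total:.1%}")
```

The share works out to about 44.4 percent, consistent with the report's "remaining 44 percent."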
These changing roles and missions, combined with ongoing fiscal constraints, have resulted in DOD’s call for a smaller, highly mobile, high-technology force and a leaner, more responsive logistics system. To address logistics needs, in 1994 DOD issued a strategic plan for logistics that sets forth a series of improvements. This plan, which reflects many of the philosophies found in the private sector, outlines improvements in three areas. First, it calls for reducing logistics response times—the time necessary to move personnel, inventory, and other assets—to better meet customer needs. Second, it calls for a more “seamless” logistics system. The different activities comprising logistics operations are to be viewed and managed as a series of interdependent activities rather than isolated functional areas. Third, the plan seeks a streamlined infrastructure to help reduce overhead costs associated with facilities, personnel, and inventory. The Air Force has described its initiatives to improve its logistics system as the cornerstone of all future improvements. These efforts, spearheaded by AFMC, are aimed at dramatically improving service to the end user while simultaneously reducing pipeline time, excess inventory, and other logistics costs. The initiatives, called Lean Logistics, are still in the early stages and therefore still evolving. Nonetheless, AFMC began testing certain practices through small-scale demonstration projects in October 1994, with promising results to date. In addition, AFMC plans to begin testing additional, broader-based process improvements in fiscal year 1996. 
The demonstration projects underway as of March 1995 involved less than 1 percent of Air Force inventory items and tested the following primary concepts: (1) consolidated serviceable inventories, in which minimum levels of required inventory were stored in centralized distribution points in ALCs; (2) rapid transportation of parts between bases and ALCs; (3) repair of broken parts at ALCs as they arrive from bases or as centralized inventory levels drop; and (4) improved tracking of parts through the repair process. Each ALC tested some combination of these concepts and was identifying the information system improvements needed to adopt these practices on a wider scale. The tests scheduled to begin in fiscal year 1996 are aimed at broadening these efforts. Teams involving personnel from AFMC headquarters and each ALC have been redesigning five underlying business processes to overhaul the way parts are bought, distributed, and repaired. The teams are now determining how the redesigned processes must fit together so that reforms can be carried out in an integrated manner. Table 3.1 shows the business areas being addressed and briefly describes how each process will be changed. The test projects currently underway have demonstrated that the Air Force could sustain operations with significantly fewer parts. For example, at the Sacramento ALC, where all four concepts are being tested, 62 percent ($52.3 million) of the items involved in the project were identified as potential excess. Similarly, at the Warner Robins ALC, 52 percent ($56.3 million) of the items in its test program were identified as potential excess. AFMC has recently developed a preliminary plan for implementing its Lean Logistics concepts commandwide. Although these concepts could substantially improve operations, Air Force efforts to date are not as extensive as they could be. 
A number of leading-edge practices that have worked successfully in the private sector in reducing cost and improving service are not currently incorporated into the Lean Logistics program. These include the following: Use of third parties: The current Lean Logistics program does not include the use of third-party logistics services to store and distribute reparable parts between the bases and depot repair centers. As discussed in chapter 2, these services not only provide delivery of parts within 48 hours but also alleviate information technology shortfalls by independently tracking parts through the storage and distribution process. Fast information system capability improvements: The Air Force expects the information technology improvements needed to expand Lean Logistics initiatives to come from two sources—commercial software for interim solutions to its current needs and DOD-wide system improvements being managed by the Joint Logistics Systems Center (JLSC) for long-term solutions. These long-term solutions may not be available for 5 to 10 years. In contrast, British Airways fully implemented information system improvements within 3 years. Supplier partnerships and reduced supplier base: The Air Force has not incorporated the concept of an integrated supplier into the Lean Logistics program. As discussed in chapter 2, British Airways and some aircraft manufacturers have significantly improved their logistics systems using this concept. Improved availability of expendable parts is critical to reducing the amount of time it takes to repair component parts. Supplier distribution centers: Similar to the integrated supplier program, the supplier distribution center is a technique used by British Airways to minimize the amount of time it takes to receive parts from a supplier. Currently, the Lean Logistics program is not testing this concept. 
Cellular concept for repair processes: To minimize the amount of time it takes to repair parts, British Airways adopted the cellular concept that centralizes the functions and resources needed to repair a part (e.g., testing, cleaning, machining, tooling, and supplies) in one location. British Airways also applied this concept to the aircraft overhaul facilities. The Lean Logistics program has not planned to test this concept. Modernize existing or build new facilities to reflect new business practices: To adopt the cellular concept and improve the storage and distribution of parts, British Airways modernized existing facilities. To maximize the impact of its entire reengineered process and corporate culture, British Airways built green field site facilities and staffed them with employees selected for their technical competence as well as their flexibility for new processes and team orientation. Although new construction and modernization of logistics facilities is a very difficult aspect of reengineering for the Air Force because of base closures and funding limitations, this aspect of reengineering could be a consideration when future logistics decisions are made for supporting new weapon systems. A number of these additional initiatives would require new relationships between the Air Force and commercial suppliers, distributors, and other third parties. To develop these relationships, the Air Force and DLA must work together because, under the current system, DLA is the primary supplier to the Air Force for expendable items and provides a storage and distribution service for Air Force reparable parts. Several major obstacles stand in the way of the Air Force’s efforts to institutionalize its reengineered logistics system. These obstacles include the following: The “corporate culture” within DOD and the Air Force has been traditionally resistant to change. 
Organizations often find changes in operations threatening and are unwilling to change current behavior until proposed ideas have been proven. This kind of resistance must be overcome if the Air Force is to expand its radical new concepts of operations. One of the largest obstacles to speeding up repair times is the lack of expendable parts needed to complete repairs. As DLA pursues new approaches to better serve its military customers, its traditional role as the supplier of consumable items and as a storage and distribution service is changing. However, at this point, DLA is still considering alternative approaches to manage expendable parts and is discussing these new concepts with contractors and the services. Until these new approaches are implemented, the Air Force’s ability to improve the repair process may be limited. Some of the biggest gains available to the Air Force, such as improvements to outdated and unreliable inventory data systems, require management actions and funding decisions that must be made outside the responsibility of both Lean Logistics managers and the entire Air Force. In addition, some of these systems will not be fully deployed throughout the Air Force for 5 to 10 years. Changes in corporate culture must accompany efforts to transform operations if progress is to continue within the Air Force reengineering program. According to a Lean Logistics official, the current mindset may hinder Lean Logistics for several reasons. First, people find radical changes in operations threatening and, as is common in many organizations, resist efforts to change. Second, Lean Logistics is still a relatively new concept, and personnel lack a thorough understanding of what it is and how it will improve operations. As a result, they are unwilling to change current behaviors until Lean Logistics concepts are proven. Third, Lean Logistics does not yet have support from all of the necessary functional groups within AFMC, the Air Force, and DOD. 
This support will be needed if the full range of changes is to be carried out. In June 1994, we convened a symposium on reengineering that brought together executives from five Fortune 500 companies that have been successful in reengineering activities. The following principles for effective reengineering, reflecting panel members’ views, emerged from the symposium: Top management must be supportive of and engaged in reengineering efforts to remove barriers and drive success. An organization’s culture must be receptive to reengineering goals and principles. Major improvements and savings are realized by focusing on the business from a process rather than functional perspective. Processes should be selected for reengineering based on a clear notion of customer needs, anticipated benefits, and potential for success. Process owners should manage reengineering projects with teams that are cross-functional, maintain a proper scope, focus on customer metrics, and enforce implementation timelines. Panel members at the symposium expressed the view that committed and engaged top managers must support and lead reengineering efforts to ensure success because top management has the authority to encourage employees to accept reengineered roles. Also, top management has the responsibility to set the corporate agenda and define the organization’s culture and the ability to remove barriers that block changes to the corporate mindset. For example, the Vice President of Reengineering at Aetna Life and Casualty Insurance Company said, “Top management must drive reengineering into the organization. Middle management won’t do it.” The panelists agreed that a lack of top management commitment and engagement is the cause of most reengineering failures. 
According to the Corporate Headquarters Program Manager of Process Management at IBM, “To be successful, reengineering must be embedded in the fiber of our people until it becomes a way of life.” To develop a corporate culture that is receptive to reengineering, the panelists emphasized the importance of communicating reengineering goals consistently on all levels of the organization, training in skills such as negotiation and conflict resolution, and tailoring incentives and rewards to encourage and reinforce desired behaviors. One of the largest obstacles to speeding up repair times is the lack of expendable parts needed to complete repairs. Supplier-operated local distribution centers could help ensure quick availability of such parts. Similarly, integrated supplier programs, in which certain inventory management responsibilities are shifted to the supplier, are also aimed at improving expendable item support. We have strongly urged DLA to endorse the use of aggressive just-in-time concepts whose principal objectives are to transfer inventory management responsibilities to key distributors. Existing information systems are also an obstacle because they do not always provide the accurate, real-time information needed to expand current efforts beyond their limited scope. According to AFMC’s deputy chief of staff for logistics, AFMC is working with systems that have not been significantly improved in 15 years. As a result, much of the data used to run the Lean Logistics demonstration projects have been collected manually, a task that project leaders said would be impossible under an Air Force-wide program. Improvements to material management and depot maintenance information systems—key to success of the Lean Logistics initiatives—are under the control of JLSC. JLSC is staffed with personnel from the military services and DLA, and is trying to standardize data systems across DOD. These systems, however, will not be fully deployed throughout the Air Force for 5 to 10 years. 
Currently, AFMC officials are working with JLSC officials to define Air Force requirements. They are also working to develop short-term solutions to enable the Lean Logistics program to move forward using commercial software. According to one Lean Logistics official, however, AFMC may have trouble pursuing and later adopting many of these short-term solutions because funding for systems outside of JLSC’s umbrella is severely limited. The current Air Force logistics system is inefficient and costly compared with leading-edge business practices. AFMC has recognized the need for radical change and is beginning to pursue some of these practices. Because some of the results to date have been promising, these efforts should be supported and expanded. The Air Force, however, could build on its reengineering effort by including additional practices pursued and successfully adopted by the private sector. In addition, current and future AFMC initiatives will be seriously hindered unless top-level DOD commitment and engagement are received and all affected Air Force organizations and other DOD components—specifically DLA and JLSC—fully support AFMC’s efforts. DLA’s support will be critical for developing local distribution centers and integrated supplier programs to meet the Air Force requirements for expendable parts. JLSC officials may have to find ways that will allow the Air Force the flexibility to use existing commercial software to resolve its information technology weaknesses and expand its reengineering initiatives. Without these logistics system improvements, the Air Force will continue to operate a logistics system that results in billions of dollars of wasted resources. Given the budget reductions it has already absorbed, the Air Force might not be able to provide effective logistics support to future DOD operations. 
To build on the existing Air Force reengineering efforts and achieve major logistics system improvements, we recommend that the Secretary of Defense commit and engage top-level DOD managers to support and lead Air Force reengineering efforts to ensure their success. We also recommend that the Secretary of Defense direct the Secretary of the Air Force to incorporate additional leading-edge logistics concepts into the existing Lean Logistics program, where feasible. Specific concepts that have proven successful and should be considered, but have not been incorporated in the current Air Force program, include installing information systems that are commercially available to track inventory amounts, location, condition, and requirements; counting existing inventory once new systems are in place to ensure accuracy of the data; establishing closer relationships with suppliers; encouraging suppliers to establish local distribution centers near major repair depots for quick shipment of parts; using integrated supplier programs to shift to suppliers the responsibility for managing certain types of inventory; using third-party logistics services to manage the storage and distribution of reparable parts and minimize DOD information technology requirements; reorganizing workshops, using the cellular concept where appropriate, to reduce the time it takes to repair parts; and integrating successful reengineered processes and flexible, team-oriented employees in new facilities (like the green field sites) to maximize productivity improvements, as new facilities are warranted to meet changes in the types and quantities of aircraft. 
In addition, we recommend that the Secretary of the Air Force (1) prepare a report to the Secretary of Defense that defines its strategy to adopt these leading practices and expand the reengineering program Air Force-wide and (2) establish milestones for the report’s preparation and issuance and identify at a minimum the barriers or obstacles that would hinder the Air Force from adopting these concepts; the investments (people, skills, and funding) required to begin testing these new concepts and the projected total costs to implement them Air Force-wide; the potential savings that could be realized; and the Air Force and other DOD components whose support will be needed to fully test these new concepts. We further recommend that the Secretary of Defense use the Air Force’s report to set forth the actions and milestones to alleviate any barriers or obstacles (such as overcoming resistance to organizational change and improving outdated inventory information systems), provide the appropriate resources, and ensure the collaboration between the Air Force and other DOD components that would enable the Air Force to achieve an integrated approach to reengineering its processes. Once these steps are taken, we recommend that the Secretary of Defense direct the Secretary of the Air Force to institutionalize a reengineering effort that is consistent with successful private sector reengineering efforts. These efforts include communicating reengineering goals and explaining them to all levels of the organization, training in skills to enable employees to work across functions and modifying this training as necessary to support the reengineering process, and tailoring rewards and incentives to encourage and reinforce desired behaviors. In commenting on a draft of this report, DOD generally agreed with the findings, conclusions, and recommendations, and stated that the Air Force’s Lean Logistics program should receive top-level DOD support in achieving its goals. 
DOD also stated that the Air Force should consider incorporating additional leading-edge practices into its reengineering effort. According to DOD, the Air Force will be asked to provide a report to the Secretary of Defense by July 1996 that will discuss the feasibility of including such additional practices in the Lean Logistics initiative and to address other concerns raised in this report. By October 1996, the Office of the Secretary of Defense will address how it plans to alleviate any barriers and obstacles identified in the Air Force’s report. DOD indicated that the Air Force plans to take steps to institutionalize its reengineering efforts by December 1996.
Pursuant to a congressional request, GAO reviewed the Air Force's management of its reparable parts inventory, focusing on: (1) commercial airline industry practices to streamline logistics operations and improve customer service; (2) Air Force reengineering efforts to improve its logistics system and reduce costs; and (3) barriers to the Air Force's reengineering efforts. GAO found that: (1) the commercial airline industry, including certain manufacturers, suppliers, and airlines, is using leading-edge practices to improve logistics operations and reduce costs; (2) in recognition of increasing budgetary pressures, the changing global threat, and the need for radical improvements in its logistics system, the Air Force has begun a reengineering program aimed at redesigning its logistics operations; (3) GAO has urged these changes and supports them, and has identified additional private-sector practices that may result in even greater savings; (4) there are several major barriers to bringing about change that must be addressed and resolved if the Air Force is to reengineer its logistics system and save billions of dollars; (5) the Air Force reengineering effort addresses inherent problems with its logistics system, but additional steps can be taken to maximize potential improvements; (6) additional steps GAO identified that could enhance this program include establishing a top-level DOD champion of change to support the Air Force initiatives, greater use of third-party logistics services, closer partnerships with suppliers, encouraging suppliers to use local distribution centers, centralizing repair functions, and modifying repair facilities to accommodate these new practices; (7) the success of the Air Force in achieving a quantum leap in system improvements hinges on its ability to address and overcome certain barriers, such as inherent organizational resistance to change; (8) top-level DOD officials must be supportive of and engaged in Air Force reengineering 
efforts to remove these barriers and drive success; (9) information systems do not always provide Air Force managers and employees with accurate, real-time data on the cost, amount, location, condition, and usage of inventory; and (10) without the support of top-level DOD management and accurate, real-time inventory information, the expansion of the Air Force's reengineering efforts could be seriously impaired.
The Air Force has rapidly expanded its use of RPAs in the last decade to support combat operations in Iraq and Afghanistan. The Air Force flies three types of RPAs—the MQ-1 (Predator), the MQ-9 (Reaper) and the larger RQ-4 (Global Hawk). Beyond the traditional intelligence, surveillance, and reconnaissance capability to analyze evolving battlefield conditions, the MQ-1 and the MQ-9 have been outfitted with missiles to strike targets, with equipment to designate targets for manned aircraft by laser, and with sensors to locate the positions of improvised explosive devices and moving insurgents, among other missions. All the military services operate RPAs, and each uses different approaches to assign personnel to pilot them and operate their sensors. For example, the Air Force (the focus of this review) assigns officers to fly RPAs and enlisted personnel to operate the RPAs’ sensors, which provide intelligence, surveillance, and reconnaissance capabilities. In addition, the Air Force relied solely on manned-aircraft pilots to fly RPAs until 2010, when it established an RPA pilot career field for officers who specialize in flying RPAs and are not qualified to fly manned aircraft. Similarly, the Navy assigns officers to pilot RPAs, and enlisted personnel to operate RPA sensors. However, the Navy has not established a separate career field for pilots who specialize in flying RPAs and instead assigns pilots of manned aircraft to operate them. By contrast, the Army and Marine Corps have opted to assign enlisted personnel to fly RPAs and operate their sensors. Further, in both the Army and Marine Corps, there is no distinction between the pilot and sensor operator. Air Force RPA pilots carry out their missions and pilot RPAs from eight active-duty bases in the continental United States including Creech, Cannon, and Beale Air Force Bases and from Air National Guard bases in six states including North Dakota, New York, and Ohio. 
In addition, RPA pilots are trained at some of the bases where RPAs are operated, such as at Beale Air Force Base, as well as at other bases where RPAs are not operated, such as at Holloman Air Force Base. The Air Force plans to add an Air Force Reserve unit at Hurlburt Field as well as Air National Guard RPA bases in Arkansas, Iowa, Michigan, New York, and Pennsylvania (see fig. 1). The initial training that the Air Force provides to its RPA pilots is designed specifically for flying RPAs and consists of two major components that take about 10 months to complete. The first major component is Undergraduate RPA Training, and it consists of a basic flying skills course in which RPA pilots learn to fly a small manned aircraft in Pueblo, Colorado; instrument training in a manned-aircraft flight simulator at Randolph Air Force Base in Texas; and an RPA fundamentals course that is also at Randolph. In the second major component of their initial training, RPA pilots get their first opportunity to fly an RPA at a Formal Training Unit, which for most active-duty pilots takes place at Holloman Air Force Base in New Mexico. During this training, RPA pilots learn basic RPA operations in all mission areas including intelligence, surveillance, and reconnaissance as well as close air support. Following their time in Formal Training Units, RPA pilots finish their training by attending a 2-week joint weapons course in which they learn how to operate with the Army, Navy, and Marine Corps in a joint operational environment. The Air Force spends considerably less to train RPA pilots than it does to train manned-aircraft pilots. Specifically, Air Education and Training Command officials estimate that the Air Force spends about $65,000 to train each RPA pilot to complete Undergraduate RPA Training. 
In contrast, these officials estimate that the Air Force spends an average of $557,000 for each manned-aircraft pilot to complete the corresponding portion of manned-aircraft pilot training, which is called Undergraduate Pilot Training. The Air Force currently flies the bulk of its RPAs using a concept known as remote-split operations. With remote-split operations, a small number of RPA pilots deploy to operational theaters located overseas to launch and recover RPAs from various locations around the world while other RPA pilots remotely control the RPA for its mission from Air Force bases in the United States (see fig. 2). According to Air Force officials, remote-split operations help the Air Force reduce the personnel and equipment it deploys overseas because the units that launch and recover RPAs are staffed with a relatively small number of pilots, sensor operators, support personnel, and equipment. In addition, remote-split operations provide the Air Force flexibility to change the geographic region of the world where an RPA pilot conducts a mission without moving the pilot, support personnel, or equipment needed to control the RPA. If the Air Force is not able to use one of its launch and recovery sites for various reasons such as poor weather, the Air Force can continue its RPA operations by launching RPAs from a different launch and recovery site. The Defense Officer Personnel Management Act (DOPMA) created a system for managing the promotions for the officer corps of each of the military services. DOPMA specifies that the secretaries of the military departments must establish the maximum number of officers in each competitive category that may be recommended for promotion by competitive promotion boards. Career categories, also known as competitive categories, cluster officers with similar education, training, or experience, and these officers compete among themselves for promotion opportunities. 
Under this system, as currently implemented in the Air Force, there are several competitive categories including one that contains the bulk of Air Force officers called the Line of the Air Force, which includes RPA pilots, as well as pilots of manned aircraft and other operations-oriented careers. To consider officers for promotion from among those who are eligible, the Air Force assigns groups of senior officers to serve as members of a promotion selection board for each competitive category of officer in the Air Force. Promotion boards consist of at least five active-duty officers who are senior in grade to the eligible officers, but no officer on the board is below the rank of major. In addition, Air Force guidance states that the Air Force attempts to provide a balanced perspective on promotion boards, and hence it selects officers who mirror, as much as possible, the officers they are considering with respect to race, sex, aeronautical rating, career field, and command. Promotion boards typically convene annually at AFPC headquarters to review a variety of records for each eligible officer, including performance and training reports as well as recommendations from supervisors. Board members assess these records using a best-qualified approach and use a variety of methods to score the records and resolve differences among the scoring of the board members, if necessary. An Air Force officer cannot serve as a member of two successive promotion boards considering officers of the same competitive category and rank. A key feature of DOPMA is its “up-or-out” promotion system. Under this system, as currently implemented in the Air Force, promotion to the first two ranks in an officer’s career is not competitive. Specifically, 100 percent of fully qualified Air Force second lieutenants and first lieutenants are promoted after serving for 2 years in their respective ranks and do not meet with a competitive promotion board. 
However, as officers advance through the ranks in cohorts that are determined by the year they were commissioned, they compete for promotion against other members of their cohort at set years or zones of consideration for each rank. For example, Air Force officers are generally considered for promotion to major, or the grade of O-4, after 10 years. Under the DOPMA system, a select group of officers can also be considered for promotion 1 or 2 years early, or “below the zone.” However, because only a limited number of officers below the zone may be promoted, officers have their greatest potential for promotion “in the zone.” If officers in a cohort are not promoted while they are in the zone, they can compete for promotion in the following year or, in some instances, two years later, which is known as competing “above the zone.” However, if these officers are not selected for promotion above the zone, they could be involuntarily separated from the Air Force. The Air Force has taken some steps toward managing RPA pilots using a strategic human-capital approach but faces several challenges, including accurately identifying personnel requirements, providing sufficient training time for pilots, recruiting and retaining pilots, and incorporating feedback from RPA pilots into its operations. The Air Force’s effort to meet combatant command RPA requirements has included some elements of strategic human-capital planning, but increasing demand and past experience indicate the Air Force has not accurately identified RPA personnel requirements. High-performing organizations use strategic human-capital planning to help them evaluate the extent to which their human-capital approaches support the accomplishment of programmatic goals. 
Strategic human-capital planning involves identifying human-capital needs like the necessary “shape,” which involves ensuring that agencies have the right numbers of staff at the right levels of experience, as well as the necessary size of the workforce for accomplishing agency missions while also enabling the workforce to accomplish career-development tasks, which furthers agency goals and objectives. The Air Force has taken steps to plan for the shape and size of the RPA pilot workforce and react to requirements from the Secretary of Defense, including adding a cadre of experienced officers to mentor officers recruited into a new career the Air Force established for RPA pilots. In order to develop a long-term, sustainable career path for pilots flying RPAs and demonstrate its commitment to RPA pilots, in 2010 the Air Force established an RPA pilot career field with a separate set of training requirements. These officers are qualified only to fly RPAs and are not qualified on Air Force manned aircraft. In addition, the Air Force recognized that as new officers were recruited into the RPA pilot career field, they would need a group of more-senior officers to serve as mentors and leaders. Therefore, in 2011, the Air Force permanently recategorized around 475 manned-aircraft pilots who were generally serving at the ranks of major and lieutenant colonel to serve as permanent RPA pilots, according to Air Force documentation. Air Force officials stated that these more-senior pilots would help provide a leadership and experience base for the new RPA pilot career field. The officials also stated that additional manned-aircraft pilots have been permanently recategorized as RPA pilots since 2011, and Air Force documentation shows a total of 545 recategorized manned-aircraft pilots. Furthermore, the Air Force has taken steps to plan for the size of its RPA pilot workforce. 
According to Headquarters Air Force officials, the number of RPA combat air patrols (CAP), directed by the Secretary of Defense and based on the mission needs of the combatant commands, is a primary factor in determining RPA pilot personnel levels. In 2010, the Secretary of Defense directed the Air Force to fund personnel to reach 65 CAPs by fiscal year 2013 and be prepared to grow beyond that requirement in future years. To determine the number of RPA pilots, the Air Force Manpower Agency conducted a personnel requirements study for MQ-1 Predator squadrons in 2008 and established the number of RPA crews required to fly one CAP for 24 hours, referred to as the crew ratio. Based on the study, the Air Force concluded that the crew ratio for MQ-1 Predator squadrons would be 10:1, which calls for 10 RPA pilots to sustain a Predator CAP for 24 hours. Air Force officials stated that although the 2008 study did not address the personnel requirements for MQ-9 Reaper squadrons, the Air Force used the study as the basis for establishing a 10:1 crew ratio for MQ-9 units as well, because MQ-1 and MQ-9 units have similar requirements. In addition to this crew ratio, the Air Force used Air Force Instruction 38-201 to calculate the required number of additional pilots it needs for support positions such as commanders, and staff positions at various organizational levels including headquarters. Using the crew ratio and the Air Force instruction, the Air Force determined that the total number of RPA pilots required to sustain the 65 CAPs currently required by the Secretary of Defense is between 1,600 and 1,650 pilots, according to a Headquarters Air Force official. Furthermore, the Air Force has taken steps to react to increased CAP requirements. Until 2009, the Air Force relied solely on manned-aircraft pilots serving assignments as RPA pilots to fill personnel requirements. 
In fiscal year 2006, manned-aircraft pilots were sustaining 12 CAPs, and the 2006 Quadrennial Defense Review stated that the Predator system alone would grow to 21 CAPs by 2010. However, according to Headquarters Air Force officials, by 2007 the demand from the combatant commands had already exceeded that benchmark. Air Force leadership committed the service to meeting the increased requirements, and the Air Force took actions to provide sufficient personnel. These actions included lengthening the assignments of manned-aircraft pilots in RPA squadrons and then extending those assignments indefinitely, mobilizing pilots from the Air National Guard and Air Force Reserve, delaying the establishment of the RPA weapons school after designating RPA as a formal weapon system, and extending the length of deployments to augment staffing levels of RPA squadrons. In 2009, the Air Force also began assigning manned-aircraft training graduates to RPA assignments as their first assignment after completing Undergraduate Pilot Training. In 2010, the Air Force established the RPA pilot career field. Figure 3 summarizes the steps that the Air Force took to react to increased CAP requirements since 2007. Through these steps, the Air Force has made progress toward meeting the CAP requirements, but it has done so at personnel levels below those requirements. In addition, the Air Force reduced the capacity of its RPA training unit because instructors were pulled to fly in RPA units. In fiscal year 2012, the Air Force began a reconstitution period intended to staff the training units, restart the weapons school, and increase the overall number of RPA pilots to increase the crew ratios of RPA units. As of December 2013, there were 1,366 RPA pilots, or around 85 percent of the 1,600 pilots the Air Force determined are necessary to sustain RPA operations and training for 65 CAPs. 
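The staffing arithmetic described above (a 10:1 crew ratio, 65 CAPs, a total requirement of roughly 1,600 pilots, and 1,366 pilots assigned as of December 2013) can be sketched as a simple calculation. This is an illustrative decomposition using the figures reported in this section, not an official Air Force manpower model; the split between CAP crews and other positions is inferred from the report’s description of Air Force Instruction 38-201.

```python
# Illustrative sketch of the RPA pilot staffing arithmetic described in this
# section. Figures come from the report; this is not an Air Force model.

def crew_pilots(caps: int, crew_ratio: int) -> int:
    """Pilots needed to fly the CAPs themselves (crew ratio = pilots per CAP)."""
    return caps * crew_ratio

def staffing_rate(assigned: int, required: int) -> float:
    """Fraction of the total pilot requirement currently filled."""
    return assigned / required

caps = 65               # CAPs directed by the Secretary of Defense
ratio = 10              # 10:1 crew ratio from the 2008 manpower study
required_total = 1600   # total requirement cited by Headquarters Air Force
assigned = 1366         # RPA pilots as of December 2013

flying = crew_pilots(caps, ratio)   # pilots needed to fly the 65 CAPs
other = required_total - flying     # support, staff, and training positions
fill = staffing_rate(assigned, required_total)

print(f"CAP crews: {flying}, other positions: {other}, fill rate: {fill:.0%}")
```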
In addition, the Air Force anticipates increasing the number of RPA pilot staff positions across the Air Force from 111 as of December 2013 to 300 by fiscal year 2023 to serve at various Air Force commands, including at Headquarters Air Force and Air Combat Command. The Air Force has not accurately identified its optimum personnel requirements, or crew ratio, and thus the number of RPA pilots it requires. We have reported that high-performing organizations use complete and current data to inform their strategic human-capital planning and remain open to reevaluating workforce planning efforts. In the 2008 study that the Air Force Manpower Agency conducted to determine the appropriate crew ratios for MQ-1 Predator squadrons, the Air Force did not account for all of the flying and administrative tasks that are required in these squadrons. While the study accounted for some important tasks that RPA pilots perform in MQ-1 squadrons such as performing operational missions, it did not account for other important tasks such as those required to launch and recover RPAs. In addition, the study did not account for some important administrative tasks such as conducting flight-safety evaluations and providing a commander’s support staff. The study acknowledged that due to its reporting time frames, it did not capture the personnel requirements of a variety of tasks. Headquarters Air Force personnel acknowledged the study’s limitations and said that because the study omitted critical and important tasks from its analysis, the resulting crew ratio that it recommended probably did not provide enough pilots to perform the work in an MQ-1 squadron. These officials stated that, because of the study’s omissions, the 10:1 crew ratio for MQ-1 squadrons established in an Air Force instruction that was based on this study should probably be increased. Similarly, some RPA unit commanders and RPA pilots in some of our focus groups also said that the crew ratio is too low. 
However, to date the Air Force has not updated the crew ratio for RPA squadrons. Headquarters Air Force officials stated that updating the crew ratio has not been a top priority. At the same time, these officials noted that more recently they have discussed the need to update the crew ratio and expressed optimism that it would become a priority in the future, though no concrete plans exist to initiate an update to the requirement. Furthermore, an Air Force instruction states that a crew ratio establishes the number of personnel required to support a unit mission and that if a ratio is too low, combat capability is diminished and flight safety suffers. Such risks can arise when crew-ratio requirements are set too low, as well as when units operate at crew ratios that are too far below optimum crew ratios. However, Air Force documentation shows that crew ratios in RPA units have fluctuated between 7:1 and 8.5:1, and at times have dropped to 6:1, according to Air Force officials. This indicates that the RPA pilot workload is performed by fewer pilots working more hours to accomplish the mission than if the Air Force ensured that its RPA units operated at the required crew ratios. The Air Force has operated at these levels to provide a higher number of CAPs. According to Headquarters Air Force officials, in the past the Air Force has attempted to deny requests made by combatant commanders for Air Force RPA capabilities because they push crew ratios too low. These officials stated that when the Air Force denies a request, it provides justification, such as concerns about crew ratios, to the Joint Staff, which is responsible for resolving differences between combatant commanders’ requests for capabilities and the services that provide them. However, Air Force officials stated that the Joint Staff has overridden some of the Air Force denials in order to accomplish missions, despite the possibility that crew ratios would decrease. 
Without establishing a minimum crew ratio for RPA units, the Air Force does not have the information it needs to determine when those units are operating at crew ratio levels that expose the Air Force to unacceptable levels of risk to accomplishing its mission and ensuring safety. As a result of inaccurate crew ratios for Air Force RPA squadrons and a lack of a minimum crew ratio, the RPA pilot workforce has sustained a high pace of operations, which limits its time for training and development. The Air Force Unmanned Aircraft Systems Flight Plan 2009-2047 states that it is imperative to provide the necessary training and opportunities for advancement that will create a cadre of future Air Force leaders. However, unit commanders in each of the three locations we visited and some RPA pilots stated that the high pace of operations and demand for RPA capabilities limited their units’ time to train for the various mission sets that RPA units are required to perform. One unit commander stated that battlefield commanders that his unit supports have pointed out that his RPA pilots need training, and pilots in some focus groups noted that limited training opportunities prevent RPA units from excelling at their missions and becoming experts in their field. In addition, pilots in all 10 focus groups indicated that they are limited in their ability to pursue developmental opportunities. Furthermore, DOD has noted that the prevalence and use of unmanned systems, including RPAs, will continue to grow at a dramatic pace. As discussed above, the Secretary of Defense has stated specifically that the requirement for 65 CAPs represents a temporary plateau in progress toward an increased enduring requirement. Also, as the national security environment changes, RPA pilots will be expected to conduct a broader range of missions across different conditions and environments, including antiaccess and area-denial environments where the freedom to operate RPAs is contested. 
By not creating an environment where RPA pilots can receive the training and development opportunities they need to perform their functions effectively, the Air Force may be hindering its ability to perform its mission even if it is able to operate at the optimum crew ratio that is set in the Air Force instruction. The Air Force has used a dual strategy to meet its increasing need for RPA pilots: using manned-aircraft pilots and recruiting RPA pilots, the career field established in 2010 for officers trained to only fly RPAs. However, the Air Force has faced challenges in recruiting RPA pilots since it began this career field. High-performing organizations tailor their recruitment and retention strategies to meet their specific mission needs. The Air Force intends to build a cadre of dedicated RPA pilots, and projects that RPA pilots will make up 90 percent of the RPA pilot workforce by fiscal year 2022. However, the Air Force has not been able to achieve its recruiting goals for RPA pilots in fiscal years 2012 and 2013. In fiscal year 2013, the Air Force recruited 110 new RPA pilots, missing its goal of 179 pilots by around 39 percent. Consequently, while the Air Force has made progress in increasing the total number of RPA pilots and staffed its RPA units at about 85 percent of current requirements as of December 2013, around 42 percent of those pilots are manned-aircraft pilots and manned-aircraft pilot training graduates. Both of these groups are temporary RPA pilots who serve only one assignment in an RPA squadron. While the length of these assignments can be extended, these pilots will likely not stay in the RPA squadrons permanently (see fig. 4). Headquarters Air Force officials believe the Air Force has missed its recruiting goals in 2012 and 2013 for RPA pilots because potential recruits have a limited understanding of the RPA mission and there is a lack of recruiting officials with RPA experience to advise potential recruits. 
The Air Force may face challenges recruiting officers to serve as RPA pilots because of a negative perception that some in the Air Force associate with flying RPAs. Headquarters Air Force officials, RPA pilots in some of our focus groups, and one unit commander stated that some in the Air Force view flying RPAs negatively, resulting in a stigma. According to these officials, one reason some view flying an RPA negatively is that flying an RPA does not require pilots to operate an aircraft while on board it in flight. In addition, officials stated that overcoming this stigma may be difficult because publicizing the work that RPA pilots do is often not feasible due to the classified nature of RPA missions. Nonetheless, Headquarters Air Force officials stated that the Air Force projects it will meet its recruiting goals for the RPA pilot career field for fiscal year 2014 on the basis of commitments made by cadets participating in the Air Force Reserve Officer Training Corps. We have reported that high-performing organizations make use of targeted investments such as recruiting bonuses as part of their strategies to recruit high-quality personnel with critical skills. However, Headquarters Air Force officials reported that the Air Force is not currently exercising its option to offer a recruiting bonus as an incentive to volunteer for the RPA pilot career field. Officials from Headquarters Air Force and the Office of the Secretary of Defense stated that such pay incentives are rarely used to recruit officers in the Air Force. Headquarters Air Force officials also stated that due to the current constrained budget environment in which DOD and the federal government are operating, the Air Force would prefer to exhaust all nonmonetary options for improving recruiting before offering bonuses. As a result, the Air Force may have to continue to rely on manned-aircraft pilots to meet RPA pilot personnel needs. 
This approach may not be cost-effective because the Air Force spends an average of $557,000 per pilot on traditional Undergraduate Pilot Training, compared to an average of $65,000 for Undergraduate RPA Training, according to Air Education and Training Command officials. Without a more-tailored approach to recruiting RPA pilots that increases the appeal of the new career to potential recruits, the Air Force risks perpetuating personnel shortages and may need to continue relying on manned-aircraft pilots to fill its personnel requirements. Moreover, the Air Force uses officers as RPA pilots, but it has not evaluated whether using alternative personnel populations such as enlisted or civilian personnel as RPA pilots is a viable option. A report by the House Permanent Select Committee on Intelligence urged the Air Force to study the other military services’ experiences with using enlisted personnel as RPA operators and evaluate whether this approach would degrade mission performance. Headquarters Air Force officials stated that prior to 2010, they decided to assign officers to serve as RPA pilots because they thought officers were more appropriate since RPAs fly in complex airspace, and, in some cases, fire missiles at adversaries. Headquarters Air Force officials also stated that they have, at times, considered the use of enlisted or civilian personnel but have not initiated formal efforts to evaluate whether using such populations would negatively affect the ability of the Air Force to carry out its missions. However, without an evaluation of the viability of using other sources of personnel, the Air Force may lack valuable information on whether additional options exist for meeting personnel requirements. With regard to pilot retention, the Air Force has taken some steps but does not have a retention strategy for RPA pilots, though indications suggest that it could face challenges retaining them in the future. 
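A minimal sketch of the training-cost comparison above, using the per-pilot averages attributed to Air Education and Training Command officials; the computed difference and multiple are simple arithmetic on those reported figures, not official Air Force cost estimates.

```python
# Per-pilot training cost figures reported in this section.
UPT_COST = 557_000  # avg Undergraduate Pilot Training (manned aircraft), per pilot
URT_COST = 65_000   # avg Undergraduate RPA Training, per pilot

difference = UPT_COST - URT_COST  # extra cost when a manned-trained pilot fills an RPA seat
multiple = UPT_COST / URT_COST    # manned training cost as a multiple of RPA training cost

print(f"Difference per pilot: ${difference:,}; about {multiple:.1f}x the RPA training cost.")
```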
Specifically, according to Headquarters Air Force officials, the Air Force has offered assignment incentive payments to RPA pilots since the career field was established in 2010. In addition, the officials stated that manned-aircraft pilots serving assignments in RPA squadrons receive skill-based aviator career incentive pay and can receive aviator retention pay by extending their service commitment in the Air Force. Despite these incentive payments, pilots in 7 of 10 focus groups we conducted indicated that retention of RPA pilots is or will be a challenge. In addition, pilots in some focus groups stated that they are considering their options for leaving active-duty service in the Air Force to go to the Air National Guard, the Air Force Reserve, or the private sector. Unit commanders in one location we visited, pilots in some of our focus groups, and other Air Force officials stated that they were concerned about the future retention rates of RPA pilots. Headquarters Air Force officials stated that the Air Force’s strategy for meeting personnel requirements has focused on recruiting and that they have not observed indications of a concern with the retention of RPA pilots. However, the Air Force has not evaluated how the difficult working conditions that RPA pilots face, such as the long working hours and frequently rotating shifts that we discuss in more detail later in this report, may affect its ability to retain RPA pilots, even though many of these pilots will begin to reach the end of their service commitments in fiscal year 2017. In a 2011 memorandum to the Air Force, the Secretary of Defense directed the Air Force to provide sufficient incentives to retain high-quality RPA personnel. Although the Air Force has made retention payments available to RPA pilots, these efforts may not be enough or appropriate to overcome the challenges the Air Force may face to retain RPA pilots. 
While the Air Force has mechanisms in place to collect feedback from RPA pilots, it has not used this feedback to develop its strategic human-capital approach to managing RPA pilots, such as by incorporating their feedback into tailoring a recruiting and retention strategy or by taking actions related to training and development. High-performing organizations involve their employees in their strategic human-capital approaches and planning in order to improve motivation and morale by seeking employee feedback on a periodic basis, and using that input to adjust their human-capital approaches. The Air Force has mechanisms in place that it has used to collect feedback from RPA pilots. For example, the Air Force solicits feedback from RPA units as well as all other Air Force units during an annual Unit Climate Assessment that gauges discrimination, harassment, and morale issues at the unit level. While this effort is not specific to the RPA units, it does include assessments of RPA units. Unit commanders can use the results of their Unit Climate Assessments to address challenges at the local unit level. However, Headquarters Air Force officials responsible for managing RPA pilots have not obtained information from these assessments to identify whether they include potentially valuable information about any concerns related to establishing the RPA pilot career field. Headquarters Air Force officials stated that the Air Force created this career field more quickly and under greater operational demand than any career field in recent Air Force history. However, these officials also stated that using feedback from the Unit Climate Assessments to address issues at a headquarters level that would affect RPA pilots could undermine unit commanders. They also noted that officials at the headquarters level might lack the proper context for understanding the assessment results. 
The Air Force also collected feedback from RPA pilots in studies the Air Force School of Aerospace Medicine published in 2011 and 2013 to assess the level of and reasons for stress in personnel assigned to RPA units, which included surveys and interviews of RPA pilots. In response to these studies, the Air Force took actions designed to address stress in personnel assigned to RPA units. For instance, the studies recommended that the Air Force assign an operational psychologist to each RPA unit, and, in response, local flight surgeons, clinical providers, and aerospace physiologists have created teams to help address stress concerns at the base level. While researchers from the Air Force’s medical research community conducted these studies, they included findings related to personnel shortages that are germane to the Air Force personnel and operations communities. However, Headquarters Air Force officials from the personnel and operations communities stated that, prior to our review, they were unaware of the studies and their findings. RPA pilots in our focus groups also noted information that suggests that incorporating pilot feedback from existing mechanisms could help improve communication and address issues pilots are facing. For example, pilots in some of our focus groups stated that they did not know what the career path for an RPA pilot is or what steps they should take to advance. Further, in some of our focus groups, manned-aircraft pilots who are serving assignments as RPA pilots expressed uncertainty regarding whether they will be able to return to their manned platforms and what effect, if any, their RPA assignment will have on their careers. Pilots in some focus groups also reported that senior leadership had not communicated to them about this uncertainty, and one pilot specifically noted that the lack of communication negatively affects morale. 
Without using existing mechanisms to obtain feedback from RPA pilots directly, Headquarters Air Force may be missing an opportunity to obtain information that can help it address recruiting, retention, training, and development challenges related to RPA pilots. RPA pilots find their mission rewarding, but they reported that they face multiple challenging working conditions. RPA pilots in 8 of the 10 focus groups we conducted reported that they found it rewarding to be able to contribute to combat operations every day through the RPA mission. For instance, one pilot stated that the mission is the reason that he had decided to become a permanent RPA pilot and that it was rewarding to contribute to overseas contingency operations, which he would not be able to do in any other job. Similarly, the Air Force School of Aerospace Medicine published studies in 2011 and 2013 that evaluated the psychological condition of RPA personnel and found that RPA pilots held positive perceptions of the effect and contributions of their work. However, RPA pilots also stated that they face multiple challenging working conditions, including long hours, frequently rotating shifts, and assignments that extend beyond typical lengths. RPA pilots in all of our focus groups reported that these challenging conditions negatively affected their morale and caused them stress. Similarly, the Air Force School of Aerospace Medicine studies found that RPA personnel reported sources of stress that were consistent with the challenges we identified. These challenges include the following: RPA pilots in 8 of our 10 focus groups stated, and Air Force studies we reviewed show, that RPA pilots work long hours. RPA pilots in 7 of our focus groups described factors that contribute to their long hours, including performing administrative duties and attending briefings in addition to flying shifts. 
The Air Force studies also found that working long hours was one of the top five reasons for stress among personnel in RPA squadrons. In the studies, over 57 percent of respondents reported that they worked more than 50 hours per week. In addition, the studies found that over 40 percent of respondents reported that performing administrative duties added hours to their work week and was the third-highest reason for stress among active-duty RPA personnel. RPA pilots also reported that it was challenging to work on shifts that rotate. RPA pilots in 7 of the 10 focus groups we conducted stated that constantly rotating shifts caused sleep problems for them because they must continuously adjust their sleep schedule to accommodate new shifts. In addition, pilots noted that continuously rotating to new shifts disrupted their ability to spend time with their family and friends. Officials told us that it was ideal for pilots working evening or night shifts to maintain a consistent sleep pattern on their off-duty days even though those sleep patterns would require that pilots sleep while their family and friends were awake. However, some RPA pilots reported that they typically adjusted their sleep schedules dramatically on their off-duty days so they could spend time with their families and that these changes to their sleep schedules resulted in significant fatigue both at home and when they returned to work. Similarly, over half of the respondents to the surveys included in the Air Force studies we reviewed reported that shift work caused a moderate to large amount of their stress. RPA pilots in 5 of our focus groups reported that being assigned to continue flying RPAs for periods extending beyond the typical Air Force assignment was difficult. 
In all of the focus groups we conducted with RPA pilots, those who plan to return to flying manned aircraft stated that they have been required to stay in their assignments for periods that are longer than a typical Air Force assignment. Air Force officials stated that there is no requirement for officers to move to a new assignment after a specified period. However, pilots in our focus groups and Air Force headquarters officials said that officer assignments typically last 3 to 4 years. Air Force documentation shows that some of these pilots have been in their RPA assignments for over 6 years. Moreover, the Air Force studies also found that one of the most common stressors that RPA personnel cited was the lack of clarity regarding when they would return to their careers in manned aircraft. Specifically, the 2011 study states that the Air Force informed RPA pilots who previously flew manned aircraft that their RPA assignments were temporary and after 3 to 4 years they could return to their manned-aircraft career. The study goes on to state that due to the increasing demand for RPAs and the long-standing surge in RPA operations, many pilots have been unable to return to their manned-aircraft careers and, until recently, the Air Force kept them in these assignments indefinitely. The Air Force has taken some actions to address some of the challenging working conditions that RPA pilots face. The Air Force studies included over 10 recommendations to address the sources of stress that RPA personnel reported. For example, the studies recommended that the Air Force assign an operational psychologist to each RPA unit to help commanders optimize work-rest schedules and shift cycles, and identify pilots who are reaching elevated levels of fatigue or stress. In response, the Air Force has assigned mental-health providers that are dedicated to RPA squadrons at Beale, Cannon, and Creech Air Force Bases. 
However, the studies also recommended that the Air Force increase staffing in RPA squadrons to reduce the number of hours that RPA personnel work and to help establish better shift schedules. Air Force researchers stated that increasing staffing levels, or crew ratios, in RPA squadrons would be the most-effective means to reduce RPA pilot stress, but as discussed above, the Air Force has operated its RPA squadrons below the optimum crew ratios. RPA pilots also face challenges related to being deployed-on-station as they balance their warfighting responsibilities with their personal lives. Because pilots are able to operate RPAs from Air Force bases in the United States and are thus able to live at home—what is known as being deployed-on-station—their dual role juxtaposes stress related to supporting combat operations with the strains that can occur in their personal lives. While these pilots face this challenging working condition that may affect their quality of life, DOD’s Quadrennial Quality of Life Reviews have emphasized DOD’s continued commitment to provide servicemembers with the best quality of life possible. Being deployed-on-station is a new concept in warfighting, and a 2011 report prepared for the Air Force Medical Support Agency describes five conditions that personnel who are deployed-on-station can experience. 
The report notes that these personnel (1) experience a justifiable risk of being the target of hostile adversary attacks because they are combatants and their bank accounts, reputations, or physical safety could be targeted; (2) operate in contact with and sometimes kill adversaries, although operations they conduct are out of direct risk from combat; (3) must act with urgency to sometimes kill adversaries and take other time-pressured actions to help ensure combatants they support do not lose their lives; (4) work on a wartime rhythm that includes 24/7 operations 365 days a year; and (5) are required to conceal information from friends and family about their work because their missions are often classified. A Headquarters Air Force official described being deployed-on-station as a status between deployed-in-theater and not deployed and emphasized that personnel who are deployed-on-station are not directly engaged in combat, which is a significant component of being deployed. The official also acknowledged that being deployed-on-station can be more challenging than assignments with more-limited connections to the battlefield. RPA pilots in each of the 10 focus groups we conducted reported that being deployed-on-station negatively affected their quality of life, as it was challenging for them to balance their warfighting responsibilities with their personal lives for extended periods of time. RPA pilots in some of our focus groups, as well as commanders of RPA squadrons, noted that they would prefer to deploy-in-theater for 6 months with a clear end point and be separated from their family and friends rather than be deployed-on-station for 3 or more years. One commander stated that he preferred being deployed-in-theater and knowing when his deployment would end. In contrast, he stated that in an RPA squadron, it was difficult to juggle his warfighting role with the typical challenges of home life for multiple years. 
Likewise, the Air Force studies found that being deployed-on-station was one of the most commonly cited stressors that RPA personnel reported. In addition, RPA pilots in 6 of our 10 focus groups reported that they are expected to do more work than their counterparts who are deployed-in-theater. For example, RPA pilots in some of our focus groups who had previously deployed-in-theater stated that they are expected to complete administrative tasks that are not required of them when they are deployed-in-theater. Headquarters Air Force officials as well as pilots in some of our focus groups stated that the Air Force provides support to personnel who are deployed-in-theater that it does not provide for personnel who are deployed-on-station. Moreover, the Air Force has surveyed RPA personnel and other deployed-on-station personnel to study their stress and mental health, but it has not fully analyzed the effects of being deployed-on-station. Specifically, it has not fully analyzed whether being deployed-on-station has negative effects on quality of life that are not attributable to the stressors related to low unit-staffing levels that we discussed above, such as rotating shifts and long assignments. As a result, the Air Force does not have the information it needs to determine whether being deployed-on-station has a negative effect on the quality of life of RPA pilots that is not attributable to the other factors and what steps might be needed to reduce those effects. AFPC monitors the promotion rates of RPA pilots and has found that they were promoted below the average rate for active-duty line officers on 20 of 24 officer promotion boards since 2006. We reached the same conclusion based on our review of data for these promotion boards. We also found that RPA pilots were promoted below the average rate of manned-aircraft pilots on 21 of 24 boards. 
Furthermore, we compared the promotion rates of RPA pilots to those of other career fields and found that RPA pilots were promoted at the lowest rate of any career field on 9 of the 24 boards and were promoted in the lowest 5 percent of the career fields that competed on 5 additional boards. Conversely, RPA pilots were promoted in the top 50 percent of the career fields that competed on only 3 of the 24 boards. More specifically, RPA pilots competing for promotion to each rank that we analyzed faced challenges. RPA pilots competing for promotion to major were promoted in the top 50 percent on just 1 of the 7 promotion boards since 2006. RPA pilots competing for promotion to lieutenant colonel were promoted at the lowest or next-to-lowest rate compared to the other career fields that competed on 7 of the 9 boards since 2006. Likewise, RPA pilots competing for promotion to the rank of colonel had the lowest promotion rate of any career field that competed on 4 of the 8 colonel boards since 2006. Figures 5, 6, and 7 display the results of our analyses. While AFPC has monitored the promotion rates of RPA pilots, it has not analyzed the factors related to lower promotion rates for these pilots. It is a common statistical practice when analyzing how selected factors are related to a given outcome to account for other key factors that could also be related to the outcome. Although AFPC analyzed the promotions of officers in the Line of the Air Force competitive category, which includes RPA pilots, and identified factors related to promotion outcomes for officers in this category, it has not incorporated a key factor—the career field effect of being an RPA pilot—into its analysis. AFPC analyzed promotion data for officers in this category and found multiple factors related to promotion outcomes. 
Specifically, AFPC analyzed these data using logistic regression, which is a statistical method that enables AFPC to analyze the relationships among multiple factors. Using this method, AFPC identified a number of factors that are positively and negatively related to promotions. For example, AFPC found that one of the two factors with the most-substantial positive relationship to promotions was for an officer to have completed a professional military education program by attending an Air Force school in-residence, rather than completing the same professional military education program by correspondence. The other factor with the most-substantial positive relationship was for an officer to have completed an advanced academic degree. By contrast, AFPC found that officers who have unfavorable information, such as performance-related reprimands, in their personnel files are promoted at lower rates, in general, than officers who do not. However, AFPC did not include the career field effect of being an RPA pilot as a factor in its analysis. As a result, AFPC does not know whether or how being an RPA pilot is related to promotions for these pilots. AFPC has analyzed the career field effects of other careers and found that most are not related to promotion rates. AFPC officials stated that they had not analyzed this effect for RPA pilots because most of the officers currently serving as RPA pilots are temporary RPA pilots and AFPC does not typically analyze a career field effect for temporary assignments. In addition, AFPC assumed that the factors that were substantially related to promotions for the Line of the Air Force category were also substantially related to promotions for the RPA pilot subgroup, but did not confirm that its assumption was warranted. AFPC officials stated that when they analyzed the records of RPA pilots, they focused on the factors identified in the analysis of Line of the Air Force officers, including completing professional military education in-residence and advanced degrees. 
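The analytical gap described above can be illustrated with a brief sketch. The example below is hypothetical—the data are synthetic and the coefficient values are invented, not AFPC’s actual model, data, or results—but it shows how a logistic regression that includes a binary indicator for being an RPA pilot would let an analyst estimate the career field effect alongside factors such as in-residence professional military education and an advanced academic degree.

```python
# Hypothetical sketch: logistic regression of promotion outcomes that
# includes an RPA-pilot indicator as a covariate. All records and
# effect sizes here are synthetic, invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic officer records: 1 = factor present, 0 = absent.
pme_in_residence = rng.integers(0, 2, n)
advanced_degree = rng.integers(0, 2, n)
is_rpa_pilot = rng.integers(0, 2, n)

# Assumed "true" effects used only to generate illustrative outcomes:
# positive effects for in-residence PME and an advanced degree, and a
# negative career-field effect for RPA pilots.
logit = -0.5 + 1.0 * pme_in_residence + 0.8 * advanced_degree - 0.6 * is_rpa_pilot
promoted = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([np.ones(n), pme_in_residence, advanced_degree, is_rpa_pilot])
y = promoted.astype(float)

# Fit by gradient ascent on the average log-likelihood.
beta = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 1.0 * X.T @ (y - p) / n

coef = dict(zip(["intercept", "pme_in_residence", "advanced_degree", "is_rpa_pilot"], beta))
print(coef)
```

Because the career-field indicator is estimated jointly with the other factors, its fitted coefficient isolates the relationship between being an RPA pilot and promotion after accounting for those factors—the step the report notes was missing from AFPC’s analysis.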
They found that RPA pilots generally completed professional military education in-residence and advanced degrees at lower rates compared to the average rates for officers who had been promoted since 2006. However, by not including the career field effect of being an RPA pilot in its analysis, the Air Force cannot determine whether these factors have the same relationship with RPA pilot promotions as they do with officer promotions in the broader Line of the Air Force category. The Air Force reported reasons for low RPA pilot promotion rates to Congress and took actions to raise those rates without a comprehensive understanding of the factors related to the promotion rates of these pilots. Specifically, the Air Force attributed low RPA pilot promotion rates to three factors: (1) RPA pilots completed professional military education at lower rates than average; (2) RPA pilots completed advanced degrees at lower rates than average; and (3) the process the Air Force used to select RPA pilots. As discussed above, AFPC’s approach to identifying the first two factors assumed that their relationships with promotion rates for RPA pilots as a subgroup would be the same as those with the Line of the Air Force as a whole, but this assumption was not confirmed through analysis. Regarding the third factor, Air Force documentation states “lower quality pilots are generally sent to RPA squadrons.” Headquarters Air Force officials and two commanders of manned-aircraft squadrons explained that commanders select pilots from their squadrons to assign to RPA squadrons and that, in general, most commanders assign less-skilled pilots and less-competent officers to these squadrons. Headquarters officials also stated that less-skilled and less-competent officers generally had fewer of the factors AFPC identified that positively influence promotions in their records than their peers. 
Air Force officials also explained that because the bulk of RPA pilots who have competed for promotion since 2006 were assigned using this process, they believe these are the reasons that RPA pilots have been promoted at lower rates than their peers. However, the Air Force has not incorporated variables into its analysis to account for RPA pilots or the process used to assign them to determine whether they are related to promotions of RPA pilots. Consequently, the Air Force report to Congress may not be accurate because the Air Force does not have comprehensive analysis to demonstrate that these factors are actually related to RPA pilot promotions. Recently, the Air Force has taken actions to raise promotion rates of RPA pilots. First, to communicate to promotion boards that promoting RPA pilots was important, the Secretary of the Air Force has issued instructions since 2008 to each officer promotion board, directing members to consider the strategic effect made by RPA pilots when evaluating their records for promotion. In the instructions, the Secretary directs board members to consider that RPA pilots’ records may not show the same career progression as their peers’ because of operational requirements they have had to meet to satisfy the needs of the Air Force. Second, the Air Force intervened on behalf of RPA pilots to enhance their opportunities to achieve one of the two most important factors that AFPC identified in its analysis of all Line of the Air Force officers by reserving 46 in-residence seats in Air Force professional military education schools in 2012 for RPA pilots who were competing to be promoted to major. Moreover, the Air Force stated in its August 2013 report to Congress that its long-term plan to raise promotion rates is to attract “quality” recruits to the RPA pilot career field and to establish a sustainable pace of operations that will give these pilots time to complete in-residence professional military education and advanced academic degrees. 
However, because it has not fully analyzed the career field effects of being an RPA pilot, it is unclear whether the Air Force is targeting these corrective actions at the right factors. Consequently, the Air Force’s actions may have limited effect on improving the promotion rates for RPA pilots. The Air Force has demonstrated a commitment in recent years to the use of RPAs, believing that the capabilities they provide are worth the service’s investment in both platforms and personnel. As the RPA pilot career field evolves, it will be important that Air Force senior leadership demonstrates a commitment to a human-capital management approach that addresses a number of outstanding challenges. For instance, without updating its optimum crew ratio for RPA units, the Air Force may have RPA pilot shortfalls even after its current requirement is met, which could exacerbate existing strains on this workforce. In addition, by not establishing a minimum crew ratio below which RPA units cannot operate, the Air Force does not know when it is operating at unacceptable levels of risk to mission and safety. Further, without developing a strategy tailored to address specific challenges of recruiting and retaining RPA pilots, current pilot shortfalls may persist even longer than expected. Finally, without evaluating the viability of using alternative personnel populations, such as enlisted or civilian personnel, the Air Force may not meet and sustain required RPA pilot staffing levels. Moreover, without incorporating feedback from RPA pilots using existing feedback mechanisms, the Air Force may be missing opportunities to manage its human-capital strategies effectively for these pilots. Also, RPA pilots face a number of challenging working conditions that can affect their quality of life including those associated with being deployed-on- station. 
However, without analyzing whether being deployed-on-station has long-term negative effects, the Air Force does not have the information it needs to determine whether it should take any action in response. Finally, while the Air Force has taken action to improve the chances for RPA pilots to be promoted, senior Air Force leaders cannot be assured that the actions are the appropriate ones because the Air Force has not analyzed the effect that being an RPA pilot itself may have on those chances. We recommend that the Secretary of Defense direct the Secretary of the Air Force to take the following seven actions: update crew ratios for RPA units to help ensure that the Air Force establishes a more-accurate understanding of the number of RPA pilots needed in its units; establish a minimum crew ratio in Air Force policy below which RPA units cannot operate without running unacceptable levels of risk to accomplishing the mission and ensuring safety; develop a recruiting and retention strategy that is tailored to the specific needs and challenges of RPA pilots to help ensure that the Air Force can meet and retain required staffing levels to meet its mission; evaluate the viability of using alternative personnel populations, including enlisted or civilian personnel, as RPA pilots to identify whether such populations could help the Air Force meet and sustain required RPA pilot staffing levels; incorporate feedback from RPA pilots by using existing mechanisms or by collecting direct feedback from RPA pilots; analyze the effects of being deployed-on-station to determine whether there are resulting negative effects on the quality of life of RPA pilots and take responsive actions as appropriate; and include the career field effect of being an RPA pilot in AFPC’s analysis to determine whether and how being an RPA pilot is related to promotions and determine whether the factors AFPC identified in its analysis of Line of the Air Force officers are also related to RPA 
pilot promotions. We provided a draft of this report to DOD for review and comment. The Deputy Director of Force Management Policy, Headquarters Air Force, provided written comments in response to our report. In its written comments, the Air Force concurred with four of our seven recommendations and partially concurred with the remaining three recommendations. The Air Force’s written comments are reprinted in their entirety in appendix III. The Air Force also provided technical comments that we have incorporated into this report where applicable. In concurring with our first three recommendations, the Air Force stated that it has an effort underway to update crew ratios for RPA units, which it expects to complete by February 2015; that a minimum crew ratio would indicate when a request for forces the Air Force receives would pose risks to the mission and safety, and that it expects to respond to our recommendation by February 2015; and that it will develop a recruiting and retention strategy that is tailored to the specific needs and challenges of RPA pilots, which it expects to complete by October 2015. In concurring with our fifth recommendation, to incorporate feedback from RPA pilots by using existing mechanisms or by collecting direct feedback from RPA pilots, the Air Force stated that if it determines that it is appropriate to collect such feedback, it will do so using a survey. We continue to believe that collecting this feedback could be a useful tool for the Air Force to develop a tailored recruiting and retention strategy and to inform actions it may take related to training and developing RPA pilots. The Air Force partially concurred with our fourth recommendation that it evaluate the viability of using alternative personnel populations as RPA pilots and determine if such populations could help the Air Force meet and sustain required RPA pilot staffing levels. 
The Air Force stated that it considered assigning enlisted personnel as RPA pilots, but it decided that the responsibilities of piloting an RPA were commensurate with the rank of officers instead. At the same time, the Air Force stated that it has initiated a review of some of its missions and the ranks needed to execute those missions and that it may consider using enlisted airmen in this review. In our report, we acknowledge that the Air Force had previously considered using enlisted personnel as RPA pilots and that the Air Force decided instead to use officers. However, it is not clear what steps the Air Force took in its previous considerations. We think it is a positive step that the Air Force has initiated a review of Air Force missions and rank requirements to execute those missions. Considering the significant role that RPAs play in the Air Force mission, we believe the Air Force should include RPA pilots in its review to evaluate whether enlisted personnel as well as civilians may provide a means for the Air Force to address shortfalls in the staffing levels of RPA pilots. In addition, the Air Force partially concurred with our sixth recommendation that it analyze the effects of being deployed-on-station to determine if there are resulting negative effects on RPA pilots’ quality of life and take responsive actions as appropriate. In response to our recommendation, the Air Force stated that it had studied the effects that being deployed-on-station has on RPA pilots and that many of the stressors it identified in these studies were related to low unit staffing levels. In addition, the Air Force asked us to focus our recommendation on an evaluation of these studies. We acknowledge in our report that the Air Force evaluated the psychological condition of RPA personnel who are deployed-on-station in studies it published in 2011 and 2013. 
We also acknowledge that the primary recommendation these studies make is to increase staffing levels in RPA units to alleviate the stress of RPA personnel. As we discussed in our report, RPA units have been understaffed and thus increasing staffing levels may be appropriate. However, our finding is focused on whether being deployed-on-station has negative effects on quality of life that are not attributable to the stressors that are related to low unit-staffing levels. We think that a more complete understanding of the effects of being deployed-on-station that are not attributable to low staffing levels will help the Air Force determine if responsive actions are needed that go beyond increasing staffing levels. Further, the 2011 report prepared for the Air Force Medical Support Agency that focuses more directly on the concept of being deployed-on-station is a constructive source of input for the Air Force to understand any negative effects of being deployed-on-station. However, it is not clear that an evaluation of this report and the 2011 and 2013 studies will provide the Air Force with a complete understanding of this new deployment concept’s consequences for its personnel. Finally, the Air Force partially concurred with our seventh recommendation that it include the career field effect of being an RPA pilot into AFPC’s promotion analysis to determine if being an RPA pilot is related to promotions and determine if other factors that AFPC identified in its analysis of Line of the Air Force officers are also related to RPA pilot promotions. The Air Force stated that the RPA career field is a subsection of the Line of the Air Force and therefore factors related to promotions identified in analysis of the Line of the Air Force are directly related to RPA pilot promotions. In our report, we acknowledge that the Air Force identified factors related to promotion outcomes for officers in the Line of the Air Force competitive category. 
However, as we discussed in the report, not including the career field effect of being an RPA pilot as a factor in its analysis has several consequences. First, AFPC does not know whether or how being an RPA pilot is related to promotions for these pilots. Second, the Air Force cannot determine whether the factors that it found that are related to promotions for the Line of the Air Force competitive category have the same relationship with RPA pilot promotions. Third, the information the Air Force included in a report to Congress in August 2013 on education, training, and promotion rates of RPA pilots may not be accurate. Finally, it is unclear whether the Air Force is targeting actions to increase RPA promotion rates at the right factors and thus its actions may have limited effect. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Air Force. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To understand the context of each of the issues in our review, we analyzed various Department of Defense (DOD) and Air Force documents. This documentation included a report to Congress by the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics on the future of unmanned aerial systems and a report by the Air Force Audit Agency on the Air Force’s personnel management of pilots flying RPAs. We also reviewed reports that we previously issued that address topics related to our review including a 2010 report on DOD planning, training, and doctrine for unmanned aircraft systems. 
To evaluate the extent to which the Air Force uses a strategic human-capital approach to manage remotely piloted aircraft (RPA) pilots, we used a model of human-capital management GAO had previously developed that specifies leading practices that high-performing organizations exhibit in their strategic human-capital management. The Model for Strategic Human Capital Management is intended to help federal organizations use their human capital effectively and integrate human-capital considerations into daily decision making and planning for the program results they wish to accomplish. It identifies concepts and leading practices that are organized into strategic human-capital management cornerstones, including strategic human-capital planning; acquiring, developing, and retaining talent; and creating results-oriented cultures. To adapt the criteria to the context of this review, we reviewed the model to identify specific practices that organizations can use to make progress associated with each of the four strategic human-capital management cornerstones. We then analyzed each practice to determine whether it was appropriate and relevant to both the RPA pilot workforce and the military context overall. After identifying the list of practices, we discussed our adaptation with Air Force officials, who agreed they were appropriate and relevant and provided points of contact for obtaining information on each practice. We interviewed officials from Headquarters Air Force offices, including the Office of Manpower, Personnel, and Services Policy and the Office of Operations, Plans, and Requirements Policy, to gather their perspectives and information on practices across all four cornerstones. From these offices, we obtained and analyzed documentation, including strategic DOD and Air Force guidance and data on personnel levels, recruiting, incentive pays, and attrition rates for RPA pilots. 
In addition, we interviewed knowledgeable officials from the Office of the Under Secretary of Defense for Military Personnel Policy on the Air Force’s use of incentives to recruit and retain RPA pilots. We collected perspectives from RPA pilots and RPA unit commanders on the Air Force’s strategic human-capital planning practices, including the effects of those practices on their training, professional development, quality of life, and retention, as well as any efforts the Air Force has made to solicit feedback from and communicate about key issues with RPA pilots. We also interviewed knowledgeable officials from the Air Force Personnel Center on practices related to results-oriented cultures. Furthermore, we compared the perspectives and documentation we collected to the GAO criteria and held discussions with Air Force officials to discuss instances in which the Air Force’s management actions were not consistent with these criteria. We discussed challenges raised by the RPA pilots and unit commanders with whom we spoke, including any efforts in place to address the challenges. To evaluate the extent to which the Air Force has addressed concerns, if any, about the working conditions of RPA pilots that may affect their quality of life, we identified and analyzed criteria included in DOD’s 2009 and 2004 Quadrennial Quality of Life Reviews in which DOD expresses its commitment to provide servicemembers with the best quality of life possible through support and development of responses to emerging servicemember needs. DOD has broadly defined quality of life to include such factors as morale, health and wellness, and work-life balance. To understand these reviews and the commitments, we obtained information from the Office of the Deputy Assistant Secretary of Defense for Military Community & Family Policy, which is responsible for conducting the department’s Quadrennial Quality of Life Reviews. 
To understand challenges in the working conditions that RPA pilots may face, we analyzed studies that the Air Force conducted to assess the stress and mental-health condition of RPA personnel, including RPA pilots. In particular, we reviewed and analyzed two studies conducted by the Air Force School of Aerospace Medicine published in 2011 and 2013, which identified the sources of stress of RPA personnel. The studies’ results were based on self-administered surveys of Air Force RPA personnel, including pilots, from squadrons in Air Combat Command, Air Force Special Operations Command, the Air National Guard, and the Air Force Reserve. The surveys were administered in 2011 and 2012 with response rates from RPA squadrons that ranged from 24 to 98 percent. The surveys included questions related to exhaustion, distress, and post-traumatic stress disorder. We also interviewed the researchers who conducted these studies to clarify our understanding of their methods, findings, and recommendations to alleviate the stress of RPA personnel. In addition, we analyzed a report prepared for the Air Force Medical Support Agency that describes the defining characteristics of being deployed-on-station and examines the challenges that personnel who are deployed-on-station face. To obtain a firsthand account of the challenging working conditions that RPA pilots face, we conducted focus groups with pilots at Beale, Cannon, and Creech Air Force Bases. We also interviewed leadership officials at these bases to obtain their perspective on the challenges that RPA pilots in their units face. Moreover, we interviewed mental-health professionals at each of the bases we visited to obtain their perspectives on the working conditions of RPA pilots and any effects on their quality of life. 
To evaluate actions the Air Force has taken to address the challenging working conditions RPA pilots face, we analyzed the recommendations that were included in the studies conducted by the Air Force School of Aerospace Medicine and the report prepared for the Air Force Medical Support Agency. We also obtained and analyzed documentation provided by the Air Force Medical Support Agency that describes actions the Air Force has taken in response to these recommendations and we interviewed officials from this agency to further understand these actions. Furthermore, we interviewed and obtained information from officials in the Air Force Office of Manpower, Personnel and Services Policy and the Office of Operations, Plans and Requirements Policy to determine any actions the Air Force has taken to alleviate the challenging working conditions that RPA pilots face. We also obtained information from commanders and mental-health professionals at each of the bases we visited to understand actions they have taken to address the challenging working conditions that RPA pilots face and that affect their quality of life. To evaluate the extent to which the Air Force analyzes the promotion rates of RPA pilots, we applied criteria from common statistical practices, which indicate that when analyzing relationships between selected factors and a given outcome researchers should account for other key factors that could also explain that relationship. To understand the context of Air Force officer promotions, we reviewed relevant laws and Air Force guidance including the Defense Officer Personnel Management Act and Air Force Instruction 36-2501. To identify the promotion rates of Air Force RPA pilots and how their promotion rates compared to officers in other careers in the Air Force, we analyzed promotion-rate data for officers in the Line of the Air Force competitive category who were promoted “in-the-zone” to the ranks of major, lieutenant colonel, and colonel. 
We analyzed data from 2006 to the most-recently available data, which for promotion to major and colonel was 2012 and for promotion to lieutenant colonel was 2013. We focused on Line of the Air Force officers, because RPA pilots are included in this category. We focused on officers promoted in-the-zone because this zone is the point in an officer’s career when his or her opportunity for promotion is the highest. We focused on rates of promotion to the ranks of major, lieutenant colonel, and colonel because the promotion rates from second lieutenant to first lieutenant and from first lieutenant to captain are nearly 100 percent, and hence the first competitive promotion opportunity for an Air Force officer occurs as he or she becomes eligible for promotion to the rank of major. In addition, we did not evaluate promotion rates above colonel because no RPA pilots have been promoted to the general officer ranks in the Air Force yet. To identify the percentile of RPA pilot promotion rates compared to other line officer career fields, we analyzed data on the range of promotion rates of active-duty officers from the careers that competed in the promotion zone on each promotion board to the ranks of major, lieutenant colonel, and colonel from 2006 to 2013. For this analysis, the promotion rate of RPA pilots includes the rate for permanent RPA pilots (i.e., RPA pilots and recategorized RPA pilots) as well as temporary RPA pilots (i.e., manned-aircraft pilots serving assignments in RPA squadrons and manned-aircraft pilot training graduates). For this analysis all of the listed career fields are mutually exclusive. That is, if a temporary RPA pilot was identified as an RPA pilot in this analysis, the pilot was not included in the data to calculate promotion rates for other careers such as the manned-aircraft career fields. For each promotion board, officers from between 22 and 33 careers competed for promotion. 
This analysis excludes career fields where fewer than 10 officers were eligible for promotion, because the rate of promotion in these cases is highly sensitive to the outcomes of single individuals. However, we included the results from 8 boards in which fewer than 10 RPA pilots competed for promotion to provide a more-comprehensive account of RPA pilot promotions. The promotion rate that we calculate for these instances should be considered cautiously since the outcome of one or two individuals could have a large effect on the overall rate. Fewer than 10 RPA pilots were eligible for promotion to the rank of lieutenant colonel for the first 2006 board as well as the 2007 and 2008 boards. In addition, fewer than 10 RPA pilots were eligible for promotion to the rank of colonel for the 2006, 2007, 2008, and both of the 2009 promotion boards. We obtained these data from the Air Force Personnel Center (AFPC), and to understand the methods AFPC used to collect, store, and maintain these data, we interviewed officials from AFPC and reviewed documentation they provided, and we found the data to be reliable for our purposes. To evaluate steps the Air Force took to analyze the promotion rates of RPA pilots and the reasons that these rates have been lower than average, we interviewed Air Force officials in headquarters personnel offices as well as AFPC offices. In addition, we evaluated documentation of AFPC’s analysis of officer promotion rates including the results of AFPC’s logistic regression identifying the factors that are related to officer promotion. We also reviewed the August 2013 report that the Air Force provided to Congress on the promotion rates of RPA pilots in which the Air Force identifies reasons for lower promotion rates of RPA pilots. 
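The percentile comparison and the exclusion rule described above can be expressed as a short sketch. The function and board figures below are invented for illustration—they are not actual promotion-board data—but they mirror the approach of ranking a career field’s in-the-zone promotion rate against the other fields that competed on the same board while dropping fields with fewer than 10 eligible officers, whose rates are too sensitive to individual outcomes.

```python
# Hypothetical sketch of the percentile comparison: rank one career
# field's promotion rate against the other fields on the same board,
# excluding fields with fewer than 10 eligible officers.

def promotion_percentile(board, field, min_eligible=10):
    """board maps career field -> (promoted in-the-zone, eligible)."""
    rates = {
        f: promoted / eligible
        for f, (promoted, eligible) in board.items()
        # Drop small fields, but always keep the field of interest.
        if eligible >= min_eligible or f == field
    }
    rate = rates[field]
    below = sum(1 for f, r in rates.items() if f != field and r < rate)
    return 100 * below / (len(rates) - 1)

# Invented example board: (promoted in-the-zone, eligible in-the-zone).
board = {
    "RPA pilot": (6, 12),
    "Manned-aircraft pilot": (80, 100),
    "Intelligence": (65, 90),
    "Logistics": (50, 80),
    "Small field": (3, 4),  # dropped: fewer than 10 eligible officers
}
print(promotion_percentile(board, "RPA pilot"))
```

In this invented example the RPA pilot rate (6 of 12, or 50 percent) is the lowest among the fields that remain after the exclusion, so the field falls at the 0th percentile for that board.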
To identify actions the Air Force took to respond to low RPA pilot promotion rates, we evaluated relevant documentation, including instructions the Secretary of the Air Force has provided to promotion board members since 2008 in which the Secretary communicates the importance of promoting RPA pilots. We also reviewed briefings that Air Force headquarters offices as well as AFPC prepared for the Secretary of the Air Force on additional steps the Air Force took to address low RPA pilot promotion rates. We also analyzed the Air Force's August 2013 report to Congress and additional documentation that the Air Force provided about its plans to raise promotion rates of RPA pilots. As we noted earlier, to obtain the perspectives of RPA pilots related to each of our three objectives, we conducted 10 focus groups that each consisted of between six and nine active-duty RPA pilots during site visits to Beale, Cannon, and Creech Air Force Bases. To conduct these focus groups, we randomly selected RPA pilots to participate, asked them a structured set of questions during meetings that lasted about 90 minutes, and took detailed notes. We then evaluated these notes using content analysis to develop our findings. We discuss the methods we used to select our participants, develop questions, conduct the focus-group meetings, and analyze the information we obtained in the focus groups, and the results of our analysis, in more detail in appendix II. We conducted this performance audit from February 2013 to April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
To obtain the perspectives of pilots of remotely piloted aircraft (RPA) related to each of our three objectives, we conducted 10 focus group meetings with active-duty RPA pilots during site visits to Beale, Cannon, and Creech Air Force Bases. We selected these three bases because more RPA pilots are stationed at them than at other Air Force bases. We specifically included Beale Air Force Base because we wanted to obtain the perspectives of the RPA pilots stationed there who fly the RQ-4 (Global Hawk). In addition, we selected Cannon Air Force Base because we wanted to obtain the perspectives of RPA pilots assigned to the Air Force Special Operations Command. To select specific RPA pilots to participate in our focus groups, we obtained documentation that included lists of the RPA pilots stationed at each base as well as the amount of time each had served flying RPAs, and their ranks. To obtain a variety of perspectives, we randomly selected pilots with various amounts of experience flying RPAs and we included pilots of various ranks in our groups. These groups typically consisted of six to nine participants. To conduct the focus groups, a GAO moderator followed a protocol that included prompts, instructions to the participants, and a set of three questions, each with several follow-up questions. We pretested this protocol at Beale Air Force Base and used it at the remaining two bases. We used the same set of questions from this protocol for each of the 10 focus groups we conducted. These questions are reprinted below. During each focus group, the GAO moderator asked questions related to the topics of our review to participants who, in turn, provided their perspectives on the topics. During the focus-group meetings, three GAO team members took separate sets of detailed notes to document the participants' comments. See table 2 for the complete list of questions and follow-up questions that we asked during our focus groups. 
Following our focus-group meetings, we consolidated our separate sets of detailed notes for each focus group to create a compiled final record of the participant comments from each focus group. To do this, a GAO analyst reviewed the sets of detailed notes and compiled them into a final record for each focus group. A key rule of this compilation was that if one analyst recorded a comment but another did not, we included the material in the final record. To ensure that our compiled final record of each focus group was accurate, a second analyst then reviewed at least 25 percent of each of the final records. In instances where the reviewing analyst identified a discrepancy between the detailed notes and the final record, that analyst corrected the discrepancy and reviewed a higher percentage of the notes for that focus group. Next, we used content analysis to analyze the final records of each focus group to identify themes that participants expressed across all or most of the groups. To do this, three GAO analysts first met to discuss and agree on a preliminary set of themes. We then analyzed an initial set of the records and counted the instances in which we observed these initial themes. We then reconvened as a group to discuss and agree on additional themes to add to our analysis and to consolidate and delete others. We then analyzed our records and made coding decisions. Following the initial analysis by one analyst, a second analyst independently reviewed all of the coding decisions that the first analyst made for each of the records. Where there were discrepancies, the analysts reviewed one another's coding and rationale for their coding decisions and reached a consensus on which codes should be used. See figure 8 for the complete results of our analysis. 
When describing the results of our analysis of our focus groups in this report, we use the term "some," as in "pilots in some focus groups," to report topics that were discussed by RPA pilots in two to four of our focus groups. The information we present from our focus groups accurately captures the opinions provided by the RPA pilots who attended the 10 focus groups at the three Air Force bases we visited. However, these opinions cannot be generalized to all of the RPA pilots at the three Air Force bases we visited or to all RPA pilots in the Air Force. The results of our analyses of the opinions of RPA pilots we obtained during our focus groups are not generalizable because the Air Force bases we selected are not necessarily representative of all of the Air Force bases that contain RPA squadrons and the RPA pilots included in our focus groups are not necessarily representative of all of the RPA pilots in the Air Force. In addition to the contact named above, Lori Atkinson (Assistant Director), Steve Boyles, Ron La Due Lake, Kelly Liptan, James P. Klein, Steven R. Putansu, Michael Willems, Erik Wilkins-McKee, and Amie Steele made key contributions to this report.
Since 2008, the Air Force has more than tripled the number of its active-duty pilots flying RPAs, which is the term the Air Force uses to refer to unmanned aerial systems such as the MQ-1 Predator. Due to increases in demand, RPA pilots have had a significant increase in workload since 2007. GAO was asked to evaluate the Air Force's approach to managing its RPA pilots as well as their quality of life and promotion rates. For this review, GAO evaluated the extent to which the Air Force (1) has used a strategic human-capital approach to manage RPA pilots; (2) has addressed concerns, if any, about the working conditions of RPA pilots that may affect their quality of life; and (3) analyzes the promotion rates of RPA pilots. GAO analyzed personnel planning documents, Air Force studies, and officer promotion data. GAO also interviewed unit commanders at selected Air Force bases and Headquarters Air Force officials and conducted focus groups with RPA pilots. While the results of these focus groups are not generalizable, they provide valuable insights. The Air Force has managed its remotely piloted aircraft (RPA) pilots using some strategic human-capital approaches, such as planning for the different levels of experience that it needs in these pilots. However, it continues to face challenges. High-performing organizations manage human capital to identify the right number of personnel and to target the right sources to fill personnel needs. In 2008, the Air Force determined the optimum number of RPA pilots—the crew ratio—for some units, but it did not account for all tasks these units complete. Air Force officials stated that, as a result, the crew ratio is too low, but the Air Force has not updated it. Air Force guidance states that low crew ratios diminish combat capability and cause flight safety to suffer, but the Air Force has operated below its optimum crew ratio and it has not established a minimum crew ratio. 
Further, high work demands on RPA pilots limit the time they have available for training and development and negatively affect their work-life balance. In addition, the Air Force faces challenges recruiting officers into the RPA pilot career and may face challenges retaining them in the future. High-performing organizations tailor their recruiting and retention strategies to meet their specific mission needs, but the Air Force has not tailored its approach to recruiting and retaining RPA pilots or considered the viability of using alternative personnel such as enlisted personnel or civilians. Without developing an approach to recruiting and retaining RPA pilots and evaluating the viability of using alternative personnel populations for the RPA pilot career, the Air Force may continue to face challenges, further exacerbating existing shortfalls of RPA pilots. Moreover, the Air Force has not used direct feedback from RPA pilots, via existing mechanisms or otherwise, to develop its approach to managing challenges related to recruiting, retention, training, and development of RPA pilots. The Air Force has taken some actions to address potentially difficult working conditions RPA pilots face, but it has not fully analyzed the challenge pilots face in balancing their warfighting roles with their personal lives. RPA pilots operate RPAs from bases in the United States and live at home; thus they experience combat alongside their personal lives—known as being deployed-on-station—which RPA pilots stated negatively affects their morale. While the Department of Defense has committed to maintaining high morale for servicemembers, the Air Force has not fully analyzed the effects on morale of being deployed-on-station, and thus it does not know whether it needs to take actions in response. The Air Force monitors RPA pilot promotion rates, but it has not analyzed factors that may relate to their low promotion rates. 
Statistical principles call for researchers to account for potential key factors in analysis because when they omit key factors, the relationships between other factors may not be accurately estimated. The Air Force analyzed promotions across a group of officers, including RPA pilots, and found factors that related to promotions in general. However, the Air Force has not analyzed the factors related to RPA pilots' promotions specifically and, as a result, it does not have the information to determine what factors may affect their promotions. Consequently, the Air Force may not be targeting actions it is taking to raise RPA pilot promotion rates at the appropriate factors, and information it has reported to Congress may not be accurate. GAO recommends that the Air Force update optimum crew ratios; establish a minimum crew ratio; develop a recruiting and retention strategy; evaluate using alternative personnel populations to be pilots; use feedback from RPA pilots; analyze the effects of being deployed-on-station; and analyze the effect that being an RPA pilot has on promotions. The Air Force concurred with four recommendations and partially concurred with the remaining three recommendations.
Iraq possesses the third largest oil reserve in the world, estimated at a total of 115 billion barrels. As shown in figure 1, only Saudi Arabia and Iran have larger proved oil reserves. Iraq's ability to extract these reserves has varied widely over time, and Iraq's oil infrastructure has deteriorated over several decades due to war damage, inadequate maintenance, and the limited availability of spare parts, equipment, new technology, and financing. In addition, Iraq's crude oil production and export capacities were further affected by considerable looting after Operation Iraqi Freedom and continued attacks on crude oil and refined product pipelines. Nonetheless, crude oil production and exports have recovered since 2003. As of June 2008, Iraq's crude oil exports averaged 2.01 million barrels per day (mbpd), according to Iraqi oil export receipt data (see fig. 2). Iraq generally receives a discounted export price for its crude oil, in part due to its relatively lower quality compared with crude oil sales of the U.S. West Texas Intermediate (WTI) and Brent—benchmarks for world oil prices. Figure 3 shows Iraqi crude oil export prices in comparison to world benchmark prices. According to data on Iraqi crude oil export receipts reported by the Central Bank of Iraq (CBI) for January through June 2008, Iraqi crude oil was priced at an average of $96.88 per barrel. During this same period, WTI and Brent prices averaged $110.95 and $109.17 per barrel, respectively. On average, the CBI price was 12.9 percent and 12.7 percent less than the WTI and Brent prices, respectively, from January 2007 through June 2008. The following section provides information on Iraq's revenues from 2005 through 2007 and estimated revenues for 2008. From 2005 through 2007, the Iraqi government generated an estimated $96 billion in cumulative revenues. 
This estimate is based on actual crude oil export sales of $90.2 billion as reported by the Central Bank of Iraq and Iraqi taxes, interest, and other revenues of $5.7 billion as estimated by the IMF. Ninety-four percent of the total estimated revenues came from the export sale of crude oil. The Central Bank of Iraq export oil revenue data provided by Treasury are based on actual export oil receipts. These data are generally consistent with estimates of Iraq's crude oil export sales reported by the International Advisory and Monitoring Board (IAMB), the IMF, and the EIA. (See app. II for data on Iraqi crude oil export revenue from these different sources.) Crude oil export revenues increased an average of about 30 percent each year from 2005 to 2007 due to increases in oil exports and prices. For 2008, we estimate that Iraq could generate between $73.5 billion and $86.2 billion in total revenues. Table 1 displays the projected total revenues, based on six scenarios projecting oil export revenues by varying the price and volume of exports. These scenarios assume that tax and other revenues (a small portion of total revenues) will be $6.9 billion for 2008 but that export oil revenues will vary based on the price Iraq receives for its oil and the volume it exports. As a result, we project that Iraq could generate between $66.5 billion and $79.2 billion in oil revenues in 2008, more than twice the average annual amount Iraq generated from 2005 through 2007. These scenarios use the actual prices Iraq received for its oil exports over the first 6 months of 2008, as reported by the Central Bank of Iraq. For the last 6 months of 2008, we varied the volume exported from 1.89 to 2.01 mbpd and the price received from $96.88 to $125.29 per barrel. For a detailed discussion of these scenarios, see appendix III. 
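The scenario mechanics can be sketched as follows. This is a simplified illustration using the price and volume bounds cited above; it does not reproduce the exact assumptions of the six scenarios in appendix III, so its totals differ somewhat from the reported range.

```python
# Simplified sketch of how the oil-revenue scenarios vary price and
# volume for the second half of 2008. The first-half figures are the
# actuals cited in the text; the scenario grid below is illustrative
# and does not reproduce the exact assumptions of appendix III.

MBPD = 1_000_000  # barrels per day in one "million barrels per day"

def half_year_revenue(mbpd, price_per_barrel, days):
    """Oil export revenue over a given number of days, in dollars."""
    return mbpd * MBPD * days * price_per_barrel

# First half of 2008: actual average volume and price from CBI receipts.
first_half = half_year_revenue(2.01, 96.88, days=182)

# Second half: vary volume (1.89 to 2.01 mbpd) and price ($96.88 to $125.29).
for volume in (1.89, 2.01):
    for price in (96.88, 125.29):
        total = first_half + half_year_revenue(volume, price, days=184)
        print(f"{volume} mbpd at ${price}/bbl: ${total / 1e9:.1f} billion")
```

Each combination of volume and price yields one oil-revenue projection; adding the assumed $6.9 billion in tax and other revenues gives the corresponding total-revenue scenario.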
The following section provides information on the Government of Iraq's estimated expenditures from 2005 through 2007, expenditure ratios from 2005 through 2007, and estimated expenditures for 2008. From 2005 through 2007, the Iraqi government spent an estimated $67 billion on a variety of operating and investment activities, as reported by the Ministry of Finance. As displayed in table 2, Iraq's expenditures can be divided between operating and investment expenditures. Operating expenses include salaries and pensions, operating goods and services, interest payments, subsidies to public and private enterprises, social benefits, and other transfers. Investment expenses include capital goods and capital projects such as structures, machinery, and vehicles. Our analysis of Ministry of Finance data on Iraqi expenditures from 2005 through 2007 found the following: Iraq spent 90 percent of the $67 billion on operating expenses and a smaller portion of these funds—about 10 percent—on investment expenses. Iraq's dollar expenditures grew by about 23 percent per year, from $17.6 billion to $26.6 billion, largely due to increased spending on Iraqi security personnel. (See app. V for details on expenditures by the security ministries and other selected ministries.) However, annual average growth rates computed in Iraqi dinars were 13 percent per year. Growth rates in dinars may be more informative since Iraq spends its budget in dinars. Using dollar-denominated expenditures inflates the growth rates because the dinar appreciated 19 percent against the dollar in 2007. The Iraqi government spent about $947 million, or 1 percent of its total expenditures, on the maintenance of Iraqi- and U.S.-funded investments. These expenses include maintenance of roads, bridges, vehicles, buildings, water and electricity installations, and weapons. Investment expenditures increased at an annual rate of 42 percent in Iraqi dinars. 
However, most of this increase occurred in 2007 and was due primarily to the increase in investment by the Kurdistan Regional Government (KRG), not by the central ministries responsible for providing critical services to the Iraqi people, including oil, water, electricity, and security. For example, of the $1.8 billion increase in investment expenditures in 2007, $1.3 billion, or more than 70 percent, was due to a reported increase in KRG investment. Investment by the central ministries declined in 2007. Although Iraq's total expenditures grew from 2005 through 2007, Ministry of Finance data show that the Iraqi government was unable to spend all of the funds it budgeted. Expenditure ratios are defined as actual expenditures for a ministry or activity divided by the budgeted amount for this ministry or activity. This ratio is a preliminary measure of how well the government is able to implement its intentions and priorities. Figure 4 displays our analysis of Iraqi expenditure ratios for the 2005 through 2007 budgets. Specifically, we found: While Iraq's total expenditures increased from 2005 through 2007, Iraq spent a declining share of its budget allocations, from 73 percent in 2005 to 65 percent in 2007. In each year, Iraq spent a greater percentage of its operating budget, including salaries, than its investment budget. For example, in 2007, the Iraqi government spent 80 percent of its $28.9 billion operating budget and 28 percent of its $12.2 billion investment budget. The central ministries, responsible for providing essential services to the Iraqi people, spent a smaller share of their investment budgets than the Iraqi government as a whole. Further, their investment expenditure ratios declined from 14 percent in 2005 to 11 percent in 2007. Specifically, while the central ministries budgeted $5.7 billion and $8.1 billion for investments in 2005 and 2007, they spent $825 million and $896 million, respectively. 
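The expenditure ratio is a simple quotient, and applying it to the central ministries' investment figures cited above reproduces the reported percentages:

```python
# The expenditure ratio defined in the text: actual expenditures divided
# by the budgeted amount. Applying it to the central ministries'
# investment figures reproduces the reported 14 and 11 percent.

def expenditure_ratio(actual, budgeted):
    """Share of the budgeted amount actually spent, as a percentage."""
    return 100 * actual / budgeted

# Central ministries' investment budgets and actual spending, in dollars.
print(round(expenditure_ratio(825e6, 5.7e9)))   # 2005: prints 14
print(round(expenditure_ratio(896e6, 8.1e9)))   # 2007: prints 11
```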
In 2008, we estimate that the Iraqi government could spend between $35.3 billion and $35.9 billion of its $49.9 billion 2008 budget. This estimate is based on the assumption that the expenditure ratio in 2008 will be the same as the average expenditure ratio from 2005 to 2007, except for expenditures for war reparations (5 percent of estimated oil export revenues), which will vary with differing scenarios for oil exports. This estimate implies a more than 21-percent increase in dinar expenditures in 2008, compared with the annual average of 13 percent over the past 3 years. However, the Iraqi government is considering a supplemental budget for 2008. According to Treasury, Iraq's Ministry of Finance introduced a $22 billion supplemental budget, including about $8 billion dedicated to capital spending, which would bring the total 2008 budget allocation to more than $70 billion. This supplemental was submitted to the Council of Representatives in July, according to Treasury. However, based on past expenditure performance, it is unclear whether Iraq will be able to spend this sizable budget. Iraq also has outstanding foreign liabilities. In July 2008, Treasury officials estimated that Iraq will owe between $50 billion and $80 billion in bilateral foreign debt. In addition, Iraq owes $29 billion in war reparations to Kuwait. Oil revenues in Iraq are currently immune from garnishment, liens, and other legal judgments, but this immunity will expire in December 2008 absent further UN Security Council action. As of December 31, 2007, the Iraqi government had financial deposits of $29.4 billion held in the Development Fund for Iraq (DFI) at the New York Federal Reserve Bank, central government deposits at the Central Bank of Iraq (CBI), and central government deposits in Iraq's commercial banks, which include state-owned banks such as Rafidain and Rasheed (see table 3). The data for the DFI financial deposits are based on a July 2008 audited statement by the IAMB. 
The financial deposits in the Central Bank of Iraq and Iraq's commercial banks come from the IMF's International Financial Statistics, as of July 2008. The financial deposits at the end of 2007 result from an estimated budget surplus of about $29 billion from 2005 to 2007 and unverified balances prior to 2005. As displayed in table 4, we estimate that Iraq's budget surplus for 2008 could range from $38.2 billion to $50.3 billion, based on the six scenarios we used to project export oil revenues by varying the price and volume of exports. (See app. III for the six scenarios projecting export oil revenues.) This estimate is based on the assumption that the expenditure ratio in 2008 will be the same as the average expenditure ratio from 2005 to 2007, except for expenditures for war reparations (5 percent of estimated oil export revenues), which will vary with differing scenarios for oil exports. However, as previously noted, Iraq is considering a supplemental budget of $22 billion for 2008. If approved and then spent, the proposed budget supplemental would reduce the projected surplus. In addition, the IMF estimates that the Central Bank of Iraq had, as of December 31, 2007, about $31.4 billion in gross foreign exchange reserves. Gross foreign exchange reserves help support Iraq's monetary policy and back the domestic currency to maintain confidence in the Iraqi dinar and control inflation. It is important to note that adding the $31.4 billion in gross foreign exchange reserves to the Iraqi government's $29.4 billion in financial deposits may result in double counting. For example, if the Iraqi government uses $1 billion to pay for imported food, both its cash balances and gross foreign exchange reserves would decrease by $1 billion. While the amount the central bank may hold in reserve is not fixed, the IMF stand-by agreement with Iraq specifies a floor of $21.1 billion. 
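The arithmetic behind the projected surplus range can be checked directly from the revenue and expenditure estimates given earlier. The low-revenue scenario pairs with the lower expenditure estimate because war reparations (5 percent of oil export revenues) shrink along with revenues.

```python
# The projected 2008 surplus range follows directly from the revenue
# estimates ($73.5 billion to $86.2 billion) and the paired expenditure
# estimates ($35.3 billion to $35.9 billion) cited in the text.

def surplus(revenues, expenditures):
    """Budget surplus in billions of dollars."""
    return revenues - expenditures

low = surplus(73.5, 35.3)    # low-revenue, low-expenditure scenario
high = surplus(86.2, 35.9)   # high-revenue, high-expenditure scenario
print(f"${low:.1f} billion to ${high:.1f} billion")  # $38.2 billion to $50.3 billion
```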
CBI’s gross foreign exchange reserves have, on average, increased by more than $7 billion per year from 2005 through 2007. Since fiscal year 2003, Congress has appropriated about $48 billion to U.S. agencies to finance stabilization and reconstruction efforts in Iraq, including developing Iraq’s security forces, enhancing Iraq’s capacity to govern, and rebuilding Iraq’s oil, electricity, and water sectors, among others. As of June 2008, of the $48 billion in appropriated U.S. funds from fiscal years 2003 through 2008, about $42 billion (88 percent) had been obligated and about $32 billion (68 percent) had been spent. Over two-thirds of the $32 billion spent, or $23.2 billion, have supported reconstruction and stabilization activities in the security, oil, water, and electricity sectors. Table 5 compares the allocations and spending of comparable activities by the United States and Iraq in these sectors. The Iraqi government developed its first annual budget in 2005. From May 2003 through June 2004, the Coalition Provisional Authority (CPA) was responsible for spending Iraqi oil revenues for the benefit of the Iraqi people. We previously reported that the CPA allocated approximately $7 billion in Iraqi funds for relief and reconstruction projects, primarily for the import of refined fuel products, security, regional programs, and oil and power projects. Iraq allocated $28 billion between 2005 and 2008 for the four sectors, and U.S. agencies allocated $33.4 billion from fiscal years 2003 to June 2008. Allocations in the security sector account for $22.5 billion of the U.S. amount. As of June 2008, the United States spent 70 percent, or $23.2 billion, of the amount it allocated for these four sectors. In contrast, as of April 2008, Iraq spent 14 percent, or $3.9 billion, of the amount it allocated for similar activities in these sectors. The security sector received the largest share of funds from the United States and Iraq. U.S. 
government, coalition, and international agencies have identified a number of factors affecting the Iraqi government’s ability to spend more of its revenues on capital investments intended to rebuild its infrastructure. These factors include Iraq’s shortage of trained staff, weak procurement and budgeting systems, and violence and sectarian strife. First, these officials have observed the relative shortage of trained budgetary, procurement, and other staff with the necessary technical skills as a factor limiting the Iraqi government’s ability to plan and execute its capital spending. Officials report a shortage of trained staff with budgetary experience to prepare and execute budgets and a shortage of staff with procurement expertise to solicit, award, and oversee capital projects. Second, weak procurement, budgetary, and accounting systems are of particular concern in Iraq because these systems must balance efficient execution of capital projects while protecting against reported widespread corruption. Third, these officials have noted that violence and sectarian strife remain major obstacles to developing Iraqi government capacity, including its ability to execute budgets for capital projects. The high level of violence contributes to a decrease in the number of workers available, can increase the amount of time needed to plan and complete capital projects, and hinders U.S. advisors’ ability to provide the ministries with assistance and monitor capital project performance. Since 2005, U.S. agencies have been working with the Iraqis to assist the government in addressing challenges in executing its capital budgets. As we have previously reported, the United States has funded efforts since 2005 to build the capacity of key civilian ministries and security ministries to improve the Iraqi government’s ability to effectively execute its budget for capital projects. 
In 2005 and 2006, the United States provided funding for programs to help build the capacity of key civilian ministries and the Ministries of Defense and Interior. Ministry capacity development refers to efforts and programs to advise and help Iraqi government employees develop the skills to plan programs, execute their budgets, and effectively deliver government services such as electricity, water, and security. We found multiple U.S. agencies leading individual efforts and recommended that Congress consider conditioning future appropriations on the completion of an integrated strategy for U.S. capacity development efforts. In commenting on a draft of this report, Treasury stated that Treasury and Embassy Baghdad are working to improve the pace of Iraqi budget execution and the ability to evaluate whether Iraqi capital spending achieves its intended impact. To improve budget reporting, transparency, and accountability, the U.S. and Iraqi governments restarted the Iraq Financial Management Information System (IFMIS) on July 5, 2008, with the expectation that IFMIS will be operational in all 250 Iraqi spending units by early 2009. In January 2008, we reported that the U.S. Agency for International Development began IFMIS in 2003, experienced significant delays, and suspended the system in June 2007. Iraq, with the third largest oil reserve in the world, has benefited from the recent rise in oil prices and generated billions of dollars in revenues. In 2008, Iraq will likely earn between $67 billion and $79 billion in oil sales—at least twice the average annual amount Iraq generated from 2005 through 2007. This substantial increase in revenues offers the Iraqi government the potential to better finance its own security and economic needs. We provided a draft of this report to the Departments of State, the Treasury, and Defense for review. We received written comments from Treasury, which we have reprinted in appendix VI. 
Treasury agreed with the findings of this report. Treasury stated that the report accurately highlights that Iraq’s revenues have grown substantially in recent years and presents a credible picture of Iraq’s cumulative budget surpluses and expected 2008 budget surplus. In addition, Treasury stated that the increase in Iraqi revenues places the Government of Iraq in a stronger position to ultimately shoulder the full burden of its development, reconstruction, and security programs. Treasury noted that Iraq has adequate funds to make and maintain capital investments that deliver services and create conditions that foster economic growth. Although Iraq’s budget surplus is likely to grow significantly over the course of 2008, Treasury stated that the Government of Iraq still needs to improve the effectiveness of its budget execution and accountability for Iraqi funds. Treasury commented that the pace of spending has been held back by various factors, including deficiencies in capacity and security. Treasury also provided technical comments, which we incorporated as appropriate. State provided technical comments, which we incorporated as appropriate. DOD did not provide comments. We are sending copies of this report to interested congressional committees. We will also make copies available to others on request. In addition, this report is available on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Joseph A. Christoff, Director, International Affairs and Trade, at (202) 512-8979 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. In this report, we discuss (1) Iraq’s estimated revenues from 2005 through 2008, (2) Iraq’s estimated expenditures from 2005 through 2008, (3) Iraq’s financial deposits through 2007 and budget surpluses, (4) U.S. 
cumulative expenditures on stabilization and reconstruction activities in Iraq since 2003, and (5) factors affecting Iraq’s efforts to accelerate spending. This report builds on GAO’s extensive body of work on Iraq, including our May 2007 assessment of reconstruction efforts in rebuilding Iraq’s oil and electricity sector, our January 2008 assessment of Iraq’s budget execution, and our June 2008 assessment on the progress made in meeting key goals in The New Way Forward. To complete this work, we analyzed relevant data, reviewed U.S. agency and International Monetary Fund (IMF) documents, and interviewed officials from the Departments of State, Defense, and the Treasury; Department of Energy’s Energy Information Administration (EIA); and the IMF. We also reviewed translated copies of Iraqi documents, including budget, capital spending, and Central Bank of Iraq export oil receipts data. We provided drafts of the report to the Departments of the Treasury, State, and Defense. We received formal comments from the Department of the Treasury, which are included in appendix VI. We conducted this performance audit from May to August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To estimate Iraq’s budget revenues from 2005 through 2007, we used data on export oil revenues and added estimates for other revenues. Crude oil export revenues are based on export oil receipts data from the Central Bank of Iraq (CBI) provided by the Department of the Treasury. The data account for all export transactions including amount paid, exported volume, price charged, date of shipment, payment date, and destination. 
The transactions are recorded on a daily basis. Monthly prices are calculated by dividing the total revenue by the total output from that month; export volume is calculated by aggregating the output from all the transactions from each month. We found that the data were sufficiently reliable to present Iraqi oil export revenue as part of estimates of Iraqi revenues from 2005 through 2007. We determined that the 2004 budget revenue and expenditure data were not reliable and did not include these data in our review. To determine Iraq’s tax and other revenues, we added preliminary estimates of net domestic revenues from oil-related public enterprises, taxes, and other revenues as reported in IMF’s stand-by arrangement with Iraq. We interviewed IMF officials and made comparisons to other available sources to determine the reliability of those estimates. We found that the data were sufficiently reliable for the purpose of our analysis. We also projected total revenues for 2008 by forecasting export oil revenues and added those to IMF’s forecast of net revenues from oil-related public enterprises and taxes and other revenues. We developed six alternative scenarios for export oil revenues. See appendix III for the underlying assumptions about prices and export volumes for each scenario. To provide detailed information on the Iraqi government’s estimated expenditures, we reviewed Iraqi official Ministry of Finance monthly and annual budget and expenditure data for fiscal years 2005 through 2008, which were provided by Treasury. We used Iraq dinar-dollar exchange rates to convert dinar budget and expenditure figures to dollars. To provide a preliminary view of spending trends for the 3-year period 2005-2007, we calculated annual average growth rates using an ordinary least squares regression technique.
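The growth-rate technique described above can be sketched as follows. The spending figures here are illustrative placeholders, not the actual Ministry of Finance data:

```python
import math

# Illustrative dinar-denominated annual expenditures (trillions of dinars).
# These are made-up values for the sketch, not Ministry of Finance figures.
years = [2005, 2006, 2007]
spending = [26.0, 31.0, 33.5]

# Regress ln(spending) on year; the OLS slope b approximates the average
# annual growth rate, since ln(s_t) = a + b*t implies s grows by ~e^b - 1.
x = [y - years[0] for y in years]
ly = [math.log(s) for s in spending]
n = len(x)
xbar, ybar = sum(x) / n, sum(ly) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, ly)) / sum(
    (xi - xbar) ** 2 for xi in x
)
annual_growth = math.exp(slope) - 1
print(f"estimated average annual growth rate: {annual_growth:.1%}")
```

Fitting the regression in logarithms, rather than averaging year-over-year changes, smooths out single-year spikes, which is why an OLS-based average is a common choice for short expenditure series.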
Although we computed these growth rates for both dollar- and dinar-denominated spending, we believe that growth rates in dinars are more informative because actual expenditures are made in dinars. Using dollar-denominated expenditures inflates the growth rates due to the 19-percent appreciation of the dinar against the dollar during this period. We did not use the special reports developed by the Ministry of Finance, which include Iraqi commitments to spend as well as actual expenditures, for two reasons: (1) Treasury Department officials stated in our meeting with them that the special reports contain unreliable data, and (2) the special reports do not define commitments, measure them, or describe how or when these commitments would result in actual expenditures. In addition, our review of these special reports shows inconsistent use of poorly defined budget terms, as well as columns and rows that do not add up. Beginning in 2007, the government of Iraq adopted a new budget classification to comply with an IMF requirement. To compare the same expenditure categories over time, we re-grouped some sub-categories, as explained in appendix IV. Detailed breakdowns of the goods and services category are not available for individual ministries beginning in 2007. For the three sub-categories goods, services, and maintenance, the percentage shares were calculated for the years 2005 and 2006. Although we included the latest available 2008 expenditure figures in our tables, we did not use 2008 to calculate growth rates or shares of total expenditures for the 2005 through 2007 period. To provide some insight into how well the Iraqi government was able to implement its intentions and priorities, we constructed an expenditure ratio: actual expenditures divided by the budgeted amount for that activity or ministry.
This does not capture the quality or effectiveness of expenditures, but only whether the government was able to spend the money it had budgeted. Treasury officials informed us that their analysis indicated that official Ministry of Finance data were sufficiently reliable. In addition to our interviews of cognizant officials, we examined ministry data and compared monthly and annual data for internal consistency. Although we did not independently verify the precision of Iraqi expenditure data for 2005 through 2008, we believe that they are sufficiently reliable for the purposes of our report. However, we found that the data for 2004 were not sufficiently reliable and did not use them in our report. To identify Iraqi financial deposits as of the end of 2007, we reviewed IMF documents and interviewed IMF and Treasury officials. The data for the DFI balances are based on a July 2008 audited statement by the International Advisory and Monitoring Board (IAMB). The data on central government deposits at the CBI and Iraq’s commercial banks come from the International Financial Statistics, as of July 2008. We determined that these reported data were sufficiently reliable for our analysis. To estimate the Iraqi government’s cumulative budget surplus from 2005 to 2007 and its projected surplus for 2008, we subtracted estimated total expenditures from estimated total revenues. The data source for Iraqi spending is Iraq’s Ministry of Finance. The projected expenditures for 2008 are based on the assumption that the 2005 through 2007 average budget execution rate of 68 percent will remain the same in 2008. We selected this approach because it provided the highest estimate of 2008 expenditures based on recent historic data and trends for 2005 through 2007 and the approved 2008 budget. We varied total expenditures since the amount of war reparations varied.
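As a rough sketch of this projection method, using one hypothetical revenue scenario (the revenue figures below are placeholders, not the report's scenario values):

```python
# Sketch of the 2008 surplus projection using the report's stated method:
# projected spending applies the 68 percent average execution rate to the
# approved budget, and war reparations are 5 percent of oil export revenue.
# The revenue figures are hypothetical, not the report's scenario values.
budget_2008 = 49.9          # approved 2008 budget, $ billions
execution_rate = 0.68       # average 2005-2007 budget execution rate
oil_exports = 70.0          # one hypothetical oil-export revenue scenario
other_revenue = 7.0         # taxes and other revenues, illustrative

expenditures = execution_rate * budget_2008 + 0.05 * oil_exports
surplus = (oil_exports + other_revenue) - expenditures
print(f"projected expenditures: ${expenditures:.1f}B")
print(f"projected surplus: ${surplus:.1f}B")
```

Because the execution rate is held fixed, the projected surplus in this sketch moves almost one-for-one with the assumed oil-export revenue, net of the 5 percent reparations share.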
These reparation payments are calculated at 5 percent of the estimated oil export revenues projected in our six scenarios. In addressing the amount of U.S. funds that have been appropriated, obligated, and disbursed, we collected funding information from the Departments of Defense and State and relied on prior GAO reporting and data from the Departments of Defense, State, the Treasury, U.S. Agency for International Development, and the Coalition Provisional Authority to update the information where necessary. Although we have not audited the funding data, we discussed the sources and limitations of the data with the appropriate officials and checked them, when possible, with other information sources. We found the data were sufficiently reliable for broad comparisons in the aggregate and the category descriptions we have made in this report. To update information on factors affecting the Iraqi government’s ability to spend its revenues, we reviewed DOD, State, and IMF reports and met with officials from State, Treasury, DOD, and IMF. We are providing estimates of Iraq’s crude oil export revenues from three sources to show how they compare with actual oil export receipt data reported by the Central Bank of Iraq (CBI). Estimates of Iraq’s crude oil export revenues from different entities—the International Advisory and Monitoring Board (IAMB), Energy Information Administration (EIA), and International Monetary Fund (IMF)—ranged from about $80.2 to about $93.1 billion for the period 2005 through 2007. As shown by figure 5, these sources are consistent with Iraq’s export oil revenues as reported by the CBI and show a consistent upward trend for that period. For example, CBI crude oil export receipt data for 2007 reported revenues of $37.4 billion compared to estimates that ranged from $33.3 billion to $37.5 billion. 
IAMB data are based on its audit of Iraqi oil receipts and do not include the 5 percent of Iraq’s export oil revenues set aside into a United Nations compensation fund to process and pay claims for losses resulting from Iraq’s invasion and occupation of Kuwait. The IMF data are estimates based on its own analysis and Iraqi authorities’ estimates, and EIA data are estimates based on its own analysis of a variety of sources. This appendix presents our methodology for projecting Iraq’s crude oil export revenues for 2008 and discusses the underlying price and export volume assumptions. First, we calculated the average monthly prices and volumes for Iraqi crude oil exports from January through June 2008, the most recent months for which data were available. For those months, we used data on the volume and price of crude oil exports as reported by the Central Bank of Iraq (CBI). Second, we made assumptions about the price and export volumes of Iraqi crude oil based on historical prices and export volumes. We used these assumptions to project monthly prices and export volumes for the period July through December 2008. Third, we used the actual price and export volume data for the first half of the year and the projected price and export volume data for the second half of the year to project a range of crude oil revenues for Iraq, using six alternative scenarios. We calculated the average monthly prices of Iraqi oil and the corresponding export volumes for January through June 2008 using actual transaction prices and volume, as reported by CBI. These monthly averages are based on the daily prices per barrel and export volumes for each month. Monthly prices are calculated by dividing the total revenue by the total output from that month; export volume is calculated by aggregating the output from all the transactions from each month. We developed six scenarios of export oil revenues by varying price and export volume. 
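One such scenario can be sketched as follows, combining the reported January through June receipts with projected second-half receipts at a constant assumed price and export volume (the price here is a hypothetical value, not one of the report's scenario prices):

```python
# Sketch of one oil-export revenue scenario: actual first-half receipts plus
# projected July-December receipts at a constant assumed price and volume.
# The assumed price is hypothetical; $32.88B is the reported Jan-Jun figure.
actual_jan_jun = 32.88     # $ billions, Jan-Jun 2008 export receipts
volume_mbpd = 2.01         # assumed export volume, million barrels per day
price_per_bbl = 112.0      # hypothetical constant price, $ per barrel
days_jul_dec = 31 + 31 + 30 + 31 + 30 + 31   # July through December

projected_h2 = volume_mbpd * 1e6 * price_per_bbl * days_jul_dec / 1e9
total_2008 = actual_jan_jun + projected_h2
print(f"projected Jul-Dec revenue: ${projected_h2:.1f}B")
print(f"projected 2008 export revenue: ${total_2008:.1f}B")
```

Varying the assumed price path and the two export-volume levels in this calculation is what generates the range of scenario totals.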
Table 6 summarizes alternative assumptions for export prices and alternative assumptions for export volumes. For the forecast period, we assume two constant levels of export volume for each month from July through December 2008: the January through June 2008 average export volume and the June 2008 export volume. This assumption is based on the reasoning that the forecast time frame is short and export volume is constrained by Iraq’s oil production capacity and capability, as well as the level needed for Iraq’s domestic consumption. For each of the two export volumes, we developed a base case, an optimistic scenario, and a pessimistic scenario for the behavior of prices. For the base case, we set prices equal to the June 2008 level. This implies no growth in prices for the rest of the year. For the optimistic case, we determined prices by applying a 12.9 percent discount from the average monthly prices of the West Texas Intermediate (WTI), a key benchmark for world crude oil prices forecast by EIA. This implies an average monthly growth rate of 1.12 percent in Iraqi crude oil export prices from June to December 2008. Finally, for the pessimistic case, we set prices equal to the January through June 2008 average value. This implies a price drop in July 2008 with no change afterwards; thus, the average monthly growth rate in prices from June to December 2008 is -1.93 percent. As shown in table 7, using a combination of our price and export volume assumptions, we projected six scenarios for Iraqi oil exports for July through December 2008. Figure 6 depicts the total export oil revenues for each of the two levels of assumed export volume, 1.89 and 2.01 mbpd. Total actual export oil revenues from January to June 2008 were $32.88 billion. This appendix provides information on Iraq’s budget to clarify our classification of expenditures as reported by the Ministry of Finance. Iraq’s budget is divided into current operating expenditures and investment.
In 2007, the Iraqi government adopted a new chart of accounts as recommended by the International Monetary Fund. To compare the budget expenditures over time, we combined various expenditure categories into four groups. Column 1 in table 8 shows the nine categories of expenditures reported in 2005 and 2006 and their combination into four groups presented in the table; column 2 shows the eight categories used in the 2007 and 2008 chart of accounts. Operating expenditures consist of (A) employee compensation, (B) goods and services, and (C) other operating expenditures. This appendix provides additional information on the expenditures of five central ministries—defense, interior, oil, water, and electricity—responsible for providing critical services to the Iraqi people. Table 9 provides information on the operating and investment expenditures for the security ministries—Ministry of Interior, responsible for internal police and security forces, and Ministry of Defense, responsible for the Iraqi military forces. Our analysis of Iraq’s Ministry of Finance expenditure data from 2005 through 2007 for the security ministries—defense and interior—found the following: From 2005 through 2007, the Iraqi security ministries primarily spent their funds on operating expenses. According to data from Iraq’s Ministry of Finance, Iraq’s security ministries spent 94 percent ($9.1 billion) of their total expenditures on operating expenses and 6 percent ($609 million) on investment expenses. From 2005 through 2007, total expenditures by Iraq’s security ministries grew at an annual rate of 36 percent, in Iraqi dinars, compared to the 13 percent annual growth rate of expenditures by the Iraqi government. This growth in expenditures is largely due to a 39 percent increase in expenditures on salaries and wages, reflecting the increase in military and police personnel.
Expenditures on items other than employee compensation, such as weapons, ammunition, trucks and special vehicles, uniforms, food, structures, and other equipment account for 25 percent of total expenditures. These expenditures have grown, in Iraqi dinars, at an annual rate of 25 percent, less than the growth rate of compensation. With the adoption of the new chart of accounts in 2007, it is unclear whether some purchases that had been recorded as capital goods are now recorded under goods and services or investment expenditures. We have chosen to combine and report these categories for the security ministries as total expenditures excluding employee compensation. Table 10 provides information on the operating and investment expenditures for ministries providing key essential services—Ministries of Oil, Water Resources, and Electricity. Our analysis of Iraq’s Ministry of Finance expenditure data from 2005 through 2007 for the three ministries—oil, electricity, and water—responsible for providing critical services to the Iraqi people found the following: Investment expenditures comprise two-thirds of the total expenditures for these three critical ministries—oil, water, and electricity—as compared with only 10 percent for the government as a whole. Spending of their investment budgets has declined significantly from 2005 through 2007. According to Ministry of Finance data, investment spending by the Ministry of Oil and Ministry of Electricity declined at an annual rate of 92 percent and 93 percent, respectively, during this period. Investment spending by the Ministry of Water declined at an annual rate of 13 percent. Further, from 2005 through 2007, the Government of Iraq allocated almost $12 billion toward investment activities of these ministries. However, as table 11 shows, the ministries of oil and electricity spent only a small percentage of the investment funds made available to them.
The Ministry of Water Resources spent about 50 percent of its investment budget, while the Ministry of Oil spent 3 percent and the Ministry of Electricity spent 14 percent. Key contributors to this report include Godwin Agbara, Assistant Director; Pedro Almoguera; Monica Brym; Lynn Cothern; Gergana Danailova-Trainor; Bruce Kutnick; and Justin Monroe. Technical assistance was provided by Ashley Alley, Jeffrey Baldwin-Bott, Benjamin Bolitzer, Daniel Chen, Aniruddha Dasgupta, Walker Fullerton, Elizabeth Repko, Jena Sinkfield, and Eve Weisberg.
Iraq has an estimated 115 billion barrels of crude oil reserves, the third largest in the world. Oil export revenues are critical to Iraq's reconstruction, accounting for over 90 percent of the Iraqi government's revenues. In June 2008, GAO reported low 2007 spending rates by the Iraqi government for some critical sectors in the face of declining U.S. investments in these sectors. This report examines (1) Iraq's estimated revenues from 2005 through 2008, (2) Iraq's estimated expenditures from 2005 through 2008, (3) Iraq's financial deposits through 2007 and budget surpluses, (4) U.S. cumulative expenditures on stabilization and reconstruction activities in Iraq since 2003, and (5) factors affecting Iraq's efforts to accelerate spending. GAO analyzed relevant data and reviewed documents, including Central Bank of Iraq oil receipts data, International Monetary Fund (IMF) reports, translated copies of Iraqi budget and expenditure documents, and U.S. agency funding data and reports. GAO also interviewed officials from the Departments of Defense (DOD), Energy, State, Treasury, and the IMF. This report contains no recommendations. Treasury agreed with the report's findings and stated that Iraq has adequate funds to make and maintain capital investments that deliver services and foster economic growth. State provided technical comments. DOD had no comments. From 2005 through 2007, the Iraqi government generated an estimated $96 billion in cumulative revenues, of which crude oil export sales accounted for about $90.2 billion, or 94 percent. For 2008, GAO estimates that Iraq could generate between $73.5 billion and $86.2 billion in total revenues, with oil exports accounting for between $66.5 billion and $79.2 billion. Projected 2008 oil revenues could be more than twice the average annual amount Iraq generated from 2005 through 2007.
These projections are based on actual sales through June 2008 and projections for July to December that assume an average export price of $96.88 to $125.29 per barrel and oil export volumes of 1.89 to 2.01 million barrels per day. From 2005 through 2007, the Iraqi government spent an estimated $67 billion on operating and investment activities. Ninety percent was spent on operating expenses, such as salaries and goods and services, and the remaining 10 percent on investments, such as structures and vehicles. The Iraqi government spent only 1 percent of total expenditures to maintain Iraq- and U.S.-funded investments such as buildings, water and electricity installations, and weapons. While total expenditures grew from 2005 through 2007, Iraq was unable to spend all its budgeted funds. In 2007, Iraq spent 80 percent of its $29 billion operating budget and 28 percent of its $12 billion investment budget. For 2008, GAO estimates that Iraq could spend between $35.3 billion and $35.9 billion of its $49.9 billion budget. As of December 31, 2007, the Iraqi government had accumulated financial deposits of $29.4 billion, held in the Development Fund for Iraq and central government deposits at the Central Bank of Iraq and Iraq's commercial banks. This balance is the result, in part, of an estimated cumulative budget surplus of about $29 billion from 2005 to 2007. For 2008, GAO estimates a budget surplus of between $38.2 billion and $50.3 billion. If spent, a proposed Iraqi budget supplemental of $22 billion could reduce this projected surplus. Since fiscal year 2003, the United States has appropriated about $48 billion for stabilization and reconstruction efforts in Iraq; it had obligated about $42 billion of that amount as of June 2008. U.S. agencies spent about $23.2 billion on the critical security, oil, electricity, and water sectors. From 2005 through April 2008, Iraq spent about $3.9 billion on these sectors. U.S. 
government, coalition, and international officials have identified a number of factors that have affected the Iraqi government's ability to spend more of its revenues on capital investments. These factors included the shortage of trained staff; weak procurement and budgeting systems; and violence and sectarian strife. The United States has funded activities to help build the capacity of key civilian and security ministries to improve Iraq's ability to execute its capital project budget.
The tax gap is an estimate of the difference between the taxes—including individual income, corporate income, employment, estate, and excise taxes—that should have been paid voluntarily and on time and what was actually paid for a specific year. The estimate is an aggregate of estimates for the three primary types of noncompliance: (1) underreporting of tax liabilities on tax returns; (2) underpayment of taxes due from filed returns; and (3) nonfiling, which refers to the failure to file a required tax return altogether or on time. IRS’s tax gap estimates for each type of noncompliance include estimates for some or all of the five types of taxes that IRS administers. As shown in table 1, underreporting of tax liabilities accounted for most of the tax gap estimate for tax year 2001. IRS has estimated the tax gap on multiple occasions, beginning in 1979, relying on its Taxpayer Compliance Measurement Program (TCMP). IRS did not implement any TCMP studies after 1988 because of concerns about costs and burdens on taxpayers. Recognizing the need for current compliance data, in 2002 IRS implemented a new compliance study called the National Research Program (NRP) to produce such data for tax year 2001 while minimizing taxpayer burden. IRS has concerns with the certainty of the tax gap estimate for tax year 2001 in part because some areas of the estimate rely on old data, IRS has no estimates for other areas of the tax gap, and it is inherently difficult to measure some types of noncompliance. IRS used data from NRP to estimate individual income tax underreporting and the portion of employment tax underreporting attributed to self-employed individuals. The underpayment segment of the tax gap is not an estimate, but rather represents the tax amounts that taxpayers reported on time but did not pay on time. Other areas of the estimate, such as corporate income tax and employer-withheld employment tax underreporting, rely on decades-old data. 
Also, IRS has no estimates for corporate income, employment, and excise tax nonfiling or for excise tax underreporting. In addition, it is inherently difficult for IRS to observe and measure some types of underreporting or nonfiling, such as tracking cash payments that businesses make to their employees, as businesses and employees may not report these payments to IRS in order to avoid paying employment and income taxes, respectively. IRS’s overall approach to reducing the tax gap consists of improving service to taxpayers and enhancing enforcement of the tax laws. IRS seeks to improve voluntary compliance through efforts such as education and outreach programs and tax form simplification. IRS uses its enforcement authority to ensure that taxpayers are reporting and paying the proper amounts of taxes through efforts such as examining tax returns and matching the amount of income taxpayers report on their tax returns to the income amounts reported on information returns it receives from third parties. IRS reports that it collected over $48 billion in fiscal year 2006 from noncompliant taxpayers it identified through its various enforcement programs. In spite of IRS’s efforts to improve taxpayer compliance, the rate at which taxpayers pay their taxes voluntarily and on time has tended to range from around 81 percent to around 84 percent over the past three decades. Any significant reduction of the tax gap would likely depend on an improvement in the level of taxpayer compliance. No single approach is likely to fully and cost-effectively address noncompliance and therefore multiple approaches are likely to be needed. The tax gap has multiple causes; spans five types of taxes; and is spread over several types of taxpayers including individuals, corporations, and partnerships. 
Thus, for example, while simplifying laws should help when noncompliance is due to taxpayers’ confusion, enforcement may be needed for taxpayers who understand their obligations but decline to fulfill them. Similarly, while devoting more resources to enforcement should increase taxes assessed and collected, too great an enforcement presence likely would not be tolerated. Simplifying or reforming the tax code, providing IRS more enforcement tools, and devoting additional resources to enforcement are three major tax gap reduction approaches discussed in more detail below, but providing quality services to taxpayers plays an important role in improving compliance and reducing the tax gap. IRS taxpayer services include education and outreach programs, simplifying the tax process, and revising forms and publications to make them electronically accessible and more easily understood by diverse taxpayer communities. For example, if tax forms and instructions are unclear, taxpayers may be confused and make unintentional errors. Quality taxpayer services would also be a key consideration in implementing any of the approaches for tax gap reduction. For example, expanding enforcement efforts would increase interactions with taxpayers, requiring processes to efficiently communicate with taxpayers. Also, changing tax laws and regulations would require educating taxpayers about the new requirements in a clear, timely, and accessible manner. In 2006, we reported that IRS improved its two most commonly used services—telephone and Web site assistance—for the 2006 filing season. Increased funding financed some of the improvements, but a significant portion has been financed internally by efficiencies gained from increased electronic filing of tax returns and other operational improvements. Although quality service helps taxpayers comply, showing a direct relationship between quality service and compliance levels is very challenging.
As required by Congress, IRS is in the midst of a study that is to result in a 5-year plan for taxpayer service activities, which is to include long-term quantitative goals and to balance service and enforcement. Part of the study focuses on the effect of taxpayer service on compliance. A Phase I report was issued in April 2006 and a Phase II report is due in early 2007, which is to include, among other things, a multiyear plan for taxpayer service activities and improvement initiatives. However, in deciding on the appropriate mix of approaches to use in reducing the tax gap, many factors or issues could affect strategic decisions. Among the broad factors to consider are the likely effectiveness of any approach, fairness, enforceability, and sustainability. Beyond these, our work points to the importance of the following: Measuring compliance levels periodically and setting long-term goals. A data-based plan is one key to closing the tax gap. To the extent that IRS can develop better compliance data, it can develop more effective approaches for reducing the gap. Regularly measuring the magnitude of, and the reasons for, noncompliance provides insights on how to reduce the gap through potential changes to tax laws and IRS programs. In July 2005, we recommended that IRS periodically measure tax compliance, identify reasons for noncompliance, and establish voluntary compliance goals. IRS agreed with the recommendations and established a voluntary tax compliance goal of 85 percent by 2009. Furthermore, we have identified alternative ways to measure compliance, including conducting examinations of small samples of tax returns over multiple years, instead of conducting examinations for a larger sample of returns for one tax year, to allow IRS to track compliance trends annually. Considering the costs and burdens. Any action to reduce the tax gap will create costs and burdens for IRS; taxpayers; and third parties, such as those who file information returns. 
For example, withholding and information reporting requirements impose some costs and burdens on those who track and report information. These costs and burdens need to be reasonable in relation to the improvements expected to arise from new compliance strategies. Evaluating the results. Evaluating the actions taken by IRS to reduce the tax gap would help maximize IRS’s effectiveness. Evaluations can be challenging because it is difficult to isolate the effects of IRS’s actions from other influences on taxpayers’ compliance. Our work has discussed how to address these challenges, for example by using research to link actions with the outputs and desired effects. Optimizing resource allocation. Developing reliable measures of the return on investment for strategies to reduce the tax gap would help inform IRS resource allocation decisions. IRS has rough measures of return on investment based on the additional taxes it assesses. Developing such measures is difficult because of incomplete data on the costs of enforcement and collected revenues. Beyond direct revenues, IRS’s enforcement actions have indirect revenue effects, which are difficult to measure. However, indirect effects could far exceed direct revenue effects and would be important to consider in connection with continued development of return on investment measures. In general though, the impacts of tax gap reduction by improving voluntary tax compliance can be quite large. For example, if the estimated 83.7 percent voluntary compliance rate that produced a gross tax gap of $345 billion in tax year 2001 had been 85 percent, this tax gap would have been about $28 billion less; if it had been 90 percent, the gap would have been about $133 billion less. Leveraging technology. Better use of technology could help IRS be more efficient in reducing the tax gap. IRS is modernizing its technology, which has paid off in terms of telephone service, resource allocation, electronic filing, and data analysis capability. 
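The compliance-rate arithmetic cited above can be reproduced directly: a $345 billion gross gap at an 83.7 percent compliance rate implies a total true tax liability of 345 / (1 - 0.837), from which the gap at any other compliance rate follows.

```python
# Reproducing the compliance-rate arithmetic from the text: the gross tax
# gap equals total true liability times the noncompliance rate, so the
# tax year 2001 figures imply a liability of roughly $2.1 trillion.
gross_gap = 345.0                          # $ billions, tax year 2001
compliance = 0.837                         # voluntary compliance rate
liability = gross_gap / (1 - compliance)   # implied true tax liability

reduction_at_85 = gross_gap - liability * (1 - 0.85)
reduction_at_90 = gross_gap - liability * (1 - 0.90)
print(f"gap shrinks by about ${reduction_at_85:.0f}B at 85% compliance")
print(f"gap shrinks by about ${reduction_at_90:.0f}B at 90% compliance")
```

The results, about $28 billion and about $133 billion, match the figures in the text.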
However, this ongoing modernization will need strong management and prudent investments to maximize potential efficiencies. Congress has been encouraging IRS to develop an overall tax gap reduction plan or strategy that could include a mix of approaches such as simplifying code provisions, increasing enforcement, and reconsidering the level of resources devoted to enforcement. Some progress has been made towards laying out the broad elements of a plan or strategy for reducing the tax gap. On September 26, 2006, the U.S. Department of the Treasury (Treasury), Office of Tax Policy released “A Comprehensive Strategy for Reducing the Tax Gap.” However, the document generally does not identify specific approaches that Treasury and IRS will undertake to reduce the tax gap, the related time frames for such steps, or explanations of how much the tax gap would be reduced. The document said that such additional details would be part of the fiscal year 2008 IRS budget request that will be deliberated during early 2007 because of the resource implications associated with tax gap reduction. Tax law simplification and reform both have the potential to reduce the tax gap by billions of dollars. The extent to which the tax gap would be reduced depends on which parts of the tax system would be simplified and in what manner as well as how any reform of the tax system is designed and implemented. Neither approach, however, will eliminate the gap. Further, changes in the tax laws and system to improve tax compliance could have unintended effects on other tax system objectives, such as those involving economic behavior or equity. Simplification has the potential to reduce the tax gap for at least three broad reasons. First, it could help taxpayers to comply voluntarily with more certainty, reducing inadvertent errors by those who want to comply but are confused because of complexity.
Second, it may limit opportunities for tax evasion, reducing intentional noncompliance by taxpayers who can misuse the complex code provisions to hide their noncompliance or to achieve ends through tax shelters. Third, tax code complexity may erode taxpayers’ willingness to comply voluntarily if they cannot understand its provisions or they see others taking advantage of complexity to intentionally underreport their taxes. Simplification could take multiple forms. One form would be to retain existing laws but make them simpler. For example, in our July 2005 report on postsecondary tax preferences, we noted that the definition of a qualifying postsecondary education expense differed somewhat among some tax code provisions, for instance with some including the cost to purchase books and others not. Making definitions consistent across code provisions may reduce taxpayer errors. Although we cannot say the errors were due to these differences in definitions, in a limited study of paid preparer services to taxpayers, we found some preparers claiming unallowable expenses for books. Further, the Joint Committee on Taxation suggested that such dissimilar definitions may increase the likelihood of taxpayer errors and increase taxpayer frustration. Another tax code provision in which complexity may have contributed to the individual tax gap involves the earned income tax credit, for which IRS estimated a tax loss of up to about $10 billion for tax year 1999. Although some of this noncompliance may be intentional, we and the National Taxpayer Advocate have previously reported that confusion over the complex rules governing eligibility for claiming the credit could cause taxpayers to fail to comply inadvertently. Although retaining but simplifying tax code provisions may help reduce the tax gap, doing so may not be easy, may conflict with other policy decisions, and may have unintended consequences. 
The simplification of the definition of a qualifying child across various code sections is an example. We suggested in the early 1990s that standardizing the definition of a qualifying child could reduce taxpayer errors and reduce their burden. A change was not made until 2004. However, some have suggested that the change has created some unintended consequences, such as increasing some taxpayers’ ability to reduce their taxes in ways Congress may not have intended. Another form of simplification could be to broaden the tax base while reducing tax rates, which could minimize incentives for not complying. This base-broadening could include reviewing whether existing tax expenditures are achieving intended results at a reasonable cost in lost revenue and added burden, and eliminating or consolidating those that are not. Among the many causes of tax code complexity is the growing number of preferential provisions in the code, defined in statute as tax expenditures, such as tax exemptions, exclusions, deductions, credits, and deferrals. The number of these tax expenditures has more than doubled from 1974 through 2005. Tax expenditures can contribute to the tax gap if taxpayers claim them improperly. For example, IRS’s recent tax gap estimate includes a $32 billion loss in individual income taxes for tax year 2001 because of noncompliance with these provisions. Simplifying these provisions of the tax code would not likely yield $32 billion in revenue because even simplified provisions likely would have some associated noncompliance. Nevertheless, the estimate suggests that simplification could have important tax gap consequences, particularly if simplification also accounted for any noncompliance that arises because of complexity on the income side of the tax gap for individuals. Despite the potential benefits that simplification may yield, these credits and deductions serve purposes that Congress has judged to be important to advance federal goals. 
Eliminating or consolidating them likely would be complicated and would create winners and losers. Elimination also could conflict with other objectives such as encouraging certain economic activity or improving equity. Similar trade-offs exist with possible fundamental tax reforms that would move away from an income tax system to some other system, such as a consumption tax, national sales tax, or value-added tax. Fundamental tax reform would most likely result in a smaller tax gap if the new system has few tax preferences or complex tax code provisions and if taxable transactions are transparent. However, these characteristics are difficult to achieve in any system, and experience suggests that simply adopting a fundamentally different tax system may not by itself eliminate any tax gap. Any tax system could be subject to noncompliance, and its design and operation, including the types of tools made available to tax administrators, will affect the size of any corresponding tax gap. Further, the motivating forces behind tax reform likely include factors beyond tax compliance, such as economic effectiveness, equity, and burden, which could in some cases carry greater weight in designing an alternative tax system than ensuring the highest levels of compliance. Changing the tax laws to provide IRS with additional enforcement tools, such as expanded tax withholding and information reporting, could also reduce the tax gap by many billions of dollars, particularly with regard to underreporting—the largest segment of the tax gap. Tax withholding promotes compliance because employers or other parties subtract taxes owed from a taxpayer’s income and remit them to IRS. Information reporting tends to lead to high levels of compliance because the income taxpayers earn is transparent to both them and IRS. In both cases, high levels of compliance tend to be maintained over time. 
Also, withholding and information reporting help IRS to better identify noncompliant taxpayers and prioritize contacting them, which enables IRS to better allocate its resources. However, designing new withholding or information reporting requirements to address underreporting can be challenging given that many types of income are already subject to at least some form of withholding or information reporting, underreporting exists in varied forms, and the requirements could impose costs and burdens on third parties. Taxpayers tend to report income subject to tax withholding or information reporting with high levels of compliance, as shown in figure 1, because the income is transparent to the taxpayers as well as to IRS. Additionally, once withholding or information reporting requirements are in place for particular types of income, compliance tends to remain high over time. For example, for wages and salaries, which are subject to tax withholding and substantial information reporting, the percentage of income that taxpayers misreport has consistently been measured at around 1 percent over time. In the past, we have identified a few specific areas where additional withholding or information reporting requirements could serve to improve compliance: Requiring more data on information returns dealing with capital gains income from securities sales. Recently, we reported that an estimated 36 percent of taxpayers misreported their capital gains or losses from the sale of securities, such as corporate stocks and mutual funds. Further, around half of the taxpayers who misreported did so because they failed to report the securities’ cost, or basis, sometimes because they did not know the securities’ basis or failed to take certain events into account that required them to adjust the basis of their securities. 
When taxpayers sell securities like stock and mutual funds through brokers, the brokers are required to report information on the sale, including the amount of gross proceeds the taxpayer received; however, brokers are not required to report basis information for the sale of these securities. We found that requiring brokers to report basis information for securities sales could improve taxpayers’ compliance in reporting their securities gains and losses and help IRS identify noncompliant taxpayers. However, we were unable to estimate the extent to which a basis reporting requirement would reduce the capital gains tax gap because of limitations with the compliance data on capital gains and because neither IRS nor we know the portion of the capital gains tax gap attributed to securities sales. Requiring tax withholding and more or better information return reporting on payments made to independent contractors. Past IRS data have shown that independent contractors report 97 percent of the income that appears on information returns, while contractors that do not receive these returns report only 83 percent of income. We have also identified other options for improving information reporting for independent contractors, including increasing penalties for failing to file required information returns, lowering the $600 threshold for requiring such returns, and requiring businesses to report separately on their tax returns the total amount of payments to independent contractors. Requiring information return reporting on payments made to corporations. Unlike payments made to sole proprietors, payments made to corporations for services are generally not required to be reported on information returns. IRS and GAO have contended that the lack of such a requirement leads to lower levels of compliance for small corporations. 
Although Congress has required federal agencies to provide information returns on payments made to contractors since 1997, payments made by others to corporations are generally not covered by information returns. Information reporting helps IRS to better allocate its resources to the extent that it helps IRS better identify noncompliant taxpayers and the potential for additional revenue that could be obtained by contacting these taxpayers. For example, IRS officials told us that receiving information on basis for taxpayers’ securities sales would allow IRS to determine more precisely taxpayers’ income for securities sales through its document matching programs and would allow it to identify which taxpayers who misreported securities income have the greatest potential for additional tax assessments. Similarly, IRS could use basis information to improve both aspects of its examination program—examinations of tax returns through correspondence and examinations of tax returns face to face with the taxpayer. Currently, capital gains issues are too complex and time consuming for IRS to examine through correspondence. However, IRS officials told us that receiving cost basis information might enable IRS to examine noncompliant taxpayers through correspondence because it could productively select tax returns to examine. Also, having cost basis information could help IRS identify the best cases to examine face to face, making the examinations more productive while simultaneously reducing the burden imposed on compliant taxpayers who otherwise would be selected for examination. Although withholding and information reporting lead to high levels of compliance, designing new requirements to address underreporting could be challenging given that many types of income, including wages and salaries, dividend and interest income, and income from pensions and Social Security are already subject to withholding or substantial information reporting. 
Also, challenges arise in establishing new withholding or information reporting requirements for certain other types of income that are extensively underreported. Such underreporting may be difficult to determine because of complex tax laws or transactions or the lack of a practical and reliable third-party source to provide information on the taxable income. For example, while withholding or information reporting mechanisms on nonfarm sole proprietor and informal supplier income would likely improve their compliance, comprehensive mechanisms that are practical and effective are difficult to identify. As shown in figure 1, this income is not subject to information reporting, and these taxpayers misreported about half of the income they earned for tax year 2001. Informal suppliers by definition receive income in an informal manner through services they provide to a variety of individual citizens or small businesses. Whereas businesses may have the capacity to perform withholding and information reporting functions for their employees, it may be challenging to extend withholding or information reporting responsibilities to the individual citizens that receive services, who may not have the resources or knowledge to comply with such requirements. Finally, implementing tax withholding and information reporting requirements generally imposes costs and burdens on the businesses that must implement them, and, in some cases, on taxpayers. For example, expanding information reporting on securities sales to include basis information will impose costs on the brokers who would track and report the information. Further, trying to close the entire tax gap with these enforcement tools could entail more intrusive recordkeeping or reporting than the public is willing to accept. 
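The compliance rates cited earlier—roughly 1 percent of wage income misreported under withholding and information reporting, 97 percent of independent contractor income reported when it appears on information returns versus 83 percent when it does not—can be turned into a rough sketch of what income visibility is worth. This is illustrative arithmetic only; the rates come from the IRS data cited above, but the income amount is hypothetical.

```python
# Compliance rates cited in the testimony; the income amount is hypothetical.
REPORTING_RATE = {
    "withheld_wages": 0.99,    # ~1 percent of wages misreported
    "info_return_only": 0.97,  # contractor income shown on information returns
    "no_info_return": 0.83,    # contractor income with no information return
}

def income_reported(true_income, visibility):
    """Rough estimate of income voluntarily reported, by visibility to IRS."""
    return true_income * REPORTING_RATE[visibility]

true_income = 50_000  # hypothetical independent-contractor income
gap_without_reporting = true_income - income_reported(true_income, "no_info_return")
gap_with_reporting = true_income - income_reported(true_income, "info_return_only")
# Information reporting shrinks the unreported amount from about $8,500
# to about $1,500 on this hypothetical income.
```

Under these assumptions, visibility alone accounts for most of the difference in unreported income, which is why the testimony treats withholding and information reporting as particularly powerful tools.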
Devoting more resources to enforcement has the potential to help reduce the tax gap by billions of dollars, as IRS would be able to expand its enforcement efforts to reach a greater number of potentially noncompliant taxpayers. However, determining the appropriate level of enforcement resources to provide IRS requires taking into account many factors, such as how effectively and efficiently IRS is currently using its resources, how to strike the proper balance between IRS’s taxpayer service and enforcement activities, and competing federal funding priorities. If Congress were to provide IRS more enforcement resources, the amount of the tax gap that could be reduced depends in part on the size of any increase in IRS’s budget, how IRS would manage any additional resources, and the indirect increase in taxpayers’ voluntary compliance that would likely result from expanded IRS enforcement. Given resource constraints, IRS is unable to contact millions of additional taxpayers for whom it has evidence of potential noncompliance. With additional resources, IRS would be able to assess and collect additional taxes and further reduce the tax gap. In 2002, IRS estimated that a $2.2 billion funding increase would allow it to take enforcement actions against potentially noncompliant taxpayers it identifies but cannot contact and would yield an estimated $30 billion in revenue. For example, IRS estimated that it contacted about 3 million of the over 13 million taxpayers it identified as potentially noncompliant through its matching of tax returns to information returns. IRS estimated that contacting the additional 10 million potentially noncompliant taxpayers it identified, at a cost of about $230 million, could yield nearly $7 billion in potentially collectible revenue. 
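The return on investment implied by these 2002 estimates can be checked with simple arithmetic. The dollar figures below come from the text; the derived ratios are my calculation for illustration, not IRS's own figures.

```python
# Figures cited in the testimony (2002 IRS estimates, in billions of dollars);
# the ratios below are derived from them for illustration only.
overall_funding = 2.2    # requested enforcement funding increase
overall_revenue = 30.0   # estimated revenue yield from that increase
matching_cost = 0.230    # cost to contact 10 million more taxpayers
matching_revenue = 7.0   # potentially collectible revenue from those contacts

overall_roi = overall_revenue / overall_funding  # roughly 13.6 to 1
matching_roi = matching_revenue / matching_cost  # roughly 30.4 to 1
revenue_per_contact = matching_revenue * 1e9 / 10_000_000  # about $700 each
```

As the next paragraph cautions, these are averages from unverified estimates; actual collections lag assessments and marginal returns decline, so such ratios likely overstate the net yield of additional funding.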
We did not evaluate the accuracy of the estimate, and as will be discussed below, many factors suggest that it is difficult to estimate reliably net revenue increases that might come from additional enforcement efforts. Although additional enforcement funding has the potential to reduce the tax gap, the extent to which it would help depends on several factors. First, and perhaps most obviously, the amount of tax gap reduction would depend in part on the size of any budget increase. Generally, larger budget increases should result in larger reductions in the tax gap. The degree to which revenues would increase from expanded enforcement depends on many variables, such as how quickly IRS can ramp up efforts, how well IRS selects the best cases to be worked, and how taxpayers react to enforcement efforts. Estimating those revenue increases would require assumptions about these and other variables. Because actual experience is likely to diverge from those assumptions, the actual revenue increases are likely to differ from the estimates. The lack of reliable key data compounds the difficulty of estimating the likely revenues. To the extent possible, obtaining better data on key variables would provide a better understanding of the likely results with any increased enforcement resources. With additional resources for enforcement, IRS would be able to assess and collect additional taxes, but the related tax gap reductions may not be immediate. If IRS uses the resources to hire more enforcement staff, the reductions may occur gradually as IRS is able to hire and train the staff. Also, several years can elapse after IRS assesses taxes before it actually collects these taxes. Similarly, the amounts of taxes actually collected can vary substantially from the related tax amounts assessed through enforcement actions by the type of tax or taxpayer involved. 
In a 1998 report, we found that 5 years after taxes were assessed against individual taxpayers with business income, 48 percent of the assessed taxes had been collected, whereas for the largest corporate taxpayers, 97 percent of assessed taxes had been collected. Over the last 2 years, IRS has requested and received additional funding targeted for enforcement activities that it estimated will result in additional revenue. In its fiscal year 2007 budget request, IRS requested an approximate 2 percent increase in funding from fiscal year 2006 to expand its enforcement efforts, including tax return examination and tax collection activities, with the goal of increasing individual taxpayer compliance and addressing concerns that we and others have raised regarding the erosion of IRS’s enforcement presence. In estimating the revenue that it would obtain from the increased funding, IRS accounted for several factors, including opportunity costs because of training, which draws experienced enforcement personnel away from the field; differences in average enforcement revenue obtained per full-time employee by enforcement activity; and differences in the types and complexity of cases worked by new hires and experienced hires. IRS forecasted that in the first year after expanding enforcement activities, the additional revenue to be collected is less than half the amount to be collected in later years. This example underscores the logic that if IRS is to receive a relatively large funding increase, it likely would be better to provide it in small but steady amounts. The amount of tax gap reduction likely to be achieved from any budget increase also depends on how well IRS can use information about noncompliance to manage the additional resources. Because IRS does not have compliance data for some segments of the tax gap and others are based on old data, IRS cannot easily track the extent to which compliance is improving or declining. 
IRS also has concerns with its information on whether taxpayers unintentionally or intentionally fail to comply with the tax laws. Knowing the reasons for taxpayer noncompliance can help IRS decide whether its efforts to address specific areas of noncompliance should focus on nonenforcement activities, such as improved forms or publications, or enforcement activities to pursue intentional noncompliance. To the extent that compliance data are outdated and IRS does not know the reasons for taxpayer noncompliance, IRS may be less able to target resources efficiently to achieve the greatest tax gap reduction at the least taxpayer burden. IRS has taken important steps to better ensure the efficient allocation and use of its resources. For example, the NRP study has provided better data on which taxpayers are most likely to be noncompliant. IRS is using the data to improve its audit selection processes in hopes of reducing the number of audits that result in no change, which should reduce unnecessary burden on compliant taxpayers and increase enforcement staff productivity (as measured by direct enforcement revenue). As part of an effort to make the best use of its enforcement resources, IRS has developed rough measures of return on investment in terms of the tax revenue it assesses from uncovering noncompliance. Generally, IRS cites an average return on investment for enforcement of 4:1; that is, IRS estimates that it collects $4 in revenue for every $1 of funding. Where IRS has developed return on investment estimates for specific programs, it finds substantial variation depending on the type of enforcement action. For instance, the ratio of estimated tax revenue gains to additional spending for pursuing known individual tax debts through phone calls is 13:1, versus a ratio of 32:1 for matching the amount of income taxpayers report on their tax returns to the income amounts reported on information returns. 
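These 4:1, 13:1, and 32:1 figures are averages; for allocating a funding increase, what matters is the marginal return on the next case worked. A toy sketch of why the distinction matters, using hypothetical declining-return schedules (only the first entries echo the 13:1 and 32:1 averages cited above; none of this is IRS data):

```python
# Hypothetical marginal-return schedules: dollars returned per additional
# dollar of funding, declining as each program works deeper into its caseload.
PROGRAMS = {
    "phone_pursuit":     [13, 9, 5, 2],
    "document_matching": [32, 10, 4, 1],
}

def allocate(budget_units):
    """Greedily fund whichever program offers the best next marginal unit."""
    funded = {name: 0 for name in PROGRAMS}
    revenue = 0
    for _ in range(budget_units):
        best = max(
            (name for name in PROGRAMS if funded[name] < len(PROGRAMS[name])),
            key=lambda name: PROGRAMS[name][funded[name]],
        )
        revenue += PROGRAMS[best][funded[best]]
        funded[best] += 1
    return funded, revenue

funded, revenue = allocate(4)
# Even though document matching starts at 32:1, half the budget ends up in
# phone pursuit once matching's marginal return falls below phone pursuit's.
```

In this sketch, pouring all four budget units into the program with the higher average ratio would return 32 + 10 + 4 + 1 = 47, while shifting at the margin returns 64, which is the point the testimony makes about needing incremental, not just average, return-on-investment information.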
In addition to returns on investment estimates being rough, IRS lacks information on the incremental returns on investment from pursuing the “next best case” for some enforcement programs. It is the marginal revenue gain from these cases that matters in estimating the direct revenue from expanded enforcement. Developing such measures is difficult because of incomplete information on all the costs and all the tax revenue ultimately collected from specific enforcement efforts. Because IRS’s current estimates of the revenue effects of additional funding are imprecise, the actual revenue that might be gained from expanding different enforcement efforts is subject to uncertainty. Given the variation in estimated returns on investment for different types of IRS compliance efforts, the amount of tax gap reduction that may be achieved from an increase in IRS’s resources would depend on how IRS allocates the increase. Although it might be tempting to allocate resources heavily toward areas with the highest estimated return, allocation decisions must take into account diverse and difficult issues. For instance, although one enforcement activity may have a high estimated return, that return may drop off quickly as IRS works its way through potential noncompliance cases. In addition, IRS dedicates examination resources across all types of taxpayers so that all taxpayers receive some signal that noncompliance is being addressed. Further, issues of fairness can arise if IRS focuses its efforts only on particular groups of taxpayers. Beyond direct tax revenue collection, expanded enforcement efforts could reduce the tax gap even more, as widespread agreement exists that IRS enforcement programs have an indirect effect through increases in voluntary tax compliance. 
The precise magnitude of the indirect effects of enforcement is not known with a high level of confidence given challenges in measuring compliance; developing reasonable assumptions about taxpayer behavior; and accounting for factors outside of IRS’s actions that can affect taxpayer compliance, such as changes in tax law. However, several research studies have offered insights to help better understand the indirect effects of IRS enforcement on voluntary tax compliance and show that they could exceed the direct effect of revenue obtained. When taxpayers do not pay all of their taxes, honest taxpayers carry a greater burden to fund government programs and the nation is less able to address its long-term fiscal challenges. Thus, reducing the tax gap is important, even though closing the entire tax gap is neither feasible nor desirable because of costs and intrusiveness. All of the approaches I have discussed have the potential to reduce the tax gap alone or in combination, and no single approach is clearly and always superior to the others. As a result, IRS needs a strategy to attack the tax gap on multiple fronts with multiple approaches. Mr. Chairman and Members of the Committee, this concludes my testimony. I would be happy to answer any questions you may have at this time. For further information on this testimony, please contact Michael Brostek on (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony include Tom Short, Assistant Director; Jeff Arkin; and Elizabeth Fan. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The tax gap--the difference between the tax amounts taxpayers pay voluntarily and on time and what they should pay under the law--has been a long-standing problem in spite of many efforts to reduce it. Most recently, the Internal Revenue Service (IRS) estimated a gross tax gap for tax year 2001 of $345 billion and estimated it would recover $55 billion of this gap, resulting in a net tax gap of $290 billion. When some taxpayers fail to comply, the burden of funding the nation's commitments falls more heavily on compliant taxpayers. Reducing the tax gap would help improve the nation's fiscal stability. For example, each 1 percent reduction in the net tax gap would likely yield $3 billion annually. GAO was asked to discuss the tax gap and various approaches to reduce it. This testimony discusses the need for taking multiple approaches and to what extent the tax gap could be reduced through three overall approaches--simplifying or reforming the tax system, providing IRS with additional enforcement tools, and devoting additional resources to enforcement. This statement is based on prior GAO work. Multiple approaches are needed to reduce the tax gap. No single approach is likely to fully and cost-effectively address noncompliance since, for example, it has multiple causes and spans different types of taxes and taxpayers. Simplifying or reforming the tax code, providing IRS more enforcement tools, and devoting additional resources to enforcement are three major approaches, but providing quality services to taxpayers also is a necessary foundation for voluntary compliance. Such steps as periodically measuring noncompliance and its causes, setting tax gap reduction goals, evaluating the results of any initiatives to reduce the tax gap, optimizing the allocation of IRS's resources, and leveraging technology to enhance IRS's efficiency would also contribute to tax gap reduction. 
Simplifying the tax code or fundamental tax reform has the potential to reduce the tax gap by billions of dollars. IRS has estimated that errors in claiming tax credits and deductions for tax year 2001 contributed $32 billion to the tax gap. Thus, considerable potential exists. However, these provisions serve purposes Congress has judged to be important and eliminating or consolidating them could be complicated. Fundamental tax reform would most likely result in a smaller tax gap if the new system has few, if any, exceptions (e.g., few tax preferences) and taxable transactions are transparent to tax administrators. These characteristics are difficult to achieve, and any tax system could be subject to noncompliance. Withholding and information reporting are particularly powerful tools to reduce the tax gap. They could help reduce the tax gap by billions of dollars, especially if they make underreported income transparent to IRS. These tools have led to high, sustained levels of taxpayer compliance and improved IRS resource allocation by helping IRS identify and prioritize its contacts with noncompliant taxpayers. As GAO previously suggested, reporting the cost, or basis, of securities sales is one option to improve taxpayers' compliance. However, designing additional withholding and information reporting requirements may be challenging given that many types of income are already subject to reporting, underreporting exists in many forms, and withholding and reporting requirements impose costs on third parties. Devoting additional resources to enforcement has the potential to help reduce the tax gap by billions of dollars. However, determining the appropriate level of IRS enforcement resources requires taking into account such factors as how well IRS uses its resources, the proper balance between taxpayer service and enforcement activities, and competing federal funding priorities. 
If Congress provides IRS more enforcement resources, the amount of tax gap reduction would depend on factors such as the size of budget increases, how IRS manages any additional resources, and the indirect increase in taxpayers' voluntary compliance resulting from expanded enforcement. Increasing IRS's funding would enable it to contact millions of potentially noncompliant taxpayers it identifies but does not contact.
FAA defines an unmanned aircraft as one that is operated without the possibility of direct human intervention from within or on the aircraft. In the past, these aircraft were sometimes called “unmanned aerial vehicles,” “remotely piloted vehicles,” or “unmanned aircraft.” FAA and the international community have adopted the term “unmanned aircraft system” to designate them as aircraft and to recognize that a UAS includes not only the airframe, but also the associated elements—the control station and communications links—as shown in figure 1. The capabilities of UASs differ from manned aircraft in several ways. A UAS can operate for far longer periods than an onboard pilot could safely operate an aircraft. Future scenarios envision UASs remaining aloft for weeks or even months using fuel cell technology or airborne refueling operations. UASs may fly at slower speeds than most manned aircraft; some operate at low altitude (between buildings) while others fly well above piloted aircraft altitudes. Some UASs can fly autonomously based on pre-programmed data or flight paths, while others fly based on commands from pilot-operated ground stations. UASs also vary widely in size, shape, and capabilities. Some UASs, such as the Global Hawk, have a wingspan as large as that of a Boeing 737. Others, because they do not need the power or physical size to carry a pilot, can be small and light enough to be launched by hand, as is the case for the SkySeer UAS shown in figure 2. DOD has pioneered UAS applications for wartime use and, in 2007, was the major user of UASs, primarily for ongoing conflicts in Iraq and Afghanistan. While many of DOD’s UAS operations currently take place outside the United States, DOD needs access to the national airspace system for UASs to, among other things, transit from their home bases for training in restricted military airspace or for transit to overseas deployment locations. 
DOD officials stated that the need for military UAS access to the national airspace system is under review, and also noted that increased access would also allow their UASs to be more easily used to aid in fighting wildfires. Several federal agencies have roles related to UASs. FAA is responsible for ensuring UASs are safely integrated into the national airspace system’s air traffic control procedures, airport operations, and infrastructure, and with existing commercial, military, and general aviation users of the system. When UASs operate in that system, they must meet the safety requirements of the U.S. Code of Federal Regulations, Title 14, parts 61 and 91. FAA approves, on a case-by-case basis, applications from government agencies and private-sector entities for authority to operate UASs in the national airspace system. Federal, state, and local government agencies must apply for Certificates of Waiver or Authorization (COA), while private-sector entities must apply for special airworthiness certificates. In either case, FAA examines the facts and circumstances of proposed UAS operations to ensure that the prospective operator has acceptably mitigated safety risks. Special airworthiness certificates are the only means through which private-sector entities can operate UASs in the national airspace system. Because special airworthiness certificates do not allow commercial operations, there is currently no means for authorizing commercial UAS operations. NASA has conducted UAS research in the past. NASA led the 9-year Environmental Research Aircraft and Sensor Technology Program that focused on UAS technology for high altitude, long-endurance aircraft engines, sensors, and integrated vehicles. 
NASA also played a key role in a partnership with other federal agencies and industry called “Access-5.” Access-5 incorporated the efforts of the UAV National Industry Team, known as UNITE, formed by six private-sector aerospace firms, as well as FAA, DOD, and other industry participants. The Access-5 partnership sought to achieve routine operations for high-altitude, long-endurance UASs in the national airspace system. NASA contributed about 75 percent of the funding for this effort and the partnership had laid out plans through 2010. Although the partnership ended in fiscal year 2006 when NASA cancelled its funding, the project claimed a number of accomplishments, including creating productive and cohesive working relationships among key stakeholders and recommendations to advance the introduction of UASs into the national airspace system. Other agencies and organizations have roles or interests relating to UASs. For example, DHS’s TSA has authority to regulate security of all transportation modes, including non-military UASs, to ensure that appropriate security safeguards are in place. GSA has the responsibility for maintaining an inventory of all federally-owned or -leased aircraft, as reported by federal agencies. Additionally, a number of associations, representing private-sector aviation industries, such as airframe and components manufacturers, and users of the national airspace system, have interest in UASs progressing toward routine access to the system. We refer to officials of these associations as stakeholders in this report. Several federal agencies are using UASs of varying sizes for missions ranging from forest fire monitoring to border security. These agencies are interested in expanded use of UASs, and state and local governments would also like to begin using UASs for law enforcement or firefighting. UASs also could eventually have commercial applications. Federal agencies use UASs for many purposes. 
NASA, for example, uses UASs as platforms for gathering scientific research data and has partnered with other government agencies to demonstrate and use UASs’ unique capabilities. At its Wallops Island, Virginia, Flight Facility, NASA operates a small fleet of Aerosonde® UASs on a lease-to-fly basis for researchers. NASA also operates a modified Predator B UAS from its Dryden Flight Research Center, in California, and used it to aid firefighting efforts in southern California in 2007. During 2005, the Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA) partnered with NASA and industry to use a UAS to fill data gaps in several areas, including climate research, weather and water resources forecasting, ecosystem monitoring and management, and coastal mapping. During 2007, NOAA partnered with NASA to use an Aerosonde® UAS to gather data from Hurricane Noel and reported receiving valuable low-altitude data that could aid future weather forecasts and potentially reduce property damage and save lives. Several other federal agencies have benefited from using UASs. DHS’s Customs and Border Protection (CBP) uses Predator B UASs to help conduct surveillance along portions of the U.S. border with Mexico. (See fig. 3.) CBP credits its UAS operations with helping its agents make over 4,000 arrests and seize nearly 20,000 pounds of illegal drugs between September 2005 and March 2008. In the aftermath of Hurricane Katrina, UASs searched for survivors in an otherwise inaccessible area of Mississippi. Additionally, in 2004, the U.S. Geological Survey and the U.S. Forest Service used a UAS to study renewed volcanic activity at Mount St. Helens, Washington. The UAS proved useful in this study because it could operate above the extreme heat and toxic gases and solids emitted by the volcano. Recent events have contributed to increasing interest in expanding UAS operations. The nation’s industrial base has expanded to support current overseas conflicts.
Moreover, personnel returning from duty in war theaters provide a growing number of trained UAS operators. Advances in computer technology, software development, lightweight materials, global navigation, advanced data links, sophisticated sensors, and component miniaturization also contribute to the heightened interest in using UASs in civilian roles. In addition, the military’s use of UASs has raised the visibility of the possible benefits of using UASs in non-military applications. For example, the military recently demonstrated how operators can use UASs as communications platforms to bridge rugged terrain, as shown in figure 4. Disaster recovery officials could use UASs in a similar manner to help establish and maintain communications when the infrastructure is disabled or overloaded. The latter was an issue in the hours immediately following the terrorist attacks of September 11, 2001. An industry forecast anticipates that federal agencies will continue to be the main users of large UASs for much of the coming decade. CBP is expanding its fleet of Predator B UASs. The agency received its fourth aircraft in February 2008 and expects to acquire two more during fiscal year 2008. CBP also plans to expand its UAS operations along the southern U.S. border; begin operations along the northern U.S. border in the spring of 2008; and eventually expand operations to the Great Lakes and the Caribbean. CBP’s Air and Marine Operations Center in Riverside, California, will eventually control most of the agency’s UASs via satellite link. DHS’s Coast Guard is evaluating various UAS designs for future use in maritime border protection, law and treaty enforcement, and search and rescue. Expanded UAS use for scientific applications is also possible. According to NOAA, UASs have the potential to continue to fill critical observation gaps in climate change research, weather and water resources forecasting, ecosystem monitoring and management, and coastal mapping.
NOAA also anticipates further use of UASs for hurricane observation. Figure 5 illustrates how a high-altitude UAS might obtain hurricane data. The National Academies recently recommended that NASA increasingly factor UAS technology into the nation’s strategic plan for Earth science. In 2007, NASA acquired two Global Hawk UASs from the Air Force for potential use in long-endurance missions monitoring polar ice melt or gathering data on hurricane development 2,500 miles off the U.S. Atlantic coast. State and local agencies and commercial users envision using smaller UAS models. To facilitate more rapid resolution of emergency situations, an official with the International Association of Chiefs of Police envisions police and firefighting units having small, hand-deployed UASs available to assist at crime scenes and wildfire locations. According to FAA, as of January 2008, about a dozen law enforcement agencies had contacted the agency to discuss potential use of UASs. An industry forecast of UAS growth from 2008 to 2017 predicts that interest among local law enforcement agencies in operating UASs could increase late in the forecast period. In the private sector, some entrepreneurs have become interested in obtaining authorization to use small UASs to provide real estate photography services. Small UASs could also help companies survey pipeline or transportation infrastructure. However, an industry forecast noted that, for commercial applications, manned aircraft continue to be less costly than UASs. Consequently, demand for commercial applications will be limited in the near term. While the forecast indicates that civil and commercial UAS markets will eventually emerge, it notes that, for the next several years, a more likely scenario would be for a UAS leasing industry to emerge to serve the needs of businesses that do not want to invest in UAS ownership. UASs also could provide benefits to manned aviation.
Efforts to move toward routine access for UASs could produce technological improvements in areas such as materials, fuel cells, antennae, and laser communications, which could also benefit manned aviation, according to one study of UAS impact. Some experts we surveyed had similar observations, noting that advancements in see and avoid technology could lead to reduced aircraft separation requirements and, in turn, to increased airspace capacity. Five experts indicated that technological improvements could benefit the airspace, and four indicated that such improvements could benefit airports. Additionally, five experts predicted that UASs could provide a variety of benefits by assuming some of the missions currently performed by manned aircraft or surface vehicles. These experts predicted that UASs might perform these missions in less congested airspace or with engines that burn less fuel or produce less air pollution. Some experts view the routine use of UASs in the national airspace system as a revolutionary change in aviation. According to one study, the state of UASs today resembles the early days of manned aviation where innovation and entrepreneurial spirit spawned a new market and permanently changed the transportation landscape. The UAS industry is poised to meet the potential demand for UASs. A 2004 study, prepared for JPDO, reported that 49 UAS manufacturers operated in the United States. According to a 2007 industry estimate, UAS development and components manufacturing involved over 400 companies in the United States. An industry forecast for UASs indicates that, over the coming decade, the United States will account for 73 percent of the world’s research and development investment for UAS technology. The aforementioned 2004 JPDO report notes that the emergence of a civil UAS industry could provide a number of economic, social, and national security benefits, such as extending U.S. 
aerospace leadership in the global UAS market; sustaining, and perhaps increasing, employment in the U.S. aerospace industry; contributing to expanding the U.S. economy by increasing domestic productivity and aerospace exports; and creating the potential for a UAS civil reserve fleet for use in major national and international emergencies. Routine UAS access to the national airspace system poses a variety of technological, regulatory, workload, and coordination challenges. Technological challenges include developing a capability for UASs to detect, sense, and avoid other aircraft; addressing communications and physical security vulnerabilities; improving UAS reliability; and improving human factors considerations in UAS design. A lack of regulations for UASs limits their operations and leads to a lack of airspace for UAS testing and evaluation and a lack of data that would aid in setting standards. FAA also expects increased demand for UAS operations in the national airspace system before a regulatory framework is in place, which would increase the agency’s workload. In addition, coordination of efforts is lacking among diverse federal agencies as well as academia and the private sector in moving UASs toward meeting the safety requirements of the national airspace system. FAA requires UASs to meet the national airspace system’s safety requirements before they routinely access the system. However, UASs do not currently have the capability to detect, sense, and avoid other aircraft and airborne objects in a manner similar to manned aircraft. UASs also have communications and physical security vulnerabilities. Moreover, some UASs have demonstrated reliability problems and lack human-machine interface considerations in their design.
Although research, development, and testing of sense and avoid technologies have been ongoing for several years, no suitable technology has been identified that would provide UASs with the capability to meet the detect, sense, and avoid requirements of the national airspace system. These requirements call for a person operating an aircraft to maintain vigilance so as to see and avoid other aircraft. Without a pilot on board to scan the sky, UASs do not have an on-board capability to directly “see” other aircraft. Consequently, the UAS must possess the capability to sense and avoid other aircraft and objects using on-board equipment, or do so with the assistance of a human on the ground or in a chase aircraft, or by using other means, such as radar. Many UASs, particularly smaller models, will likely operate at altitudes below 18,000 feet, sharing airspace with other objects, such as gliders. Sensing and avoiding these other objects represents a particular challenge for UASs, since the other objects normally do not transmit an electronic signal to identify themselves, and FAA cannot mandate that all aircraft or objects possess this capability so that UASs can operate safely. Many small UAS models do not have equipment to detect such signals and, in some cases, are too small to carry such equipment. The Aircraft Owners and Pilots Association, in a 2006 survey of its membership, found that UASs’ inability to see and avoid manned aircraft is a priority concern. Additionally, the experts we surveyed suggested, more frequently than any other alternative, conducting further work on detect, sense, and avoid technology as an interim step to facilitate UAS integration into the national airspace system while FAA develops a regulatory structure for routine UAS operations. The effort to develop the Traffic Alert and Collision Avoidance System (TCAS), used widely in manned aircraft to help prevent collisions, demonstrates the challenge of developing a detect, sense, and avoid capability for UASs.
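The core geometric problem underlying any detect, sense, and avoid capability is predicting whether a detected object will pass too close, which can be illustrated with a closest-point-of-approach calculation. The sketch below is illustrative only: the two-dimensional constant-velocity model and the function name are our own simplification, and certified collision avoidance logic is far more complex.

```python
import math

def closest_approach(p_own, v_own, p_intruder, v_intruder):
    """Time (seconds from now) and distance (meters) of closest approach
    for two aircraft assumed to fly at constant velocity.

    Illustrative only: p_* are (x, y) positions in meters and v_* are
    (vx, vy) velocities in m/s in a flat, two-dimensional airspace.
    """
    # Relative position and velocity of the intruder.
    dp = (p_intruder[0] - p_own[0], p_intruder[1] - p_own[1])
    dv = (v_intruder[0] - v_own[0], v_intruder[1] - v_own[1])
    dv2 = dv[0] ** 2 + dv[1] ** 2
    if dv2 == 0.0:
        t = 0.0  # identical velocities: the separation never changes
    else:
        # Minimize |dp + dv*t|; clamp to "now" if closest point is past.
        t = max(0.0, -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2)
    cx, cy = dp[0] + dv[0] * t, dp[1] + dv[1] * t
    return t, math.hypot(cx, cy)
```

A sense and avoid system would run a test like this continuously against every detected object and command a maneuver when the predicted miss distance falls below a separation threshold; the difficulty FAA and others describe lies not in this geometry but in reliably detecting non-transmitting objects in the first place.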
Although FAA, airlines, and several private-sector companies developed TCAS over a 13-year period, at a cost of more than $500 million, FAA officials point out that the designers did not intend for TCAS to act as the sole means of avoiding collisions and that the onboard pilot still has the responsibility for seeing and avoiding other aircraft. FAA officials also point out that TCAS computes collision avoidance solutions based on the characteristics of manned aircraft and does not incorporate UASs’ slower turn and climb rates in developing conflict solutions. Consequently, FAA officials and stakeholders we interviewed believe that developing the detect, sense, and avoid technology that UASs would need to operate routinely in the national airspace system poses an even greater challenge than TCAS did. FAA officials believe that an acceptable detect, sense, and avoid system for UASs could cost up to $2 billion to complete and is still many years away. Ensuring uninterrupted command and control for a UAS is important because without it, the UAS could collide with another aircraft or, if it crashes to the earth, cause injury or property damage. The lack of protected radio frequency spectrum for UAS operations heightens the possibility that an operator could lose command and control of the UAS. Unlike manned aircraft, which use dedicated, protected radio frequencies, UASs currently use unprotected radio spectrum and, like any other wireless technology, remain vulnerable to unintentional or intentional interference. This remains a key security vulnerability for UASs because, in contrast to a manned aircraft, where the pilot has direct, physical control of the aircraft, interruption of the radio frequency link, such as by jamming, can sever a UAS’s only means of control. One of the experts we surveyed listed providing security and protected spectrum among the critical UAS integration technologies.
To address the potential interruption of command and control, UASs generally have pre-programmed maneuvers to follow if the command and control link becomes interrupted (called a “lost-link scenario”) and a means for safe return to the ground if operators cannot reestablish the communications link before the UAS runs out of fuel. However, these procedures are not standardized across all types of UASs and, therefore, remain unpredictable to air traffic controllers. Predictability of UAS performance under a lost-link scenario is particularly important for air traffic controllers, who have responsibility for ensuring safe separation of aircraft in their airspace. Ensuring continuity of UAS command and control also depends on the physical security provided to UASs. Presently, UAS operations in the national airspace system are limited and take place under closely controlled conditions. However, this could change if UASs have routine access to the national airspace system. One study identifies security as a significant issue that could be exacerbated with the proliferation of UASs. TSA notes that in 2004, terrorists flew a UAS over northern Israel. One stakeholder questioned how such an incident could be prevented in the United States. UASs have the capability to deliver nuclear, biological, or chemical payloads and can be launched undetected from virtually any site. In response to the events of September 11, 2001, entry doors to passenger airplane cockpits were hardened to prevent unauthorized entry. However, no similar security requirements exist to prevent unauthorized access to UAS ground control stations—the UAS equivalent of the cockpit. Security is a latent issue that could impede UAS development even after all the other challenges have been addressed, according to one study.
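The pre-programmed lost-link procedures described above amount to a small decision sequence that a UAS autopilot executes on its own. The sketch below is a hypothetical illustration of one such profile; the states, timeouts, and maneuver names are invented for illustration, since, as noted, these procedures are not standardized across UAS types.

```python
from enum import Enum, auto

class LinkState(Enum):
    NOMINAL = auto()
    LOST = auto()

def lost_link_action(link, seconds_since_loss, fuel_minutes_remaining,
                     loiter_timeout_s=60.0, reserve_fuel_min=10.0):
    """Select the maneuver for a hypothetical lost-link profile.

    All thresholds are illustrative; real profiles vary by aircraft
    type and operating authorization.
    """
    if link is LinkState.NOMINAL:
        return "continue mission"
    # First response: loiter at a pre-briefed point while operators
    # attempt to reestablish the command and control link.
    if seconds_since_loss < loiter_timeout_s:
        return "climb to pre-briefed altitude and loiter"
    # Link not recovered: head for a recovery site while fuel lasts.
    if fuel_minutes_remaining > reserve_fuel_min:
        return "proceed to pre-programmed recovery point"
    # Last resort before fuel exhaustion: terminate over a safe area.
    return "execute flight termination at designated area"
```

The unpredictability that concerns air traffic controllers is precisely that the timeouts, routes, and final actions in such a profile differ from one UAS type to the next, so a controller cannot anticipate where a lost-link aircraft will go.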
Although DOD has obtained benefits from its UAS operations overseas, the agency notes in its Unmanned Systems Roadmap that UAS reliability is a key factor in integrating UASs into the national airspace system. Our analysis of information that DOD provided on 199 military UAS accidents, of varying degrees of severity, that occurred over 4½ years during Operations Enduring Freedom and Iraqi Freedom indicates that reliability continues to be a challenge. About 65 percent of the accidents resulted from materiel issues, such as failures of UAS components. Studies indicate that a number of factors could contribute to UAS reliability problems. Many UASs have been designed primarily as expendable or experimental vehicles, where factors such as cost, weight, function, and performance outweigh reliability concerns, according to a 2004 study. The Congressional Research Service reported in 2006 that the lack of reliability stems from the fact that UAS technology is still evolving; consequently, UASs have less redundancy built into their operating systems than manned aircraft do, and until redundant systems are perfected, accident rates are expected to remain high. Reliability issues also stem from the nature of the components used in some UASs. A DOD report notes that there has been a tendency to design UASs at low cost using readily available materials that were not intended for use in an aviation environment. For example, one UAS used by DOD was equipped with a wooden propeller that could disintegrate in the rain. A composite or metal propeller could cost two to three times more than a wooden propeller. UAS developers have not yet fully incorporated human factors engineering in their products. Such engineering incorporates what is known about people, their abilities, characteristics, and limitations in the design of the equipment they use, the environments in which they function, and the jobs they perform.
According to researchers and agency officials we interviewed, technology in its early developmental stages typically lacks human factors considerations. Researchers noted that UASs, similar to any new technology, have been designed by engineers who focused on getting the technology to work, without considering human factors, such as ease of use by non-engineers. FAA officials noted that UASs today are at a similar stage as personal computers in their early years before newer, more user-friendly operating systems became standard. Studies indicate that human factors issues have contributed to military UAS accidents and DOD has indicated the need for further work in this area. Our analysis of DOD’s data on UAS accidents during Operation Enduring Freedom and Operation Iraqi Freedom showed that 17 percent were due to human factors issues. Several human factors issues have yet to be resolved. For example, the number of UASs that a single ground-based pilot can safely operate remains undetermined, as some future scenarios envision a single pilot operating several UASs simultaneously. Other unresolved issues include how pilots or air traffic controllers respond to the lag in communication of information from the UAS, the skill set and medical qualifications required for UAS pilots, and UAS pilot training requirements. The variety of ground control station designs across UASs is another human factors concern. For example, pilots of the Predator B UAS control the aircraft by using a stick and pedals, similar to the actions of pilots of manned aircraft. In contrast, pilots of the Global Hawk UAS use a keyboard and mouse to control the aircraft. Differences in UAS missions could require some variation among control station designs, but the extent to which regulations should require commonalities across all ground control stations awaits further research. The transition from one crew to another while UASs are airborne serves as another human factors issue needing resolution. 
Because UASs have the capability of extended flight, one crew can hand off control to another during a mission. Several military UAS accidents have occurred during these handoffs, according to a 2005 research study. The National Transportation Safety Board cited a similar issue in its report on the April 26, 2006, crash of CBP’s Predator B UAS. According to the report, the pilot inadvertently cut off the UAS’s fuel supply when he switched from a malfunctioning console to a functioning one. When the switch was made, a lever on the second console remained in a position that would cut off the fuel supply if an operator used the console to control the aircraft. Although procedures required that the controls on the two consoles be matched prior to making such a switch, this procedure was not followed. CBP reports that it has taken action to address this issue and has also addressed nearly all of the board’s other recommendations stemming from this accident. A remote pilot’s lack of situational awareness serves as another human factors-related challenge for the safe operation of UASs. For example, FAA officials have noted that situational awareness remains a key factor for operators to detect and appropriately respond to turbulence. A pilot on board an aircraft can physically sense and assess the severity of turbulence being encountered, whereas a remote pilot cannot. A UAS could break apart and become a hazard to other aircraft or to persons or property on the ground if the pilot has no indication of turbulence or its severity. Even if a remote pilot had an awareness of the turbulence, the level of risk that the pilot might accept needs further study. Because a pilot does not risk his own safety when operating a UAS, the pilot may operate the UAS in situations unsuitable for the aircraft, such as flying through turbulence strong enough to destroy the UAS’s airframe. 
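The console-matching procedure cited in the National Transportation Safety Board’s report lends itself to automated enforcement before a handoff is permitted. The sketch below is a hypothetical illustration of such a pre-switch check; the control names and the dictionary representation are invented for illustration and are not drawn from any actual ground control station design.

```python
def console_mismatches(active_console, standby_console,
                       critical_controls=("fuel_valve", "throttle", "stabilizer")):
    """Return the critical controls whose positions differ between the
    active and standby consoles; an empty list means the consoles are
    matched and the handoff may proceed.

    Hypothetical control names for illustration only.
    """
    return [c for c in critical_controls
            if active_console.get(c) != standby_console.get(c)]
```

In the 2006 accident, the second console’s fuel control sat in a cutoff position when control was transferred; an interlock built on a check like this would have flagged the mismatch and blocked the switch until the consoles agreed.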
Although many experts and aviation stakeholders believe that the technical issues discussed above represent difficult challenges for UAS integration into the national airspace system, others do not. For example, DOD’s Unmanned Systems Roadmap asserts that the technology for detecting and maneuvering to avoid objects does not present a major obstacle. Some experts responding to our survey expressed similar opinions. For example, one noted that technology needed to safely integrate UASs into the national airspace system exists today and that implementation should be the focus. Another said that FAA is too slow in adopting new technology and that sense and avoid techniques are available today that, when used in combination with a qualified pilot at the ground station’s controls, would be sufficient to allow free access for larger UASs. However, FAA expects to continue its current practice of allowing UAS access to the national airspace system on a case-by-case basis, after a safety review, until technology, research, and regulations mature. The U.S. Code of Federal Regulations prescribes rules governing the operation of most aircraft in the national airspace system. However, these regulations were developed for manned aircraft. Minimum performance standards for UAS detect, sense, and avoid and communications, command, and control capabilities, as well as regulations that incorporate these minimum standards, do not exist. Moreover, existing regulations may need changes or additions to address the unique characteristics of UASs. For example, because UASs do not need to be large or powerful enough to carry a pilot, they can be much smaller than any aircraft that today routinely operates in the national airspace system. Existing regulations were developed for aircraft large enough to carry a human. 
The lack of a regulatory framework has limited the amount of UAS operations in the national airspace system, which has, in turn, contributed to a lack of operational data on UASs and a lack of airspace in which developers can test and evaluate their products. An industry forecast indicates that growth in a civil UAS market is not likely until regulations exist that allow UASs to operate routinely. The forecast assumes that such regulations would be in place by 2012, but notes that few civil-use UASs would be produced in the near term, with numbers increasing toward 2017. (See fig. 6.) Studies indicate that the lack of regulations can affect the liability risk of UAS operations, which can increase insurance costs. For example, without airworthiness standards, insurers would be even more concerned about the liability hazard of UASs crashing in a dense urban environment. The lack of regulations to govern access to airspace has also posed challenges for developers of civil UASs. Officials of associations representing UAS developers told us of difficulties in finding airspace in which to test and evaluate UASs. One of these officials noted that some manufacturers have their own test ranges, and some have access to restricted military airspace, but other UAS developers have not had this access. Additionally, because UAS operations in the national airspace system have been limited, operational data are scarce. Having data on UAS operations is an important element in developing regulations. Because UASs have never routinely operated in the national airspace system, the level of public acceptance is unknown. One researcher observed that as UASs expand into the non-defense sector, there will inevitably be public debate over the need for and motives behind such proliferation.
One expert we surveyed commented that some individuals may raise privacy concerns about a small aircraft that is “spying” on them, whether operated by law enforcement officials or by private organizations, and raised the question of what federal agency would have the responsibility for addressing these privacy concerns. On the other hand, a study for JPDO noted that if UASs were increasingly used to produce public benefits in large-scale emergency response efforts, public acceptance could grow as the public notes the benefits that UASs can provide. As other countries work toward integrating UASs in their respective airspaces, FAA faces a challenge to work with the international community in developing harmonized standards and operational procedures so that UASs can seamlessly cross international borders and U.S. manufacturers can sell their products in the global marketplace. International bodies such as the European Organization for Civil Aviation Equipment (EUROCAE), and the European Organization for the Safety of Air Navigation (EUROCONTROL), as well as individual countries face challenges similar to those that the United States faces in integrating UASs into their respective airspaces. EUROCAE formed a working group—WG-73—in 2006 to focus on UAS issues. The working group completed its first product in January 2007—a preliminary inventory of airworthiness certification and operational approval items that need to be addressed. The working group also plans to develop a work plan that lays out work packages and timelines; a concept for UAS airworthiness certification and operational approval that will provide recommendations and a framework for safe UAS operations in non-segregated airspace; requirements for command, control, and communications, as well as for sense and avoid systems; and a catalog of UAS-air traffic management incompatibility issues that need to be addressed. 
EUROCONTROL has established a UAS Air Traffic Management Activity and is hosting workshops to seek feedback, suggestions, and advice from a broad range of aviation stakeholders on its approach to UAS integration into European airspace. The second workshop is scheduled for May 2008 and is open to all interested civil and military stakeholders, including air navigation service providers, UAS operators and manufacturers, regulators, and associations and professional bodies. EUROCONTROL has also established an Operational Air Traffic Task Force that has developed high-level specifications for military UASs operating outside segregated airspace in a form suitable for European states to incorporate into their national regulations. The specifications state that UAS operations should not increase the risk to other airspace users, that air traffic management procedures should mirror those applicable to manned aircraft, and that the provision of air traffic services to UASs should be transparent to air traffic controllers. Table 1 illustrates the variety of individual country efforts to integrate UASs into their respective airspaces. With the variety of ongoing efforts around the world, FAA and other countries face a challenge in harmonizing UAS standards and procedures. FAA could face a workload challenge in conducting an increasing number of case-by-case safety reviews for proposed UAS operations in the national airspace system. FAA is already having difficulty meeting its 60-calendar-day goal for processing COAs, used for government requests to operate UASs. From December 2006 through January 2008, FAA’s COA processing time averaged 66 calendar days. FAA anticipates a substantial increase by 2010 in requests for COAs, as well as for the special airworthiness certificates used by private-sector entities proposing UAS operations in the national airspace system. (See figs. 7 and 8.) Increased demand could result in even longer processing times for COAs.
A lack of knowledge of the number of federally-owned or -leased UASs adds uncertainty to FAA’s expected future workload. The number of COAs does not provide a count of federally-owned or -leased UASs because each COA reflects an authorization to operate a UAS, not the number of UASs owned or leased by an agency. According to FAA, an agency could have multiple copies of the same type of UAS whose operation is approved in a COA. Moreover, having multiple UASs of the same type could drive additional workload for FAA if the owning agency requests authorization to operate its UASs under different operating scenarios, each of which would require a separate COA. An agency could also have only one UAS, but more than one COA, if the agency required and received approval for the UAS to operate under different sets of conditions. GSA has responsibility for maintaining the inventory of federally-owned and -leased aircraft, but its regulations on reporting these aircraft have not been updated to require federal agencies to report UASs. Coordinating the efforts of the numerous federal agencies, academic institutions, and private-sector entities that have UAS expertise or a stake in routine access to the national airspace system is a challenge. As discussed above, several federal agencies are involved to varying degrees in UAS issues. Additionally, academic institutions have UAS expertise to contribute, and UAS manufacturers have a stake in supplying the demand for UASs that routine access could create. FAA and experts referenced the Access-5 program that, in the past, served as an overarching coordinating body and provided a useful community forum. While some experts believe that Access-5’s focus on high-altitude, long-endurance UASs is no longer appropriate, the program’s institutional arrangements demonstrated how federal government and private-sector resources could be combined to focus on a common goal.
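The many-to-many relationship between authorizations and aircraft described above can be made concrete with a small data model. The sketch below is hypothetical; the UAS type names, fleet sizes, and operating scenarios are invented solely to show that COA counts and aircraft counts vary independently.

```python
from dataclasses import dataclass

@dataclass
class COA:
    """One authorization: a UAS type approved to fly one operating scenario."""
    uas_type: str
    scenario: str

# Hypothetical agency A: five identical aircraft, but only two COAs
# (one per operating scenario, each covering all five aircraft).
agency_a_aircraft = {"Type X": 5}
agency_a_coas = [COA("Type X", "southern border"),
                 COA("Type X", "northern border")]

# Hypothetical agency B: a single aircraft, yet also two COAs, because
# it needed approval for two different sets of operating conditions.
agency_b_aircraft = {"Type Y": 1}
agency_b_coas = [COA("Type Y", "daytime maritime patrol"),
                 COA("Type Y", "night maritime patrol")]

total_coas = len(agency_a_coas) + len(agency_b_coas)
total_aircraft = (sum(agency_a_aircraft.values())
                  + sum(agency_b_aircraft.values()))
```

Both agencies hold two COAs, yet one operates five aircraft and the other only one, so counting authorizations neither bounds nor tracks the federal UAS fleet; hence the need for a separate inventory reporting requirement.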
Stakeholders and experts we surveyed believe that coordination and focus are lacking among the diverse entities working on UAS issues, and they expressed concerns that the potential public and economic benefits of UASs could be delayed while FAA develops the safety regulations required to enable routine UAS operations in the national airspace system. They noted the numerous potential uses in public safety, law enforcement, weather forecasting, and national security, discussed previously, stating that these benefits will be delayed until standards are developed. Some also noted that economic benefits realized through industry growth and productivity gains in the commercial sector would also be delayed. Additionally, some experts believe that, at the current pace of progress, the United States would lose its leadership position and manufacturers would move to other countries where the regulatory climate is more receptive. However, as previously noted, an industry forecast indicates that the United States will account for about 73 percent of worldwide UAS research and development investment in the coming decade. FAA and other agencies have roles in addressing technological, regulatory, and workload challenges, but no entity is in charge of coordinating these efforts. FAA and DOD are addressing some technological challenges, but TSA has not addressed the security implications of routine UAS operations. FAA is establishing a regulatory framework, but routine UAS access to the national airspace system may not occur for over a decade. FAA is mitigating its expected increased workload by automating some of its COA processing steps. GSA is updating its federal aircraft reporting requirements to include UASs. Experts and stakeholders believe that an overarching entity could add focus to these diverse efforts and facilitate routine UAS access to the national airspace system.
FAA is addressing technological issues by sponsoring research and taking steps to address UAS vulnerabilities in communications, command, and control. DOD is taking steps to improve UAS reliability and to increase the consideration of human factors in UAS design. An FAA-sponsored federal advisory committee is developing technical standards for FAA to use in developing UAS regulations. Although TSA issued an advisory circular in 2004 on UAS security concerns, it has not addressed the security implications of routine UAS access in the national airspace system. FAA has budgeted $4.7 million for fiscal years 2007 through 2009 for further UAS research on topics such as detect, sense, and avoid; command and control; and system safety management. NASA, FAA, and others have conducted tests to determine the capabilities of and potential improvements to detect, sense, and avoid technology. For example, in 2003, NASA installed radar on a manned aircraft that was equipped for optional control from the ground. The tests indicated that the radar detected intruding aircraft earlier than the onboard pilot, but also revealed the need for further work on the onboard sensing equipment to ensure adequate response time for the remote pilot. In another example, FAA and the Air Force Research Laboratory collaborated to execute flight tests for sense and avoid technology between October 2006 and January 2007. According to a summary of the lessons learned from these tests, the results showed some promise, but indicated that much work and technology maturation would need to occur before the tested system could be deemed ready for operational use. Efforts to address the challenge of radio frequency allocation for UAS operations are moving forward, but may not be completed for several years. The International Telecommunication Union allocates radio frequency spectrum and deliberates such issues at periodic World Radiocommunication Conferences, the most recent of which was held in the fall of 2007. 
To obtain spectrum allocation for UASs, FAA has participated with the Department of Commerce in a national preparation process to place spectrum allocation decisions on the conference’s future agenda. At the 2007 conference, delegates agreed to discuss at the next conference, in 2011, the spectrum requirements and possible regulatory actions, including spectrum allocations, needed to support the safe operation of UASs. The Department of Commerce and the Federal Communications Commission would jointly implement and manage the spectrum allocation decisions made at the 2011 conference, as these agencies manage, respectively, federal and non-federal use of frequency spectrum. DOD is urging manufacturers to increase UAS reliability while keeping costs low by using such practices as standard systems engineering, ensuring that replacement parts are readily available, and using redundant, fail-safe designs. DOD also notes in its Unmanned Systems Roadmap that, although UASs suffer accidents at rates one to two orders of magnitude greater than the rate incurred by manned military aircraft, accident rates have declined as operational experience has increased. For some UASs, the accident rates have become similar to or lower than the rate for the manned F-16 fighter jet, according to the roadmap. According to a study by The MITRE Corporation, General Atomics designed the Predator B UAS with reliability in mind, and the Altair UAS, which is a modified version of the Predator, has, among other things, triple redundant avionics to increase reliability. The Army has made some progress in limiting the variety of ground control station designs for unmanned aircraft—a human factors concern—by developing its “One System®,” which involves a single ground control station capable of operating a variety of UASs. Further increasing standardization and interoperability across all unmanned systems is a continuing DOD goal. 
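The roadmap’s “one to two orders of magnitude” comparison can be made concrete with a simple calculation. The manned baseline rate below is an assumed figure used only for illustration, not a statistic from DOD’s roadmap or this report:

```python
# Illustrative only: the manned baseline accident rate is an assumed
# figure, not a number from DOD's Unmanned Systems Roadmap.
manned_rate = 1.0  # assumed accidents per 100,000 flight hours

# "One to two orders of magnitude greater" means 10 to 100 times that rate.
uas_rate_low = manned_rate * 10
uas_rate_high = manned_rate * 100

print(f"Assumed manned baseline: {manned_rate} accidents per 100,000 flight hours")
print(f"Implied UAS range: {uas_rate_low} to {uas_rate_high} per 100,000 flight hours")
```

Whatever baseline is assumed, the claim implies a 10- to 100-fold gap, which frames how far reliability must improve before UAS accident rates approach those of manned aircraft.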
The Radio Technical Commission for Aeronautics (RTCA), a federal advisory committee sponsored by FAA, is establishing minimum performance standards for FAA to use in developing UAS regulations. RTCA established Special Committee 203 in October 2004 to develop such standards for UAS detect, sense, and avoid and for UAS communications, command, and control. Individuals from academia and the private sector serve on the committee without government compensation along with FAA, NASA, and DOD officials. Special Committee 203 has begun assessing the technological and regulatory landscape as it pertains to UASs to determine the scope of its task. The committee published guidance materials to provide a framework for its standards development effort and to help UAS designers, manufacturers, installers, service providers, and users understand the breadth of operational concepts and systems being considered for integration into the national airspace system. The committee anticipates that the guidance will be further refined and validated as the standards development process moves along. According to a committee co-chair, the committee did not realize, at the outset, that developing technical standards for UASs would be a project of unprecedented complexity and scope for RTCA. RTCA’s projects have been narrower in scope in the past, he said. Although committee officials had previously estimated that the standards would be completed by 2011 or 2012, the completion date is now between 2017 and 2019. The additional time will allow the committee to apply a data-driven, systems engineering approach that will require the collaborative efforts of FAA, DOD, and MITRE’s Center for Advanced Aviation System Development. RTCA anticipates that reliability and human factors requirements will be integrated into its minimum performance standards. 
The guidance materials note that UASs must meet the same reliability requirements as manned aircraft and that reliability is an important component of safety, bearing on flight control systems; on certification requirements for detect, sense, and avoid avionics; and on command and control systems such as the UAS’s autopilot. According to RTCA officials, human factors will be an overarching consideration in standards development. Although UASs remain vulnerable to many of the same security risks as manned aircraft, little attention has been afforded to UAS security. In 2004, TSA issued an advisory that described possible terrorist interest in using UASs as weapons. The advisory noted the potential for UASs to carry explosives or disperse chemical or biological weapons. However, the advisory noted that there was no credible evidence to suggest that terrorist organizations plan to use UASs in the United States and advised operators to stay alert for UASs with unusual or unauthorized modifications or persons observed loitering in the vicinity of UAS operations, loading unusual cargo into a UAS, appearing to be under stress, showing identification that appeared to be altered, or asking detailed questions about UAS capabilities. In 2007, the agency advised model aircraft clubs to fly their aircraft only at chartered club facilities or at administered sites and to notify local authorities of scheduled flying events. TSA considers these actions appropriate to address the security threat posed by UASs. According to TSA, the agency uses a threat-based, risk management approach to prioritize risk, threats, and vulnerabilities in order to appropriately apply resources and implement security enhancements. TSA informed us that the agency continues to monitor threat information regarding UASs and has processes in place to act quickly to mitigate and respond to any identified vulnerabilities. 
While these actions may be appropriate for the low tempo of today’s UAS operations, growth forecasts indicate that UASs could proliferate in the national airspace in the future. Such a proliferation could increase the risk of UASs being used by terrorists for attacks in the United States. A lack of analysis of security issues, while FAA develops the regulatory framework, could lead to further delays in allowing UASs routine access to the national airspace system. FAA has established a UAS program office and is reviewing the body of manned aviation regulations to determine the modifications needed to address UASs, but these modifications may not be completed until 2020. As an interim step, FAA has begun an effort to provide increased access to the national airspace system for small UASs. FAA is taking steps to develop data to use in developing standards, but has been slow to analyze the data that it has already collected. FAA is also coordinating with other countries to harmonize regulations. In February 2006, FAA created the Unmanned Aircraft Program Office (UAPO) to develop policies and regulations to ensure that UASs operate safely in the national airspace system. With 19 staff, UAPO serves as FAA’s focal point for coordinating efforts to address UAS technical and regulatory challenges and for outreach to other government agencies, the private sector, and other countries and international bodies working on UAS integration challenges. UAPO is developing a program plan to inform the aviation community of FAA’s perspective on all that needs to be accomplished and the time frames required to create a regulatory framework that will ensure UAS safety and allow UASs to have routine access to the national airspace system. Although officials informed us that this plan was in progress in December 2006, as of March 2008 the plan was awaiting final approval for release. 
Issuing the program plan could provide industry and potential UAS users with a framework that describes FAA’s vision and plans for integrating UASs into the national airspace system. While RTCA is developing minimum performance standards for UASs, FAA has begun to review the existing body of regulations for manned aviation to determine what regulations need to be modified or whether new regulations are needed to address the unique characteristics of UASs. Some of the rules for manned aircraft may not apply to UASs. For example, the rule requiring that oxygen be on board for passenger use on all aircraft operating above 14,000 feet would not apply to a UAS. On the other hand, new standards may be needed. For example, while FAA has developed standards for manned airframe stress, no similar standard exists for UASs. UASs may require unique standards because, as mentioned previously in this report, a remote pilot cannot physically experience and judge the severity of turbulence that could potentially harm the airframe and cause an accident. However, UASs may not receive routine access to the national airspace system until 2020. FAA’s final step in developing UAS regulations must wait until the 2017 to 2019 time frame, after RTCA’s Special Committee 203 develops minimum technical standards for UASs. FAA would then conduct a rulemaking to adopt the committee’s standards, which would require an additional year, according to an FAA official. As an interim effort to increase UAS access to the national airspace system, FAA began an effort in 2007 to establish regulations to incrementally allow small UASs to operate in the national airspace system, under low-risk conditions without undergoing the case-by-case approval process that is currently required. FAA has established a plan to publish a notice of proposed rulemaking by July 2009 and a final rule by 2010 or 2011. 
Although FAA has not reached any final decisions, FAA may limit these regulations to UASs weighing less than 30 pounds, operating within line of sight, and traveling at speeds less than 40 knots, according to an FAA official. FAA is considering using a nontraditional certification approach that would allow applicants to register small UASs using a Web-based tool. FAA anticipates that, following the rulemaking, it will obtain data and experience with UAS operations that could lead to further gradual expansion of small UAS access to the national airspace system. Allowing incremental access for certain UASs that pose low risks is consistent with pending legislation, and local government agencies and potential commercial operators have expressed much interest in operating small UASs. However, FAA recognizes that some small UASs may never have routine access to the national airspace system because their small size limits their ability to carry detect, sense, and avoid equipment. Additionally, FAA notes that, like all UASs, small UASs will require secure radio frequency spectrum for command and control, and this issue has not yet been resolved. The absence of a comprehensive database on UAS safety and reliability that could inform the standards and regulations development process hinders FAA’s efforts to establish a regulatory framework for UASs. FAA has been working to leverage DOD’s decades of experience with UASs. Collaboration between FAA and DOD could provide mutual benefits. DOD plans to spend over $7 billion in research, development, test, and evaluation funds for UASs between fiscal years 2007 and 2013. Data from these efforts could facilitate FAA’s development of a regulatory framework to allow UASs to have routine access to the national airspace system. DOD would benefit from this access by being able to operate its UASs in the national airspace, without first obtaining a COA, as UASs transit from home bases to training areas or to overseas deployment. 
To this end, FAA and DOD finalized a memorandum of agreement in September 2007 that provides a formal mechanism for FAA to request, and DOD to provide, data on UAS operations to support safety studies. Through the memorandum, FAA will share the results of its studies with DOD and vice versa. FAA also participates with DOD on a joint integrated product team that is focusing on obtaining military UAS access to the national airspace system. According to DOD’s Unmanned Systems Roadmap, the team’s activities include modeling and simulation, technology development, acquisition, demonstrations, and flight tests. While DOD’s extensive experience with UAS operations and its accumulated data represent potentially rich sources of information on UAS operations, regulators should use such information with the understanding that it comes from a wartime operating environment. FAA and DOD officials acknowledge that military experience and operational data on UASs are not always directly transferable to operations in the national airspace system. The military’s use of UASs is focused on mitigating the danger to troops. Safety and reliability risks that may be appropriate in a war zone to protect troops may not be acceptable in the national airspace system. FAA’s efforts to develop and analyze UAS operations data are a good start, but FAA has not yet analyzed the data that it has already collected. The COA requires the applicant to provide FAA with a variety of operational data, such as the number of flights conducted, the pilot duty time per flight, equipment malfunctions, and information on any accidents. FAA has been archiving this information as it is received, but has not analyzed it because of resource constraints, according to a UAPO official. Analyzing this data could add to the information available for developing standards. 
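The kind of aggregate analysis FAA could run on its archived COA reports can be sketched in a few lines. The field names below mirror the data elements the COA requires (flights flown, hours, malfunctions, and accidents), but the records and values are hypothetical, invented for illustration:

```python
# Hypothetical COA operational reports; the fields follow the data elements
# the COA requires, but the numbers are invented for illustration.
reports = [
    {"flights": 40, "flight_hours": 120.0, "malfunctions": 3, "accidents": 0},
    {"flights": 25, "flight_hours": 60.0,  "malfunctions": 1, "accidents": 1},
    {"flights": 10, "flight_hours": 20.0,  "malfunctions": 0, "accidents": 0},
]

total_hours = sum(r["flight_hours"] for r in reports)
total_malfunctions = sum(r["malfunctions"] for r in reports)
total_accidents = sum(r["accidents"] for r in reports)

# Rates per 1,000 flight hours -- the sort of aggregate metric that
# could feed the standards development process.
malfunction_rate = 1000 * total_malfunctions / total_hours
accident_rate = 1000 * total_accidents / total_hours

print(f"Malfunctions per 1,000 flight hours: {malfunction_rate:.1f}")
print(f"Accidents per 1,000 flight hours: {accident_rate:.1f}")
```

Even a simple aggregation like this, applied across the archived reports, would give standards developers an empirical baseline that does not currently exist.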
As a vehicle for collecting data on UAS operations and to address the challenge that UAS developers have had in finding airspace for testing and evaluating their products, FAA has established a UAS test center at New Mexico State University in Las Cruces, New Mexico. FAA expects that UAS operations at the test center, which opened in the spring of 2008, will provide FAA with some of the data needed to develop standards and regulations for allowing routine UAS access to the national airspace system. The university will operate the 12,000-square-mile test center, where UASs can operate at altitudes up to 18,000 feet. (See fig. 9.) The university has several years of experience in demonstrating, testing, and evaluating UAS technologies. The New Mexico environment has the advantage of a very low population density and a low volume of air traffic, and the test center is located over mostly undeveloped government-owned land. FAA will provide oversight of the test center operation by way of announced and unannounced visits, according to an FAA official. To address the challenge of coordinating U.S. efforts with those of other countries, FAA is working with international aviation bodies and maintaining contact with other countries as they also work to overcome the challenges of integrating UASs into their respective airspaces. For example, the manager of FAA’s UAPO serves as a vice chairman of EUROCAE’s WG-73, and FAA has established a collaborative effort with EUROCONTROL to leverage mutual expertise and resources. FAA told us that the International Civil Aviation Organization (ICAO) has formed a study group to identify changes needed in global standards and practices to address UAS issues. FAA has also established a memorandum of cooperation with the Netherlands’ Civil Aviation Authority to work on UAS technology, hazards, and risks. 
FAA plans to contribute, subject to appropriations, $1 million during fiscal years 2007 through 2011, to provide the Netherlands with data and expertise, while the Netherlands plans to contribute €160,000 ($251,279). FAA has received briefings on Japan’s use of UASs for pesticide spraying and has collaborated with several countries to address UAS issues with ICAO. FAA’s efforts to work with the international community could facilitate mutual sharing of experiences and substantially increase the amount of information available to all countries. One stakeholder suggested Israel as a potential source of data, as that country has had extensive experience with UAS operations. An Israel Space Agency official, noting the growing importance of UASs in that country, stated that the numbers of unmanned aircraft in the Israel Air Force will outnumber manned aircraft within 20 years. The official also stated that in a recent conflict, Israel’s UASs compiled more flying hours than manned aircraft. FAA has taken some actions to mitigate the workload challenge stemming from an anticipated increase in requests for COAs to operate UASs in the national airspace system. During the spring of 2007, FAA began to introduce more automation into its COA review process for UASs and has plans for increasing automation. For example, FAA established a Web-based COA application, which became mandatory for applicants’ use on July 1, 2007. FAA officials believe that the Web-based process allows applicants to more easily determine the application’s requirements, thereby eliminating rework and repeated reviews before FAA accepts the application. FAA also expects that the September 2007 memorandum of agreement with DOD will reduce the number of COA applications because it allows DOD to conduct certain operations with UASs weighing 20 pounds or less over military installations and in other specified airspace without obtaining a COA. 
Additionally, FAA is working to identify characteristics of routine COA applications, which FAA estimates constitute up to 80 percent of total COA applications, enabling agency staff to focus limited resources on nonroutine cases. Focusing less attention on routine cases is consistent with comments from three of our experts who noted the need for an expedited process for obtaining COAs and special airworthiness certificates. FAA officials also stated that because applicants are becoming more familiar with COA requirements, a higher percentage of applications do not need additional work and review. Knowledge of the number of federally-owned or -leased UASs could help FAA to plan for future workload. Forecasters indicate that UASs operated by federal agencies could be a major component of UAS growth in the immediate future. Although the current number of federally-owned or -leased UASs is unknown, GSA is taking steps to obtain this information. In response to our requests for data on the number of federally-owned or -leased UASs, GSA sent letters to federal agencies in February 2008, clarifying that FAA defines a UAS as an aircraft and requesting agencies to report their UASs by March 31, 2008. GSA is also in the process of revising regulations to require federal agencies to include owned or leased UASs in their aircraft inventory reports. GSA expects to have its regulation updated by February 2009. GSA anticipates that the first public reporting of UASs will be in the fiscal year 2008 Federal Aviation Report, due by March 31, 2009. This report could add a degree of certainty to FAA’s future workload requirements. In addition to FAA, DOD, TSA, and GSA, other federal agencies, academia, and the private sector also have UAS expertise or a stake in obtaining routine UAS access to the national airspace system. 
For example, RTCA notes that developing standards will require collaboration with DOD’s joint integrated product team and technical expertise from staff in MITRE’s Center for Advanced Aviation System Development. DOD seeks expanded access to the national airspace and, as previously discussed, has extensive experience with operating its own UASs. Beyond DOD and FAA, other entities also have UAS expertise or a stake in achieving routine UAS access to the national airspace system. For example, DHS’s CBP and Coast Guard need UAS access to the national airspace system to perform their missions. Several academic institutions have been involved in developing UAS technology in areas such as vehicle design and detect, sense, and avoid capability. Additionally, the private sector has a stake in being ready to respond to the anticipated market that could emerge when FAA makes routine access available to most UASs. Although FAA’s UAPO serves as a focal point within FAA, the office has no authority over other agencies’ efforts. Experts and stakeholders suggested that an overarching body might facilitate progress toward integrating UASs into the national airspace system. DOD, as the major user of UASs, is taking such an approach. DOD has recognized the need for coordination of UAS activities within its own sphere of influence, as each service has recognized the value of UASs for its respective missions. Consequently, DOD established an Unmanned Aircraft Systems Task Force to coordinate critical issues related to UAS acquisition and management within DOD. According to DOD, the task force will establish new teams or lead or coordinate existing Army, Navy, and Air Force teams to enhance operations, enable interdependencies, and streamline acquisitions. FAA is participating in a joint integrated product team that is part of this task force, and DOD has invited DHS to join the task force. 
The European Defense Agency has also recognized the challenge of channeling diverse entities, as well as multiple nation-states, toward the common goal of UAS access to non-segregated airspace. In January 2008, the agency announced that it had awarded a contract to a consortium of defense and aerospace companies to develop a detailed roadmap for integrating, by 2015, UASs into European airspace. The project is intended to help European stakeholders such as airworthiness authorities, air traffic management bodies, procurement agencies, industry, and research institutes to develop a joint agenda for common European UAS activities. The consortium held its first workshop in February 2008 and has since prepared a roadmap outline based on the needs and requirements expressed by the stakeholders. The consortium has also identified, as a baseline, key actions to be undertaken and key topics for further investigation. The consortium has invited stakeholders to discuss this common baseline at a second workshop, scheduled for May 2008. Congress addressed a similar coordination challenge in 2003 when it passed legislation to create JPDO to plan for and coordinate a transformation of the nation’s current air traffic control system to the next generation air transportation system (NextGen) by 2025. NextGen involves a complex mix of precision satellite navigation; digital, networked communications; an integrated weather system; layered, adaptive security; and more. NextGen’s coordination and planning challenges are similar to those posed by UASs. For example, as required for UAS integration, the expertise and technology required for NextGen resides in several federal agencies, academia, and the private sector. DOD has expertise in “network centric” systems, originally developed for the battlefield, which are being considered as a framework to provide all users of the national airspace system with a common view of that system. 
JPDO’s responsibilities include coordinating goals, priorities, and research activities of several partner agencies, including DOD, FAA, the Department of Commerce, DHS, and NASA, with aviation and aeronautical firms. Congress directed JPDO to prepare an integrated plan that would include, among other things, a national vision statement and a multiagency research and development roadmap for creating NextGen. The legislation called for the roadmap to identify obstacles, the research and development necessary to overcome them, and the roles of each agency, as well as of corporations and universities. The impact of routine UAS operations on the national airspace system and the environment depends on a number of factors and remains generally speculative. UAS impact will depend on factors such as the number of UASs purchased for civil uses and the altitudes and geographic locations where they are used. Stakeholders whom we interviewed provided a variety of perspectives on UASs’ potential impact. One official told us that UASs that use airports will impact air traffic control, while the impact of small UASs that do not need to use airports is less clear. Officials also noted that the level of risk depends on factors such as the UAS’s weight and horsepower. For example, a small, 2- or 3-pound UAS would pose little risk to aircraft or people on the ground, but UASs weighing more than 20 pounds could do significant damage to an aircraft. Officials also noted that a UAS used over a sparsely populated area would have less impact than UAS operations over densely populated areas. Predictions of the impact of UASs on the national airspace system are speculative because there are few data upon which to base predictions. Predictions become even more speculative in view of RTCA’s recent estimate that minimum standards for UASs—a prerequisite for routine UAS access to the national airspace system—will require about another 10 years to complete. 
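The officials’ point that risk scales with a UAS’s weight can be illustrated with a simple kinetic-energy comparison. The collision speed below is an assumption chosen for illustration; the report gives only the weights:

```python
# Kinetic energy KE = 0.5 * m * v**2; at a common speed, impact energy
# scales linearly with mass. The speed is an assumed figure, not from
# the report.
LB_TO_KG = 0.4536
speed_m_s = 25.0  # assumed closing speed, roughly 50 knots

def kinetic_energy_joules(weight_lbs, v=speed_m_s):
    """Impact energy of a UAS of the given weight at the assumed speed."""
    return 0.5 * weight_lbs * LB_TO_KG * v ** 2

small = kinetic_energy_joules(3)   # a 3-pound UAS
large = kinetic_energy_joules(20)  # a 20-pound UAS

print(f"3-lb UAS:  {small:.0f} J")
print(f"20-lb UAS: {large:.0f} J")
# At equal speed the 20-lb aircraft carries ~6.7x the impact energy.
```

At equal speed, impact energy scales linearly with mass, so the 20-pound aircraft carries nearly seven times the energy of the 3-pound one; in practice, heavier UASs also tend to fly faster, widening the gap further.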
One study notes that more needs to be known about the needs and capabilities of future UASs as well as the potential market, but concluded that their operations could have a significant and potentially disruptive impact on aviation by affecting capacity and introducing more complexity. In 2007, RTCA’s Special Committee 203 reported similar concerns, indicating that UASs will create some unique challenges because they operate differently from typical manned aircraft. While manned aircraft generally go from one location to another, UASs may hover or circle in one location for a significant time. Additionally, UAS speed, maneuverability, climb rate, and other performance characteristics may differ substantially from those of conventional aircraft. The committee believes that these characteristics could affect air traffic flow, air traffic controller workload, and departure and arrival procedures, among other things. Similarly, FAA officials noted that UASs pose airport safety and capacity questions that require further analysis. Most of the experts stated that the impact of UASs would be at least as significant as that of additional manned aircraft on airspace, airports, and air traffic control. For example, they predicted that, as with manned aircraft, UASs would add to the number of aircraft and, therefore, affect airspace and airport capacity and add to the workload of air traffic controllers. However, the experts also predicted that UASs could have a beneficial impact on the environment. The experts predicted that UASs could assume some missions currently performed by manned aircraft, but could perform these missions using engines that burn less fuel or produce less air pollution. Although ensuring that UASs operate safely in the national airspace system is a new and complex challenge for FAA, the national airspace system should be prepared to accommodate them. 
Understanding the issues, trends, and influences of UASs will be critical in strategically planning for the future airspace system. FAA is making progress in addressing the challenges. Establishing a UAS test center to provide UAS developers with airspace in which to test, evaluate, and refine their aircraft designs, and initiating efforts to increase airspace access for small UASs are significant steps. Moving forward, issuing FAA’s long-awaited program plan should benefit the aviation community by communicating FAA’s strategy of how it plans to address the interactive complexities and unique properties of UASs and how it plans to leverage the resources of multiple entities that have expertise and experience in this area. FAA’s efforts to accumulate and analyze data will be important to facilitate the regulatory development process. However, analyzing the data that it already has collected from recent UAS operations would further support decisions on the new regulations. FAA’s new estimate that the regulatory framework is not likely to be completed until sometime near 2020—about 8 years later than the date assumed by the industry forecast cited in this report—could further delay the time frame when civil-use UAS production begins to increase. While TSA’s risk assessment of UASs may be appropriate for today’s UAS environment, a national airspace system that allows routine UAS access for all government and private UASs will require increased safeguards to protect against security vulnerabilities like those exposed in the events of September 11, 2001. Proactively assessing and addressing these issues will help ensure that the benefits of UASs are not further delayed pending resolution of security challenges. Additionally, it will be important for GSA to follow through and ensure that federal agencies report all of their owned or leased UASs, so that FAA has a more accurate basis for workload planning. 
It remains to be seen whether Europe will be successful in integrating UASs into its airspace by 2015, which is considerably sooner than the 2020 time frame expected in the United States. An overarching entity, modeled after JPDO and set up to coordinate federal, academic, and private-sector entities, could facilitate progress in moving toward UASs having routine access to our national airspace system. To coordinate and focus the efforts of federal agencies and harness the capabilities of the private sector so that the nation may obtain further benefits from UASs as soon as possible, Congress should consider creating an overarching body within FAA, as it did when it established JPDO, to coordinate federal, academic, and private-sector efforts in meeting the safety challenges of allowing routine UAS access to the national airspace system. To obtain further benefits from UASs, we are recommending that the Secretary of Transportation direct the FAA Administrator to expedite efforts to ensure that UASs have routine access to the national airspace system by taking the following two actions: 1. Finalize and issue a UAS program plan to address the future of UASs. 2. Analyze the data FAA collects on UAS operations under its COAs and establish a process to analyze DOD data on its UAS research, development, and operations. To ensure that appropriate UAS security controls are in place when civil-use UASs have routine access to the national airspace system, we are recommending that the Secretary of Homeland Security direct the TSA Administrator to examine the security implications of future, non-military UAS operations in the national airspace system and take any actions deemed appropriate. We provided a draft of this report to DOT, DHS, DOD, GSA, NASA, and the Department of Commerce. DOT agreed to consider our recommendations as it moves forward in addressing UASs, and DHS agreed with our recommendation to it. 
GSA commented that, although our report contained no recommendations to the agency, it will continue to work with federal agencies to ensure that FAA has accurate information on the number of federally owned or leased UASs. DOT commented that the report would benefit from additional information on the impact of UASs on airports. We revised the report to include DOT's concern that the impact of UASs on safety and capacity at airports requires further study. DOT, DOD, and DHS provided technical comments, which we incorporated as appropriate. NASA and the Department of Commerce had no comments. We are sending electronic copies of this report to FAA, DHS, DOD, GSA, NASA, the Department of Commerce, and interested congressional committees. We also will make electronic copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objective was to assess the Federal Aviation Administration's (FAA) efforts to ensure that unmanned aircraft systems (UAS) are safely integrated into the national airspace system and the potential impact of UASs on the national airspace system and the environment after integration occurs. To meet this objective, we developed the following research questions: (1) What are the current and potential uses and benefits of UASs? (2) What challenges exist in operating UASs safely and routinely in the national airspace system? (3) What is the federal government's response to these challenges? and (4) Assuming that UASs have routine access to the national airspace system, how might they impact the system and the environment?
To address these questions, we surveyed the literature and also obtained and reviewed documents and interviewed officials of government, academic, and private-sector entities involved with UAS issues. We discussed current and future use of UASs with officials at FAA, the Department of Defense (DOD), the National Aeronautics and Space Administration (NASA), and the Department of Homeland Security (DHS). We interviewed leaders of the Radio Technical Commission for Aeronautics' (RTCA) Special Committee 203, which is developing UAS standards, and met with officials from a federally funded research and development center. We discussed potential use of UASs for cargo transport with the United Parcel Service and Federal Express. We also discussed our questions with officials of associations of UAS manufacturers and users of the national airspace system, specifically, the Air Transport Association; Aerospace Industries Association; Association for Unmanned Vehicle Systems International; Aircraft Owners and Pilots Association; Air Line Pilots Association, International; American Institute of Aeronautics and Astronautics; ASTM International, originally known as the American Society for Testing and Materials; Palm Bay Police Department; and Los Angeles Sheriff's Department. We discussed UAS operations with officials and observed UAS operations at Fort Huachuca, Arizona, and met with DHS's Customs and Border Protection (CBP) officials in Arizona to discuss UAS use in border protection. Additionally, we obtained industry forecasts of UAS growth and interviewed a senior analyst involved in preparing Teal Group Corporation's UAS market profile and forecast. We also observed a demonstration of unmanned systems at Webster Field, St. Inigoes, Maryland. To obtain additional information on the challenges that must be overcome before UASs can safely and routinely operate in the national airspace system, we leveraged information that was originally obtained and analyzed for a related GAO engagement.
For that engagement, we contacted the Army Combat Readiness Center, Naval Safety Center, and Air Force Safety Center to obtain data on each service’s UAS accidents from October 2001 to April or May 2006, depending on when the services queried their databases. The services provided data on class A, B, C, and D accidents. Using the descriptive information that the services provided for each accident, we determined whether human, materiel, environmental, or undetermined factors caused the accident and categorized each accordingly. We used the definitions of human, materiel, and environmental factors provided in Army Regulation 385-40, Accident Reporting and Records. We classified accidents as “undetermined” when descriptive information did not fall within one of the first three categories of factors. We discussed the results of our analysis with DOD officials and incorporated their comments as appropriate. To obtain additional information on the federal response to the challenge of integrating UASs into the national airspace system and the impact that UASs might have on the system after they have routine access, we reviewed agency documents and interviewed officials of the General Services Administration and the Department of Commerce’s National Telecommunications and Information Administration. We also obtained information from DHS’s Transportation Security Administration. Additionally, we surveyed 23 UAS experts, whose names were identified with the assistance of the National Academies. We asked the experts to provide, in narrative format, their views on the interim regulatory, technological, research, or other efforts that could be undertaken for UASs to operate, if not routinely, then to the maximum extent possible in the national airspace system while FAA develops the regulatory structure to enable all UASs to have routine access to the system. 
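The accident-categorization step described above (assigning each accident narrative to human, materiel, environmental, or undetermined factors) can be sketched as a simple rule-based classifier. This is only an illustration: the category names follow Army Regulation 385-40, but the keyword lists and sample narratives below are hypothetical, not GAO's actual coding rules.

```python
# Illustrative sketch of categorizing accident narratives by causal factor.
# Category names follow Army Regulation 385-40; keyword lists and sample
# narratives are hypothetical assumptions, not GAO's actual coding rules.

CATEGORY_KEYWORDS = {
    "human": ["operator error", "pilot", "procedure not followed"],
    "materiel": ["engine failure", "servo", "component", "software fault"],
    "environmental": ["wind", "icing", "bird strike", "turbulence"],
}

def categorize(description: str) -> str:
    """Assign an accident narrative to one causal-factor category.

    Falls back to 'undetermined' when the narrative matches none of
    the defined factor categories, mirroring the approach in the text.
    """
    text = description.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "undetermined"

# Hypothetical class A-D accident narratives.
accidents = [
    "Class B: engine failure during climb-out",
    "Class C: operator error during landing sequence",
    "Class A: lost link in severe wind conditions",
    "Class D: cause not reported",
]

tally = {}
for narrative in accidents:
    cat = categorize(narrative)
    tally[cat] = tally.get(cat, 0) + 1

print(tally)  # {'materiel': 1, 'human': 1, 'environmental': 1, 'undetermined': 1}
```

In practice the actual GAO coding was done by analysts reading full descriptive records rather than by keyword matching; the sketch only captures the structure of the mutually exclusive four-way categorization.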
We also asked the experts to provide their predictions on how small and large UASs might impact the national airspace, airports, air traffic control, noise, and air quality, using a 7-point scale from large adverse impact to large beneficial impact, and asked that they explain their answers. Appendix II discusses how we developed and conducted the survey. The complete survey instrument appears as appendix III. We conducted this performance audit from October 2006 to May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We administered a Web-based survey to gather the professional views of experts on the impact of UASs on the national airspace system and the actions needed to move toward safe and routine UAS operations. The structured survey questions ensured that all individuals had the opportunity to provide information in response to the same questions and enabled us to quantify the results. We contracted with the National Academies to identify experts to participate in our survey. Using criteria that we had specified to ensure adequate representation, the National Academies identified 26 experts.
The criteria ensured that we achieved balance in terms of the type of expertise (i.e., aircraft and avionics manufacturing officials, association representatives, engineers, academics, foreign civil aviation authorities, and researchers involved in aviation safety); balance of knowledge across relevant content areas (i.e., aviation regulations and safety, UAS technology, next generation air transportation system planning, airport operations, human factors, and international issues); and balance in representation of relevant organizations (i.e., academia, business, government, and professional organizations). The survey responses represent the professional views of the experts. Their expertise can be derived from formal education, professional experience, or both. The experts were identified by the National Academies as individuals who are recognized by others who work in the same subject matter area as having knowledge that is greater in scope or depth than that of most people working in the area. The experts included researchers, consultants, vice presidents, directors, and professors who were associated with private sector firms, associations, or academic institutions involved with UASs. Some of the experts were retired federal officials. We recognize that it is likely that no one individual possessed complete knowledge in each of the content areas addressed in the survey. However, through our selection criteria, we attempted to identify a set of individuals who, when their responses were considered in the aggregate, could be viewed as representing the breadth of knowledge in each of the areas addressed in the survey. We identified the information to collect in our surveys based on our congressional request, Internet and literature searches, professional conferences we attended, background interviews, and discussions with external expert advisors.
A social science survey specialist collaborated with staff with subject matter expertise on the development of the surveys. We pretested the survey to ensure that the questions appropriately addressed the topics, were clearly stated, easy to comprehend, and unbiased, and did not place undue burden on respondents. We also evaluated the usability of the Web-based survey. Based on the pretest results, we made necessary changes to the survey prior to implementation. We administered the Web-based survey during August and September 2007. We used email to inform the respondents of the survey administration, and provided them with the Web link for the survey and their log-in name and password. In the email message, we informed respondents that our report will not contain individual survey responses; instead, it may present the aggregated results of all participants. To maximize the response rate, we sent follow-up email reminders and followed up by telephone as necessary to encourage survey participation. The survey was sent to 26 experts; three did not respond, giving the survey a response rate of about 88 percent. The narrative responses in question 1 and the explanations for the closed-ended items in questions 2 and 3 were analyzed and coded into categories. A reviewer checked the resulting categories and coded responses and, where interpretations differed, agreement was reached between the initial coder and the reviewer. The coded results were tallied and provide the basis for our survey findings for these items. Because we did not report on aggregate responses to question 4, we did not perform content analysis on this question. The number of responses reported for the closed-ended questions may vary by question because a number of experts responded "Don't know" or "No basis to judge," or did not answer specific questions. The survey was administered via the Web and is reproduced as a graphic image in appendix III. Welcome to the U.S.
Government Accountability Office's (GAO) Survey of Experts on Unmanned Aircraft Systems (UAS). GAO is conducting this survey as a part of our study on the future of UASs in the national airspace system, which was requested by the Aviation Subcommittee of the House Committee on Transportation and Infrastructure. The purpose of the survey is to collect information on the impact of UASs on the national airspace system and the actions needed to move toward safe and routine UAS operations. To begin, you will need the user name and password from the e-mail message we sent you. In addition, please click here to download important information that will help you complete the questionnaire. The questionnaire will be available on the web for one week. During this time, you may log into the questionnaire to enter and edit information as often as you like. It will take between 30 and 45 minutes to complete the questionnaire. You may bookmark this page to make it easier to start the questionnaire again. If you want to print a blank questionnaire for reference, you will need the Adobe Acrobat Reader software to do this. If you do not already have this software, click on the Adobe icon to download the software. If you want to print a blank questionnaire for reference, click here to download a copy. You will not be able to enter responses into this PDF file. If you have questions, please contact: Ed Menoche ([email protected]) at 202- 512-3420 or Teresa Spisak ([email protected]) at 202-512-3952. Click on the button below to start this questionnaire. In addition to the contact named above, Teresa Spisak, Assistant Director; Edmond Menoche, Senior Analyst; Colin Fallon; Jim Geibel; Evan Gilman; David Hooper; Jamie Khanna; Patty Lentini; Josh Ormond; Manhav Panwar; and Larry Thomas made significant contributions to this report.
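The response-rate arithmetic and the coding tally described in appendix II can be sketched as follows. The 26 surveys sent and 3 nonrespondents come from the text; the coded answer categories in the tally are purely hypothetical labels for illustration, not GAO's actual codes.

```python
from collections import Counter

# Response-rate arithmetic from appendix II: 26 surveys sent, 3 nonrespondents.
surveys_sent = 26
nonrespondents = 3
respondents = surveys_sent - nonrespondents              # 23
response_rate = round(100 * respondents / surveys_sent)  # 23/26 rounds to 88

# Tallying coded narrative responses, as described for question 1.
# These category labels are hypothetical, not GAO's actual codes.
coded_responses = [
    "expand_test_ranges",
    "interim_small_uas_rules",
    "expand_test_ranges",
    "leverage_dod_data",
]
tally = Counter(coded_responses)

print(f"{respondents} of {surveys_sent} responded ({response_rate} percent)")
print(tally.most_common(1))  # [('expand_test_ranges', 2)]
```

The tally of coded categories, not the raw narratives, is what provides the basis for the survey findings, which is why the coding step (with an independent reviewer reconciling disagreements) precedes any counting.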
Government and private-sector interest is growing in unmanned aircraft systems (UAS) for use in a variety of missions such as U.S. border protection, hurricane research, law enforcement, and real estate photography. However, UASs can fly only after the Federal Aviation Administration (FAA) conducts a case-by-case safety analysis. GAO's research questions included (1) What are the current and potential uses and benefits of UASs? (2) What challenges exist in operating UASs safely and routinely in the national airspace system? and (3) What is the federal government's response to these challenges? To address these questions, GAO reviewed the literature, interviewed agency officials and aviation stakeholders, and surveyed 23 UAS experts. UASs are currently being used by federal agencies for border security, science research, and other purposes. Local governments see potential uses in law enforcement or firefighting and the private sector sees potential uses, such as real estate photography. An industry survey states that UAS production could increase in the future to meet such government and private-sector uses. Experts predict that UASs could perform some manned aircraft missions with less noise and fewer emissions. UASs pose technological, regulatory, workload, and coordination challenges that affect their ability to operate safely and routinely in the national airspace system. UASs cannot meet aviation safety requirements, such as seeing and avoiding other aircraft. UASs lack security protection—a potential challenge if UASs proliferate as expected after obtaining routine airspace access. The lack of FAA regulations for UASs limits their operation to case-by-case approvals by FAA. Anticipated increases in requests to operate UASs could pose a workload challenge for FAA. Coordinating multiple efforts to address these challenges is yet another challenge. FAA and the Department of Defense (DOD) are addressing technological challenges.
DHS has not addressed the national security implications of routine UAS access to the airspace. FAA estimates that completing UAS safety regulations will take 10 or more years, but has not yet issued its program plan to communicate the steps and time frames required for providing routine UAS access. FAA is working to allow small UASs to have airspace access and has designated specific airspace for UAS testing. It plans to use data from this testing and from DOD to develop regulations, but has not yet analyzed data that it has already collected. To address its workload challenge, FAA is using more automation. Aviation stakeholders and experts suggested that an overarching entity could help coordinate and expedite federal, academic, and private-sector efforts. In 2003, Congress created a similar entity in FAA to coordinate planning for the next generation air transportation system among multiple federal agencies and the private sector.
Effective leadership is the key driver of successful human capital management. Simply put, the tone starts from the top. As one example, in September 2011, OPM and the CHCO Council, as part of ongoing discussions between OPM, OMB, and us on progress needed to address the federal government's human capital high-risk area, established a working group to identify and mitigate critical skills gaps. At the request of this Subcommittee, we are reviewing the progress of the working group. Our preliminary findings show that the working group has, to date, taken some important steps forward, including developing a framework and timeline for identifying and addressing both government-wide and agency-specific skills gaps. Importantly, the effort is receiving the commitment and support of agency leadership. For example, agencies' chief human capital officers and their representatives were involved in forming the working group and participated in its deliberations. Further, the working group's efforts were designated a cross-agency priority goal within the Administration's fiscal year 2013 federal budget. The working group expects to complete its initial efforts in March 2013. We will continue to assess the working group's progress and anticipate issuing a report to you later this year. In addition, OPM has demonstrated leadership in its efforts to improve the hiring process, with an eye toward making it easier and faster for people to apply for a federal job and strengthening agencies' ability to compete with the private sector in filling entry-level positions. For example, OPM issued final regulations implementing the Pathways Programs (Pathways), which took effect on July 10, 2012.
Pathways created two new conduits into government service: the Internship Program for students currently in high school, college, and other qualifying programs, and the Recent Graduates Program for individuals who, within the previous two years, earned an associate's, bachelor's, master's, professional, or other qualifying degree or certificate. Pathways also modified the existing Presidential Management Fellows Program, making it more student-friendly by, among other changes, expanding the eligibility window for applicants. Individuals in all three programs are eligible for noncompetitive conversion to permanent positions after meeting certain requirements. If successfully implemented, initiatives such as the CHCO working group and Pathways could help agencies identify and close critical skills gaps. Still, work is needed in other human capital areas. For example, as we noted in our February 2012 testimony before this Subcommittee, OPM needs to improve the paper-intensive processes and antiquated information systems it uses to support the retirement of civilian federal employees, in part because of the volume of retirement processing expected in the coming years given projected retirement trends. Strategic human capital planning that is integrated with broader organizational strategic planning is essential for ensuring that agencies have the talent, skill, and experience mix they need to cost-effectively execute their mission and program goals. Workforce planning is especially important now because, as shown in figure 1, agencies are facing a wave of potential retirements. Government-wide, around 30 percent of federal employees on board at the end of fiscal year 2011 will become eligible to retire by 2016. At some agencies, however, such as the Department of Housing and Urban Development and the Small Business Administration, at least 40 percent of those on board at the end of fiscal year 2011 are already eligible or will become eligible to retire in the next five years.
The government's top leadership and management ranks also face potentially high levels of retirement. About 58 percent of senior executives and 45 percent of GS-15s who were on board at the end of fiscal year 2011 will be eligible to retire by 2016. Likewise, certain occupations face the potential of large numbers of retirements. Around 46 percent of air traffic controllers and 68 percent of administrative law judges will be eligible to retire by 2016. Although a number of factors affect when employees actually retire, a 2008 OPM study found that the median number of years an employee stays with the government after first becoming retirement-eligible is four, although nearly 25 percent remain for nine years or more (OPM, An Analysis of Federal Employee Retirement Data: Predicting Future Retirements and Examining Factors Relevant to Retiring from the Federal Service (Washington, D.C.: March 2008)). If these trends are not carefully monitored and managed, then as experienced employees leave, gaps could develop in an organization's leadership and institutional knowledge. For example, in our prior work on foreign language shortfalls (GAO, Foreign Language Capabilities: Departments of Homeland Security, Defense, and State Could Better Assess Their Foreign Language Needs and Capabilities and Address Shortfalls, GAO-10-715T (Washington, D.C.: July 29, 2010)), we recommended that State develop a comprehensive strategic plan with measurable goals, objectives, milestones, and feedback mechanisms that links all of State's efforts to meet its foreign language requirements. State generally agreed with our recommendations and, in response, in March 2011 published a strategic plan for foreign language capabilities that links its language incentive program to its efforts to enhance its recruitment program and expand training, among other activities. Our prior work has also identified human capital planning issues at individual agencies.
For example, the Federal Emergency Management Agency (FEMA) continues to face historical workforce planning and training challenges that need to be addressed. In our April 2012 assessment, which we prepared for this Subcommittee and other requesters, we reported that FEMA is in the early stages of integrating its workforce planning and training efforts with initiatives underway by other FEMA program offices. These efforts could help FEMA ensure that it has a workforce of the proper size and skills to meet its mission. However, we also noted that FEMA's workforce planning and training efforts could benefit from quantifiable performance measures, such as metrics to gauge the agency's progress building a comprehensive leadership development program and integrating it with agency succession planning. FEMA's parent agency, DHS, concurred with our recommendations and is taking steps to implement them. For example, FEMA's Strategic Human Capital Plan for fiscal years 2012 through 2016 will have milestones and metrics for addressing key workforce planning efforts. In another example, in our July 2012 report, we found that the Department of the Interior continues to face workforce planning challenges following a reorganization effort to improve its oversight of oil and gas activities in the wake of the April 2010 oil spill in the Gulf of Mexico. In particular, we found that Interior has not developed a strategic workforce plan that outlines specific strategies to help it address the recruitment, retention, and training challenges it is facing, particularly for engineers and inspectors. Interior has also not specifically determined when it will develop such a plan. To address this, we recommended that the relevant components of Interior develop a strategic workforce plan that, among other actions, determines the critical skills and competencies that will be needed to achieve current and future programmatic results and to develop strategies to address critical skills gaps.
Interior agreed with this recommendation. Progress in talent management has been made on a number of fronts. However, our work has identified additional actions federal agencies can take to recruit, develop, and retain personnel with the skills essential to maintaining a workforce that will help agencies meet their vital missions. More than a decade ago, it was widely recognized that the federal hiring process was lengthy and cumbersome and hampered agencies' ability to hire the people they needed to achieve their goals and missions. The processes of that time failed to meet the needs of managers in filling positions with the right talent and also failed to meet the needs of applicants for a timely, efficient, transparent, and merit-based process. The processes were also hampered by narrow federal classification standards for defining federal occupations, the quality of certain applicant assessment tools, and time-consuming processes to evaluate applicants. Both Congress and OPM have taken a series of important actions over the years to improve recruiting and hiring in the federal sector. For example, in 2004 Congress provided agencies with hiring flexibilities that (1) permit agencies to appoint individuals to positions through a streamlined hiring process where there is a severe shortage of qualified candidates or a critical hiring need, and (2) allow agency managers more latitude in selecting among qualified candidates through category rating, an alternative to the traditional numerical rating procedure, which limited selection to the top three ranked candidates. In addition, Congress provided agencies with enhanced authority to pay recruitment bonuses and with the authority to credit relevant private sector experience when computing annual leave amounts.
OPM issued guidance on the use of hiring authorities and flexibilities in 2005 and again in 2008; in 2006, it developed the Hiring Toolkit to assist agency officials in determining the appropriate hiring flexibilities to use in their specific situations; and in 2008, it launched an 80-day hiring model to help speed up the hiring process. Also in 2008, OPM established standardized vacancy announcement templates for common occupations, such as contract specialist and accounting technician positions, in which agencies can insert summary information concerning their specific jobs prior to posting for public announcement. As mentioned earlier, in 2010, OPM launched the Pathways program in order to make it easier to recruit and hire students and recent graduates. Individual agencies have also taken actions to meet their specific needs for acquiring the necessary talent. For example, we have reported that the National Aeronautics and Space Administration has used a combination of techniques to recruit workers with critical skills, including targeted recruitment activities, educational outreach programs, improved compensation and benefits packages, professional development programs, and streamlined hiring authorities. Nevertheless, many challenges remain with federal recruiting and hiring, as noted earlier in discussing critical skills gaps. Effective training and development programs are an integral part of a learning environment that can enhance the federal government's ability to attract and retain employees with the skills and competencies needed to achieve results. Agency training and development programs should be part of an overall management strategy and include processes to assess and ensure the training's effectiveness. Our recent work has also underscored the value of collaborative training.
For example, in our 2010 overview of 225 professional development activities intended to improve interagency collaboration at nine key national security agencies (including DOD, State, and DHS), we noted that because no single federal agency has the ability to address these threats alone, agencies must work together in a whole-of-government approach to protect our nation and its interests. We found that interagency training and other professional development activities build foundational knowledge, skills, and networks that are intended to improve collaboration across agencies. For example, in fiscal year 2009, the military services or combatant commands led an estimated 84 joint military exercise programs that addressed a range of national security matters and sought to improve the ability of participants to work across agency lines by encouraging interagency participation. In addition, DHS offers an introductory online course, which is available to personnel across federal, state, and local government and provides an overview of the roles and responsibilities of various agencies and how they are supposed to work together in different emergency situations. Some agencies also use interagency rotations as a type of professional development activity that can help improve collaboration across agencies. For example, the Army's Interagency Fellowship Program is a 10- to 12-month rotation that places Army officers in intermediate-level positions at other federal agencies and allows them to learn the culture of the host agency, hone collaborative skills such as communication and teamwork, and establish networks with their civilian counterparts. In a 2012 report, we identified key policies and practices that help such interagency personnel rotation programs achieve collaboration-related results. These policies and practices include, for example, the importance of creating shared goals, establishing incentives, and undertaking careful preparation.
Elsewhere, improvements are needed. Our work at State found that while the department has taken many steps to incorporate the interrelated elements of an effective training program, State's strategic approach to its workforce training still has several key weaknesses. State lacks a systematic, comprehensive training needs assessment process incorporating all bureaus and overseas posts. State also lacks formal guidance for curriculum design and for data collection and analysis, and thus cannot be assured that proper practices and procedures are systematically and comprehensively applied. Moreover, the performance measures for training generally do not fully address training goals, and are generally output- rather than outcome-oriented. We made several recommendations for State to improve strategic planning and evaluation of its efforts to train personnel, including improvements to State's efforts to assess training needs. State generally agreed with our recommendations and noted that it would look for ways to enhance its ability to assess the effectiveness of training and development efforts across employee groups and locations. State has not yet provided us with evidence that it has taken action to implement the report's recommendations. More broadly, given current budget constraints, it is essential that agencies identify the appropriate level of investment and establish priorities for employee training and development, so that the most important training needs are addressed first. Our report to you issued earlier this week compared agencies' training investment practices and OPM guidance against leading federal training investment practices identified from our past work and expert studies. These leading practices included prioritizing investment funding; identifying the most appropriate mix of centralized and decentralized approaches for training and development programs; and tracking the cost and delivery of training and development programs agency-wide.
In that report (GAO, Federal Training Investments: OPM and Agencies Can Do More to Ensure Cost-Effective Decisions, GAO-12-878 (Washington, D.C.: Sept. 17, 2012)), CHCOs reported that their components or sub-agencies are more knowledgeable about their mission-specific training needs, while the central human capital staff can add the most value by managing investment decisions for more general training across the department. However, many CHCOs reported that they do not set a level of investment agency-wide, do not prioritize training agency-wide, and do not have information from component or sub-agency leaders regarding their level of investments and priorities. Consequently, agencies reported that they are duplicating internal training investments and missing opportunities to leverage economies of scale across their agencies. Officials from all four agencies that we interviewed to obtain additional perspective beyond our survey of 27 CHCOs (the Departments of Energy and the Interior, DHS, and the Department of Veterans Affairs) reported that they were unaware of the total amount their agencies invest in federal training and could not provide reliable training data to OPM, which requests these data to address its government-wide training responsibilities. We found that agencies independently purchase or develop the same mandated or common occupational training. Several agencies and OPM officials reported that a website administered by OPM to provide training for the HR community could be expanded to provide mandatory or other common training for federal occupations, which, OPM reported, could save millions and help standardize training.
We recommended, among other things, that OPM improve guidance and assistance to agencies in establishing a process for setting and prioritizing training investments; improve the reliability of agency training investment information; and identify the best existing courses that fulfill government-wide training requirements and offer them to all agencies through its existing online training platform or another appropriate platform. OPM generally agreed with most of our recommendations. In broad terms, human capital flexibilities represent the policies and practices an agency has the authority to implement in managing its workforce to accomplish its mission and achieve its goals. The tailored use of such flexibilities helps agencies recruit, develop, and retain people with the knowledge, skills, and abilities that agencies need to accomplish their critical missions and compete with the private sector for top talent. Human capital flexibilities include monetary incentives such as recruitment, relocation, and retention bonuses; special hiring authorities such as veteran-related hiring authorities; incentive awards such as performance-based cash and time-off awards; and work-life policies and programs such as flexible work schedules, telework, and child care centers and assistance. Our 2010 report on the use of recruitment, relocation, and retention incentives found that these flexibilities were widely used by agencies and that retention incentives accounted for the majority of these incentive costs. Our review of the steps OPM has taken to help ensure that agencies have effective oversight of their incentive programs found that while OPM provided oversight of such incentives through various mechanisms, including guidance and periodic evaluations and accountability reviews, there are opportunities for improvement. We recommended that OPM require agencies to incorporate succession planning efforts into the decision process for awarding retention incentives. 
OPM agreed with our recommendation and stated that it will develop future guidance on the importance of considering succession planning in the decision process for awarding retention incentives. In January 2011, OPM issued proposed regulations to add succession planning to the list of factors an agency may consider before approving a retention incentive for an employee who would be likely to leave the federal service in the absence of the incentive. OPM has stated that specifically listing this factor in the regulations will strengthen the relationship between succession planning and retention incentives. OPM expects to issue the final regulations before the end of 2012. To assist and guide agencies in developing and administering their work/life programs, OPM has established working groups, sponsored training for agency officials, promulgated regulations implementing work/life programs, and provided guidance. In our December 2010 report on agencies’ satisfaction with OPM’s assistance, we found that most agency officials were satisfied with OPM’s help, guidance, and information sharing. At the same time, we determined that OPM is potentially missing opportunities to provide federal agencies with additional information that may help them develop and implement work/life programs. As such, we recommended that OPM more systematically track data already being collected by individual federal agencies on their work/life programs such as program usage, and share this information with federal agencies. OPM agreed with our recommendations and said it is exploring the use of a Web-based tool that would provide an ability to collect data from agencies and present it in a more meaningful and systematic manner. According to OPM, the goal would be to allow users to note the connection between work/life programs being offered and related outcomes/results, encouraging agencies to engage in similar efforts. 
Leading organizations have found that to successfully transform themselves they must often fundamentally change their cultures so that they are more results-oriented, customer-focused, and collaborative in nature. An effective performance management system is critical to achieving this cultural transformation. We have found that having a performance management system that creates a “line of sight” showing how unit and individual performance can contribute to overall organizational goals helps individuals understand the connection between their daily activities and the organization’s success. The federal government’s senior executives need to lead the way in transforming their agencies’ cultures. The performance-based pay system for members of the Senior Executive Service (SES), which seeks to provide a clear and direct linkage between individual performance and organizational results as well as pay, is an important step in government-wide transformation. The importance of explicitly linking senior executive expectations to results-oriented organizational goals is consistent with findings from our past work on performance management. In January 2012, OPM and OMB released a government-wide SES performance appraisal system that provides agencies with a standard framework for managing the performance of SES members. While striving to provide greater clarity and equity in the development of performance standards and their link to compensation, among other things, the Directors of OPM and OMB stated that the new system will also provide agencies with the necessary flexibility and capability to customize the system to meet their needs. Effective implementation of this new system will be important because, as we reported in 2008, OPM had found that some executive performance plans in use at that time did not fully identify the executives’ performance measures. 
Leading organizations also develop and maintain inclusive and diverse workforces that reflect all segments of society. Such organizations typically foster a work environment in which people are enabled and motivated to contribute to continuous learning and improvement as well as mission accomplishment, and they provide both accountability and fairness for all employees. As with any organizational change effort, having a diverse top leadership corps is an organizational strength that can bring a wider variety of perspectives and approaches to bear on policy development and implementation, strategic planning, problem solving, and decision making. In November 2008, we reported on the diversity of the SES and the SES developmental pool, from which most SES candidates are selected, noting that the representation of women and minorities in the SES increased government-wide from October 2000 through September 2007, but increases did not occur in all major executive branch agencies. In November 2011, OPM reinforced the importance of promoting the federal workplace as a model of equality, diversity, and inclusion through the issuance of the Government-Wide Diversity and Inclusion Strategic Plan. Organized around three strategic goals—workforce diversity, workplace inclusion, and sustainability—the plan provides a shared direction, encourages commitment, and creates alignment so that, according to OPM, agencies can approach their workplace diversity and inclusion efforts in a coordinated, collaborative, and integrated manner. In helping to ensure diversity in the pipeline for appointments to the SES as well as recruitment at all levels, it is important that agencies have strategies to identify and develop a diverse pool of talent for selecting the agencies’ potential future leaders and to reach out to a diverse pool of talent when recruiting. 
For example, to recruit diverse applicants, agencies will need to consider active recruitment strategies such as widening the selection of schools from which to recruit, building formal relationships with targeted schools to ensure the cultivation of talent for future applicant pools, and partnering with multicultural organizations to communicate their commitment to diversity and to build, strengthen, and maintain relationships. To promote diversity and inclusion in the federal workforce, OPM is also focusing on increasing the hiring and retention of people with disabilities and veterans. In 2010, we were asked to identify barriers to the employment of people with disabilities in the federal workforce and leading practices that could be used to overcome these barriers. In response, we convened a forum to identify leading practices that federal agencies could implement within the current legislative context. Participants said that the most significant barrier keeping people with disabilities from the workplace is attitudinal, which can include bias and low expectations for people with disabilities. According to participants, there is a fundamental need to change the attitudes of hiring managers, supervisors, coworkers, and prospective employees, and cultural change within the agencies is critical to this effort. Participants identified practices that agencies could implement to help the federal government become a model employer for people with disabilities. Also in July 2010, the President issued Executive Order 13548 to increase the number of individuals with disabilities in the federal workforce. Nearly two years after the executive order was signed, we found that the federal government was not on track to achieve the executive order’s hiring goals. 
To ensure that the federal government is well positioned to become a model employer of individuals with disabilities, we recommended that the Director of OPM incorporate information about agency deficiencies in hiring individuals with disabilities into its regular reporting to the President on implementing the executive order; expedite the development of the mandatory agency training plans required by the order; and assess the accuracy of the data used to measure progress toward the order’s goals. OPM agreed with our recommendations and is taking steps to implement them. Finally, the Uniformed Services Employment and Reemployment Rights Act (USERRA) of 1994 protects the employment and reemployment rights of federal and nonfederal employees who leave their civilian employment to perform military and other uniformed services. The Veterans’ Benefits Act of 2010 (VBA) directed the Department of Labor (Labor) and Office of Special Counsel (OSC) to establish a 36-month demonstration project (2011-2014) for receiving, investigating, and resolving USERRA claims filed against federal executive agencies. The VBA also required that we evaluate how Labor and OSC designed the demonstration project and assess their relative performance during and after the demonstration project. In September 2012, as part of our mandated effort to assess the relative performance of USERRA claim processing at Labor and OSC, we determined that both agencies had implemented comparable processes that should allow Congress to evaluate their relative performance at the conclusion of the 3-year demonstration project established by Congress. However, to improve agencies’ ability to assess relative performance, we recommended that both agencies take additional steps to ensure data integrity for the performance data they plan to report. Although Labor and OSC neither agreed nor disagreed with our recommendations, they discussed actions that they both plan to take to implement our suggestions. 
For example, Labor said it will review cost data on a quarterly basis for inconsistent or questionable data and correct and report any identified data issues each quarter, as necessary. OSC said it is reviewing its procedures for compiling and reporting cost data during the demonstration project, and is committed to making any necessary changes to ensure the demonstration project satisfies Congress’s goals. Strategic human capital management must be the centerpiece of any serious effort to ensure federal agencies operate as high-performing organizations. A high-quality federal workforce is especially critical now given the complex, multi-dimensional issues facing the nation. Achievement of this goal is challenging, especially in light of the fiscal pressures confronting our national government. When we first identified strategic human capital management as a high risk area in 2001, it was because many agencies faced challenges in key areas including leadership; workforce planning; talent management; and creating results-oriented organizational cultures. Since then, the federal government has made substantial progress in beginning to address human capital challenges and, in many ways, is taking a far more strategic approach to managing personnel. Through a variety of initiatives, Congress, OPM, and individual agencies have strengthened the federal human capital infrastructure. As a result of these improvements, in 2011 we narrowed the focus of our high risk assessment to closing current and emerging critical skills gaps. These challenges must be addressed for agencies to cost-effectively execute their missions and respond to emerging challenges. In short, while much progress has been made over the last 11 years in modernizing federal human capital management, the job is far from over. 
Making greater progress requires agencies to continue to address their specific personnel challenges, as well as work with OPM and through the CHCO Council to address critical skills gaps. Central to success will be the continued attention of top-level leadership, effective planning, responsive implementation, and robust measurement and evaluation, as well as continued congressional oversight to hold agencies accountable for results. Chairman Akaka, Ranking Member Johnson, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions. For further information regarding this statement, please contact Robert Goldenkoff, Director, Strategic Issues, at (202) 512-6806, or [email protected], or Yvonne D. Jones, Director, Strategic Issues, at (202) 512-6806, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Trina Lewis, Assistant Director; Shea Bader, Analyst-in-Charge; Margaret Best, Barbara Bovbjerg, Sara Daleski, Timothy DiNapoli, William Doherty, Brenda Farrell, Michele Fejfar, Robert Gebhart, Shirley Jones, Steven Lozano, Erik Kjeldgaard, Latesha Love, Signora May, Rebecca Rose, Jeffrey Schmerling, Rebecca Shea, Wesley Sholtes, and Jason Vassilicos. Key contributors for the earlier work that supports this testimony are listed in each product. Federal Training Investments: OPM and Agencies Can Do More to Ensure Cost-Effective Decisions. GAO-12-878. Washington, D.C.: September 17, 2012. Veterans’ Reemployment Rights: Department of Labor and Office of Special Counsel Need to Take Additional Steps to Ensure Demonstration Project Data Integrity. GAO-12-860R. Washington, D.C.: September 10, 2012. Oil and Gas Management: Interior’s Reorganization Complete, but Challenges Remain in Implementing New Requirements. GAO-12-423. Washington, D.C.: July 30, 2012. 
Human Capital: HHS and EPA Can Improve Practices Under Special Hiring Authorities. GAO-12-692. Washington, D.C.: July 9, 2012. Managing for Results: GAO’s Work Related to the Interim Crosscutting Priority Goals under the GPRA Modernization Act. GAO-12-620R. Washington, D.C.: May 31, 2012. Disability Employment: Further Action Needed to Oversee Efforts to Meet Federal Government Hiring Goals. GAO-12-568. Washington, D.C.: May 25, 2012. Disaster Assistance Workforce: FEMA Could Enhance Human Capital Management and Training. GAO-12-538. Washington, D.C.: May 25, 2012. Federal Emergency Management Agency: Workforce Planning and Training Could Be Enhanced by Incorporating Strategic Management Principles. GAO-12-487. Washington, D.C.: April 26, 2012. Modernizing the Nuclear Security Enterprise: Strategies and Challenges in Sustaining Critical Skills in Federal and Contractor Workforces. GAO-12-468. Washington, D.C.: April 26, 2012. Interagency Collaboration: State and Army Personnel Rotation Programs Can Build on Positive Results with Additional Preparation and Evaluation. GAO-12-386. Washington, D.C.: March 9, 2012. OPM Retirement Modernization: Progress Has Been Hindered by Longstanding Information Technology Management Weaknesses. GAO-12-430T. Washington, D.C.: February 1, 2012. Cybersecurity Human Capital: Initiatives Need Better Planning and Coordination. GAO-12-8. Washington, D.C.: November 29, 2011. Emergency Preparedness: Agencies Need Coordinated Guidance on Incorporating Telework into Emergency and Continuity Planning. GAO-11-628. Washington, D.C.: July 22, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011. Department of State: Additional Steps Are Needed to Improve Strategic Planning and Evaluation of Training for State Personnel. GAO-11-241. Washington, D.C.: January 25, 2011. Federal Work/Life Programs: Agencies Generally Satisfied with OPM Assistance, but More Tracking and Information Sharing Needed. GAO-11-137. 
Washington, D.C.: December 16, 2010. National Security: An Overview of Professional Development Activities Intended to Improve Interagency Collaboration. GAO-11-108. Washington, D.C.: November 15, 2010. Highlights of a Forum: Participant-Identified Leading Practices That Could Increase the Employment of Individuals with Disabilities in the Federal Workforce. GAO-11-81SP. Washington, D.C.: October 5, 2010. Foreign Language Capabilities: Departments of Homeland Security, Defense, and State Could Better Assess Their Foreign Language Needs and Capabilities and Address Shortfalls. GAO-10-715T. Washington, D.C.: July 29, 2010. Human Capital: Continued Opportunities Exist for FDA and OPM to Improve Oversight of Recruitment, Relocation, and Retention Incentives. GAO-10-226. Washington, D.C.: January 22, 2010. Department of State: Comprehensive Plan Needed to Address Persistent Foreign Language Shortfalls. GAO-09-955. Washington, D.C.: September 17, 2009. Human Capital: Sustained Attention to Strategic Human Capital Management Needed. GAO-09-632T. Washington, D.C.: April 22, 2009. Office of Personnel Management: Retirement Modernization Planning and Management Shortcomings Need to Be Addressed. GAO-09-529. Washington, D.C.: April 21, 2009. Department of Defense: Additional Actions and Data Are Needed to Effectively Manage and Oversee DOD’s Acquisition Workforce. GAO-09-342. Washington, D.C.: March 25, 2009. Human Capital: Diversity in the Federal SES and Processes for Selecting New Executives. GAO-09-110. Washington, D.C.: November 26, 2008. Results-Oriented Management: Opportunities Exist for Refining the Oversight and Implementation of the Senior Executive Performance- Based Pay System. GAO-09-82. Washington, D.C.: November 21, 2008. Department of Homeland Security: A Strategic Approach Is Needed to Better Ensure the Acquisition Workforce Can Meet Mission Needs. GAO-09-30. Washington, D.C.: November 19, 2008. 
Office of Personnel Management: Improvements Needed to Ensure Successful Retirement Systems Modernization. GAO-08-345. Washington, D.C.: January 31, 2008. NASA: Progress Made on Strategic Human Capital Management, but Future Program Challenges Remain. GAO-07-1004. Washington, D.C.: August 8, 2007. Strategic Plan, 2007-2012. GAO-07-1SP. Washington, D.C.: March 30, 2007. Office of Personnel Management: Key Lessons Learned to Date for Strengthening Capacity to Lead and Implement Human Capital Reforms. GAO-07-90. Washington, D.C.: January 19, 2007. Office of Personnel Management: Retirement Systems Modernization Program Faces Numerous Challenges. GAO-05-237. Washington, D.C.: February 28, 2005. Diversity Management: Expert-Identified Leading Practices and Agency Examples. GAO-05-90. Washington, D.C.: January 14, 2005. High Risk Series: An Update. GAO-05-207. Washington, D.C.: January 1, 2005. Human Capital: Senior Executive Performance Management Can Be Significantly Strengthened to Achieve Results. GAO-04-614. Washington, D.C.: May 26, 2004. Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government. GAO-04-546G. Washington, D.C.: March 1, 2004. High-Performing Organizations: Metrics, Means, and Mechanisms for Achieving High Performance in the 21st Century Public Management Environment. GAO-04-343SP. Washington, D.C.: February 13, 2004. Human Capital: Selected Agencies’ Experiences and Lessons Learned in Designing Training and Development Programs. GAO-04-291. Washington, D.C.: January 30, 2004. Human Capital: Key Principles for Effective Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003. Human Capital: Opportunities to Improve Executive Agencies’ Hiring Processes. GAO-03-450. Washington, D.C.: May 30, 2003. High-Risk Series: An Update. GAO-01-263. Washington, D.C.: January 2001. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO designated strategic human capital management as a government-wide high risk area in 2001 because of a long-standing lack of leadership. Since then, important progress has been made. However, the area remains high risk because of a need to address current and emerging critical skills gaps that undermine agencies' abilities to meet their vital missions. The federal government is facing evolving and crosscutting challenges that require a range of skills and competencies to address. Moreover, retirements and the potential loss of leadership and institutional knowledge, coupled with fiscal pressures, underscore the importance of a strategic and efficient approach to acquiring and retaining individuals with needed critical skills. This testimony is based on a large body of GAO work from January 2001 through September 2012 and focuses on the progress made by executive branch agencies, the CHCO Council, and OPM, and the challenges that remain in four key areas of human capital management: (1) leadership; (2) strategic human capital planning; (3) talent management; and (4) results-oriented organizational culture. Since 2001, Congress, the Office of Personnel Management (OPM), and executive branch agencies have taken action to address the government's human capital challenges. For example, in 2002, Congress passed legislation creating the CHCO Council, composed of the Chief Human Capital Officers (CHCO) of 24 executive agencies and chaired by the Director of OPM. In 2004, through the Federal Workforce Flexibility Act, Congress provided agencies greater hiring flexibilities. OPM issued guidance on hiring reforms, developed the Hiring Toolkit, and launched an 80-day model to speed the hiring process. Leadership: The CHCO Council advises and coordinates the activities of member agencies on current and emerging personnel issues. Among its recent initiatives, OPM and the CHCO Council established a working group in September 2011 to identify and mitigate critical skills gaps. 
To date the group has taken important steps, including developing a framework and timeline for identifying and addressing government-wide and agency-specific skills gaps. However, the substantive work of addressing skills gaps remains, including defining workforce plans, implementing recruitment and retention strategies, and measuring the effects of these initiatives. Strategic human capital planning: Integrating human capital planning with broader organizational strategic planning is essential for ensuring that agencies have the talent and skill mix needed to cost-effectively execute their mission and program goals. If not carefully managed, anticipated retirements could cause skills gaps to widen further and adversely affect the ability of agencies to carry out their diverse responsibilities. GAO's work has identified government-wide skills shortages in areas such as cybersecurity, acquisition management, and foreign language capabilities. Talent management: Ensuring that federal agencies are able to recruit, develop, and retain personnel with the necessary skills is essential to closing any skills gaps and maintaining a workforce that can meet its vital missions. Congress, OPM, and some individual agencies have taken important actions, such as providing and using flexibilities, to improve the hiring process and making investments in training and development. However, much work remains. For example, GAO recently reported that OPM can improve its guidance and assistance to agencies in establishing a process for setting and prioritizing training investments. Results-oriented organizational culture: Leading organizations have found that to successfully transform themselves they must often fundamentally change their cultures to be more results-oriented, customer-focused, and collaborative. As part of that, GAO has shown that agencies need to create clear "lines of sight" that align organizational and individual performance. 
These lines of sight help individual staff understand the connection between their daily activities and agency success. Over the years, GAO has made numerous recommendations to agencies and OPM to improve their strategic human capital management efforts. This testimony discusses agencies' actions to implement key recommendations.
Located in FAA’s Office of Aviation Safety, the Aircraft Certification Service (Aircraft Certification) and Flight Standards Service (Flight Standards) issue certificates and approvals for the operators and aviation products used in the national airspace system based on standards set forth in federal aviation regulations. FAA inspectors and engineers working in Aircraft Certification and Flight Standards interpret and implement the regulations governing certificates and approvals via FAA policies and guidance, such as orders, notices, and advisory circulars. Aircraft Certification’s approximately 950 engineers and inspectors in 38 field offices issue approvals to the designers and manufacturers of aircraft and aircraft engines, propellers, parts, and equipment, including the avionics and other equipment required for the Next Generation Air Transportation System (NextGen)—a federal effort to transform the U.S. national airspace system from a ground-based system of air traffic control to a satellite-based system of air traffic management. These approvals are issued in three areas: (1) design—including type certificates for new aircraft, engine, or propeller designs, amended type certificates (issued only to the type certificate holder) for derivative models, and supplemental type certificates for major changes to existing designs by either the type certificate holder or someone other than the original type certificate holder; (2) production—including production certificates, which certify a manufacturer’s ability to build an aircraft, engine, or propeller in accordance with an FAA-approved design, and parts manufacturer approvals for spare and replacement parts; and (3) flight approval—original airworthiness certificates and approvals for newly manufactured aircraft, engines, propellers, and parts. 
Aircraft Certification, along with Flight Standards, provides a safety performance management system intended to assure the continued operational safety of all aircraft operating in the national airspace system and of U.S.-built aircraft operating anywhere in the world. Aircraft Certification is also responsible for the appointment and oversight of designees and delegated organizations that play a critical role in acting on behalf of FAA to perform many certification and approval activities, such as the issuance of design and airworthiness approvals for aircraft parts. Since 2005, Aircraft Certification has used project sequencing to prioritize certification submissions on the basis of available resources. Projects are evaluated against several criteria, including safety attributes and their impact on the air transportation system. In fiscal year 2009, Aircraft Certification issued 4,248 design approvals, 2,971 production approvals, and 508 airworthiness certificates. Figure 1 shows the Aircraft Certification approvals issued for fiscal years 2005 through 2009. As of June 2010, according to FAA, Aircraft Certification had a backlog of 47 projects. (According to a senior FAA official, the number of approvals decreased from fiscal year 2006 to fiscal year 2007 because Aircraft Certification implemented a new data collection system in fiscal year 2007 that improved data collection definitions and processes.) Figure 2 contains key information about Aircraft Certification’s organization, and figure 3 indicates key phases in Aircraft Certification’s product approvals process. Flight Standards’ nearly 4,000 inspectors issue certificates allowing individuals and entities to operate in the national airspace system. Flight Standards also issues approvals for programs, such as training and minimum equipment lists. 
Flight Standards field office managers in over 100 field offices use the Certification Services Oversight Process to initiate certification projects within their offices. According to FAA, the field offices are also assisted by a headquarters-based office that provides experts on specific aircraft and airlines. Accepted projects are processed on a first-in, first-out basis within each office once FAA determines that it has the resources to oversee an additional new certificate holder. Flight Standards issued 599 air operator and air agency certificates in fiscal year 2009. These include certificates to commercial air carriers under 14 C.F.R. part 121, operators of smaller commercial aircraft under 14 C.F.R. part 135, repair stations under 14 C.F.R. part 145, and pilot schools and training centers under 14 C.F.R. parts 141 and 142, respectively. According to its Director, Flight Standards also issues over 6,000 approvals daily. Figure 4 shows the number of air operator and air agency certificates issued by Flight Standards in fiscal years 2005 through 2009. FAA officials noted that certification projects within and among the categories of air operators and air agencies require various amounts of FAA resources. For example, FAA indicated that an agricultural operator certification requires fewer FAA resources than a repair station certification. Additionally, certifications of small commercial aircraft operations that are single pilot, single plane require a different set of resources than operations that are dual pilot and/or fly more aircraft. As of July 2010, Flight Standards had 1,142 certifications in process and a backlog of 489 applications. According to an FAA official, Flight Standards has more wait-listed applications than Aircraft Certification because it receives numerous requests for certificates, and its certifications are substantially different in nature from those issued by Aircraft Certification. 
Flight Standards is also responsible for assuring the continued operational safety of the national airspace system by overseeing certificate holders, monitoring (along with Aircraft Certification) operators’ and air agencies’ operation and maintenance of aircraft, and overseeing designees and delegated organizations. Flight Standards inspectors were tasked with overseeing 13,089 air operators and air agencies, such as repair stations, as of March 2010. Unless assigned to a large commercial air carrier issued a certificate under part 121, a Flight Standards inspector is typically responsible for overseeing several entities that often perform different or several functions within the system—including transporting passengers, repairing aircraft, and training pilots. Figures 5 and 6 contain key information about Flight Standards’ organization and certification process for air operators and air agencies. Studies we reviewed and aviation stakeholders and experts we spoke with indicated that variation in FAA’s interpretation of standards for certification and approval decisions is a long-standing issue that affects both Aircraft Certification and Flight Standards, but the extent of the problem has not been quantified in the industry as a whole. Inconsistent or variant FAA interpretations have been noted in studies published over the last 14 years. A 1996 study by Booz Allen & Hamilton, conducted at the request of the FAA Administrator to assess challenges to the agency’s regulatory and certification practices, reported that, for air carriers and other operators, the agency’s regulations are often ambiguous; subject to variation in interpretation by FAA inspectors, supervisors, and policy managers; and in need of simplification and consistent implementation. 
A 1999 task force, convened at the request of the FAA Administrator to assess FAA’s certification process, found that the agency’s requirements for the various approvals—such as type certificates and supplemental type certificates—varied substantially because of differences in standards and inconsistent application of those standards by different FAA field offices. While FAA has put measures in place since these two reports appeared, a 2008 Independent Review Team, which was commissioned by the Secretary of Transportation to assess FAA’s safety culture and approach to safety management, found that a wide degree of variation in “regulatory ideology” among FAA staff continues to create the likelihood of wide variation in decisions within and among field offices. Industry officials and experts representing a broad range of large and small aviation businesses told us that variation in interpretation and subsequent decisions occurs in both Aircraft Certification and Flight Standards, but we found no evidence that quantified the extent of the problem in the industry as a whole. Specifically, 10 of the 13 industry group and individual company representatives we interviewed said that they or members of their organization experienced variation in FAA’s certification and approval decisions on similar submissions; the remaining 3 industry representatives did not raise variation in interpretations and decisions as an issue. For example, an official from one air carrier told us that variation in decisions occurs regularly when obtaining approvals from Flight Standards district offices, especially when dealing with inspectors who are newly hired or replacing a previous inspector. He explained that new inspectors often task air carriers to make changes to previously obtained minimum equipment lists or conformity approvals for an aircraft. 
The official further noted that inspector assignments often change for reasons such as transfers, promotions, or retirement and that four different principal operations inspectors were assigned to his company during the past 18 months. Experts on our panel and most industry officials we interviewed indicated that, though variation in decisions is a long-standing, widespread problem, it has rarely led to serious certification and approval process problems. Experts on our panel generally noted that serious problems with the certification and approval processes occur less than 10 percent of the time. However, when we asked them to rank certification and approval process problems we summarized from their discussion, they chose inconsistent interpretation of regulations, which can lead to variation in decisions, as the most significant problem for Flight Standards and as the second most significant problem for Aircraft Certification. Panelists’ concerns about variation in decisions included instances in which approvals are reevaluated and sometimes revised or revoked in FAA jurisdictions other than those in which they were originally granted. Although most of the industry officials we interviewed had experienced variation in decisions, they did not say how frequently it occurred. However, 8 of the 13 said that their experiences with FAA’s certification and approval processes were generally free of problems, compared with 3 who said they regularly experienced problems with the process. FAA’s Deputy Associate Administrator for Aviation Safety and union officials representing FAA inspectors and engineers acknowledged that variation in certification and approval decisions occurs. The Deputy Associate Administrator noted that variation in interpretation and certification and approval decisions occurs in both Aircraft Certification and Flight Standards. 
He acknowledged that a nonstandardized process for approvals exists and has been a challenge for, and a long-term criticism of, the agency. Furthermore, he explained that efforts were being made to address the issue, including the establishment of (1) an Office of Aviation Safety quality management system (QMS) to standardize processes across Aircraft Certification and Flight Standards, (2) a process for industry to dispute FAA decisions, and (3) standardization offices within Aircraft Certification directorates. The first two efforts are discussed in greater detail later in this report. Variation in FAA’s interpretation of standards and certification and approval decisions occurs as a result of factors related to performance-based regulations and the use of professional judgment by FAA inspectors and engineers, according to industry stakeholders. FAA uses performance-based regulations, which identify a desired outcome and are flexible about how the outcome is achieved. For example, performance-based regulations on aircraft braking would establish minimum braking distances for aircraft but would not call for a particular material in the brake pads or a specific braking system design. According to officials in FAA’s rulemaking office, about 20 percent of FAA’s regulations are performance-based. Performance-based regulations, which are issued governmentwide, provide a number of benefits, according to literature on the regulatory process. By focusing on outcomes, for example, performance-based regulations give firms flexibility in achieving the stated level of performance; such regulations can accommodate technological change in ways that prescriptive regulations that focus on a specific technology generally cannot. 
For those certifications and approvals that relate to performance-based regulations, variation in decisions is a consequence of such regulations, according to one air carrier, since performance-based regulations allow the applicant multiple avenues to comply with regulations and broader discretion by FAA staff in making certification and approval decisions. According to senior FAA officials, performance-based regulations allow innovation and flexibility while setting a specific safety standard. The officials added that the benefits of performance-based regulations outweigh the potential for erroneous interpretation by an individual inspector or engineer. While agreeing with this statement, a panel member pointed out that the potential for erroneous interpretation also entails a risk of inconsistent decisions. In addition, FAA oversees a large, diverse industry, and its certification and approval processes rely, in part, on FAA staffs’ exercise of professional judgment in the unique situations they encounter. In the opinion of senior FAA officials, some differences among inspectors may be due to situation-specific factors that industry stakeholders may not be aware of. According to officials from Flight Standards, because differences may exist among regions and district offices, operators changing locations may encounter these differences. Many industry stakeholders and experts stated that FAA’s certification and approval processes contribute positively to the safety of the national airspace system. For example, industry stakeholders who participated in our expert panel ranked the office’s safety culture and record as the greatest strength of Flight Standards’ certification and approval processes and the third greatest strength of Aircraft Certification’s processes. 
Industry stakeholders and experts also noted that the certification and approval processes work well most of the time because of FAA’s long-standing collaboration with industry, flexibility within the processes, and committed, competent FAA staff. In most instances, stakeholders and experts said, when industry seeks certifications and approvals, its experiences with FAA’s processes are positive. For example, two aviation manufacturers and an industry trade association with over 400,000 members noted that most of their experiences or their members’ experiences were positive. Seventeen of 19 panelists indicated positive or very positive experiences with Aircraft Certification, and 9 of 19 panelists indicated positive experiences with Flight Standards. Panelists ranked FAA’s collaboration with applicants highly—as the second greatest strength of both Aircraft Certification and Flight Standards. In addition, representatives of two trade associations representing over 190 aviation companies said that the processes provide flexibility for a large, diverse industry. Additionally, panelists ranked FAA inspectors’ and engineers’ expertise as the greatest strength of Aircraft Certification and the third greatest strength of Flight Standards, while officials from two industry trade groups cited the inspectors’ and engineers’ competence and high level of expertise. Industry stakeholders and experts noted that negative certification and approval experiences, although infrequent, can result in costly delays for them, which can disproportionately affect smaller operators. While industry stakeholders indicated that negative experiences occur in dealings with both Aircraft Certification and Flight Standards, experts on our panel noted that negative experiences are more likely to occur with Flight Standards than with Aircraft Certification. 
For example, three experts noted that, overall, industry’s experience in obtaining certifications and approvals from Flight Standards has been negative or very negative, while no experts thought industry’s experience with Aircraft Certification was negative. The panelists indicated that negative experiences occur during the processing of certifications and approvals and as applicants wait for FAA resources to become available to commence their certification or approval projects. For example, an aviation industry representative reported that his company incurred a delay of over 5 years and millions of dollars in costs when it attempted to obtain approvals from Aircraft Certification and Flight Standards field offices. Another industry representative indicated that his company abandoned an effort to obtain an operating certification after spending $1.2 million and never receiving an explanation from FAA as to why the company’s application was stalled. One panelist indicated that the negative experiences focus more on administrative aspects of the certification and approval processes and not on safety-related items. The processing of original certifications and approvals in Aircraft Certification and Flight Standards involves progressing through a schedule of steps or phases. Responsibilities of both FAA and the applicant are delineated. However, even with this framework in place, industry stakeholders noted that the time it takes to obtain certifications and approvals can differ from one FAA field office to another because of differences in office resources and expertise. In some cases, delays may be avoided when FAA directs the applicant to apply at a different field office. Nevertheless, applicants who must apply to offices with fewer resources can experience costly delays in obtaining certifications or approvals. Delays also occur when FAA wait-lists certification submissions because it does not have the resources to begin work on them. 
Aircraft Certification meets weekly to review certification project submissions. If it determines that a submission is to be wait-listed, the applicant is sent a 90-day delay letter and if, after the initial 90 days, the submission is still wait-listed, the applicant is sent another letter. Additionally, Aircraft Certification staff and managers periodically contact applicants to advise them of the status of their submissions. Flight Standards also notifies applicants when their certification submissions are wait-listed, and Flight Standards staff are encouraged to communicate with applicants regularly about the status of their submissions. However, according to an FAA notice, staff are advised not to provide an estimate of when an applicant’s submission might be processed. While Aircraft Certification tracks in a national database how long individual submissions are wait-listed, Flight Standards does not. Without data on how long submissions are wait-listed, Flight Standards cannot assess the extent of wait-listing delays or reallocate resources to better meet demand. Further, industry stakeholders face uncertainty with respect to any plans or investments that depend on obtaining a certificate in a timely manner. Industry stakeholders have also raised concerns about the effects of inefficiencies in the certification and approval processes on the implementation of NextGen. As NextGen progresses, operators will need to install additional equipment on their aircraft to take full advantage of NextGen capabilities, and FAA’s certification and approval workload is likely to increase substantially. According to our October 2009 testimony on NextGen, airlines and manufacturers said that FAA’s certification processes take too long and impose costs on industry that discourage them from investing in NextGen equipment. 
We reported that this inefficiency in FAA’s processes constitutes a challenge to delivering NextGen benefits to stakeholders and that streamlining FAA’s processes will be essential for the timely implementation of NextGen. FAA is working to address the certification issues that may impede the adoption and acceleration of NextGen capabilities. Flight Standards has identified NextGen-dedicated staff in each of its regional offices to support the review and approval of NextGen capabilities within each region. Aircraft Certification has created a team of experts from different offices to coordinate NextGen approvals and identify specialists in Aircraft Certification offices with significant NextGen activity. FAA also plans a number of other actions to facilitate the certification and approval of NextGen-related technology, including new procedures and criteria for prioritizing certifications, updating policy and guidance, developing additional communication mechanisms, and developing training for inspectors and engineers. Since many of these actions have either just been implemented or have not yet been completed, it is too early to tell whether they will increase the efficiency of FAA’s certification and approval processes and reduce unanticipated delays and costs for the industry. Industry stakeholders also noted that the efficiency of the certification and approval processes was hampered by a lack of sufficient staff resources to carry out certifications and approvals and a lack of effective communication mechanisms for explaining the intent of the regulations to both FAA staff and industry. The stakeholders said that these inefficiencies have resulted in costly delays for them. Stakeholders and experts said that, at some FAA offices, delays in obtaining certifications and approvals were due to heavy staff workloads, a lack of staff, or a lack of staff with the appropriate expertise. 
Staff and managers at one FAA field office told us that in the past a lack of staff had contributed to delays in completing certifications. The relative priority of certifications and approvals within FAA’s overall workload also affects the availability of staff to process certifications and approvals. According to FAA, its highest priority is overseeing the continued operational safety of the people and products already operating within the national airspace system, but the same staff who provide this oversight are also assigned the lower-priority task of processing new certifications and approvals. Additionally, Flight Standards field staff we contacted said that the system under which their pay grades are established and maintained provides a disincentive for inspectors to perform certification work because it allocates no credit toward retention of their pay grades for that work. Flight Standards headquarters officials pointed out that there is an incentive for field office inspectors to perform initial certifications because, once certificated, the new entities add points to an inspector’s complexity calculation, which is used to determine his or her pay grade. FAA has addressed staff resource issues by increasing the number of inspectors and engineers. Over the past 3 years, FAA has steadily increased its hiring of Aircraft Certification engineers and Flight Standards inspectors, thereby reducing the risk of certification delays. According to agency data, FAA’s hiring efforts since fiscal year 2007 have resulted in an 8.8 percent increase in the number of Aircraft Certification engineers and a 9.4 percent increase in the number of Flight Standards inspectors on board. FAA hired 106 engineers in Aircraft Certification and 696 inspectors in Flight Standards from the beginning of fiscal year 2007 to March 15, 2010. FAA also hired 89 inspectors in Aircraft Certification from fiscal year 2007 through August 2010. 
In addition, Flight Standards headquarters staff are available to assist field staff with the certification of part 121 air carriers—an average of 35 of these staff were available for this assistance annually from 2005 through 2009, and they helped with 16 certification projects. Furthermore, FAA delegates many certification activities to individuals and organizations (called designees) to better leverage its resources. As we previously reported, FAA’s designees perform more than 90 percent of FAA’s certification activities. We have reported that designees generally conduct routine certification functions, such as approvals of aircraft technologies that the agency and designees already have experience with, allowing FAA staff to focus on new and complex aircraft designs or design changes. Panelists ranked the expanded use of designees second and fifth, respectively, among actions that we summarized from their discussions that would have the most positive impact on improving Aircraft Certification’s and Flight Standards’ certification and approval processes. FAA is increasing organizational delegations under its organization designation authorization (ODA) program and expects the ODA program will allow more effective use of its resources over time. Stakeholders pointed to a lack of effective communication mechanisms as another problem with the certification and approval processes, especially deficiencies in the guidance FAA issues and a lack of additional communication mechanisms for sharing information on the interpretation of regulations. Stakeholders said that the lack of effective communication mechanisms can lead to costly delays when, for example, methods or guidance for complying with regulations is not clear. Stakeholders and experts had several issues with the FAA guidance that interprets the regulations and provides supplemental information to the industry. Stakeholders said there are sometimes discrepancies between the guidance and the regulations. 
For example, one stakeholder reported informing an FAA training course instructor that a particular piece of guidance contradicted the regulations. The instructor agreed that the contradiction existed but told the stakeholder that FAA teaches to the guidance, not the regulations. One employee group representing some FAA inspectors was concerned that not all guidance has been included in an online system that FAA has established to consolidate regulations, policy, and guidance. FAA acknowledged that it is working to further standardize and simplify the online guidance in the Flight Standards information management system. Stakeholders also identified a lack of opportunities for sharing information about the interpretation of regulations and guidance. An industry expert noted that FAA lacks a culture that fosters communication and discussion among peer groups. Moreover, an industry group with over 300 aviation company members suggested that FAA should support and promote more agencywide and industrywide information sharing in less formal, less structured ways to enhance communication. Finally, according to an official of an employee group representing some FAA inspectors, because their workloads tend to be heavy, inspectors are less able to communicate with the companies they oversee, and the reduced level of communication contributes to variation in the interpretation of regulations. FAA officials disagreed with these assertions and indicated that FAA staff participate in numerous committees and conferences, share methods of compliance in technical areas via forums with stakeholders, and communicate resolutions to problems in various formats, such as by placing legal decisions online. 
Other FAA actions could identify and potentially address some of the shortcomings in the agency’s certification and approval processes as follows: In 2004, FAA’s Office of Aviation Safety introduced QMS, which is intended to ensure that processes are being followed and improved and to provide a methodology to standardize processes. QMS is expected to help ensure that processes are followed by providing a means for staff to report nonconformance with FAA procedures or processes and was established as part of the office’s effort to achieve certification by the International Organization for Standardization (ISO). Any employee can submit a report and check the status of an issue that has been reported. From October 2008 to March 2009, approximately 900 reports were submitted, and 46 internal audits were completed. For example, in July 2009, an FAA staffer noted that a required paragraph on aging aircraft inspection and records review was missing from a certificate holder’s operations specifications. The issue was resolved and closed in August 2009 when the missing paragraph was issued to the certificate holder. Some FAA staff told us that QMS has helped improve the processes because it requires management action to respond to report submissions. To provide industry stakeholders with a mechanism for appealing certification and other decisions, the Office of Aviation Safety implemented the Consistency and Standardization Initiative (CSI) in 2004. Appeals must begin at the field office level and can eventually be taken to FAA headquarters. CSI requires that FAA staff document their safety decisions and that stakeholders support their positions with specific documentation. Within Aircraft Certification and Flight Standards, CSI cases at each appeal level are expected to be processed within 30 working days. The total length of the CSI process depends on how many levels of appeal the stakeholder chooses. 
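Because each appeal level carries a 30-working-day processing target, a rough upper bound on total CSI processing time is simply the number of levels pursued multiplied by 30. A trivial sketch (the function name is ours, not FAA's):

```python
WORKING_DAYS_PER_LEVEL = 30  # CSI target: each appeal level processed within 30 working days

def max_csi_working_days(appeal_levels: int) -> int:
    """Worst-case working days for a case pursued through the given
    number of CSI appeal levels, under the 30-day-per-level target."""
    return appeal_levels * WORKING_DAYS_PER_LEVEL
```

A case that starts at the field office and is appealed through two further levels would thus take at most 90 working days under the target.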
Aircraft Certification has had over 20 CSI cases, and Flight Standards has had over 300. Most CSI cases in Aircraft Certification involved clarification of a policy or an approved means of complying with a regulation, while most of those submitted to Flight Standards involved policy or method clarification, as well as scheduling issues, such as delays in addressing a stakeholder’s certification, approval, or other issue. The large discrepancy between the number of cases filed for the two services, according to FAA officials, may be due to the fact that Aircraft Certification decisions are the result of highly interactive, deliberative processes, which are not typical in granting approvals in Flight Standards, where an inspector might find the need to hand down a decision without prolonged discussion or deliberation. Stakeholders told us that CSI lacks agencywide buy-in and can leave stakeholders who use the program potentially open to retribution from FAA staff. However, others noted that CSI is beneficial because it requires industry stakeholders to use the regulations as a basis for their complaints, which often leads to resolution. According to one of our panelists, inconsistencies occur when FAA does not start with the regulations as the basis for decisions. Although QMS and CSI are positive steps toward identifying ways to make the certification and approval processes more efficient, FAA does not know whether the programs are achieving their stated goals because it has not established performance measures for determining program accomplishments. One of the goals for QMS is to reduce inconsistencies and increase standardization. 
A QMS database documents the reports submitted and, through information in these reports, FAA says it has identified instances of nonconformance and initiated corrective action to prevent recurrence; revised orders to ensure they are consistent with actual practice; and improved its processes to collect feedback from stakeholders and take action on trends. However, FAA does not know whether its actions have reduced inconsistencies because its measures describe the agency’s output—for example, number of audits conducted—rather than any outcomes related to reductions in process inconsistencies. FAA officials described CSI goals as promoting early resolution of disagreements and consistency and fairness in applying FAA regulations and policies. They provided us with data on the number of CSI cases in both Aircraft Certification and Flight Standards, the types of complaints, and the percentage of resolutions that upheld FAA’s original decision, but as with the overall QMS program, we could find no evidence that FAA has instituted CSI performance measures that would allow it to determine progress toward program outcomes, such as consistency and fairness in applying regulations and policies. Outcome-based performance measures would also allow QMS and CSI program managers to determine where to better target program resources to improve performance. FAA has taken actions to address variation in decisions and inefficiency in its certification and approval processes, although the agency does not have outcome-based performance measures and a continuous evaluative process to determine if these actions are having their intended effects. Because the number of certification and approval applications is likely to increase for NextGen technologies, achieving more efficiency in these processes will help FAA better manage this increased workload, as well as its current workload. 
In addition, while both Aircraft Certification and Flight Standards notify applicants whether resources are available to begin their projects, Flight Standards does not monitor how long applicants are wait-listed and is therefore unable to assess the extent of these delays or to reallocate resources to better meet demand for certification services. To ensure that FAA actions contribute to more consistent decisions and more efficient certification and approval processes, we recommend that the Secretary of Transportation direct the Administrator of FAA to take the following two actions:
Determine the effectiveness of actions to improve the certification and approval processes by developing a continuous evaluative process and use it to create measurable performance goals for the actions, track performance toward those goals, and determine appropriate process changes. To the extent that this evaluation of agency actions identifies effective practices, consider instituting those practices agencywide.
Develop and implement a process in Flight Standards to track how long certification and approval submissions are wait-listed, the reasons for wait-listing them, and the factors that eventually allowed initiation of the certification process. Use the data generated from this process to assess the extent of wait-listing delays and to reallocate resources, as appropriate, to better meet demand.
We provided a copy of a draft of this report to the Department of Transportation (DOT) for its review and comment. DOT provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 21 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Transportation, the Administrator of FAA, and other interested parties. 
The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions or would like to discuss this work, please contact me at (202) 512-2834 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report provides information on the Federal Aviation Administration’s (FAA) processes for granting certifications and approvals to air operators, air agencies such as repair stations, and designers and manufacturers of aircraft and aircraft components. It describes the processes and discusses (1) the extent of variation in FAA’s interpretation of standards with regard to the agency’s certification and approval decisions and (2) key stakeholder and expert views on how well the certification and approval processes work. To address these objectives, we reviewed relevant studies, reports, and FAA documents and processes; convened a panel of aviation industry and other experts; and interviewed aviation industry members, an expert, and FAA officials. We did not address FAA processes for issuing certifications to individuals, such as pilots and mechanics. We contracted with the National Academy of Sciences (the Academy) to convene a panel on FAA’s certification and approval processes on December 16, 2009. The panel was selected with the goal of obtaining a balance of perspectives and included FAA senior managers; officials representing large and small air carriers, aircraft and aerospace product manufacturers, aviation services firms, repair stations, geospatial firms, and aviation consultants; and academics specializing in aviation and organization theory. (See table 1.) In the first session, FAA and industry officials presented their organizations’ perspectives on these processes and responded to questions. 
The presenters then departed and did not participate in the remaining sessions. In the next three discussion sessions, the panelists—led by a moderator—shared their views on various aspects of FAA’s certification and approval processes. After the first two discussion sessions, panelists voted in response to questions posed by GAO. (See app. II for the questions and responses.) The views expressed by the panelists were their own and do not necessarily represent the views of GAO or the Academy. We shared a copy of an earlier draft of this report with all of the presenters and panelists for their review and to ensure that we correctly captured information from their discussions and, on the basis of their comments, made technical corrections to the draft as necessary. We interviewed aviation industry certificate and approval holders, trade groups, an industry expert, officials of unions that represent FAA inspectors and engineers, and FAA staff in Aircraft Certification and Flight Standards (see table 2). The industry and trade groups were selected to provide a range of large and small companies and a variety of industry sectors (e.g., aircraft and parts manufacturers, air carriers, and repair stations). The interviews were conducted to gain an understanding of the extent of variation in FAA’s certification and approval decisions and interviewees’ views on FAA’s certification and approval processes. The FAA interviews provided an understanding of the key aspects of FAA’s certification and approval processes, information on data collection and analysis related to the processes, and current and planned process improvement efforts. In addition to using information from the individual interviews, as relevant throughout the report, we analyzed the content of the interviews to identify and quantify the key issues raised by the interviewees. This appendix summarizes the responses the panelists provided to questions we posed at the close of their discussion sessions. 
The response options were based on the contents of their discussions. To develop the rankings in questions 1, 2, and 12, we asked each panelist, in a series of three questions, to vote for the options he or she believed were the first, second, and third greatest, most significant, or most positive. To rank order the items listed for these questions, we assigned three points to the option identified as greatest, most significant, or most positive; two points to the second greatest, most significant, or most positive; and one point to the third greatest, most significant, or most positive option. We then summed the weighted values for each option and ranked the options from the highest number of points to the lowest.
1. What is the greatest strength of the certification and approval processes?
2. What is the most significant problem with the certification and approval processes?
3. What leading factor has contributed to problems with the certification and approval processes? Leading factors in process problems:
FAA’s prioritization system for managing certifications and approvals
FAA’s rulemaking process and development of guidance (e.g., amount of time required to develop or change regulations, etc.)
Culture of FAA (e.g., stove-piping, resistance to change, etc.)
Organizational structure of FAA (e.g., decentralization, varying procedures among local offices, etc.)
4. How often do serious problems occur each year with the certification and approval processes?
5. Overall, how positive or negative do you think industry’s experience has been in obtaining certifications and approvals from Aircraft Certification and Flight Standards?
6. How would you assess the overall impact of the certification and approval processes on the safety of the national airspace system?
7. Overall, how would you characterize efforts to improve the certification and approval processes?
8. Overall, how would you characterize efforts to prioritize certifications and approvals?
9.
Overall, how would you characterize efforts to improve dispute resolution through the Consistency and Standardization Initiative (CSI)? 10. Regarding efforts to improve dispute resolution through CSI, what is the key factor hindering the progress of efforts? 11. What should be done to mitigate the effects of this factor? FAA should establish support for efforts FAA should improve data collection and analysis related to efforts Do not believe efforts are ineffective Do not know/no basis to judge This response option was not available to the panelists. 12. What action will have the most positive impact on improving the certification and approval processes? Expand use of designees/organization designation authorizations (ODA) Expand use of designees/organization designation authorizations (ODA) Gerald L. Dillingham, Ph.D., (202) 512-2834 or [email protected]. In addition to the contact named above, Teresa Spisak (Assistant Director), Sharon Dyer, Bess Eisenstadt, Amy Frazier, Brandon Haller, Dave Hooper, Michael Silver, and Pamela Vines made key contributions to this report.
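The weighted vote tally described in this appendix (three points for a first choice, two for a second, one for a third, summed per option) can be sketched in a few lines of code. This is an illustrative sketch only; the option names and ballots below are hypothetical and are not the panel's actual votes.

```python
# Sketch of the appendix's weighted ranking: each panelist's
# first/second/third choice earns 3/2/1 points; options are then
# ranked from the highest point total to the lowest.
from collections import Counter

WEIGHTS = {1: 3, 2: 2, 3: 1}  # rank position -> points awarded

def rank_options(ballots):
    """ballots: list of (option, rank_position) votes.
    Returns (option, total_points) pairs sorted highest first."""
    totals = Counter()
    for option, position in ballots:
        totals[option] += WEIGHTS[position]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical votes from three panelists
ballots = [
    ("Inconsistent interpretation", 1), ("FAA culture", 2), ("Rulemaking", 3),
    ("Inconsistent interpretation", 1), ("Rulemaking", 2), ("FAA culture", 3),
    ("FAA culture", 1), ("Inconsistent interpretation", 2), ("Rulemaking", 3),
]
print(rank_options(ballots))
```

With these hypothetical ballots, "Inconsistent interpretation" collects 3 + 3 + 2 = 8 points and ranks first, mirroring how the panel's most significant problems were ordered.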
Among its responsibilities for aviation safety, the Federal Aviation Administration (FAA) issues thousands of certificates and approvals annually. These certificates and approvals, which FAA bases on its interpretation of federal standards, indicate that such things as new aircraft, the design and production of aircraft parts and equipment, and new air operators are safe for use in the national airspace system. Past studies and industry spokespersons assert that FAA's interpretations produce variation in its decisions and inefficiencies that adversely affect the industry. GAO was asked to examine the (1) extent of variation in FAA's interpretation of standards for certification and approval decisions and (2) views of key stakeholders and experts on how well these processes work. To perform the study, GAO reviewed industry studies and reports and FAA documents and processes; convened a panel of aviation experts; and interviewed officials from various industry sectors, senior FAA officials, and unions representing FAA staff. Studies, stakeholders, and experts indicated that variation in FAA's interpretation of standards for certification and approval decisions is a long-standing issue, but GAO found no evidence that quantified the extent of the problem in the industry as a whole. Ten of the 13 industry group and company officials GAO interviewed said that they or members of their organization had experienced variation in FAA certification and approval decisions on similar submissions. 
In addition, experts on GAO's panel, who discussed and then ranked problems with FAA's certification and approval processes, ranked inconsistent interpretation of regulations, which can lead to variation in decisions, as the first and second most significant problem, respectively, with these processes for FAA's Flight Standards Service (which issues certificates and approvals for individuals and entities to operate in the national airspace system) and Aircraft Certification Service (which issues approvals to the designers and manufacturers of aircraft and aircraft parts and equipment). According to industry stakeholders, variation in FAA's interpretation of standards for certification and approval decisions is a result of factors related to performance-based regulations, which allow for multiple avenues of compliance, and the use of professional judgment by FAA staff and can result in delays and higher costs. Industry stakeholders and experts generally agreed that FAA's certification and approval processes contribute to aviation safety and work well most of the time, but negative experiences have led to costly delays for the industry. Industry stakeholders have also raised concerns about the effects of process inefficiencies on the implementation of the Next Generation Air Transportation System (NextGen)--the transformation of the U.S. national airspace system from a ground-based system of air traffic control to a satellite-based system of air traffic management. They said that the processes take too long and impose costs that discourage aircraft operators from investing in NextGen equipment. FAA has taken actions to improve the certification and approval processes, including hiring additional inspectors and engineers and increasing the use of designees and delegated organizations--private persons and entities authorized to carry out many certification activities. 
Additionally, FAA is working to ensure that its processes are being followed and improved through a quality management system, which provides a mechanism for stakeholders to appeal FAA decisions. However, FAA does not know whether its actions under the quality management system are achieving the intended goals of increasing consistency and fairness in the agency's application of regulations and policies because FAA does not have outcome-based performance measures and a continuous evaluative process that would allow it to determine progress toward these goals. Without ongoing information on results, FAA managers do not know if their actions are having the intended effects. GAO recommends that FAA develop a continuous evaluative process with measurable performance goals to determine the effectiveness of the agency's actions to improve its certification and approval processes. DOT did not comment on the recommendations but provided technical comments, which were included as appropriate.
Some context for my remarks is appropriate. The threat of terrorism was significant throughout the 1990s; a plot to destroy 12 U.S. airliners was discovered and thwarted in 1995, for instance. Yet the task of providing security to the nation’s aviation system is unquestionably daunting, and we must reluctantly acknowledge that no form of travel can ever be made totally secure. The enormous size of U.S. airspace alone defies easy protection. Furthermore, given this country’s hundreds of airports, thousands of planes, tens of thousands of daily flights, and the seemingly limitless ways terrorists or criminals can devise to attack the system, aviation security must be enforced on several fronts. Safeguarding airplanes and passengers requires, at the least, ensuring that perpetrators are kept from breaching security checkpoints or gaining access to ramps and doorways leading to aircraft. FAA has developed several mechanisms to prevent criminal acts against aircraft, such as adopting technology to detect explosives and establishing procedures to ensure that passengers are positively identified before boarding a flight. Still, in recent years, we and others have often demonstrated that significant weaknesses continue to plague the nation’s aviation security. The current aviation security structure and its policies, requirements, and practices have evolved since the early 1960s and were heavily influenced by a series of high-profile aviation security incidents. Historically, the federal government has maintained that providing security is the responsibility of air carriers and airports as part of their cost of doing business. Beginning in 1972, air carriers were required to provide screening personnel, and airport operators were required to provide law enforcement support.
However, with the rise in air piracy and terrorist activities that threatened not only commercial aviation but also national security, discussions began to emerge as to who should have the responsibility for providing security at our nation’s airports. With the events of the last week, concerns have arisen again as to who should be responsible for security and screening passengers at our nation’s airports. This issue has evoked many discussions through the years and just as many options concerning who should provide security at our nation’s airports and how security should be handled. But as pointed out in a 1998 FAA study, there was no consensus among the various aviation-related entities. To identify options for assigning screening responsibilities, we surveyed aviation stakeholders—security officials at the major air carriers and the largest airports, large screening companies, and industry associations— and aviation and terrorism experts. We asked our respondents to provide their opinions about the current screening program, criteria they believe are important in considering options, the advantages and disadvantages of each option, and their comments on implementing a different screening approach. It is important to understand that we gathered this information prior to September 11, 2001, and some respondents’ views may have changed. Control of access to aircraft, airfields, and certain airport facilities is a critical component of aviation security. Existing access controls include requirements intended to prevent unauthorized individuals from using forged, stolen, or outdated identification or their familiarity with airport procedures to gain access to secured passenger areas or to ramps and doorways leading to aircraft. In May 2000, we reported that our special agents, in an undercover capacity, obtained access to secure areas of two airports by using counterfeit law enforcement credentials and badges. 
At these airports, our agents declared themselves as armed law enforcement officers, displayed simulated badges and credentials created from commercially available software packages or downloaded from the Internet, and were issued “law enforcement” boarding passes. They were then waved around the screening checkpoints without being screened. Our agents could thus have carried weapons, explosives, chemical/biological agents, or other dangerous objects onto aircraft. In response to our findings, FAA now requires that each airport’s law enforcement officers examine the badges and credentials of any individual seeking to bypass passenger screening. FAA is also working on a “smart card” computer system that would verify law enforcement officers’ identity and authorization for bypassing passenger screening. The Department of Transportation’s (DOT) Inspector General has also uncovered problems with access controls at airports. The Inspector General’s staff tested the access controls at eight major airports in 1998 and 1999 and gained access to secure areas in 68 percent of the tests; they were able to board aircraft 117 times. After the release of its report describing its successes in breaching security, the Inspector General conducted additional testing between December 1999 and March 2000 and found that, although improvements had been made, access to secure areas was still gained more than 30 percent of the time. Screening checkpoints and the screeners who operate them are a key line of defense against the introduction of dangerous objects into the aviation system. Over 2 million passengers and their baggage must be checked each day for articles that could pose threats to the safety of an aircraft and those aboard it. The air carriers are responsible for screening passengers and their baggage before they are permitted into the secure areas of an airport or onto an aircraft. 
Air carriers can use their own employees to conduct screening activities, but most hire security companies to do the screening. Currently, multiple carriers and screening companies are responsible for screening at some of the nation’s larger airports. Concerns have long existed about screeners’ ability to detect and prevent dangerous objects from entering secure areas. Each year, weapons that passed undetected through one checkpoint have later been found during screening for a subsequent flight. FAA monitors the performance of screeners by periodically testing their ability to detect potentially dangerous objects carried by FAA special agents posing as passengers. In 1978, screeners failed to detect 13 percent of the objects during FAA tests. In 1987, screeners missed 20 percent of the objects during the same type of test. Test data for the 1991 to 1999 period show that the declining trend in detection rates continues. Furthermore, the recent tests show that as tests become more realistic and more closely approximate how a terrorist might attempt to penetrate a checkpoint, screeners’ ability to detect dangerous objects declines even further. As we reported last year, there is no single reason why screeners fail to identify dangerous objects. Two conditions—rapid screener turnover and inadequate attention to human factors—are believed to be important causes. Rapid turnover among screeners has been a long-standing problem, having been identified as a concern by FAA and by us in reports dating back to at least 1979. We reported in 1987 that turnover among screeners was about 100 percent a year at some airports, and according to our more recent work, the turnover is considerably higher. From May 1998 through April 1999, screener turnover averaged 126 percent at the nation’s 19 largest airports; 5 of these airports reported turnover of 200 percent or more, and 1 reported turnover of 416 percent.
At one airport we visited, of the 993 screeners trained at that airport over about a 1-year period, only 142, or 14 percent, were still employed at the end of that year. Such rapid turnover can seriously limit the level of experience among screeners operating a checkpoint. Both FAA and the aviation industry attribute the rapid turnover to the low wages and minimal benefits screeners receive, along with the daily stress of the job. Generally, screeners are paid at or near the minimum wage. We reported last year that some of the screening companies at 14 of the nation’s 19 largest airports paid screeners a starting salary of $6.00 an hour or less and, at 5 of these airports, the starting salary was the minimum wage—$5.15 an hour. It is common for the starting wages at airport fast-food restaurants to be higher than the wages screeners receive. For instance, at one airport we visited, screeners’ wages started as low as $6.25 an hour, whereas the starting wage at one of the airport’s fast-food restaurants was $7 an hour. The demands of the job also affect performance. Screening duties require repetitive tasks as well as intense monitoring for the very rare event when a dangerous object might be observed. Too little attention has been given to factors such as (1) improving individuals’ aptitudes for effectively performing screening duties, (2) the sufficiency of the training provided to screeners and how well they comprehend it, and (3) the monotony of the job and the distractions that reduce screeners’ vigilance. As a result, screeners who lack the necessary aptitudes or sufficient knowledge to perform the work effectively are being placed on the job, and they then find the duties tedious and dull. We reported in June 2000 that FAA was implementing a number of actions to improve screeners’ performance.
However, FAA did not have an integrated management plan for these efforts that would identify and prioritize checkpoint and human factors problems that needed to be resolved, and identify measures—and related milestone and funding information—for addressing the performance problems. Additionally, FAA did not have adequate goals by which to measure and report its progress in improving screeners’ performance. FAA is implementing our recommendations to develop an integrated management plan. However, two key actions to improve screeners’ performance are still not complete. These actions are the deployment of threat image projection (TIP) systems—which place images of dangerous objects on the monitors of X-ray machines to keep screeners alert and monitor their performance—and a certification program to make screening companies accountable for the training and performance of the screeners they employ. Threat image projection systems are expected to keep screeners alert by periodically imposing the image of a dangerous object on the X-ray screen. They also are used to measure how well screeners perform in detecting these objects. Additionally, the systems serve as a device to train screeners to become more adept at identifying harder-to-spot objects. FAA is currently deploying the threat image projection systems and expects to have them deployed at all airports by 2003. The screening company certification program, required by the Federal Aviation Reauthorization Act of 1996, will establish performance, training, and equipment standards that screening companies will have to meet to earn and retain certification. However, FAA has still not issued its final regulation establishing the certification program. This regulation is particularly significant because it is to include requirements mandated by the Airport Security Improvement Act of 2000 to increase screener training—from 12 hours to 40 hours—as well as to expand background check requirements.
FAA had been expecting to issue the final regulation this month, 2½ years later than it originally planned. According to FAA, it needed the additional time to develop performance standards based on screener performance data. Concerned about the performance of screeners, the Subcommittee on Aviation, House Committee on Transportation and Infrastructure, asked us to examine options for conducting screening and to outline some advantages and disadvantages associated with these alternatives. This work is still ongoing, but I will provide a perspective on the information we have obtained to date. Many aviation stakeholders agreed that a stable, highly trained, and professional workforce is critical to improving screening performance. They identified compensation and improved training as the highest priorities in improving performance. Respondents also believed that the implementation of performance standards, team and image building, awards for exemplary work, better supervision, and certification of individual screeners would improve performance. Some respondents believed that a professional workforce could be developed in any organizational context and that changing the delegation of screening responsibilities would increase the costs of screening. We identified four principal alternative approaches to screening. Each alternative could be structured and implemented in many different ways; for instance, an entity might use its own employees to screen passengers, or it might use an outside contractor to perform the job. For each alternative, we assumed that FAA would continue to be responsible for regulating screening, overseeing performance, and imposing penalties for poor performance. Table 1 outlines the four options. Shifting responsibility for screening would affect many stakeholders and might demand many resources. Accordingly, a number of criteria must be weighed before changing the status quo.
We asked aviation stakeholders to identify key criteria that should be used in assessing screening alternatives. These criteria are to improve screeners’ performance; establish accountability for screening performance; ensure cooperation among stakeholders, such as airlines, airports, and FAA; efficiently move passengers to flights; and minimize legal and liability issues. We asked airline and airport security officials to assess each option for reassigning screening responsibility against the key criteria. Specifically, we asked them to indicate whether an alternative would be better, the same, or worse than the current situation with regard to each criterion. Table 2 summarizes their responses. At the time of our review, FAA was finalizing a certification rule that would make a number of changes to the screening program, including requiring FAA certification of screening companies and the installation of TIP systems on X-ray machines at screening checkpoints. Our respondents believed that these actions would improve screeners’ performance and accountability. Some respondents approved of the proposed changes, since they would result in FAA having a direct regulatory role vis-à-vis the screening companies. Others indicated that the installation of TIP systems nationwide could improve screeners’ awareness and ability to detect potentially threatening objects and result in better screener performance. Respondents did not believe that this option would affect stakeholder cooperation, affect passenger movement through checkpoints, or pose any additional legal issues. No consensus existed among aviation stakeholders about how making airports responsible for screening would affect any of the key criteria. Almost half indicated that screeners’ performance would not change if the airport authority were to assume responsibility, particularly if the airport authority were to contract out the screening operation.
Some commented that screening accountability would likely blur because of the substantial differences among airports in management and governance. Many respondents indicated that the airport option would produce the same or worse results than the current situation in terms of accountability, legal/liability issues, cooperation among stakeholders, and passenger movement. Several respondents noted that cooperation between air carriers and airports could suffer because the airports might raise the cost of passenger screening and slow down the flow of passengers through the screening checkpoint—to the detriment of the air carriers’ operations. Others indicated that the legal issue of whether employees of a government-owned airport could conduct searches of passengers might pose a significant barrier to this option. Screening performance and accountability would improve if a new agency were created in DOT to control screening operations, according to those we interviewed. Some respondents viewed having one entity whose sole focus would be security as advantageous and believed it would be fitting for the federal government to take a more direct role in ensuring aviation security. Respondents indicated that federal control could lead to better screener performance because a federal entity most likely would offer better pay and benefits, attract a more professional workforce, and reduce employee turnover. There was no consensus among the respondents preferring this option on how federal control might affect stakeholder cooperation, passenger movement, or legal and liability issues. For some of the same reasons mentioned above, respondents believed that screening performance and accountability would improve under a government corporation charged with screening. The majority of the respondents preferred the government corporation to the DOT agency, because they viewed it as more flexible and less bureaucratic than a federal agency. 
For instance, the corporation would have more autonomy from the funding and budgeting requirements that typically govern the operations of federal agencies. Respondents believed that the speed of passengers through checkpoints was likely to remain unchanged. No consensus existed among respondents preferring the government corporation option about how federal control might affect stakeholder cooperation or legal and liability issues. We visited five countries—Belgium, Canada, France, the Netherlands, and the United Kingdom—viewed by FAA and the civil aviation industry as having effective screening operations to identify screening practices that differ from those in the United States. The responsibility for screening in most of these countries is placed with the airport authority or with the government, not with the air carriers as it is in the United States. In Belgium, France, and the United Kingdom, the responsibility for screening has been placed with the airports, which either hire screening companies to conduct the screening operations or, as at some airports in the United Kingdom, hire screeners and manage the checkpoints themselves. In the Netherlands, the government is responsible for passenger screening and hires a screening company to conduct checkpoint operations, which are overseen by a Dutch police force. We note that, worldwide, of 102 other countries with international airports, 100 have placed screening responsibility with the airports or the government; only 2 other countries—Canada and Bermuda—place screening responsibility with air carriers. We also identified differences between the United States and the five countries in three other areas: screening operations, screeners’ qualifications, and screeners’ pay and benefits. As we move to improve the screening function in the United States, practices of these countries may provide some useful insights. First, screening operations in some of the countries we visited are more stringent.
For example, Belgium, the Netherlands, and the United Kingdom routinely touch or “pat down” passengers in response to metal detector alarms. Additionally, all five countries allow only ticketed passengers through the screening checkpoints, thereby allowing the screeners to more thoroughly check fewer people. Some countries also have a greater police or military presence near checkpoints. In the United Kingdom, for example, security forces—often armed with automatic weapons—patrol at or near checkpoints. At Belgium’s main airport in Brussels, a constant police presence is maintained at one of two glass-enclosed rooms directly behind the checkpoints. Second, screeners’ qualifications are usually more extensive. In contrast to the United States, Belgium requires screeners to be citizens; France requires screeners to be citizens of a European Union country. In the Netherlands, screeners do not have to be citizens, but they must have been residents of the country for 5 years. Training requirements for screeners were also greater in four of the countries we visited than in the United States. While FAA requires that screeners in this country have 12 hours of classroom training before they can begin work, Belgium, Canada, France, and the Netherlands require more. For example, France requires 60 hours of training and Belgium requires at least 40 hours of training with an additional 16 to 24 hours for each activity, such as X-ray machine operations, that the screener will conduct. Finally, screeners receive relatively better pay and benefits in most of these countries. Whereas screeners in the United States receive wages that are at or slightly above minimum wage, screeners in some countries receive wages that are viewed as being at the “middle income” level in those countries. In the Netherlands, for example, screeners received at least the equivalent of about $7.50 per hour. This wage was about 30 percent higher than the wages at fast-food restaurants in that country. 
In Belgium, screeners received the equivalent of about $14 per hour. Not only is pay higher, but the screeners in some countries receive benefits, such as health care or vacations—in large part because these benefits are required under the laws of these countries. These countries also have significantly lower screener turnover than the United States: turnover rates were about 50 percent or lower in these countries. Because each country follows its own unique set of screening practices, and because data on screeners’ performance in each country were not available to us, it is difficult to measure the impact of these different practices on improving screeners’ performance. Nevertheless, there are indications that for at least one country, practices may help to improve screeners’ performance. This country conducted a screener-testing program jointly with FAA that showed that its screeners detected over twice as many test objects as did screeners in the United States.
A safe and secure civil aviation system is critical to the nation's overall security, physical infrastructure, and economy. Billions of dollars and countless programs and policies have gone into developing such a system. Although many of the specific factors contributing to the terrible events of September 11 are still unclear, it is apparent that our aviation security system is plagued by serious weaknesses that can have devastating consequences. Last year, as part of an undercover investigation, GAO special agents used fake law enforcement badges and credentials to gain access to secure areas at two airports. They were also issued tickets and boarding passes, and could have carried weapons, explosives, or other dangerous items onto the aircraft. GAO tests of airport screeners also found major shortcomings in their ability to detect dangerous items hidden on passengers or in carry-on luggage. These weaknesses have raised questions about the need for alternative approaches. In assessing alternatives, five outcomes should be considered: improving screener performance, establishing accountability, ensuring cooperation among stakeholders, moving people efficiently, and minimizing legal and liability issues.
On the day of the terrorist attacks on the World Trade Center, the President’s declaration of a major disaster under the Stafford Act activated the Federal Response Plan (superseded by and incorporated into the National Response Plan). The Federal Response Plan established the process and structure for the federal government to provide assistance to state and local governments when responding to major disasters and emergencies declared under the Stafford Act. Under the Federal Response Plan, FEMA coordinated this assistance through mission assignments and interagency agreements, which assigned specific tasks to federal agencies with the expertise necessary to complete them. The Congress authorized $20 billion to respond to the attacks, of which $8.8 billion was provided through FEMA, for the New York City area. Under the Federal Response Plan (and the National Response Plan today), EPA served as coordinator during large-scale disasters for 1 of 15 emergency support functions (ESF)—ESF 10, which addresses oil and hazardous material releases. ESF 10 encompasses various phases of hazardous material response, including assessment and cleanup. In the first 6 months after the WTC disaster, EPA responded to FEMA mission assignments to assist with the response efforts and, among other tasks, provided wash stations for responders and disposed of waste from the WTC site. There are an estimated 330 office buildings in Lower Manhattan below Canal Street and roughly 900 residential buildings with approximately 20,000 apartments. In 2002, after initial efforts by the city of New York to advise New York residents how to clean the World Trade Center dust in their homes, FEMA and EPA entered into an interagency agreement to address indoor spaces affected by the disaster. While EPA has responded to hazardous material releases for decades, the WTC disaster was the first large-scale emergency for which EPA provided testing and cleanup in indoor spaces. 
WTC dust is a fine mixture of materials that resulted from the collapse and subsequent burning of the twin towers and includes pulverized concrete, asbestos, and glass fibers. WTC dust entered homes and offices through open windows, was tracked in, or was picked up by air-conditioning system intakes. Figures 1 and 2 show the dust generated by the WTC disaster. The amount of dust in indoor spaces in and around Lower Manhattan varied due to a variety of factors, including distance from the WTC site; weather conditions, such as wind; and damage to individual buildings. In the years since the disaster, the level of WTC dust in indoor spaces has varied, depending upon the cleaning performed by residents and other groups, including EPA and professional cleaning companies. In May 2002, EPA, New York City, and FEMA officials announced a program, to be overseen by EPA, offering a cleanup of residences in Lower Manhattan. Between September 2002 and May 2003, residences were cleaned and tested, or tested only, for airborne asbestos. EPA analyzed samples from 4,167 apartments in 453 buildings and 793 common areas in 144 buildings. This program cost $37.9 million—$30.4 million for indoor cleaning and testing by the New York City Department of Environmental Protection and $7.5 million for EPA oversight and sample analysis. Figure 3 shows the area in Lower Manhattan eligible for participation in EPA’s program. Residents could choose either an aggressive or modified aggressive testing method for providing samples of indoor air to EPA. For the modified aggressive method, the contractor ran a 20-inch fan for the duration of testing. For the aggressive method, a leaf blower was used, in addition to the 20-inch fan, to direct a jet of air toward corners, walls, fabric surfaces, and the ceiling to dislodge and resuspend dust. 
The contractors HEPA vacuumed and wet-wiped hard surfaces, including floors, ceilings, ledges, trims, furnishings, appliances, and equipment; and they twice HEPA vacuumed soft surfaces, such as curtains. In addition, in cases where there were still significant amounts of WTC dust and debris, contractors used asbestos abatement procedures such as the use of personal protective equipment, including respirators and a properly enclosed decontamination system; posting of warning signs; isolation barriers to seal off openings; and disposal of all waste generated during the cleaning in accordance with applicable rules and regulations for asbestos-containing waste. The New York City Department of Health and Mental Hygiene and the U.S. Department of Health and Human Services’ Agency for Toxic Substances and Disease Registry (ATSDR) collected samples from in and around 30 buildings in Lower Manhattan from November through December 2001. In September 2002, these agencies released their assessment of the public’s exposure to contaminants in air and dust, recommended additional monitoring of residential spaces in Lower Manhattan, and referred residents to EPA’s program. Before EPA finalized its second indoor program plan, several assessments related to indoor contamination were conducted: an August 2003 EPA Inspector General report; an expert technical review panel that EPA conducted from March 2004 through December 2005; and three EPA studies. The studies identified background levels of contamination in New York City (“background study”); the WTC-related contaminants of potential concern and associated cleanup benchmarks (“COPC study”); and the efficacy of various cleaning methods in eliminating WTC-related contaminants of concern (“cleaning study”). 
During the time EPA met with the WTC Expert Technical Review Panel, some expert panel members encouraged EPA to develop a method for differentiating between contaminants found in the New York City urban environment and those found in WTC dust. This method would have served as the basis for determining the extent of WTC-related contamination, and EPA officials believed it would have enabled the agency to limit its focus to contamination specific to the WTC disaster. Early in the panel process, EPA formed a subpanel of these experts to assist EPA’s Office of Research and Development in developing such a methodology. In August 2005, EPA released its final report describing its methodology, which was peer reviewed. In their October 2005 final report, the peer reviewers criticized the reliability of EPA’s method and provided suggestions on improving EPA’s approach. In a November 2005 letter, EPA officials told expert panel members that in the absence of a valid method, EPA could not definitively distinguish between WTC contaminants in dust and levels of the same contaminants found in an urban environment. At the same time, 2 weeks before the final panel meeting, the EPA chairman informed the panel that it would be disbanded as of the final meeting and that EPA would not be implementing a plan that included determining the extent of WTC contamination. Experts who were part of the subpanel addressing this method reported that the peer-review comments could be addressed and that EPA should perform additional sampling. Nonetheless, EPA ultimately decided not to pursue developing this methodology. Figure 4 shows the chronology of events preceding the second program. In January 2006, EPA formally requested funds from FEMA. EPA and FEMA signed an interagency agreement to conduct EPA’s second program in July 2006, and EPA announced the agency’s second program to test indoor spaces in Lower Manhattan in December 2006. 
Appendix III provides information regarding EPA’s first and second indoor programs. In response to recommendations and additional input from the Inspector General and expert panel members, EPA’s second program incorporates some additional testing elements. However, EPA’s second program does not incorporate other items. Figure 5 shows the key recommendations and additional input the EPA Inspector General and expert panel members provided to EPA. While EPA tested solely for airborne asbestos in order to trigger cleanup in the first program, it agreed to test for three additional contaminants in its second program—man-made vitreous fibers, polycyclic aromatic hydrocarbons, and lead. These contaminants, as well as two additional ones—dioxin and silica—were identified as WTC contaminants of potential concern in a May 2003 report issued by EPA and other federal, New York City, and New York state agencies. EPA did not include dioxin and silica in the second program for several reasons. Regarding dioxin, EPA noted that concentrations were elevated in the weeks following the disaster when fires were still burning, but concentrations returned to predisaster levels by December 2001. Furthermore, because “only eight” of 1,500 dioxin samples exceeded cleanup benchmarks during tests in 2002 and 2003, EPA decided not to sample for this contaminant in its second program. Regarding silica, EPA noted that in 2002 an ATSDR/New York City Department of Health and Mental Hygiene report stated that short-term exposure to silica is unlikely to cause adverse health effects and that adverse health effects from chronic exposure are possible but unlikely if recommended cleaning is conducted. EPA also explained that levels of silica are likely to have been reduced by cleaning activities over the past 3 years. EPA also agreed to test for contaminants in dust. 
To do so, EPA developed site-specific cleanup benchmarks for asbestos and man-made vitreous fibers in dust over the course of nearly a year. In its second program plan, EPA explains that these benchmarks are not risk based but rather are based on, among other things, work by experts in the field as to what constitutes contamination and how it compares with site-specific background levels, and the benchmarks employed for cleanup at a Superfund site with asbestos-contaminated residences. Though EPA expanded the number of contaminants tested for in its second program, it did not adopt recommendations and additional input from the EPA Inspector General or the expert panel that addressed the following issues: Evaluating risks in geographic areas north of Canal Street and in Brooklyn. EPA did not expand the scope of testing north of Canal Street, or to Brooklyn, as advisory groups had recommended. EPA reported it did not expand the scope of testing because it could not differentiate between normal urban dust and WTC dust; differentiating between the two would have enabled EPA to determine the geographic extent of WTC contamination. Some expert panel members had suggested that EPA investigate whether it was feasible to develop a method for distinguishing between normal urban dust and WTC dust. EPA initially agreed to do so. Beginning in 2004—almost 3 years after the disaster—EPA conducted this investigation into developing a WTC dust signature. However, EPA officials told us that because so much time had passed since the terrorist attack, it was difficult to distinguish between WTC dust and urban dust. EPA ultimately abandoned this effort because peer reviewers questioned its methodology; EPA decided not to explore alternative methods that some of the peer reviewers had proposed. 
Instead, EPA will test only in an area where visible contamination has been confirmed by aerial photography conducted soon after the WTC attack, although aerial photography does not reveal indoor contamination. Furthermore, EPA officials told us that some WTC dust was found immediately after the terrorist attacks in areas, including Brooklyn, that are outside the area eligible for its first and second programs. Testing in HVACs and inaccessible areas. In its November 2005 draft plan for the second program, EPA had proposed collecting samples from a number of locations in HVACs. In some buildings, HVACs are shared; in others, each residence has its own system. In either case, contaminants in the HVAC could recontaminate the residence unless the system is also professionally cleaned. However, EPA’s second program will not provide for testing in HVACs under any circumstances but will offer cleaning in HVACs if tests in common areas reveal that cleanup benchmarks for any of four contaminants have been exceeded. EPA officials told us that EPA will sample near HVAC outlets in common areas and will obtain dust samples in proximity to these locations. EPA explained in the second plan that it will not sample within HVACs because it is no longer assessing the extent of contamination resulting from the WTC disaster and because it is attempting to devote the maximum resources to testing requests. Similarly, EPA had proposed sampling for contaminants in “inaccessible” locations, such as behind dishwashers and rarely moved furniture within apartments and common areas. Again, because it was unable to differentiate between normal urban dust and WTC dust, EPA stated that it would not test in inaccessible locations in order to devote its resources to as many requests as possible. EPA told us that 272 residents and 25 building owners had enrolled in the second program, compared with 4,167 residents and 144 building owners who participated in the first program. 
Evaluating risks to workers/workplaces. According to EPA, its second program plan is “the result of ongoing efforts to respond to concerns of residents and workers.” Workers were concerned that workplaces in Lower Manhattan experienced the same contamination as residences. In its second program, EPA will test and clean common areas in commercial buildings, but only if an individual owner or manager of the property requests the service. EPA stated that employees who believe their working conditions are unsafe as a result of WTC dust may file a complaint with OSHA or request an evaluation by HHS’s National Institute for Occupational Safety and Health (NIOSH). Concerns remain, however, because these other agencies do not have authority to conduct cleanup in response to contaminant levels that exceed cleanup benchmarks. In addition, OSHA’s benchmarks are designed primarily to address airborne contamination, while EPA’s test and clean program is designed to address contamination in building spaces, whether the contamination is airborne or in settled dust. OSHA requires individual employers to adopt work practices to reduce employee exposure to airborne contaminants, whereas EPA’s test and clean program is designed to remove contaminants from affected spaces. Addressing whole buildings. Between March 2004 and December 2005, when EPA met with expert panel members, officials discussed sampling a representative number of each building’s apartments in order to “characterize the building,” which would have allowed EPA to characterize areas in Lower Manhattan. This information would have been used to inform decision-making regarding the extent of indoor contamination. According to EPA officials, all residents from each building would need to volunteer their individual apartments, and EPA would select the units it then tested. 
The approach that EPA developed entailed cleaning a building, including all units, common areas, and HVACs, if there was a high degree of certainty that the average concentration of at least one contaminant, across all apartments tested, exceeded the benchmark, and dust could be associated with the WTC. While this method addressed the Inspector General recommendation that buildings be treated as a system so that potentially contaminated apartments did not contaminate previously cleaned apartments, EPA did not ultimately include this particular methodology in its second program plan due to the lack of a method to identify WTC dust. Instead, EPA will clean whole common areas, such as lobbies, and HVACs in buildings. It will clean common areas when at least one contaminant is found to exceed the cleanup benchmark in that area. It will clean HVACs and common areas when there is a high degree of certainty that the mean contaminant level for accessible areas, infrequently accessed areas, or air samples in common areas exceeds the benchmark for at least one contaminant. The expert panel’s ability to meet its goals was limited by two factors: (1) EPA officials’ belief that some panel goals were more appropriately addressed by other agencies and (2) EPA’s approach to managing the panel process. Furthermore, the majority of expert panel members do not believe the panel successfully met any of its goals. All of the panel members who responded to our follow-up inquiry regarding EPA’s second program (10 out of 10 members) told us the program is not responsive to the concerns of residents and workers affected by the collapse of the WTC towers. Appendix IV provides the full range of responses from structured interviews with expert panel members about EPA’s management of the panel process. According to EPA officials, some panel goals were more appropriately addressed by other agencies. We believe this view limited the panel’s ability to address these issues. 
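The “high degree of certainty” cleaning trigger described above amounts to a one-sided statistical test: clean only when the sample data make it statistically likely that the true mean contaminant level exceeds the cleanup benchmark. The sketch below illustrates one plausible form such a rule could take; the function name, the 95 percent confidence level, and the normal approximation are illustrative assumptions on our part, since EPA’s exact statistical procedure is not described here.

```python
import math
import statistics

def exceeds_with_confidence(samples, benchmark, z=1.645):
    """Illustrative decision rule (not EPA's documented procedure):
    return True only when the one-sided 95% lower confidence bound on
    the mean contaminant level exceeds the cleanup benchmark."""
    n = len(samples)
    mean = statistics.fmean(samples)
    if n < 2:
        # With a single sample there is no variance estimate; fall back
        # to a simple comparison against the benchmark.
        return mean > benchmark
    sem = statistics.stdev(samples) / math.sqrt(n)  # standard error of the mean
    return mean - z * sem > benchmark

# A mean well above the benchmark with low spread triggers cleaning...
print(exceeds_with_confidence([120, 130, 125, 128, 122], benchmark=100))  # True
# ...while a mean sitting at the benchmark does not.
print(exceeds_with_confidence([95, 105, 98, 102, 100], benchmark=100))    # False
```

The point of such a rule is that a few noisy samples hovering near the benchmark do not force cleanup, while consistently elevated readings do, which is the practical meaning of the “high degree of certainty” language in the plan.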
In particular, one panel goal, as stated by CEQ, was to advance the identification of unmet public health needs. However, EPA officials believed that other federal agencies, such as HHS, were better equipped to address the issue of public health. Therefore, rather than having the expert panel members discuss and identify actions to address this issue, EPA allowed time during panel meetings for public health presentations. EPA officials believed that the panel met CEQ’s charge by including health experts on the panel and by including health presentations during panel meetings. While the panel was provided with these presentations, the majority of expert panel members (16 out of 18) told us the panel did not successfully identify unmet public health needs. Outside of the panel, a multiagency effort established a WTC health registry to assess the health impact of the WTC collapse. The EPA panel chairman noted that panel member recommendations to maintain the WTC health registry for more than 20 years and to link the results of subsequent indoor testing to the registry had been provided to the appropriate agencies. In addition, EPA officials believed that, despite the panel’s broader goal, which was to help guide EPA in its ongoing efforts to “monitor the situation for New York residents and workers impacted by the collapse of the WTC towers,” OSHA should address the issue of workplace safety because that is OSHA’s mission. Consequently, as noted earlier, the second program does not address workers’ concerns, and employers and workers are not eligible to request testing or cleaning; EPA instead refers employees who believe their working conditions are unsafe to OSHA or NIOSH. EPA’s management of the panel process limited the panel’s ability to successfully meet its goals. 
According to 9 or more of the 18 expert panel members we interviewed, problematic aspects of EPA’s management included (1) the lack of a consensus approach, (2) inadequate time for technical discussion, and (3) the lack of a fully transparent decision-making process. In addition, a number of expert panel members told us that failure to document recommendations created other concerns. Lack of a consensus approach. EPA did not allow the panel to reach consensus on key issues and prepare a final report; instead it obtained recommendations from each member of the expert panel. The majority of expert panel members (13 out of 18) told us that EPA’s approach was not appropriate, and one panel member noted that the lack of a consensus approach prevented the resolution of key issues. The EPA chairman told the panel that the panel would not be asked to reach consensus because this approach might limit the contribution of individual panel members. EPA officials also noted that it would have been difficult to reach consensus given the diversity of the panel and the technical nature of the discussion. Inadequate time for technical discussion. The majority of expert panel members (14 out of 18) told us there was not adequate time on the agenda for the panel to discuss issues. According to several panel members, EPA dedicated half or less of each daylong panel meeting to technical discussions, devoting the remainder of each day to public comment. Lack of a fully transparent decision-making process. EPA’s reasons for accepting or rejecting expert panel members’ recommendations were not at all transparent, according to half of the panel members (9 out of 18). Furthermore, six panelists said that EPA did not respond to their recommendations or provide any explanation for rejecting recommendations. In contrast, the two EPA panel chairmen we interviewed told us they believed the decision-making process was completely transparent. Failure to document recommendations. 
Although EPA stated in its operating principles that it would keep detailed minutes of each panel meeting, including all individual recommendations, whether oral or written, EPA did not maintain a list of recommendations. Instead, EPA provided “summaries” of each meeting that included an overview of issues raised, and, starting with the fifth meeting, EPA provided audio recordings of six of the remaining panel meetings. The majority of expert panel members (10 out of 18) said that having written transcripts of the meetings available would have been somewhat or very helpful. Some expert panel members told us the lack of transcripts presented a problem because they had no record of EPA agreement with several recommendations that were later not adopted. The majority of expert panel members told us that the panel was unable to meet its goals as outlined by EPA. As figure 6 shows, these included guiding EPA in (1) developing the second program, (2) identifying unmet public health needs, (3) identifying any remaining risks using exposure and health surveillance information, and (4) determining steps to further minimize risks. According to all expert panel members who responded to our follow-up inquiry regarding EPA’s second program (10 out of 10 members), this program does not respond to the concerns of residents and workers affected by the collapse of the WTC towers. At the final panel meeting, some expert panel members said publicly that they would discourage participation in EPA’s program, and several expert panel members said that the data yielded by the test and clean program will not be useful and the program is unlikely to adequately identify or clean up contaminants. In addition, the Community-Labor Coalition distributed information that also discouraged participation, citing lack of expert panel member support. 
EPA did not provide complete information in its second plan to allow the public to make informed choices about their participation in its voluntary program. While EPA stated that the number of samples in its first program exceeding risk levels for airborne asbestos was “very small,” EPA did not provide the following additional information to help inform residents’ decisions regarding participation in the second program: Voluntary program participation. Participation in the first program came from about 20 percent of the residences eligible for participation. In addition, participation was voluntary, which may suggest that the sample of apartments was not representative of all residences eligible for the program. Only asbestos tested. EPA’s conclusions were based only on tests for asbestos, rather than other contaminants, and the conclusions focused on airborne contamination rather than contamination in dust inside residences. Sampling protocols varied. EPA did not explain that over 80 percent of the samples were taken after professional cleaning was completed as a part of EPA’s program. In addition, EPA did not identify the portion of the samples that were collected following aggressive, as opposed to modified aggressive, techniques. With the aggressive technique, the air inside apartments was more actively circulated before sampling occurred. In those instances, about 6 percent of apartments tested were found to exceed EPA’s asbestos level, compared with roughly 1 percent under the modified aggressive technique. Out of 4,167 apartments sampled, 276 were sampled using the aggressive method. Discarded sample results. EPA also did not explain in its second program plan that its first program’s test results may have been affected by sample results that were discarded because they were “not cleared”—that is, they could not be analyzed because the filter had too many dust particles to be analyzed under a microscope. 
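Simple arithmetic puts these sampling figures in context. The counts below come directly from the text; the approximate exceedance counts are inferred from the reported percentages and are our rough approximations, not published figures.

```python
# Counts from the report's description of the first program.
apartments_sampled = 4167
aggressive = 276                            # sampled with the aggressive method
modified = apartments_sampled - aggressive  # 3,891 with the modified method

# Aggressive sampling covered well under a tenth of the apartments.
share_aggressive = aggressive / apartments_sampled
print(round(share_aggressive * 100, 1))  # about 6.6 percent

# Approximate exceedance counts implied by the reported rates
# (about 6 percent aggressive vs. roughly 1 percent modified);
# these are inferences, not figures EPA published.
approx_exceed_aggressive = round(0.06 * aggressive)  # ~17 apartments
approx_exceed_modified = round(0.01 * modified)      # ~39 apartments
print(approx_exceed_aggressive, approx_exceed_modified)
```

Under these approximations, the modified technique, despite its far lower exceedance rate, would account for more exceedances in absolute terms simply because it was used in so many more apartments, which illustrates why the mix of techniques mattered for how the aggregate results read.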
However, EPA’s final report on its first program stated that residents with more than one inconclusive result, such as filter overload, were encouraged to have their residences recleaned and retested. Without complete explanations of EPA’s sampling data, residents who could have elected to participate might have decided not to do so. The number of participants declined from roughly 4,200 residents and 144 building owners in the first program to 272 residents and 25 building owners in the second program. In addition, community leaders on the panel believed that allowing participants to choose between two sampling techniques, coupled with the voluntary nature of the program, had the effect of making the overall program appear unnecessary. EPA did not take steps to ensure that it would have adequate resources to effectively implement the second program. Instead, EPA is implementing this program with the approximately $7 million in Stafford Act funds remaining after its first program. Although this program increases the number and type of contaminants being sampled, the funds available are less than 20 percent of those used in the first program. EPA is implementing its second program with the funding remaining after completion of its first program—approximately $7 million—but EPA did not determine whether this amount would support the effective implementation of its second program. According to EPA officials, they could not estimate the cost of the second program without information on the number of program participants and the size of residences, which vary widely throughout Lower Manhattan. Nevertheless, the interagency agreement between FEMA and EPA for the first program included estimated costs, although EPA faced the same challenges. 
This first estimate of $19.6 million was based on projections for the number of eligible residents participating in the program—specifically, 10,000 residences requesting cleaning and 3,000 residences requesting testing only—and included, among other things, detailed estimates for sample analysis, equipment and supplies, and EPA salary and travel costs. In the first program, EPA spent $7.5 million—of $19.6 million obligated by FEMA to EPA—on program oversight and analysis of air samples, while New York City spent approximately $30.4 million to collect air samples and clean residences. EPA returned $12.1 million in unspent funds to FEMA. According to FEMA officials, when the agency learned about the establishment of the expert panel, FEMA retained $7 million for additional EPA activities. EPA officials told us that in discussions with FEMA about whether the amount was appropriate, FEMA responded that only $7 million was available. In July 2006, an interagency agreement was signed by EPA and FEMA for the second program that describes EPA’s role as developing and implementing a program to test and clean in the specified area. After EPA entered into this agreement, EPA officials told us that if the number of registrants for the program exceeded the number that could be covered by the $7 million, they were unsure where additional funds could be obtained. EPA did not provide information to FEMA in the agreement about how many residents and building owners could potentially be served under the program. Thirteen of the 18 expert panel members told us they did not believe the $7 million for the sampling and cleanup was sufficient. According to one of the expert panel’s chairmen, the $7 million was sufficient for initial sampling in the second program but not for sampling and cleanup. In its final plan, EPA noted that requests for participation from eligible residents and building owners would be prioritized based on proximity to the WTC site. 
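The funding figures in this and the surrounding paragraphs reconcile, as the following simple check shows (all amounts in millions of dollars, taken directly from the text):

```python
# First-program funding, per the report (millions of dollars).
fema_obligated_to_epa = 19.6  # FEMA's obligation to EPA
epa_spent = 7.5               # EPA oversight and sample analysis
nyc_spent = 30.4              # New York City cleaning and air sampling
returned_to_fema = 12.1       # unspent funds EPA returned
retained_for_second = 7.0     # amount FEMA retained for the second program

# EPA's spending plus the returned funds account for the full obligation.
assert round(epa_spent + returned_to_fema, 1) == fema_obligated_to_epa
# EPA and New York City spending together make up the $37.9 million total.
assert round(epa_spent + nyc_spent, 1) == 37.9
# The amount retained for the second program is less than 20 percent
# of the first program's total cost.
assert retained_for_second / 37.9 < 0.20
print("figures reconcile")
```

The check confirms the internal consistency of the dollar figures; it does not, of course, speak to whether $7 million is sufficient for the second program, which is the report’s open question.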
Although EPA’s second program increases the number and type of contaminants being sampled, the $7 million available is less than 20 percent of the $37.9 million spent on the first program. While only 1 percent of roughly 20,000 eligible residences are participating in the second program, compared with the 20 percent that participated in the first program, without a cost estimate it is not clear whether funding for the second program will be adequate. EPA has acted upon lessons learned from the WTC disaster to prepare for future disasters, such as clarifying internal roles and responsibilities and improving health-related cleanup benchmarks. Nevertheless, we are uncertain how completely these activities improve EPA’s ability to respond to contamination in indoor environments in the face of future disasters. For example, EPA has not yet addressed certain methodological challenges raised by expert panel members regarding the WTC disaster, such as how it will determine the extent of contamination, which we believe are important for addressing future disasters. Without addressing this and other challenges, it is uncertain whether people in affected areas will be protected adequately from risks posed by indoor contamination stemming from future disasters. Since the WTC disaster, EPA has taken actions to improve its ability to respond to future disasters. However, EPA’s approach to emergency response does not differentiate between indoor and outdoor contamination, and therefore it is difficult to determine how EPA’s preparedness actions have improved its readiness to respond specifically to indoor contamination. EPA’s actions are consistent with several Inspector General recommendations, as the following examples of EPA’s preparedness actions illustrate: Clarified roles and responsibilities. EPA has completed response policies, established various specialized response teams, and conducted training. 
Though not specific to indoor contamination, EPA’s June 2003 National Approach to Response policy outlines EPA roles and responsibilities in the event of future large-scale disasters. Its October 2004 Homeland Security Strategy also notes that in the event of a national incident, EPA has the lead responsibility for decontaminating affected buildings and neighborhoods and for advising and assisting public health authorities on when it is safe to return to these areas and on the safest disposal options for contaminants. EPA’s National Decontamination Team provides general scientific support and technical expertise for identifying technologies and methods for decontaminating buildings and other infrastructure. EPA also expanded the capabilities of its existing Environmental Response Team (ERT), which is responsible for technological support and training, by establishing an additional ERT office in Las Vegas, Nevada. Along with the Radiological Emergency Response Team and the National Decontamination Team, these teams provide support during emergencies. In addition, EPA officials noted that they have developed and delivered a training course on the Incident Command System, to be used under the National Response Plan, to 2,000 staff as well as senior managers in all regions to provide additional guidance on roles and responsibilities. Finally, in its newly developed Crisis Communication Plan, EPA outlines the responsibilities of agency staff in providing the public with information during disasters. EPA officials told us they have added 50 on-scene coordinators to their emergency response staff to improve preparedness and response capabilities. Shared information on likely targets and threats and developed approaches to address them. 
EPA’s Office of Research and Development (ORD) has several efforts to develop approaches to address future threats, including research on building decontamination, and EPA’s Office of Solid Waste and Emergency Response has begun to establish a network of environmental laboratories. In 2003, EPA created the National Homeland Security Research Center (NHSRC), part of ORD, to develop expertise and products to prevent, prepare for, and recover from public health and environmental emergencies arising from terrorist threats and incidents. Its research focuses on five areas: threat assessment, decontamination, water infrastructure protection, response capability, and technology evaluation. In November 2004, NHSRC reported on several threat scenarios for buildings and water systems; these threat scenarios guide NHSRC’s research, which is focused heavily on chemical, biological, and radiological (CBR) agents. EPA also participates on a number of interagency workgroups, including policy coordination committees formed by the White House Homeland Security Council; DHS work groups addressing sampling and other issues; and FEMA work groups that address various aspects of the National Response Plan. Although an interagency team, including EPA, has developed tabletop exercises to respond to nationally significant incidents, these exercises have not yet included residential contamination. EPA has also developed standardized analytical methods that environmental laboratories can use to analyze biological and chemical samples during disasters caused by terrorist attacks, and the agency has begun to establish a network of environmental laboratories capable of analyzing CBR agents, which would benefit from these methods. Improved health-related benchmarks for assessing health risks in emergencies. 
According to EPA officials, EPA’s Office of Prevention, Pesticides and Toxic Substances is leading the agency’s participation in developing acute exposure guideline levels (AEGL), an international effort aimed at describing the risk resulting from rare exposure to airborne chemicals. The AEGLs focus on exposures of 10 minutes, 30 minutes, 1 hour, 4 hours, and 8 hours. To date, AEGLs have not been developed under emergency situations; however, EPA officials told us the availability of methodologies such as those used to derive AEGLs makes it possible to develop emergency benchmarks quickly, if necessary. EPA is also developing subchronic exposure guidance—provisional advisory levels (PAL)—to bridge the gap between acute exposure durations addressed by AEGLs and the chronic lifetime exposure guidance. EPA officials told us that NHSRC is developing this guidance for contaminants in air and water, and it will focus on exposure periods of 1 day, 30 days, and 2 years. EPA officials noted that, to date, the agency has developed PALs for over 20 chemical agents. In addition, EPA officials told us that the agency has completed a method to assess risk from exposure to contaminated building surfaces and that it is also completing guidance on how to address future incidents involving asbestos. Additional monitoring capabilities. The Deputy Director of EPA’s Office of Emergency Management told us the agency has five total suspended particulate (TSP) monitors in each region; however, these are not real-time monitors. For real-time data monitoring, each region has portable air monitors—Data-Rams—to provide approximate measures of ambient particulate matter concentrations. EPA officials told us they also have mobile monitoring labs, as well as specialized vans and aircraft, that can be deployed during disasters to conduct monitoring. EPA officials said they are evaluating other monitors—electronic beta attenuation monitors (EBAM)—that have the capability to work with higher dust loads. 
The Deputy Director of EPA’s Office of Emergency Management also told us that fixed near real-time radiation monitors, part of the environmental radiation ambient monitoring system (ERAMS), are currently being deployed at a rate of five per month in cities across the United States. While EPA has taken actions since the WTC disaster to prepare for future incidents, it has not demonstrated how it will overcome several methodological challenges that expert panel members identified. These challenges include determining the extent of contamination, developing appropriate cleanup benchmarks, and testing for contaminants that cause acute or short-term health effects. In addition, some expert panel members questioned EPA’s reliance on visual evidence, rather than sample data, as the primary basis for its actions, as well as its use of the modified aggressive sampling technique.

Assessing extent of contamination. Some expert panel members recommended that EPA reconsider its decision to abandon its efforts to develop a method for differentiating between normal urban dust and WTC dust, which would have allowed EPA to determine the extent of WTC contamination. Several panel members encouraged EPA to continue to refine the method and collect applicable sample data, saying that collecting data now could provide critical information for future responses. EPA was unable to develop a WTC dust signature that would have allowed it to determine the extent of WTC contamination, in part because of the limited number of dust samples taken immediately after the disaster and the length of time that elapsed between the event and development of the signature. EPA officials told us they would need to identify contamination signatures in responding to future disasters.

Developing cleanup benchmarks. Some expert panel members also expressed concerns regarding the cleanup benchmarks that EPA developed in response to the WTC disaster.
Some expert panel members agreed with the concept of dividing sampled spaces into categories, such as accessible and inaccessible areas, with associated cleanup benchmarks; however, these panel members disagreed with how EPA defined the categories. For example, an expert panel member noted that children access areas under beds, which were not considered “accessible” by EPA’s definitions, and that workers such as telecommunications technicians and housing inspectors access areas defined by EPA as “inaccessible” on a daily basis. In addition, expert panel members disagreed with some cleanup benchmarks that EPA developed for the various categories. For example, two panel members asserted that EPA’s proposed cleanup benchmark for man-made vitreous fibers was not stringent enough. EPA then changed the benchmark for man-made vitreous fibers in inaccessible areas from 100,000 fibers/cm² to 50,000 fibers/cm².

Using the modified aggressive sampling technique. Some expert panel members questioned EPA’s use of the modified aggressive sampling technique. The number of samples exceeding cleanup benchmarks was greater when the aggressive sampling technique was used. EPA’s rationale for departing from the technique specified by the Asbestos Hazard Emergency Response Act (AHERA) is that the aggressive technique does not appropriately represent conditions of human exposure in a residence. EPA has not identified in its protocols how these methodological concerns can be overcome, such as how and when data collection will occur, in order to facilitate determining the extent of contamination. Without clarifying the actions that are appropriate for EPA and other federal agencies in these scenarios, important determinations about risk from disaster-related contamination may not be promptly addressed.
Shortcomings in EPA’s second program to test and clean residences for WTC contamination raise questions about the agency’s preparedness for addressing indoor contamination resulting from future disasters. With respect to communication, the public relies on EPA to provide accurate and complete information about environmental hazards that may affect them. However, in announcing its plan for the second program, EPA did not fully disclose the limitations of its earlier test results. Consequently, some eligible residents of Lower Manhattan may have concluded that they were not at risk from contaminated dust and therefore elected not to participate in the second program. EPA did not develop a cost estimate to support its use of available Stafford Act funds for its second program. Without this information, EPA and other decision makers could not know how many residents and building owners could potentially be served by the program. Given limited federal disaster response funds and competing priorities, the federal government must carefully consider how best to allocate these funds to ensure they are used cost-effectively. In the future, unless officials justify the Stafford Act funds necessary for achieving program objectives prior to implementation, EPA will not have a sound basis for securing needed funds and, as a result, may be forced to scale back its programs in ways that limit their effectiveness. Moreover, EPA has reported that it faced several challenges in addressing WTC indoor contamination, including limited indoor sampling protocols, health benchmarks, and background data for urban areas. In addition, since the National Response Plan does not explicitly address indoor contamination, it is unclear how EPA, in concert with other agencies—including the Departments of Homeland Security, Health and Human Services, and Labor—will address these challenges.
Unless these agencies establish an approach for responding to indoor contamination, the nation may face the same challenges after future disasters. To enhance EPA’s ability to provide environmental health risk information to the public that is complete and readily understandable, we recommend that the Administrator of EPA facilitate the implementation of the recently issued Crisis Communication Plan by issuing guidance that, among other things, ensures the presentation of environmental data in an appropriate context, with appropriate technical caveats noted in plain language. To provide decision makers with a sound basis for the Stafford Act funds needed for future disaster response programs, we recommend that the Administrator of EPA establish guidelines for developing program cost estimates. These cost estimates should support the programs’ objectives and promote the efficient and effective use of government resources. To ensure that EPA is better prepared for future disasters that involve indoor contamination and that it captures important information that could guide future cleanup decisions, we recommend that the Administrator of EPA, in concert with the Departments of Homeland Security, Health and Human Services, and Labor, and other appropriate federal agencies, develop protocols or memorandums of understanding under the National Response Plan that specifically address indoor contamination. These protocols should define when the extent of contamination is to be determined, as well as how and when indoor cleanups are to be conducted. EPA should seek additional statutory authority if it determines that such additional authority is necessary. In commenting on a draft of this report, EPA’s Assistant Administrator for Research and Development and Assistant Administrator for Solid Waste and Emergency Response identified actions that EPA has begun taking that are responsive to these recommendations. 
EPA also provided comments on aspects of the report it considered misleading or inaccurate, such as our characterization of the Expert Technical Review Panel process, including the panel’s goals. Though EPA preferred that we present the charges identified by the White House Council on Environmental Quality (CEQ), we reported the goals that EPA provided directly to the expert panel at its first meeting, and we believe this accurately characterizes the priorities that EPA established for the panel. In addition, EPA asserted that the report creates a misleading impression that EPA did not fully disclose the limitations of test results from its first program. EPA refers to an appendix in its second plan, which includes a discussion of EPA’s methodology; raw data, such as the total number of samples taken; and the results of sampling efforts, but does not include a discussion of the factors that may have influenced these results. We continue to believe that EPA’s discussion did not include appropriate caveats clearly articulating the limitations of the results, such as the fact that 20 percent of eligible residents participated and that the results therefore may not have been representative of all residences. We believe that the report offers a balanced portrayal of EPA’s development of its second program, the expert panel process, and EPA’s actions to better prepare for future disasters. EPA also provided technical comments, which we incorporated as appropriate. EPA’s letter and our detailed response to it appear in appendix V. We are sending copies of this report to the Administrator, EPA; appropriate congressional committees; and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and of Public Affairs may be found on the last page of this report.
Key contributors to this report are listed in appendix VI. Since the Environmental Protection Agency (EPA) was given the authority to classify information in May 2002, it has classified information in three documents. However, none of these documents address the World Trade Center (WTC) or the environmental impact of its destruction. In May 2002, through Executive Order 12958, the President gave the EPA Administrator the authority to classify information as “Secret.” The executive order, as amended, prescribes a uniform system for classifying, safeguarding, and declassifying national security information, including information relating to defense against transnational terrorism. Section 1.4 of the executive order identifies the types of information that should be considered for classification: military plans, weapon systems, and operations; foreign government information; intelligence activities, sources, and methods, and cryptology; scientific, technological, and economic matters relating to the national security, which includes defense against transnational terrorism; U.S. programs for safeguarding nuclear materials and facilities; vulnerabilities and capabilities of systems, installations, infrastructures, projects, plans, and protection services relating to the national security, which includes defense against transnational terrorism; and weapons of mass destruction. The executive order also describes several different classification types and levels. Original classification refers to the classification of information that has not already been classified by another authority. Derivative classification refers to the classification of a document that uses information that has already been classified. The levels of classification—“Top Secret,” “Secret,” or “Confidential”—refer to the severity of the damage to national security that disclosure of the information could cause.
Since it received its classification authority in May 2002, EPA has originally classified information in three documents, according to EPA’s review of its classified information, and has identified 51 documents with derivative classification. This assessment is consistent with our review of National Archives program data, as table 2 shows. In information that EPA submitted to the National Archives, it explained that, although EPA did not originally classify information in any documents in fiscal year 2006, the three documents containing originally classified information significantly increased the number of derivative classification decisions made by EPA because subsequent documents included the originally classified information. EPA has not classified any WTC information, including environmental information, according to our review of the three documents that EPA has classified. According to nonclassified portions of these three documents, they discuss threat scenarios for buildings, water systems and drinking water infrastructure, and water decontamination. We were asked to determine (1) the extent to which the Environmental Protection Agency (EPA) incorporated recommendations and additional input from the expert panel and its Inspector General in its second program; (2) what factors, if any, limited the expert panel’s ability to meet its goals; (3) the completeness of information EPA provided to the public in its second plan; (4) the way EPA estimated the resources needed to conduct the second program; and (5) the extent to which EPA has acted upon lessons learned to better prepare for indoor contamination that could result from future large-scale disasters. In addition, owing to concerns raised in the media about EPA’s use of classification authority, we were asked to determine the extent to which EPA has classified information and, if so, whether any classified information discusses the environmental impact of the towers’ collapse.
To examine EPA’s actions to incorporate recommendations and additional input from the expert panel and its Inspector General, we reviewed four Inspector General recommendations on EPA’s test and clean program; all 13 WTC Expert Technical Review Panel meeting summaries, which included input from the WTC Community-Labor Coalition representatives to the panel and other panel members; and EPA’s 2002-2003 indoor test and clean program plan and all drafts leading to the 2006 program plan. We analyzed the December 2006 Final Test and Clean Plan to determine whether EPA had incorporated individual panel member and Inspector General input. We relied upon EPA’s summaries of the panel meetings to obtain information on individual panel member input because EPA did not have a comprehensive list of panel recommendations. We also conducted interviews with EPA officials from headquarters (Washington, D.C.) and Region 2 (New York City) to identify actions EPA took to incorporate the expert panel and Inspector General input into the test and clean program plan. Finally, we conducted structured interviews with all 18 expert panel members, as well as the two chairs of the WTC Expert Technical Review Panel. The expert panel members included community representatives; local and federal government officials from the Federal Emergency Management Agency (FEMA), the Department of Labor’s Occupational Safety and Health Administration, and New York City’s Department of Environmental Protection and Department of Health and Mental Hygiene; and nongovernment members. To determine the factors that affected the expert panel’s ability to meet its goals, we conducted structured interviews with all 18 WTC expert panel members, as well as the two former EPA Assistant Administrators for the Office of Research and Development who chaired the panel.
We analyzed expert panel member and panel chair responses to both qualitative and quantitative questions in order to describe the panel process and obtain information on EPA’s management of the process. In follow-up correspondence, we asked panel members whether EPA’s second program was responsive to the concerns of residents and workers; we were able to obtain responses from only 10 panel members. We also reviewed all 13 panel meeting summaries and selected video or audio recordings of meetings. To evaluate the completeness of information EPA provided to the public in its second plan, we reviewed EPA’s 2002-2003 program plan and all drafts leading to the December 2006 program plan, information on testing data included on EPA’s Web site, the 2003 EPA Inspector General report, and all 13 summaries of EPA’s Expert Technical Review Panel meetings. To examine EPA efforts to estimate the resources needed to conduct the second program, we obtained and analyzed funding documentation, including interagency agreements between FEMA and EPA, as well as documentation related to funding and expenditure data for the WTC indoor test and clean program. We found discrepancies in the data EPA and FEMA provided. We assessed the reliability of expenditure data received from EPA but were unable to assess the reliability of expenditure data provided by FEMA. We assessed the reliability of the EPA expenditure data by interviewing officials knowledgeable about the data and reviewing existing information about the data and the system that produced them. We determined that EPA’s funding data were sufficiently reliable for the purposes of our review. We also interviewed agency officials to gather information on EPA’s expenditures, its plans to spend funding, and whether EPA plans to seek additional funds.
To examine the extent to which EPA has acted upon lessons learned for addressing indoor contamination resulting from future large-scale disasters, we interviewed officials from EPA headquarters, including the Office of Research and Development and the Office of Solid Waste and Emergency Response; from Region 2, which is responsible for New York City; and from EPA’s National Homeland Security Research Center, among others. We compared EPA’s activities with the Inspector General’s recommendations on preparedness and with recommendations in EPA’s Lessons Learned in the Aftermath of September 11, 2001. We also attended a National Institute of Standards and Technology technical seminar on WTC materials and observed the disaster area with a FEMA official. To determine the extent to which EPA has classified information, and, if so, whether any classified information discusses the environmental impact of the towers’ collapse, we requested a statement from EPA on (1) whether any EPA officials, including former EPA Administrators, authorized by Executive Order 12958 to classify information as secret have done so since the executive order was promulgated; and (2) whether any of the classified information pertains to the environmental impact of the WTC collapse, including the indoor test and clean program, contaminants of potential concern, or geographic boundaries, that are relevant to EPA’s approach to addressing indoor contamination. After EPA responded, we requested access to and we reviewed all classified information to determine whether it was related to the WTC disaster. In addition, we obtained and reviewed data from the National Archives to determine the number of documents EPA has classified since receiving authority to do so. Appendix I provides the results of our analysis of EPA’s classification of information under this authority. We performed our work between June 2006 and September 2007 in accordance with generally accepted government auditing standards. 
Lower Manhattan indoor dust test and clean program (December 2006)

In general, a cleanup will be offered if a benchmark for any contaminant is exceeded in any unit or building common area tested. EPA will conduct surveys to determine if contamination levels exceeding benchmarks may be attributed to sources within or adjacent to the place of business or residence. This information will be considered with information on building cleaning history to determine whether additional sampling or further cleaning will be offered.

[Flattened table in source listing program eligibility criteria: geographic boundaries (“streets based on the EPIC visual”); eligible participants (residents: owners or renters); eligible spaces (residential buildings’ common areas, as well as residential or commercial buildings); building employees and employers not eligible.]

Air samples were also analyzed for total fibers, including MMVF; however, this did not affect cleanup decisions. In a subset of residences, pre- and post-cleanup dust wipe samples were collected and analyzed for dioxin, mercury, lead, and 21 other metals. This included over 1,500 samples from 263 residences and 157 buildings.

The body of this report generally identifies expert responses to our questions about EPA’s management of the panel process. The following tables include the full range of experts (out of 18) who responded to these questions. The tables also indicate the number of experts who provided no response.

Question: Was EPA’s decision to obtain individual recommendations rather than have the panel arrive at consensus appropriate?
Question: Did expert panel members have adequate agenda time for panel discussion of issues?
Question: How transparent was EPA’s decision-making process behind changes in the test and clean plan versions?
Question: How helpful would it have been to have written transcripts of the meetings available?
Question: How successful do you think the panel was in meeting each of the following panel goals?
Follow-up question: Is the Lower Manhattan Indoor Dust Test and Clean Program Plan responsive to the concerns of residents and workers impacted by the collapse of the World Trade Center towers? The following are GAO’s comments on the Environmental Protection Agency’s letter dated August 21, 2007. 1. We believe that the report offers a balanced portrayal of EPA’s development of its second program, the WTC Expert Technical Review Panel process, and EPA’s actions to better prepare for future disasters. In several cases we have clarified the language in the draft report to address EPA concerns. 2. In regard to EPA’s comments about the transparency of the WTC Expert Technical Review Panel process, we reported on the factors that limited the panel’s ability to meet its goals and not on the overall transparency of the process. We stated that two factors limited the panel’s ability to meet its goals: (1) EPA officials’ assertion that other agencies were better equipped to address public health and (2) EPA’s approach for managing the panel process. Regarding EPA’s management of the panel process, however, expert panel members told us that EPA did not have a transparent process for adopting or rejecting their recommendations, as we stated in the draft report. 3. Regarding panel members’ views on the responsiveness of EPA’s second program to concerns of residents and workers, we clarified our report to note that the source of the views included all of the expert panel members who responded to a follow-up inquiry regarding this question. 4. We disagree that the draft report provided panel member views in a misleading manner. However, we clarified the report language to indicate that 9 of 18 panel members reported that the decision-making process behind EPA’s changes to its plan was not at all transparent. In doing so, we reported the category with the largest number of responses and, as indicated in the draft report, the full range of responses can be found in appendix IV.
As stated in the draft report, in order to determine the factors that affected the expert panel’s ability to meet its goals, we conducted structured interviews with all 18 expert panel members. We analyzed these responses in order to describe the panel process, including EPA’s management of the panel process. We reported the views that panel members provided to us during structured interviews and included the full range of responses to these questions in an appendix, as stated above. Regarding comments that panel members had inadequate time for decision making, panel members requested at the final panel meeting that EPA allow time for additional discussion. According to the December 2005 meeting summary, the panel co-chair “summarized that the overall sense of the panel members is that there is a need for additional discussion.” 5. We acknowledge that EPA would have preferred that we include more detailed information in our discussion of the agency’s second WTC program, the WTC Expert Technical Review Panel process, and its programs for responding to disasters. However, the purpose of our report was not to reiterate the technical details of EPA’s efforts but to summarize specific findings related to our key objectives. 6. EPA asserts that it conducted extensive monitoring and modeling after September 11, 2001, in order to determine the extent of contamination. We acknowledge that appendix I in EPA’s December 2006 plan states, “the plumes resulting from the collapse of the towers and subsequent fires were modeled by EPA” and that “EPA and many other agencies collected and analyzed environmental samples after the September 11, 2001, attack on the WTC,” and we incorporated these facts in the report. However, when we asked EPA to identify which samples were taken indoors, EPA officials told us they did not have this information.
Furthermore, in the body of EPA’s December 2006 program plan, EPA acknowledges that it is no longer attempting to assess the extent of WTC contamination. We maintain that the challenge of identifying the extent of WTC contamination in indoor spaces remains. 7. We agree that neither EPA nor panel members suggested testing in inaccessible areas as a means of determining the adequacy of its cleanups. However, our statement was intended to convey our belief that if EPA had information about these areas, a more complete picture of both the extent of contamination and the adequacy of overall efforts directed toward cleaning and testing could be assessed. 8. EPA takes issue with our assertion that EPA did not estimate the resources needed to carry out its second program. We believe that EPA did not conduct a cost estimate that identified the resources needed to effectively implement the second program. As EPA stated in comments, it provided information for potential contract costs for the second program; however, we continue to believe that the information was limited as it related to only one program component—sampling—and it was unclear how the sampling costs related to an overall cost estimate. In EPA’s comments, it states that cost data provided in its interagency agreement constituted a cost estimate; however, information on key assumptions, such as estimated participation rates, as well as on key program elements, including the cost of sampling, was not included. Further, the information provided in the interagency agreement was not the basis for determining whether $7 million in funding would be adequate for implementing the second program—as this amount had already been established as the remaining funds FEMA set aside for EPA’s use.
In contrast, for its first program, EPA provided information in the interagency agreement with FEMA that included details associated with individual cost elements, such as sample analysis, equipment and supplies, and salary and travel costs. For example, EPA provided detailed estimates for analytical services based on key assumptions related to participation, samples per unit, and the testing for specific contaminants. EPA did not provide this information in the second interagency agreement to support its identification of resources needed for analytical activities. We note that the interagency agreement for EPA’s first program identified over $9 million for sampling and analysis of asbestos. While the second program is addressing three additional contaminants, the interagency agreement has limited detail on the associated sampling and analysis costs or how these relate to the total funding of $7 million. 9. EPA asserts that table 1 in the draft report (figure 5 in the final report) does not accurately characterize the IG recommendations and the relationship between them and the CEQ charges. As the draft report stated, table 1 in the draft report (figure 5 in the final report) showed key recommendations and additional input that the IG and panel members provided to EPA. We believe that the figure accurately presents both recommendations such as those found in Chapter 6 of the IG report, as well as input the IG provided in other sections of the report that supports these specific recommendations. The figure also presents input provided by panel members, which we believe is not documented comprehensively in other locations. 10. In EPA’s comments, it notes that panel members were free to refocus issues, and our draft report acknowledged that EPA adopted panel members’ input to address contamination, rather than recontamination, of spaces. On page 8 of its comments, EPA took issue with our description of the panel’s goals. 
EPA provided the charges identified by CEQ in its October 27, 2003, letter to the agency. In our report, rather than present these charges, we instead reported goals that EPA directly provided to the expert panel at its first meeting on March 31, 2004. We believe this is an accurate characterization of the priorities EPA established for the panel. 11. In its comments, EPA states that the agency decided to implement a voluntary program to test and clean residences and whole buildings. In fact, when requested by building owners, the December 2006 program plan offers testing and cleaning in residential and commercial buildings’ common areas, but does not use the term “whole buildings.” 12. EPA takes issue with our assessment that EPA failed to disclose the limitations in testing results. EPA refers to appendix I of its second plan and notes that it contains an “extensive discussion” of the results of the first program. The appendix includes a discussion of EPA’s methodology, raw data such as the total number of samples taken, and the results of sampling efforts but does not include a discussion of the limitations that may have influenced these results. EPA also notes that discussion of its first program’s test results was available in panel meeting summaries and on EPA’s WTC Web site; however, these sources summarized presentations made to the panel and responses to panel member comments but, like EPA’s second program plan, lacked a discussion of limitations. We continue to believe that EPA’s discussion did not include appropriate caveats clearly articulating the limitations of the results, such as the fact that 20 percent of eligible residents participated and that the results therefore may not have been representative of all spaces. Finally, GAO did not conclude that EPA withheld data, as EPA suggested in its comments. 13.
In EPA’s comments, EPA disagrees with our assessment that it has not demonstrated how it will overcome certain challenges identified by expert panel members. We acknowledge EPA’s analytical capabilities and the acute exposure guideline levels and other benchmarks that are available to EPA. We continue to believe that expert panel members raised valid issues regarding EPA’s second program following the WTC disaster, including what cleanup benchmarks EPA used, what contaminants EPA tested for, and EPA’s reliance on visual evidence. We believe these issues point to the need for protocols or interagency agreements that clarify how EPA, along with other agencies, is to address indoor contamination in the future. Further, after reviewing the summary of the HVAC system evaluation process that EPA provided on pages 24 and 25 of its comments, we continue to believe that this process is primarily a visual assessment and that we accurately portrayed panel member concerns with EPA’s reliance on visual evidence rather than sample data for HVAC evaluations. 14. We encourage EPA to complete and implement its Crisis Communication Plan’s companion resource guide, described in its comments, in a timely fashion. The public relies on EPA to provide accurate and complete information about environmental hazards that may affect them. Ensuring that environmental data are presented in language that is easily understood and in easily accessible formats will improve the public’s ability to make informed decisions. 15. We note that EPA’s comments indicated that since the WTC disaster, EPA has developed more detailed cost estimates to help plan the agency’s Stafford Act activities and that the agency is working to establish more specific reporting requirements. In order to more fully inform planning and to allow for the efficient allocation of disaster funds, we encourage the agency to continue these efforts. 16.
We recognized in our recommendation the role that DHS and other federal agencies would play in developing protocols and memorandums of understanding under the National Response Plan that specifically address indoor contamination. We acknowledge that EPA plays a critical role under Emergency Support Function 10 for addressing oil and hazardous waste releases. It is encouraging that EPA is pursuing a number of efforts related to chemical, biological, and radiological incidents, including the development of protocols that specifically address indoor contamination involving these types of agents. In addition to these areas, we believe that protocols specific to indoor contamination, which define when the extent of contamination is to be determined, as well as how and when indoor cleanups are to be conducted, should be priorities. 17. We edited the sentence as suggested, but we note that the May 3, 2002, letter from Christopher Ward, New York City Department of Environmental Protection, to Brad Gair, FEMA, refers specifically to asbestos. It states, “The City of New York believes that it is in the public’s interest to remove this material from buildings in the vicinity of the WTC site. Samples collected during the inspections indicate that asbestos may be present in some of the debris. The removal of this material will assure that it will not become re-entrained in the air in the future, thereby protecting against any adverse affects on air quality or public health and safety.” 18. We edited the sentence on residential sampling as suggested. 19. EPA is concerned that we provided additional detail beyond the specific statement of IG recommendation 6-3. We believe our statement accurately characterizes the recommendation by taking into consideration other information in the IG report. Specifically, preceding this recommendation, the IG provides details that support this recommendation. 
The IG states on page 51 of its August 2003 report that “in the case of centralized HVAC systems, selective cleaning does not ensure that cleaned apartments will not be recontaminated by uncleaned apartments through the HVAC system. Consequently, the cleaning of contaminated buildings should proceed by treating the building as a system.” 20. We included this information in our final report. 21. EPA asserts that our discussions of EPA’s efforts to develop a WTC dust screening method are incorrect. We recognize that additional development would have been necessary to improve the precision and accuracy of the method and, in doing so, render the method usable as a WTC dust screening tool. Our draft report described the subpanel’s work to help EPA develop such a methodology and provided information about the peer review of the methodology. As indicated on page 18 of its comments, EPA suggested that its method was never intended to distinguish “WTC contaminants in dust.” Our draft report asserted that EPA was unable to develop a method for differentiating between normal background dust and WTC dust and therefore EPA was unable to determine the extent of WTC contamination. We believe the phrase “WTC contaminants in dust” is synonymous with dust contaminated with “WTC residue.” 22. We included this information in our final report. 23. EPA disagrees with our statement that EPA did not begin examining methods for differentiating between normal urban dust and WTC dust until May 2004. While multiagency workgroup and task force activities were related, EPA initiated its specific effort to develop a method for identifying a WTC dust signature after individual expert panel members recommended that it do so at its May 12, 2004, meeting. 
This decision is documented in a September 8, 2006, letter from the EPA Region 2 Administrator to a Member of Congress that states, “As a result of these discussions, EPA decided to explore whether a WTC signature exists in dust.” We continue to believe that our statement is accurate. 24. We disagree that our statement regarding workplaces is misleading. Despite OSHA and NIOSH presentations made at panel meetings, we continue to have concerns because these agencies do not have authority to conduct cleanup in response to contaminant levels that exceed EPA’s site-specific cleanup benchmarks. Furthermore, our draft report stated that OSHA’s standards are designed primarily to address airborne contamination, while EPA’s test and clean program is designed to address contamination in building spaces, whether it is airborne or in settled dust. 25. We disagree with EPA’s assertion that this statement creates the impression that other agencies were not addressing health-related issues. Our comments were limited to the panel’s ability to meet its goals, one of which was to identify unmet public health needs. While EPA’s facilitation of public health presentations may have provided information about health issues, all but two expert panel members told us that the panel did not successfully identify unmet public health needs. We did not address the quality of the WTC Health Registry or other agencies’ public health activities. 26. The source of the office and residential building data is the May 12, 2004, panel meeting summary posted on EPA’s Web site. The summary identifies a New York City Department of Buildings database from which EPA drew this information. 27. The draft report provided basic facts and background information about EPA’s first program that were derived from EPA’s December 2006 program plan and other EPA reports in order to provide context for the development of the second program. 28. 
EPA takes issue with our draft report’s characterization of the availability of sample results from the study conducted by the New York City Department of Health and Mental Hygiene and the Agency for Toxic Substances and Disease Registry. In fact, our draft report provided a footnote pointing out that the results of the study were made available to EPA in February 2002. 29. EPA said the dates we provided in a timeline of events did not accurately portray when the results of agency studies were available for its use. We provided publication dates for three EPA studies in our timeline to illustrate the range of activities that EPA engaged in prior to its second program. EPA also asserted that there was no single date for reoccupation of residences. In fact, our timeline specifically includes the date, 9/17/2001, that New York City residents began to reoccupy homes and Wall Street was reopened. 30. As suggested, we replaced the term “cleanup standards” with “cleanup benchmarks” and we expanded our discussion of how these benchmarks were developed. 31. EPA asserts that our statement is incorrect because it omits discussion of cleaning in common areas. We acknowledge that EPA will clean in common areas under certain circumstances; however, the context of this discussion was the panel members’ recommendations that EPA clean in HVACs. 32. We believe that the draft report correctly presents the IG recommendation, what EPA considered, and the agency’s rationale for not electing to pursue a sampling approach that would have addressed whole buildings; however, we clarified the report’s language to include more detail regarding EPA’s proposed approach. The July 26, 2004, panel meeting summary supports our description of how EPA considered various approaches. 
While EPA said that its intent was not to characterize buildings but rather to use the information from buildings “to characterize areas,” the meeting summary includes a presentation by an EPA official on a sampling approach that involved “…conducting air and dust sampling in several units within the building to characterize the building.” Further, we disagree with EPA’s explanation of why its proposal to do so was rejected by panel members and the public. Panel members rejected the aspect of the plan that would have limited the sampling to the same residences that participated in EPA’s first program, as panel members wanted the plan to allow for sampling in residences that had not participated previously. Thus, EPA’s assertion in its comments that the panel members rejected EPA’s approach because it was addressing whole buildings is not accurate. 33. We clarified this statement in the report, noting that EPA did not maintain a list of recommendations; however, we continue to believe that the meeting summaries maintained by EPA did not constitute comprehensive documentation of recommendations made by expert panel members. 34. We disagree that our discussion of overloaded samples is incorrect; however, we clarified report language to indicate that sample results, rather than samples, were discarded and that dust particles, rather than fibers, obscured analysis. In EPA’s final report from its first program, the agency states, “there were a number of outcomes that resulted in inconclusive results. Filter overload was the most common. Filter overload occurs when too many dust particles are captured on the filter. The filter becomes obscured so technicians examining it under a microscope cannot separate out individual fibers. This causes an inconclusive result, which is discarded.” In its second program plan, EPA does not present this information in its description of its first program’s test results. 
We continue to believe that this information would have provided additional context to the public. 35. EPA disagrees with our assessment that EPA guidance has not yet addressed how the agency will determine the extent of contamination resulting from disasters. We acknowledge that EPA has built its capacity to address contamination since the WTC disaster and that it continues to work to develop additional sampling methods. In fact, the draft report provided examples of research EPA is conducting, benchmarks EPA is developing, and other preparedness activities that EPA has undertaken. However, we do not believe that existing guidance or protocols have provided additional assurances that EPA has addressed the challenges it faced from 2004 to 2005 when working to develop a reliable screening method for WTC dust. 36. As suggested, we edited the sentence regarding the Environmental Response Team. 37. As suggested, we edited the sentence regarding environmental laboratory networks. 38. As suggested, we edited the sentence regarding acute exposure guideline levels. 39. EPA noted matters for correction in an appendix that provides background information on EPA’s first and second programs. We edited the statement regarding EPA’s role in the first program, as suggested. However, we note that in its final report on its first program EPA states, “contractors cleaned and tested homes, under the direction of the EPA.” In addition, our draft report included a table note referring to the subset of 263 residences that EPA tested for additional contaminants, and we have added detail regarding total fibers. For common areas, the draft report included the number of samples taken from common areas, and it also notes that 144 buildings had common areas cleaned. We clarified the appendix III language regarding geographic extent to note that the appendix provides program boundaries. In addition to the contact named above, Diane B. 
Raynes, Assistant Director; Janice Ceperich; Michele Fejfar; Brandon H. Haller; Katheryn Summers Hubbell; Karen Keegan; Omari Norman; Carol Herrnstadt Shulman; and Sandra Tasic made major contributions to this report. Additional assistance was provided by Katherine M. Raheb.
The September 11, 2001, terrorist attacks and World Trade Center (WTC) collapse blanketed Lower Manhattan in dust from building debris. In response, the Environmental Protection Agency (EPA) conducted an indoor clean and test program from 2002 to 2003. In 2003, EPA's Inspector General (IG) recommended improvements to the program and identified lessons learned for EPA's preparedness for future disasters. In 2004, EPA formed an expert panel to, among other goals, guide EPA in developing a second voluntary program; EPA announced this program in 2006. As requested, GAO's report primarily addresses EPA's second program, including the (1) extent to which EPA incorporated IG and expert panel member recommendations and input; (2) factors, if any, limiting the expert panel's ability to meet its goals; (3) completeness of information EPA provided to the public; (4) way EPA estimated resources for the program; and (5) extent to which EPA has acted upon lessons learned regarding indoor contamination from disasters. EPA has incorporated some recommendations and input from the IG and expert panel members into its second program, but its decision not to include other items may limit the overall effectiveness of this program. For example, while EPA agreed to test for more contaminants, it did not agree to evaluate risks in areas north of Canal Street and in Brooklyn. EPA reported that it does not have a basis for expanding the boundaries of its program because it cannot distinguish between normal urban, or background, dust and WTC dust. The expert panel's ability to meet its goals was limited by two factors: (1) EPA officials' belief that some panel goals were more appropriately addressed by other agencies, and (2) EPA's approach to managing the panel process. Furthermore, the majority of expert panel members believe the panel did not meet any of its goals, and that EPA's second program does not respond to the concerns of residents and workers affected by the disaster. 
EPA's second plan does not fully inform the public about the results of its first program. EPA concluded that a "very small" number of samples from its first program exceeded risk levels for airborne asbestos. However, EPA did not provide information such as how representative the samples were of the affected area. Residents who could have participated in this voluntary second program might have opted not to do so because of EPA's conclusion about its first program. EPA did not develop a comprehensive cost estimate to determine the resources needed to carry out its second program. EPA is implementing this program with $7 million remaining from its first program. While EPA has acted upon lessons learned following this disaster, some concerns remain about its preparedness to respond to indoor contamination following future disasters. Specifically, EPA has not developed protocols on how and when to collect data to determine the extent of indoor contamination, one of the concerns raised by panel members.
Since the CFO Act’s passage, steady progress has been made in improving federal financial management. A set of comprehensive accounting standards has been completed by the Federal Accounting Standards Advisory Board (FASAB), agencies are making progress toward receiving unqualified audit opinions on financial statements, and structures are in place to identify and resolve governmentwide financial management issues. FASAB was created by the Secretary of the Treasury, the Director of the Office of Management and Budget (OMB), and the Comptroller General in October 1990 to consider and recommend federal accounting standards. Treasury, OMB, and GAO then decide whether to adopt the recommended standards; if they do, the standards are published by OMB and GAO and become effective. Statements of federal financial accounting concepts and standards, which are listed in attachment I, now provide for reporting on the federal government’s financial condition, as well as on the costs of its programs. For fiscal year 1996, when agencywide financial statements were required across government for the first time, 6 of the 24 CFO Act agencies received unqualified audit opinions. For the next year, fiscal year 1997, 9 agencies received unqualified audit opinions, and OMB expects an additional agency to receive an unqualified opinion by the end of June 1998. The preparation of financial statements and independent audit opinions required by the expanded CFO Act are bringing greater clarity and understanding to the scope and depth of problems and needed solutions. Some individual agencies have successfully solved these problems. For example, the Social Security Administration (SSA) prepared financial statements for fiscal year 1987—prior to the expanded CFO Act’s requirement—addressed financial weaknesses, and attained its first unqualified audit opinion for fiscal year 1994. 
As this Subcommittee heard at its April 17, 1998, hearing, SSA now produces financial statements within 2 months of the close of the fiscal year and continues to receive unqualified audit opinions annually. At the Department of Energy, the Inspector General identified problems related to the balance sheet Energy prepared for fiscal year 1995. The problems, for example, involved identifying liabilities associated with environmental cleanup and controls over property and equipment, which Energy worked to correct. The following year, fiscal year 1996, Energy prepared agencywide financial statements that received an unqualified opinion and sustained these results for fiscal year 1997. Many people are actively working to resolve federal financial management problems. For example, OMB has issued guidance to agencies on producing useful financial reports that meet FASAB standards. In addition to individual CFOs working to address issues in their agencies, the CFO Council, working with OMB, develops an annual financial management status report and 5-year plan. Inspectors General are carrying out their responsibilities to ensure annual audits of financial statements. On March 31, 1998, the Secretary of the Treasury, in consultation with the Director of OMB, issued the 1997 Consolidated Financial Statements of the United States Government. These audited governmentwide financial statements were the first prepared and issued under provisions of the expanded CFO Act and included our first report required by the act. On April 1, 1998, we testified before this Subcommittee on the results of our audit. Our testimony framed the most serious financial management improvement challenges facing the federal government. 
In summary, significant financial systems weaknesses; problems with fundamental recordkeeping; incomplete documentation; and weak internal controls, including computer controls, prevented the government from accurately reporting a large portion of its assets, liabilities, and costs. Our audit of the federal government’s consolidated financial statements and the Inspectors General audits of agencies’ financial statements have resulted in an identification and analysis of deficiencies in the government’s recordkeeping and control system and recommendations to correct them. The executive branch recognizes the extent and severity of the financial management deficiencies and that addressing them will require concerted improvement efforts across government. Financial management has been designated one of the President’s priority management objectives, with the goal of producing timely, informative, and accurate performance and cost information, consistent with federal accounting standards. Also, the administration has set goals for individual agencies, as well as the government as a whole, to complete audits and gain unqualified opinions. To help achieve these objectives, the President issued a May 26, 1998, memorandum to the heads of executive departments and agencies on actions needed to improve financial management. The President’s message points to several areas requiring additional attention from agencies: practices related to the government’s property, federal credit programs, liabilities related to the disposal of hazardous waste and remediation of environmental contamination, federal government employment-related benefits liabilities, and transactions between federal entities. These areas reflect the serious deficiencies that prevented us from being able to form an opinion on the reliability of the consolidated financial statements of the U.S. government. 
The President’s directive places additional accountability on agency heads and gives OMB more responsibility for addressing these problems. Specifically, the President has directed that OMB identify agencies subject to reporting under the memorandum and monitor their progress toward the goal of an unqualified audit opinion on the governmentwide financial statements for fiscal year 1999; that the head of each agency identified by OMB submit to OMB a plan, including milestones, for resolving by September 30, 1999, the financial reporting deficiencies identified by auditors, with the initial agency plans due to OMB by July 31, 1998; that the head of each agency submitting a plan provide quarterly reports to OMB, starting on September 30, 1998, describing progress in meeting the milestones in the action plan and any impediments that would impact the governmentwide goal; and that OMB provide periodic reports to the Vice President on the agency submissions and on governmentwide actions taken to meet the governmentwide goal. Specific agencies, such as the Department of Defense (DOD), are also reacting to the results of the most recent financial audits. As we testified before this Subcommittee on April 16, 1998, material financial management deficiencies identified at DOD, taken together, represent the single largest obstacle that must be effectively addressed to achieve an unqualified opinion on the U.S. government’s consolidated financial statements. In response to DOD’s unfavorable financial audit results over the last several years, the Secretary of Defense announced on May 15, 1998, that initiatives to improve the accuracy, timeliness, and usefulness of financial information are to be developed through the Defense Management Council. 
The Secretary has (1) instructed the Under Secretary (Comptroller) to oversee departmentwide efforts to improve the manner in which financial information is captured and reported in all DOD systems—not just its financial systems—and (2) directed the secretaries of the military departments, and other top DOD officials, to support the Under Secretary (Comptroller) in DOD’s financial business practices reform. Reactions such as these to address the problems identified through the first audit of the U.S. government’s consolidated financial statements are positive steps. In the short term, the quality of the action plans agency heads submit to OMB in response to the President’s directive will be critical. It is essential for these plans to define financial management problems precisely, establish specific strategies and corrective measures for resolving them, include implementation time frames, fix accountability for needed actions, and be prepared in consultation with auditors. Moreover, our experience has shown that considerable hard work, commitment, and oversight will be necessary to translate planned steps into concrete improvements. The aggressiveness with which agencies implement the action plans and pursue solutions to financial management problems will be a strong indication of whether agency heads have a sustained commitment to achieving financial management reform goals. Ultimately, agency heads and their senior management team have to be accountable for results. Again, the auditors have key roles in providing perspectives on actions needed to attain improvements and in assessing progress toward implementing the action plans. Federal agencies will have great difficulty meeting expectations for modernizing their financial management systems unless they effectively meet the Year 2000 computing challenge. 
As we have discussed in numerous testimonies before this Subcommittee, this issue is the most sweeping and urgent information technology challenge facing organizations today. Strong leadership is needed to avoid major disruptions in services and financial operations, such as processing financial transactions, reporting financial information, controlling property, and collecting revenue. Unless this issue is successfully addressed, serious consequences could occur. For example: payments to veterans with service-connected disabilities could be severely delayed if the system that issues them either halts or produces checks so erroneous that it must be shut down and checks processed manually; the SSA process to provide benefits to disabled persons could be disrupted if interfaces with state systems fail; federal systems used to track student loans could produce erroneous information on loan status, such as indicating that a paid loan was in default; IRS tax systems could be unable to process returns, thereby jeopardizing revenue collection and delaying refunds; and the military services could find it extremely difficult to efficiently and effectively equip and sustain U.S. forces around the world. This Subcommittee’s emphasis has helped to focus on the potential consequences of the Year 2000 computing crisis and the need for added impetus by some agencies to overcome vast difficulties within the next 18 months. In our most recent testimony before the Subcommittee on June 10, 1998, we reported that progress in addressing Year 2000 continues at a slow pace, and that as the amount of time to the turn of the century shortens, the magnitude of what must be accomplished becomes more daunting. We have issued over 40 reports and testimony statements detailing specific findings and recommendations related to the Year 2000 readiness of a wide range of federal agencies. 
Moreover, to reduce the risk of widespread disruptions, we have made several governmentwide recommendations to the President’s Council on Year 2000 Conversion and OMB to expedite the efforts of federal agencies and build strong partnerships with the private sector and state and local governments. This will likely affect the pace of progress on modernizing financial systems, as some agencies’ efforts to address the Year 2000 computing crisis are taking precedence over longer-term financial management systems development and improvement initiatives. Unless successfully dealt with, this crisis presents the likelihood of new financial management systems weaknesses occurring, existing problems worsening, and ongoing reform efforts being derailed. Congressional attention is essential to help sustain the current momentum to implement financial management reform legislation. There are clear indications that the results of financial audits are beginning to attract increasing attention from various congressional committees. One instance involves the audit of IRS’s financial statements. During our first audits, beginning with fiscal year 1992, we identified serious problems and were unable to give an opinion on IRS’s financial statements. The head of IRS was called before congressional committees in both the House and Senate on numerous occasions to explain the steps IRS was taking, and the progress it was making, to overcome them. On April 15, 1998, we testified before this Subcommittee that after several years of concerted effort by IRS and GAO, we were, for the first time, able to conclude that IRS’s custodial financial statements were reliable. These positive results show that focused attention by the Congress and this Subcommittee on IRS’s financial management has begun to improve information available to IRS management and to the Congress to help make decisions. 
In addition, issues raised by financial audits are beginning to prompt inquiries among various congressional committees, as exemplified by the following. In its reports for the fiscal years 1997 and 1998 appropriations bills, the Subcommittee on Labor, Health and Human Services, Education and Related Agencies of the House Committee on Appropriations (1) set the expectation that the Departments of Labor, Health and Human Services, and Education work vigorously toward obtaining clean audit opinions, (2) questioned whether these agencies could properly exercise the substantial transfer and reprogramming authority granted to them under the appropriations act if substantial financial management reform progress had not been made, and (3) stated that in subsequent years it would consider the agencies’ progress in making such reforms and obtaining clean financial statement audit opinions when scrutinizing requests for appropriations and in deciding whether to continue, expand, or limit transfer and reprogramming authority. The Chairman of the House Committee on the Budget asked us to monitor the Forest Service’s progress in improving the reliability of its accounting and financial data, which also contributed to a recent joint hearing before the House Committee on Resources, Committee on the Budget, and Subcommittee on Interior and Related Agencies, Committee on Appropriations, focusing on inefficiency and waste resulting from the Forest Service’s lack of financial and performance accountability. After considering funding for DOD for fiscal year 1998, the Senate Armed Services Committee legislatively required DOD to prepare biennial financial management improvement plans that include a concept of operations for the financial management of the department. The first such plan is to be submitted to the Congress by September 30, 1998. 
In approving DOD’s 1997 and 1998 appropriations, the Congress also put in place a legislative requirement to accelerate DOD’s planned timetable for addressing long-standing problems in accurately and promptly accounting for billions of dollars in disbursements. Additionally, as part of DOD’s 1999 authorization, the Senate Armed Services Committee has approved a requirement for DOD to provide a detailed annual report on the quantities and locations of DOD’s multibillion dollar investment in inventories and military equipment. The Chairman of the House Budget Committee asked us to analyze the programmatic and budgetary implications of the financial data deficiencies enumerated by the auditors’ examination of the Department of the Navy’s fiscal year 1996 financial statements. In March 1998, we advised the Chairman that the extent and nature of the Navy’s financial deficiencies identified by the auditors, including those that relate to supporting management systems, increase the risk of waste, fraud, and misappropriation of Navy funds and can drain resources needed for defense mission priorities. On April 24, 1998, this Subcommittee and the House Committee on Commerce’s Subcommittees on Oversight and Investigations and on Health and Environment held a joint hearing on the Department of Health and Human Services Inspector General’s audit of the Health Care Financing Administration’s fiscal year 1997 financial statements. This helped focus attention on fixing the control weaknesses associated with the more than $20 billion of improper payments in the Medicare fee-for-service program disclosed by the financial audit. 
In February 1998, we assisted the Chairman of the House Committee on the Budget in considering the possible program and budgetary implications of the questions raised about financial statement data deficiencies identified in the Department of Transportation Inspector General’s audit report on the Federal Aviation Administration’s fiscal year 1996 Statement of Financial Position. In addition to initiatives by individual congressional committees, the Federal Financial Management Improvement Act (FFMIA) provides the Congress another tool in monitoring the progress of all 24 CFO Act agencies in improving financial systems. The act is intended to increase accountability in federal financial management and develop systems with the capability to support FASAB standards. FFMIA also provides for an independent judgment by auditors of agencies’ efforts to foster compliance with financial management improvement goals. Under the act, agencies are required to comply with federal accounting standards, federal financial systems requirements, and the U.S. government’s standard general ledger at the transaction level. This legislation also requires (1) auditors performing financial audits under the CFO Act to report whether agencies comply with these requirements and (2) agency heads, if their agencies do not comply, to prepare remediation plans to bring financial management systems into substantial compliance within 3 years. We reported in October 1997 that prior audit results and agency self-reporting all point to significant challenges that agencies must meet to fully implement these requirements. The majority of federal agencies’ financial management systems are not designed to meet current accounting standards and systems requirements and cannot provide reliable financial information for managing government operations and holding managers accountable. 
Auditors’ reports for fiscal year 1997 agency financial audits are disclosing the continuing poor condition of agencies’ financial systems. To date, the financial systems of only four agencies are reported to be in substantial compliance with the requirements and standards FFMIA specifies. The Congress can build further upon this structure by conducting annual hearings on each agency as part of the regular appropriation, authorization, and oversight process. Each year, congressional committees could review the results of agencies’ most recent financial statement audits and FFMIA reports to gauge the progress agencies are making in improving financial management. Agency heads could be required to describe remedial actions being taken to address financial management problems identified by independent auditors. Through this process, the Congress can therefore be in an informed position to assess progress in achieving legislative financial management improvement reforms, addressing the Year 2000 computing crisis, and meeting the President’s financial statement audit goals. This would allow thorough consideration of the severity of an agency’s financial management problems, the demonstrated commitment to improvement efforts, and the independent perspectives of the auditors on an agency’s progress in responding to financial statement audit recommendations. Using the results of this assessment, the Congress can clearly determine accountability and tailor needed additional actions. 
Based on the circumstances of individual agencies, the Congress could, for example, consider whether to (1) require, in areas of special concern, attainment of specified improvements within established milestones before certain funds supporting administrative operations or systems would be available for obligation; (2) expand, continue, or limit transfer or reprogramming authority depending on the quality of an agency’s financial management; (3) target, or set aside, needed funding for financial management improvement efforts deemed necessary to achieve progress and require periodic status reports on the return on this investment; or (4) scrutinize funding requests, and perhaps consider limiting funds, in areas where agencies cannot provide satisfactory answers to questions raised about the quality of the data underpinning the request or their ability to properly account for the expenditures. These mechanisms—sustained congressional attention as part of the normal oversight process and agency head accountability—are essential to continue to effectively implement the financial management reform legislative foundation the Congress has established. They are key elements of ensuring that agencies make the investment of time, talent, and resources necessary to achieve needed financial management improvements. With a concerted effort, the federal government, as a whole, can continue to make progress toward ensuring full accountability and generating reliable information on a regular basis. Annual financial statement audits are essential to ensuring the effectiveness of the improvements now underway and, ultimately, to producing the reliable and complete information needed by decisionmakers and the public to evaluate the government’s financial performance. 
They are also central to assuring taxpayers that their money is being used as intended and helping the government implement broad management reforms called for by the Government Performance and Results Act. Mr. Chairman, this concludes my statement. I would be happy to respond now to any questions that you or other members of the Subcommittee may have.

Statements of Federal Financial Accounting Concepts (SFFAC)
Objectives of Federal Financial Reporting (SFFAC 1)
Entity and Display (SFFAC 2)

Statements of Federal Financial Accounting Standards (SFFAS)
Accounting for Selected Assets and Liabilities (SFFAS 1)
Accounting for Direct Loans and Loan Guarantees (SFFAS 2)
Accounting for Inventory and Related Property (SFFAS 3)
Managerial Cost Accounting Concepts and Standards (SFFAS 4)
Accounting for Liabilities of the Federal Government (SFFAS 5)
Accounting for Property, Plant, and Equipment (SFFAS 6)
Accounting for Revenue and Other Financing Sources (SFFAS 7)
Supplementary Stewardship Reporting (SFFAS 8)
Deferral of the Effective Date of Managerial Cost Accounting Standards for the Federal Government in SFFAS 4 (SFFAS 9)
Accounting for Indian Trust Funds (Interpretation 1)
Accounting for Treasury Judgment Fund Transactions (Interpretation 2)
Measurement Date for Pension and Retirement Health Care Liabilities (Interpretation 3)
Accounting for Pension Payments in Excess of Pension Expense (Interpretation 4)
Pursuant to a legislative requirement, GAO discussed ways Congress can help to ensure that agencies effectively implement federal financial management reform legislation. GAO noted that: (1) an essential foundation to help achieve the goals of implementing financial management reforms is the requirement that the 24 Chief Financial Officers Act agencies annually prepare financial statements and subject them to an independent audit, beginning with those for fiscal year (FY) 1996; (2) additionally, audited consolidated financial statements for the U.S. government are now required annually, starting with those for FY 1997; (3) to further promote needed reforms, the Federal Financial Management Improvement Act calls for agencies to meet various financial management standards and requirements and, if they do not, to prepare remediation plans; (4) these reforms begin to subject the federal government to the same fiscal discipline imposed for years on the private sector and state and local governments; (5) this discipline is needed to correct long-standing serious weaknesses in federal financial management systems, controls, and reporting practices; (6) considerable effort is under way across government to make needed improvements and progress is being made, but a great deal of perseverance will be required to fully attain the legislative goals set by federal financial management statutes; (7) the federal government can continue to make progress in implementing financial management reforms, but the pace and extent of improvement will depend upon the dedication of agency heads and their senior management teams, especially chief financial officers, and the ability to deal with a range of financial management systems issues, as well as continuing emphasis by Congress on financial management reform; (8) broad oversight by Congress will be very important to hold agency heads accountable for needed financial management improvements; and (9) Congress would make a significant 
contribution to ensuring satisfactory results in this area if the results of financial audits and needed improvements became a routine part of its normal annual appropriation, authorization, and oversight deliberations.
Since the 1960s, geostationary and polar-orbiting environmental satellites have been used by the United States to provide meteorological data for weather observation, research, and forecasting. NOAA’s National Environmental Satellite Data and Information Service (NESDIS) is responsible for managing the civilian geostationary and polar-orbiting satellite systems as two separate programs, called GOES and the Polar-orbiting Operational Environmental Satellites, respectively. Unlike polar-orbiting satellites, which constantly circle the earth in a relatively low polar orbit, geostationary satellites can maintain a constant view of the earth from a high orbit of about 22,300 miles in space. NOAA operates GOES as a two-satellite system that is primarily focused on the United States. These satellites are uniquely positioned to provide timely environmental data to meteorologists and their audiences on the earth’s atmosphere, its surface, cloud cover, and the space environment. They also observe the development of hazardous weather, such as hurricanes and severe thunderstorms, and track their movement and intensity to reduce or avoid major losses of property and life. Furthermore, the satellites’ ability to provide broad, continuously updated coverage of atmospheric conditions over land and oceans is important to NOAA’s weather forecasting operations. To provide continuous satellite coverage, NOAA acquires several geostationary satellites at a time as part of a series and launches new satellites every few years (see table 1). Three satellites—GOES-11, GOES-12, and GOES-13—are currently in orbit. Both GOES-11 and GOES-12 are operational satellites, while GOES-13 is in an on-orbit storage mode. It is a backup for the other two satellites should they experience any degradation in service. The others in the series, GOES-O and GOES-P, are planned for launch over the next few years. 
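The roughly 22,300-mile altitude cited for geostationary orbit follows directly from Kepler's third law: a satellite whose orbital period matches one sidereal day stays fixed over a point on the equator. A minimal sketch of that arithmetic (the constants below are standard physical values, not figures taken from the report):

```python
import math

# Sketch: derive the roughly 22,300-mile geostationary altitude from
# Kepler's third law. The constants are standard physical values, not
# figures taken from the report.
MU = 3.986004418e14         # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1      # seconds for one Earth rotation
EARTH_RADIUS_KM = 6378.1    # equatorial radius, km

# Orbital radius for a period of one sidereal day:
# a = (mu * T^2 / (4 * pi^2))^(1/3)
a_m = (MU * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = a_m / 1000 - EARTH_RADIUS_KM
altitude_miles = altitude_km * 0.621371

print(f"Geostationary altitude: {altitude_km:,.0f} km "
      f"(about {altitude_miles:,.0f} miles)")
```

The sidereal day, about 4 minutes shorter than 24 hours, is used because the orbit must match the earth's rotation relative to the stars; the result, roughly 22,236 miles, is the figure the report rounds to about 22,300.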
NOAA is also planning the next generation of satellites, known as the GOES-R series, which are planned for launch beginning in 2014. NOAA plans for the GOES-R program to improve on the technology of prior series, in terms of both system and instrument improvements, to fulfill more demanding user requirements and to provide more rapid information updates. Table 2 highlights key system-related improvements GOES-R is expected to make to the geostationary satellite program. In addition to the system improvements, the instruments on the GOES-R series are expected to significantly increase the clarity and precision of the observed environmental data. NOAA originally planned to acquire six different types of instruments. Two of these instruments—the Advanced Baseline Imager and the Hyperspectral Environmental Suite—were considered to be the most critical because they would provide data for key weather products. Table 3 summarizes the originally planned instruments and their expected capabilities. More recently, however, NOAA reduced the scope of the GOES-R program because of expectations of higher costs. In May 2006, the program office projected that total costs, which were originally estimated to be $6.2 billion, could reach $11.4 billion. We reported that this led NOAA to reduce the scope and technical complexity of the baseline program. Specifically, in September 2006, NOAA reduced the minimum number of satellites from four to two, cancelled plans for developing the Hyperspectral Environmental Suite, and estimated the revised program would cost $7 billion. Table 4 provides a summary of the timeline and scope of these key changes. NOAA is solely responsible for GOES-R program funding and overall mission success. However, since it relies on NASA’s acquisition experience and technical expertise to help ensure the success of its programs, NOAA implemented an integrated program management structure with NASA for the GOES-R program. 
Within the program office, there are two project offices that manage key components of the GOES-R system. These are called the flight and operations project offices. The flight project office oversees the spacecraft, instruments, and launch services. The operations project office oversees the ground elements and on-orbit operations of the satellites. The project manager position in the flight project office and the deputy project manager position in the operations project office are designated to be filled by NASA personnel. Additionally, NOAA has located the program office at NASA’s Goddard Space Flight Center. NOAA’s acquisition strategy was to award contracts for the preliminary design of the GOES-R system to several vendors who would subsequently compete for the contract to be the single prime contractor responsible for overall system development and production. As such, in October 2005, NOAA awarded contracts for the preliminary design of the overall GOES-R system to three vendors. In addition, to reduce the risks associated with developing technically advanced instruments, NASA awarded contracts for the preliminary designs for five of the originally planned instruments. NASA expected to subsequently award development contracts for these instruments and to eventually turn them over to the prime contractor responsible for the overall GOES-R program. NOAA has completed preliminary design studies of its GOES-R procurement. In addition, the agency recently decided to separate the space and ground elements of the program into two separate contracts to be managed by NASA and NOAA, respectively. However, this change has delayed a key decision to proceed with the acquisition, which was planned for September 2007. Further, independent estimates are higher than the program’s current $7 billion cost estimate and convey a low level of confidence in the program’s schedule for launching the first satellite by 2014. 
As NOAA works to reconcile the independent estimate with its own program office estimate, costs are likely to grow and schedules are likely to be delayed. NOAA and NASA have made progress on GOES-R. The program office has completed preliminary design studies of the overall GOES-R system and has initiated development work on most of the planned instruments. Specifically, the preliminary design contracts that NOAA issued to three vendors for the overall GOES-R system have ended, and the designs have been completed. In addition, after completing preliminary designs on five of the originally planned instruments, NASA awarded development contracts for three of them. Further, the most critical of these instruments—the Advanced Baseline Imager—has completed a major development milestone. In February 2007, it passed a critical design review gate, and NASA approved the contractor to begin production of a prototype model. NOAA recently made a number of key changes in how it plans to acquire the GOES-R system. Originally, NOAA planned to award and manage a single prime contract for the acquisition and operation of the integrated system. However, an independent review team assessed the program and found that this approach was risky. It recommended that NOAA split the acquisition effort into two separate contracts for the space and ground segments and have NASA manage the space segment. The independent review team concluded that there was less risk in continuing with this approach than there would be if NOAA took on a new and expanded role. In March 2007, Commerce approved NOAA’s decision to implement these recommendations. The agency revised its acquisition strategy to include two separate contracts—one for the space segment and one for the ground segment. The two contracts are expected to be awarded in May 2008 and August 2008, respectively. The space segment is to be managed by a NASA-led flight project office. 
As such, NASA is to be responsible for awarding and managing the space segment contract, delivering the flight-ready instruments to the space segment contractor for integration onto the satellites, and overseeing the systems engineering and integration. NOAA is to be responsible for the ground segment contract, which is to be managed by the NOAA-led operations project office. The revised acquisition strategy has delayed NOAA’s plans to complete, in September 2007, a key decision milestone on whether to proceed with GOES-R development and production. Once this decision is made, the final requests for proposals on the system segments are to be released. The agency could not provide a time frame for when this key decision milestone would take place. NOAA’s current $7 billion life cycle cost estimate for the GOES-R program is likely to grow, and its plan to launch the first satellite in December 2014 is likely to slip. Consistent with best practices in cost estimating, in May 2007, NOAA had two different cost estimates completed for the current GOES-R program—one by its program office and one by an independent cost estimating firm. The program office estimated with 80 percent confidence that the program would cost $6.9 billion. The independent estimating firm estimated with 80 percent confidence that the program would cost $9.3 billion. A comparison of the two cost models shows that the independent estimator has about a 20 percent level of confidence that the program can be completed for $6.9 billion. Further, the independent estimator concluded that the program office estimate significantly understated the risk of cost overruns. Other major differences between the two estimates lie in government costs and in the space and ground segments. In commenting on a draft of the accompanying report, NOAA officials noted that one of the differences between the estimates is the inflation rate. 
The independent estimator assumed a higher inflation rate than the rate that NOAA and NASA typically use. NOAA officials noted that if the independent estimate was adjusted to NOAA’s inflation rate, the program’s cost estimate—with 80 percent confidence—would be $8.7 billion. However, we believe that the value of an independent estimate is that it does not necessarily use the same assumptions as the program office. By offering alternative assumptions, the independent estimate provides valuable information for government officials to consider when revising program cost estimates. Program officials are reconciling the two different cost estimates and plan to establish a new program cost estimate to be released in conjunction with the President’s fiscal year 2009 budget in February 2008. Program officials were unable to provide us information on the reconciled estimate until it is released. Nonetheless, the revised cost estimate will likely be $1 billion more than the current $7 billion. Regarding schedule, NOAA’s current plan to launch the first GOES-R series satellite in December 2014 could be delayed. This schedule was driven by a requirement that the satellites be available to back up the last remaining GOES satellites (GOES-O and GOES-P) should anything go wrong during the planned launches of these satellites (see table 5). However, as part of its cost estimate, the independent estimator performed a schedule risk analysis. The independent estimator determined that there was less than a 50 percent chance that the first satellite would be ready for launch by December 2014 and that a later date would be more realistic. The estimator determined that it had 50 percent confidence that the first satellite would launch by October 2015 and 80 percent confidence that the satellite would launch by March 2017. A delay of this magnitude could affect the continuity of GOES data should the agency experience problems with the predecessor satellites. 
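To see how the same dollar figure can carry very different confidence levels under two cost models, the figures above can be placed on an assumed probability distribution. In the sketch below, the $6.9 billion and $9.3 billion 80-percent-confidence figures come from the estimates discussed above, but the normal-distribution shape and the spread assumed for the independent model are illustrative assumptions only, not the estimators' actual methods:

```python
from statistics import NormalDist

# Illustrative sketch: the $6.9B and $9.3B 80-percent-confidence
# figures come from the report; the normal distribution and the
# independent model's spread are assumptions for illustration.
Z80 = NormalDist().inv_cdf(0.80)   # ~0.84 standard deviations

# Assume the independent model's 20th and 80th percentiles bracket
# $6.9B and $9.3B, consistent with the ~20 percent confidence the
# independent estimator attached to the $6.9B figure.
indep_mean = (6.9 + 9.3) / 2       # $8.1B midpoint
indep_sd = (9.3 - indep_mean) / Z80
indep = NormalDist(indep_mean, indep_sd)

# Confidence the independent model would assign to completing the
# program for the program office's $6.9B figure:
conf_at_6_9 = indep.cdf(6.9)
print(f"Independent model's confidence at $6.9B: {conf_at_6_9:.0%}")
```

Reconciling the two estimates, as the program office is doing, amounts to agreeing on which distribution of possible costs better reflects the program's risks.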
To address cost, schedule, and technical risks, the GOES-R program established a risk management program and has taken steps to identify and mitigate selected risks. However, more remains to be done to fully address a comprehensive set of risks. Specifically, the program has multiple risk watchlists and they are not always consistent. Further, key risks are missing from the risk lists, including risks associated with unfilled executive positions, limitations in NOAA’s insight into NASA’s deliverables, and insufficient funding for unexpected costs (called management reserve) on a critical sensor. As a result, the GOES-R program is at increased risk that problems will not be identified or mitigated in a timely manner and that they could lead to program cost overruns and schedule delays. The GOES-R program office established a risk management program and is tracking and mitigating selected risks. Risk management is a leading management practice that is widely recognized as a key component of a sound system development approach. An effective risk management approach typically includes identifying, prioritizing, and mitigating risks, and escalating key risks to the attention of senior management. In accordance with leading management practices, the GOES-R program identifies risks, assigns a severity rating to risks, tracks these risks in a database, plans response strategies for each risk in the database, and reviews and evaluates these risks during monthly program risk management board meetings. Programwide and project-specific risks are managed by different offices. The program office identifies and tracks programwide risks—those that affect the overall GOES-R program. NASA’s flight project office and NOAA’s operations project office manage risks affecting their respective aspects of the program. Further, the program office briefs senior executives on top program and project risks on a monthly basis. 
As of July 2007, the program office identified three program risks affecting the overall GOES-R program. These risks include the development of the integrated master schedule, the ability to secure authorization to use a key frequency band to meet the space-to-ground communication data link requirements for the GOES-R system, and the final approval of the GOES-R mission requirements from the NOAA Deputy Undersecretary. NOAA is working to mitigate and close program risks that it is tracking. For example, the program office recently closed the risk associated with GOES-R requirements because it had sufficiently defined and obtained approval of these requirements. As another example, the program office considers the lack of an integrated master schedule to be its highest priority risk. Program officials reported that completion of the integrated master schedule is driven by the completion of the intermediate schedules for the ground segment and the space-to-ground interdependencies. Key program staff members, including a resident scheduler, meet on a weekly basis to resolve outstanding design issues and hone these schedules. Program officials reported that the intermediate schedules are near completion and that they plan to have the integrated master schedule completed in Fall 2007. They expect to remove this issue from the risk watchlist at that time. As of July 2007, the NASA flight project office identified four risks affecting instrument development, all of which are classified as medium risk. The top three risks pertain to the advanced imaging instrument, ABI—including issues with the timeliness and quality of subcontractor delivery of a critical part, stray light negatively affecting the performance of the optical system, and meeting specified performance requirements on image navigation and registration. The fourth priority risk pertains to the improvement of subcontractor quality assurance on a key sensor for the Space Environmental In-Situ Suite. 
NASA is working to mitigate the flight segment risks that it is tracking. For example, the ABI contractor, among other things, plans to complete a key simulation review before the end of the year (called the structural thermal optical performance analysis) to evaluate whether the instrument can meet its expected performance parameters for image navigation and registration. NASA also recently conducted a vendor facility assessment of the Space Environmental In-Situ Suite subcontractor to determine whether adequate quality assurance improvements had been made to be compliant with contract requirements. These actions are expected to help mitigate the risk. As of July 2007, the NOAA operations project office identified five risks impacting the management and development of the ground system and operations, including one that is identified as a medium risk. These risks include, among other things, inadequate definition of flight and operations project interdependencies, algorithm development responsibilities, and the adequate definition of coordination requirements between the space and ground segments to ensure that the two requests for proposals are consistent. NOAA is working to mitigate the ground system and operations risks that it is tracking. For example, for the highest priority risk regarding schedule interdependencies, key staff from both the flight and operations projects meet weekly in order to identify and synchronize project schedules. The project office expects to close this risk in Fall 2007. While GOES-R has implemented a risk management process, its multiple risk watchlists are not consistent in areas where there are interdependencies between the lists, which makes it difficult to effectively prioritize and manage risks at the appropriate organizational levels. Sound risk management practices call for having a consistent prioritization approach and for significant problems to be elevated from the component level to the program level. 
This is because an issue affecting a critical component could have severe programmatic implications and should be identified, tracked, and overseen at the program level. In addition, program executives should be briefed regularly on the status of key risks. However, on the GOES-R program, the risks identified on the multiple risk lists are inconsistent in areas where there are interdependencies between the lists. These interdependencies include situations where a risk is raised by one project office and affects the other project office, but is not identified by the other project office or elevated to the program level risk list. They also include situations where a risk identified by a project office has programwide implications, but is not elevated to the program level risk list. For example, the operations project office identified schedule interdependencies between the flight and operations project offices as a medium criticality risk, but neither the flight project office nor the program identified this risk even though it is relevant to both. As another example, the operations project office identified the ground procurement schedule as a major issue in its briefing to senior management, but this risk was not identified on its own or on the programwide risk lists. In addition, while the three offices brief senior management about their key risks on a monthly basis, selected risks may not be accurately depicted in these briefings because of the inconsistencies among the risk watchlists. For example, both the flight and operations project offices identified technical development issues as minor to moderate risk areas, but the program office did not identify this item as a risk and, when it briefed senior management, it noted that technical development was in good shape. Figure 1 depicts examples of inconsistencies among risk lists and briefings to senior management. 
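The escalation gap described above can be pictured as a simple cross-check between watchlists: cross-project risks raised at the project level should appear on the program-level list. The sketch below is hypothetical; the risk names, fields, and list contents are invented for illustration and do not reflect the program's actual watchlists:

```python
# Hypothetical sketch: merge project-level risk watchlists and flag
# cross-project risks that were never escalated to the program-level
# list. All risk names and fields here are invented for illustration.
flight_risks = {
    "ABI part delivery": {"severity": "medium", "cross_project": False},
    "Schedule interdependencies": {"severity": "medium", "cross_project": True},
}
operations_risks = {
    "Ground procurement schedule": {"severity": "medium", "cross_project": True},
}
program_risks = {
    "Integrated master schedule": {"severity": "high"},
}

def find_unescalated(project_lists, program_list):
    """Return (office, risk) pairs for cross-project risks missing
    from the program-level watchlist."""
    missing = []
    for office, risks in project_lists.items():
        for name, attrs in risks.items():
            if attrs["cross_project"] and name not in program_list:
                missing.append((office, name))
    return missing

gaps = find_unescalated(
    {"flight": flight_risks, "operations": operations_risks}, program_risks)
for office, name in gaps:
    print(f"Not escalated to the program-level list: {name!r} ({office})")
```

A single program-level list reconciled with the project lists in this way would surface such gaps automatically rather than leaving them to be caught in briefings.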
The lack of consistency in managing risks in areas where there are interdependencies makes it difficult to ensure that all identified risks are appropriately prioritized and managed. This situation hampers the program office’s ability to identify and mitigate risks early on and to anticipate and manage the impact of risks on other areas of the program. To be effective, a risk management program should have a comprehensive list of risks. However, several key risks that impact the GOES-R procurement and merit agency attention are not identified in the program’s risk lists. These risks include (1) key leadership positions that need to be filled, (2) NOAA’s limited insight into NASA’s deliverables, and (3) insufficient management reserves (held by the program and a key instrument contractor). At the conclusion of our review for the accompanying report, program officials stated that they are aware of these issues and are working to monitor them or address them, as warranted. Nevertheless, until these and other programwide risks are identified and addressed as part of a comprehensive risk management program, there is increased likelihood that issues will be overlooked that could affect the acquisition of the GOES-R system. The two senior GOES-R program positions—the system program director and deputy system program director—are currently filled by NASA and NOAA personnel in an acting capacity until they can be permanently filled by NOAA. In addition, the acting system program director is not able to work full time in this role because she is also on a special assignment as the NESDIS Deputy Assistant Administrator for Systems. NOAA reported that it plans to fill the deputy system program director role in the near future, but noted that it could take more than 6 months to fill the system program director role. 
With the development phase of the GOES-R acquisition approaching and the acting system program director facing competing priorities, it is especially important that these key leadership positions be filled quickly. At the conclusion of our review, agency officials stated that they are aware of this issue and are working to fill the positions, but they did not believe the issue warranted inclusion on the program level risk watch list. However, without the senior level attention inherent in a sound risk management program, it is not clear that NOAA is sufficiently focused on the importance of establishing knowledgeable and committed program executives, or in moving quickly to fill these critical positions. NOAA’s March 2007 decision to adopt an acquisition management approach similar to prior GOES procurements could make the agency vulnerable to repeating some of the problems experienced in the past. In particular, our work on the GOES I-M series found that NOAA did not have the ability to make quick decisions on problems because portions of the procurement were managed by NASA. In fact, NOAA officials originally intended to depart from this approach as a lesson they learned from the GOES I-M acquisition, because it limited the agency’s insight and management involvement in the procurement of major elements of the system. The established NOAA/NASA interagency agreements require NASA to submit monthly contractor cost performance reports to NOAA and to alert NOAA should cost and schedule performance drop below certain thresholds. NASA is currently submitting the required reports and has alerted NOAA on major cost and schedule changes. However, these interagency agreements do not contain provisions that enable NOAA to ensure that the data and reports are reliable and that they accurately depict contractor performance. To do so would entail NOAA having the ability and means to question and validate data, such as by having direct access to the contractor. 
NASA and NOAA officials reported that the two agencies are working together with an unparalleled level of transparency and noted that NOAA program staff have access to contractor data and can bring any questions with the data to the relevant NASA staff. However, they acknowledged that this process is not documented and were not able to demonstrate that NOAA staff had questioned contract data and that NASA had facilitated obtaining answers to the questions. By not identifying and mitigating this risk on its program risk list, NOAA increases the likelihood that the GOES-R program will repeat the management and contractor shortfalls that plagued past GOES procurements. A recent modification to the critical ABI instrument contract increased its cost, thereby reducing the amount of management reserve funds held by the program office for unexpected expenses. In September 2006, we reported that ABI was experiencing technical challenges that were resulting in cost and schedule overruns. Since then, the contractor continued to miss cost and schedule targets through February 2007. At that time, NASA modified the contract to implement a revised baseline cost and schedule. The added cost of this modification was funded using management reserve funds held by the GOES-R program office. As a result, the amount of reserve held by the program office dropped below 25 percent—a level that NOAA reported it intended to establish as a lesson learned from other satellite acquisitions. As of July 2007, the program’s reserve level was at about 15 percent. Program officials stated that their revised goal is to maintain between 10 and 15 percent in reserve at the program level. 
While maintaining a 10 to 15 percent management reserve is on par with other major satellite acquisitions, the depletion of management reserves this early in the GOES-R acquisition raises concerns that there will be insufficient reserves during the challenging development, integration, and testing phases to come. In addition, the contractor for the ABI instrument has a very low level of reserve funding for unexpected costs, which means that any unexpected problems will likely lead to cost growth on the overall GOES-R program. As of May 2007, the contractor was holding less than 1 percent of funding in reserve to cover unexpected costs associated with the 40 percent of work left to be completed. As such, there is a risk that the new baseline could fail because of inadequate reserves to finish the program. Such a failure would likely further deplete the reserves held by the GOES-R flight project and the program office, which would then have to cover the costs of a second revised baseline plan. Our prior work on system acquisitions has shown inadequate reserves to be an indicator of poor management performance that could lead to cost overruns. Considering that GOES-R has not yet entered the development and production phases, it will be critical for NOAA's senior executive management to aggressively manage this risk. By not identifying, mitigating, and tracking this risk in a programwide risk list, the GOES-R program runs an increased risk that unanticipated issues on the ABI instrument will lead to programwide cost overruns and schedule delays. 
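The reserve arithmetic discussed above is a simple ratio of reserve funds to the cost of work remaining. The sketch below illustrates it with hypothetical dollar figures; the actual GOES-R contract values are not disclosed in this testimony, so the numbers are assumptions chosen only to mirror the percentages cited.

```python
def reserve_percentage(reserve_funds: float, remaining_work_cost: float) -> float:
    """Management reserve expressed as a percentage of the cost of work remaining."""
    return 100.0 * reserve_funds / remaining_work_cost

# Hypothetical program-level figures: $150M of reserve against $1,000M of remaining work.
program_level = reserve_percentage(150.0, 1000.0)    # 15.0 -> within the 10-15 percent goal

# Hypothetical contractor-level figures: $3M of reserve against $400M of remaining work,
# mirroring the "less than 1 percent" situation described above.
contractor_level = reserve_percentage(3.0, 400.0)    # 0.75 -> below 1 percent

print(program_level, contractor_level)
```

The point of the comparison is that the same dollar shortfall looms much larger against the large volume of development, integration, and testing work still ahead.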
To improve NOAA’s ability to effectively manage the procurement of the GOES-R system, we recommended in our accompanying report that the Secretary of Commerce direct the Undersecretary of Commerce for Oceans and Atmosphere to take the following two actions: Ensure that the GOES-R program office manages, mitigates, and reports on risks using a program-level risk list that is reconciled with and includes risks from its flight and operations project offices that could impact the overall program. Include the following risks on the programwide risk list, develop plans to mitigate them, and report to senior executives on progress in mitigating them: unfilled or temporary GOES-R program leadership positions, insufficient program insight on NASA contract performance, and insufficient management reserve on the critical Advanced Baseline Imager instrument and at the GOES-R program level. In written comments, Commerce agreed with our recommendations to use a program level risk list and to add selected risks to its list. The department reported that NOAA has established a consolidated programwide risk list that is to be used to evaluate risks during monthly internal and external reviews. Further, NOAA acknowledges the risks associated with having unfilled leadership positions and insufficient management reserves and is working to mitigate these risks. However, the department disagreed with our recommendation to manage and mitigate the risk that NOAA has insufficient insight into NASA’s contracts. The department cited an unparalleled level of transparency between the two agencies and listed multiple regular meetings that the two agencies hold to ensure close coordination. While an improved working relationship between the two agencies is critical, NOAA has not provided any evidence that it has been able to effectively question and validate data on NASA’s contractor performance. 
Given the past problems that NOAA has experienced in obtaining insight into NASA’s contracts and the importance of this interagency relationship to the success of the GOES-R program, we believe that this issue should be managed and monitored as a risk. NOAA also requested that we acknowledge its effort to reconcile its program estimate with the independent estimate and reflect a 20 percent possibility that the program could cost $1 billion more than the current estimate of $7 billion, rather than $2 billion more. We acknowledge this in our report; however, the reconciliation effort is not complete and NOAA did not provide us with a reconciled estimate. In summary, although NOAA has made progress in the GOES-R procurement, changes in the GOES-R acquisition strategy could lead to cost overruns and schedule delays if not managed effectively. Over the last year, NOAA has completed preliminary design studies of its GOES-R system and decided to separate the space and ground elements of the program into two contracts and have NASA oversee the system integration effort. Current program plans call for a two-satellite program—estimated to cost about $7 billion—with launch of the first satellite in December 2014. However, independent studies show that the program’s cost could increase by about $2 billion and that the first launch could be delayed by at least 2 years. NOAA has taken steps to identify and address key risks but more could be done to effectively manage risks from a programwide perspective. In particular, the program has multiple risk watchlists that are not consistent in areas where there are interdependencies and key risks have not been elevated for programwide attention. Also, several risks that warrant NOAA’s attention have not been placed on any watchlist. 
Specifically, the top two leadership positions are only temporarily filled; NOAA does not have the ability and means to obtain insight into NASA contracts in order to validate contractor performance data; and management reserves at a critical instrument contract and at the program level are insufficient to handle unexpected problems, which are therefore likely to affect overall program costs when they arise. Until NOAA manages and addresses a comprehensive set of program risks, the agency's ability to effectively manage the GOES-R acquisition will be significantly weakened, which could lead to substantial program overruns and delays. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or members of the subcommittee may have at this time. If you have any questions on matters discussed in this testimony, please contact me at (202) 512-9286 or by e-mail at [email protected]. Other key contributors to this testimony include Carol Cha, Neil Doherty, Nancy Glover, Colleen Phillips (Assistant Director), and Teresa Smith. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Oceanic and Atmospheric Administration (NOAA), with the aid of the National Aeronautics and Space Administration (NASA), plans to procure the next generation of geostationary operational environmental satellites, called the Geostationary Operational Environmental Satellites-R series (GOES-R). This new series is considered critical to the United States' ability to maintain the continuity of data required for weather forecasting through the year 2028. GAO was asked to summarize its report on the GOES-R series. This report (1) assesses the status and revised plans for the GOES-R procurement and (2) evaluates whether NOAA is adequately mitigating key technical and programmatic risks facing the program. To conduct this review, GAO analyzed contractor and program data and interviewed officials from NOAA and NASA. NOAA has made progress in planning its GOES-R procurement--which is estimated to cost $7 billion and scheduled to have the first satellite ready for launch in 2014--but cost and schedules are likely to grow. Specifically, the agency completed preliminary design studies of GOES-R and recently decided to separate the space and ground elements of the program into two separate development contracts. However, this change in strategy has delayed a planned September 2007 decision to proceed with the acquisition. Further, independent estimates are higher than the program's current cost estimate and convey a low level of confidence in the program's schedule. Independent studies show that the program could cost about $2 billion more than currently estimated, and the first satellite launch could be delayed by 2 years. As NOAA works to reconcile the independent estimate with its own program office estimate, costs are likely to grow and schedules are likely to be delayed. To address cost, schedule, and technical risks, the GOES-R program has established a risk management program and has taken steps to mitigate selected risks. 
For example, as of July 2007, the program office identified the lack of an integrated master schedule as its highest priority risk and established plans to bring this risk to closure. However, more remains to be done to fully address risks. Specifically, the program has multiple risk watchlists that are not always consistent, and key risks are missing from the watchlists, including risks associated with unfilled executive positions, limitations in NOAA's insight into NASA's deliverables, and insufficient funds for unexpected costs--called management reserves. As a result, the GOES-R program is at risk that problems will not be identified or mitigated in a timely manner, which could lead to program cost overruns and schedule delays.
In any real estate transaction, the buyer and lender providing the mortgage need a guarantee that the buyer will have clear ownership of the property. Title insurance is designed to provide that guarantee by agreeing to compensate the lender (through a lender’s policy) or the buyer (through an owner’s policy) up to the amount of the loan or the purchase price, respectively. Lenders’ policies are in force for as long as the original loan is still outstanding, but end when the loan is paid off—for instance, through a refinancing transaction—while owners’ policies remain in effect as long as the purchaser of the policy owns the property. Title insurance is sold primarily through title agents. Before issuing a policy, a title agent checks the history of a title by examining public records such as deeds, mortgages, wills, divorce decrees, court judgments, and tax records. If the title search discovers a problem—such as a tax lien that has not been paid—the agent either arranges to resolve the problem, decides to provide coverage despite the problem, or excludes it from coverage. The title policy insures the policyholder against any claims that might have existed at the time of the purchase but were not identified in the public record. The title policy does not require that title problems be fixed but compensates policyholders if a covered problem arises. Except in very limited instances, title insurance does not insure against title defects that arise after the date of sale. Title searches are generally carried out locally by title agents because the public records to be searched are usually only available locally. In addition, the variety of sources that agents must check has fostered the development of privately owned, indexed databases called “title plants.” These plants contain copies of the documents obtained through searches of public records, indexed by property address, and must be regularly updated. 
Title plants may be owned by insurers, title agents, or a combination of entities. In some cases, the owner of a title plant sells access to other insurers and agents, charging them to use the service. Title insurance premiums are paid only once, at the time of sale or refinancing, to the title agent. Agents retain or are paid a portion of the premium amount as a fee for conducting the title search and related work, and for their commission. Agents have a fiduciary duty to account for premiums paid to them, and insurers generally have the right to audit the agents’ relevant financial records. The party responsible for paying for the title policies varies by state and can even vary by areas within states. In many areas, the seller pays for the owner’s policy and the buyer for the lender’s policy, but the buyer may also pay for both policies or split some (or all) of the costs with the seller. In most cases, the policies are issued simultaneously by the same insurer, so that the same title search can be used for both policies. In a recent nationwide survey, the average cost for simultaneously issuing lender’s and owner’s policies on a $180,000 loan, plus other associated title costs, was approximately $925—or approximately 34 percent of the average total loan origination and closing fees. In almost all states, title insurance is regulated by state insurance departments; in all states, insurers selling title insurance in that state are subject to the state’s regulations for their operations within that state. State regulators are responsible for enforcing these regulations, primarily through the licensing of agents, the approval of insurance rates and products, and the examination of insurers’ financial solvency and conduct. State regulators typically conduct financial solvency examinations every 3 to 5 years, while examinations reviewing insurers’ conduct are generally done in response to specific complaints by consumers or concerns on the part of the regulator. 
Insurance regulations can vary across states, creating differences in the way insurers are regulated. For example, most states require insurers to submit proposed premium rates to the state regulator, and then perform some level of review of those rates. In several states, however, the state regulator sets the premium rate that all insurers must charge, and in at least one state the regulator does not review rates at all. In addition, while most states license title insurance agents, several do not. At the federal level, HUD is responsible for enforcing RESPA, which regulates real estate settlement practices. Among other things, RESPA requires that borrowers receive certain information regarding closing costs, including title insurance fees. RESPA also generally prohibits giving or accepting anything of value for the referral of settlement services, such as the referral of business to a particular title agent. RESPA also allows state insurance commissioners to take enforcement actions against these prohibited activities. Some aspects of the title insurance market that set it apart from other lines of insurance merit further study, including:

- the importance of title search costs, rather than losses, in setting premium rates;
- the fact that title insurance agents play a more important role than agents for other lines of insurance;
- the fact that title insurance is generally marketed not to consumers but to professionals such as real estate agents or mortgage brokers;
- the proliferation of affiliated business arrangements between title agents and these professionals; and
- the involvement of, and coordination among, the regulators of the multiple types of entities involved in the marketing and sale of title insurance.

The extent to which title insurance premium rates reflect insurers' underlying costs is not always clear. 
Insurance rate regulation, among other things, aims to protect consumers by ensuring that premium rates accurately reflect insurers' expected and actual costs, and that they are not excessive. However, most state regulators do not appear to consider title search expenses to be part of the premium. As a result, these expenses are not included in regulatory reviews that seek to determine whether premium rates accurately reflect insurers' costs. To complicate matters, it also appears that few state regulators collect financial data from title agents, who generally conduct the title search and examination work, making such expenses difficult to examine. Further, unlike other lines of insurance, the largest costs for title insurers are expenses related to title searches and agent commissions, not losses on policy claims. In 2004, according to data compiled by ALTA, losses and loss adjustment expenses incurred by title insurers as a whole were approximately 5 percent of total premiums written, while the amount paid to or retained by agents (primarily for work related to title searches and examinations, and for agents' commissions) was approximately 71 percent of premiums written. In contrast, property casualty insurers' losses and loss adjustment expenses accounted for approximately 73 percent of total premiums in 2004. A related area worthy of further review is premium rate regulation for mortgage refinance transactions. In these cases, a title search most likely has been performed relatively recently, and the property is not changing hands. If the same title insurer were conducting another title search for the refinancing, that search would presumably need to cover a shorter period of time. Because title search and examination costs are the largest component of premium rates for title insurance, the premium rates for refinance transactions could reasonably be expected to be lower than for home purchases. 
While it appears that many insurers do provide discounted premiums on refinance transactions, the extent of such discounts and how widely they are used—that is, whether consumers know about them and know how to take advantage of them—is unclear. Finally, the extent to which premium rates increase as loan amounts or purchase prices increase could also usefully be examined. Costs for title search and examination work do not appear to rise as loan or purchase amounts increase, and the portion of premiums that covers potential losses is only about 5 percent of total premiums. If premium rates reflected the underlying costs, they could reasonably be expected to increase at a relatively slow rate as loan or purchase amounts increase. However, this does not always appear to be the case. For example, using premium rates posted on the Internet by two state regulators with whom we spoke, we found that when the purchase price or loan amount doubled from $150,000 to $300,000, the increase in total premium for an owner's policy for selected insurers in the same county ranged from approximately 27 to 57 percent. According to an industry expert and officials from an industry association, allowing such pricing reflects a policy decision by state regulators to have higher-income purchasers subsidize the title insurance costs of lower-income purchasers. Questions that could guide further study include:

- How do title insurers determine premium rates, and how have these rates changed in recent years?
- How does the current rate review structure in most states examine the costs that determine title insurance premium rates?
- What data are collected that could be used to assess the extent to which title insurance premium rates reflect the associated costs?
- To what extent do title insurers offer discounted premium rates on mortgage refinance transactions?

Title agents play a more significant role in the title insurance industry than most other types of insurance agents. 
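The premium-scaling arithmetic discussed above can be illustrated with a tiered, per-$1,000 rate schedule, the form in which title premium rates are commonly filed. The tiers and rates below are hypothetical assumptions, not an actual filed schedule; the point is only that declining marginal rates make the premium grow more slowly than the insured amount.

```python
# Hypothetical tiered rate schedule: (tier ceiling in dollars, rate per $1,000 of coverage).
# Real filed schedules vary by state and insurer; these numbers are illustrative only.
HYPOTHETICAL_TIERS = [
    (100_000, 5.00),   # first $100,000 at $5.00 per $1,000
    (500_000, 3.50),   # next $400,000 at $3.50 per $1,000
]

def premium(amount: float) -> float:
    """Premium for a policy of the given amount under the hypothetical tiered schedule."""
    total, floor = 0.0, 0.0
    for ceiling, rate in HYPOTHETICAL_TIERS:
        covered = min(amount, ceiling) - floor
        if covered <= 0:
            break
        total += covered / 1_000 * rate
        floor = ceiling
    return total

p_150k = premium(150_000)   # $675.00 under this schedule
p_300k = premium(300_000)   # $1,200.00 under this schedule
pct_increase = 100 * (p_300k - p_150k) / p_150k
# The insured amount doubles (a 100 percent increase), but the premium rises
# only about 78 percent under this hypothetical schedule.
```

Whether a given filed schedule rises by 27 percent, 57 percent, or some other amount when the amount doubles depends entirely on how steeply the per-$1,000 rates decline across tiers.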
For most lines of insurance, an agent's role is primarily a marketing role. Title insurance agents not only perform this task, but also carry out most underwriting tasks, including title search and examination work. In many cases, title agents retain the actual insurance policy and, after deducting expenses, remit the title insurer's portion of the premium. As we have seen, amounts paid to or retained by title agents for this work in 2004 were around 71 percent of total premiums written. Despite title agents' critical role, the amount of attention they receive from state regulators is not clear. For example, according to data compiled by ALTA, while most states require title agents to be licensed, three states plus the District of Columbia do not. In addition, according to the same source, 18 states and the District of Columbia do not require agents to pass a test to become licensed, and only 20 states require some form of continuing education as a prerequisite for title agents. At least one state does not regulate title agents. While NAIC has produced model legislation that states can use as a basis for their own regulation of title agents, according to NAIC, as of October 2005, only 3 states had passed the model act or similar legislation. The level of oversight of title agents by the state regulators that we spoke with for this report varied. For example, one state regulator told us that examiners conduct regular but informal visits to the title agents in their state but do not track such contacts. Another regulator told us that the agency's review of title agents' operations focused primarily on financial condition, not on compliance with state laws. This regulator also collected financial data from title agents, but had only recently begun systematically analyzing that data and questioned its quality. 
Another regulator told us that the agency had recently begun an intense examination of title agents' activities and had taken a number of related enforcement actions. The state insurance regulators with whom we spoke expected or required insurers to oversee the operations of the title agents writing policies for them. One regulator said that the state did not have specific regulations requiring insurers to monitor title agents' operations, but expected such monitoring as a matter of course. This regulator also expected insurers to resolve any problems the regulator might find with agents' operations. Another state regulator told us that, in light of activities identified in recent investigations, their office recently revised its regulations to require title insurers to monitor the activities of their agents and hold insurers responsible for their agents' actions. Questions that could guide further study include:

- To what extent do state insurance regulators review and collect information from title agents operating in their state?
- To what extent are title insurers required to oversee the agents who write insurance for them?
- To what extent have states adopted model title insurance and agent laws?

For several reasons, the competitiveness of the title insurance market merits further study. First, because the purchase of title insurance is an infrequent and unfamiliar transaction for most people, consumers often rely on the advice of a real estate or mortgage professional in choosing a title insurer. As a result, title insurers and agents normally market their product to such professionals rather than to consumers. Thus, while consumers are the ones paying for title insurance, they generally do not know how to "shop around" for the best deal, and may not even know that they can. Meanwhile, the potential exists for real estate or mortgage professionals to recommend—not the least expensive or most reputable title insurer or agent—but the one that is most closely aligned with the professional's best interests. 
While RESPA generally prohibits the payment of fees for such business referrals, as discussed later in this report, recent federal and state investigations have alleged such arrangements. Some industry officials pointed out that cost was not the only basis for selecting a title insurer because service and speed were also important. Second, concentration in the industry has raised further questions about its competitiveness. In 2004, according to data compiled by ALTA, the five largest title insurers and their subsidiary companies accounted for over 90 percent of the total premiums written. However, according to the annual reports of several of these companies, a large number of local agents are used to conduct their business—for example, one company noted in its annual report that more than 9,500 agents sold the company’s insurance nationwide. And while a recent analysis of competition in the California title insurance market concluded that the market was overly concentrated, some experts disagree with the study’s methodology and its conclusions. Finally, certain aspects of the financial performance of title insurers and agents have also caused some to question the competitiveness of the title insurance market. For example, as previously discussed, losses on title insurance claims accounted for only about 5 percent of total premiums written in 2004—a very low percentage compared with most other lines. In addition, according to data collected by ALTA, total operating revenue for the industry as a whole rose approximately 68 percent between 2001 and 2004, from approximately $9.8 to $16.4 billion. Such conditions could create the impression of excessive profits. The same study of competition in the California market analyzed the profitability of insurers and agents in that market and concluded that they were earning large profits at consumers’ expense. 
Questions that could guide further study include:

- To what extent do aspects of competition beneficial to the consumer appear to exist in the current title insurance market?
- What has been the short- and long-term financial performance of title insurers and agents, and what accounts for the dramatic increase in total operating revenue?

The use of affiliated business arrangements involving title agents and others such as lenders, real estate brokers, and builders appears to have grown over the past several years, and further study of their effect could be beneficial. Within the title insurance industry, the term "affiliated business arrangements" generally refers to some level of joint ownership among title insurers, title agents, real estate brokers, mortgage brokers, lenders, and builders. For example, a mortgage lender and a title agent might form a new jointly owned title agency, or a lender might buy a portion of a title agency. According to some industry groups, consumers can benefit from such arrangements, which may provide convenient, one-stop shopping and lower costs. But some consumer groups and state insurance regulators point out that such arrangements can also be abused and could present conflicts of interest. For example, a real estate broker that is part owner of a title agency might be seen as unable to provide objective advice on which title insurer a consumer should use. In addition, some see such arrangements as a way to hide referral fees by allowing title insurers or agents to mask such fees as a return on ownership interest. As detailed later in this report, a number of recent investigations have alleged improper use of affiliated business arrangements. State regulation of affiliated business arrangements appears to vary. For example, according to one industry association, a number of states limit the amount of business title insurers and agents can receive from an affiliate. 
In addition, among the state regulators with whom we spoke for this review, one did not normally examine such arrangements, but the others were beginning to conduct more extensive reviews. RESPA regulations require disclosure of affiliated arrangements whenever a settlement service provider refers a consumer to a business with which the provider has an ownership or other beneficial interest. In addition, while owners of affiliated businesses may be compensated for their ownership interest in, for example, a title agent, RESPA regulations prohibit compensation beyond that interest. As noted above, the extent of information collected regarding the activities of title agents appears to be limited. As a result, the extent of information collected on affiliated business arrangements involving title agents is likely similarly limited. The use of affiliated business arrangements, and the potential benefits and concerns regarding their use, make this an issue on which further study could be beneficial. Questions that could guide further study include:

- To what extent is information available on the growth and use of affiliated business arrangements in the title insurance industry?
- What are the potential benefits and concerns associated with the use of affiliated business arrangements?
- To what extent do state insurance and other regulators review affiliated business arrangements?
- How are the RESPA disclosure requirements for affiliated business arrangements, and the related prohibitions on referral fees, enforced?

Several types of entities (besides the insurers and their agents) are involved in the sale of title insurance, and the degree of involvement and the extent of coordination among the regulators of these entities appear to vary, making this an area meriting further review. Multiple types of entities are involved in the marketing of title insurance, including real estate brokers and agents, mortgage brokers, lenders, and builders who refer clients to the insurers and agents. 
These entities are generally overseen at the state level by different regulators, and the extent of regulation related to title insurance sales practices tends to vary across states. One state insurance regulator with whom we spoke told us that they informally coordinate with the state real estate commission as well as HUD. Another regulator said that, while they have tried to coordinate their efforts with the state regulators of real estate and mortgage brokers, those regulators have generally not been interested in such coordination. The apparent growth of affiliated business arrangements, which give some of these entities an ownership interest in others, makes examining the strengths of—and need for—such coordination even more important. However, some coordinated regulatory efforts have taken place. At the federal level, HUD, which is responsible for implementing RESPA, has conducted some investigations with state insurance regulators. As we will see, some of these investigations of the marketing of title insurance by title insurers and agents, real estate brokers, and builders have turned up allegedly illegal activities in the market. Oversight of this, and other areas, is critical to ensure that title insurance markets are functioning fairly. Questions that could guide further study include:

- To what extent do regulatory differences among those involved in the sale of title insurance create concerns, and to what extent is there regulatory coordination?
- To what extent do current regulations address the potential concerns about affiliated business arrangements?
- What could state and federal regulators do to improve coordination?

Federal and state investigators have identified two primary types of potentially illegal activities associated with the sale of title insurance. 
The first involves providing home-builders, real estate agents, real estate brokers, and lenders with potentially unlawful referral fees through captive reinsurer agreements, allegedly inappropriate or fraudulent business arrangements, and free or discounted business services and other items of value. The second involves potential fraud committed by title agents who allegedly misappropriate or mishandle customers’ premiums. Industry representatives told us that title insurers have begun to address these problems but that clearer regulations and more enforcement are needed. In several states, state insurance regulators identified captive reinsurance arrangements that they alleged were being used by title insurers and agents to inappropriately compensate others—such as builders or lenders—for referrals. In such arrangements, a home-builder, real estate broker, lender, title insurance company, or some combination of these entities forms a reinsurance company that works in conjunction with a title insurer. The title insurer agrees to “reinsure” all or part of its business with the reinsurer by paying the company a portion of the premium (and ostensibly transferring a portion of the risk) for each title transaction. Investigators alleged that these reinsurance companies did not actually provide reinsurance services in return for this compensation because the amount the reinsurers received exceeded the risk they assumed. The investigators considered these arrangements a way to pay for referrals, a practice that is unlawful under some state anti-kickback and anti-rebating laws as well as under RESPA. In one investigation, a reinsurer controlled by three title insurance underwriters entered into agreements with lenders, real estate brokerages, and builders to pay part of its premiums to these entities. 
State investigators alleged that the reinsurer was transacting reinsurance business without a required certificate from the state and that the title insurers were using unfair practices. As part of the settlement, state investigators demanded that the reinsurer cease operations in the state and that the underwriters end their captive reinsurance arrangements with unauthorized reinsurers, reimburse affected consumers, and pay penalties to the state. In New York, regulators and the attorney general confirmed that they are currently investigating alleged illegal kickbacks in the title insurance industry. State and federal investigators have also alleged the existence of inappropriate or fraudulent business arrangements among title agencies, title insurers, mortgage brokers, attorneys, and real estate brokers that were allegedly being used to convey kickbacks and referral fees. Most of the investigations we reviewed have examined activities by title agents that involve affiliated business arrangements—that is, part or full ownership of title agencies by real estate brokers, lenders, home-builders, and mortgage brokers. A typical fraudulent business arrangement involves a shell title agency that is set up by a title agent but that generally has no physical location, employees, or assets and does not actually perform title and settlement business. In cases we examined, regulators alleged that the primary purpose of these agencies was to convey kickbacks, serving as a pass-through for payments or preferential treatment given by the title agent to real estate agents and brokers, home-builders, attorneys, or mortgage brokers for business referrals. Investigations have alleged that the arrangements in these cases violate RESPA. For example: In one federal investigation, a title insurer and eight home-builders were alleged to have formed shell agencies that performed little or no title work, were not independent entities, and benefited financially from referrals. 
In a multi-state federal investigation, a title agency and its affiliates were found to have created “preferred” attorney lists for real estate closings. Attorneys were allegedly placed on the list only if they agreed to refer their clients to the title agency’s affiliated online title company. As part of the settlement, the parties agreed to stop creating “preferred” attorney lists and to pay monetary penalties to the federal government. State and federal investigators have also looked at other types of alleged kickbacks that title agents have given to real estate agents, brokers, and attorneys involved in real estate transactions. In the investigations we reviewed, these alleged kickbacks included free or discounted business services and other items of value, such as gifts, entertainment, business support services, training, and printing costs. One state investigation identified items such as spa treatments, event tickets, electronics, and trips to domestic and foreign vacation locations. Investigators alleged that these inducements also violated federal and state anti-kickback and anti-rebating laws. Finally, federal and state investigators have alleged that some title agents have misappropriated or mishandled customers’ premiums. For example, one licensed title insurance agent, who was an owner or partial owner of more than 10 title agencies, allegedly failed to remit approximately $500,000 in premiums to the title insurer. As a result, the insurer allegedly did not issue 6,400 title policies to consumers who had paid for them. The agent also had allegedly mixed funds from premiums with business assets and allegedly misappropriated escrow funds for his personal use. The investigators, who alleged that the agent had failed to perform his fiduciary duty and had violated several state laws, subsequently suspended his license and, pending the outcome of hearings, planned to shut down the title agencies he owned or controlled. 
Some employees of title agencies have also been alleged to have submitted fraudulent receipts, invoices, and expense reports and then used the reimbursement money for personal expenses or to pay for items on behalf of those who referred business to them. In response to these and other investigations, insurers and industry associations say that they have addressed some concerns but that clearer regulations and stronger enforcement regarding affiliated businesses are needed. One title insurance industry association told us that recent federal and state enforcement actions have motivated some title insurers to increasingly address kickbacks and rebates through, for example, increased oversight of title agents. In addition, they said that companies operating legally are hurt by competition from those breaking the rules and that these businesses welcome greater enforcement efforts. Another industry association, however, told us that clearer regulations regarding referral fees and affiliated business arrangements would aid the industry’s compliance efforts. Specifically, they said, regulations need to be clearer about which types of discounts and fees are prohibited and which are allowed. How widespread are cited infractions associated with the sale of title insurance? What are the implications of the findings of state and federal investigations for the title insurance industry and consumers? What actions have regulators and title industry participants taken to reduce the extent of illegal activities? Over the past several years, regulators, industry groups, and others have suggested changes to regulations that would affect the way title insurance is sold. In 2002, in order to simplify and improve the process of obtaining home mortgages and to reduce settlement costs for consumers, HUD proposed revisions to the regulations implementing RESPA. 
The proposed revisions included the creation of a guaranteed mortgage package, with guaranteed prices for loan origination and settlement services and a guaranteed interest rate, as well as a revised good faith estimate that would have required additional disclosures of settlement fees and limited fee increases over the original estimates. In response to considerable comment from the title industry, consumers, and other federal agencies, HUD withdrew the proposal in 2004. Opponents argued that the revisions would have given lenders too much leverage in putting together the guaranteed mortgage packages and would have included title insurance— a product priced in part on risk—in a package that was priced based on market forces. HUD announced in June 2005 that it was again considering revisions to the regulations and has subsequently held a number of industry roundtables to get input from industry and others. NAIC officials told us that NAIC is considering changes to the model title insurance acts in order to address current issues such as the growth of affiliated business arrangements. The model law for title insurers, among other things, covers premium rate regulation and title insurers’ oversight of title agents that write insurance for them. The model law for title agents includes, among other things, agent licensing requirements and prohibitions on referral fees. According to NAIC officials, the model title insurers act will likely be changed to more closely mirror RESPA’s provisions regarding referral fees and available sanctions against violators. In addition, they would like to revise the model title agents act by strengthening the licensing requirements for title agents, because doing so could discourage the formation of shell agencies as part of an improper affiliated business arrangement. 
Finally, at least one consumer advocate has suggested that requiring lenders to pay for the title policies from which they benefit might increase competition and ultimately lower costs for consumers. Lenders could then use their market power to force title insurers to compete for lenders’ business based on price. Additional regulation, the advocate said, might be necessary to require lenders to pass such cost savings on to consumers. Some title industry officials have voiced concern about such an approach because it would allow the lender to decide which title insurer the buyer must use. That is, if the buyer wanted the cost savings associated with simultaneously issued lender’s and owner’s policies, the buyer would have to use the same insurer as the lender. What benefits and concerns might arise from the implementation of potential regulatory changes? What barriers to implementation exist, and how serious are they? What other regulatory alternatives exist? As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to the Senate Committee on Banking, Housing and Urban Affairs; the House Committee on Financial Services; the Secretary of Housing and Urban Development; and other interested parties. We will make copies available to others upon request. The report will also be available at no charge on our Web site at http://www.gao.gov. Please contact me at (202) 512-8678 or [email protected] if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix I. 
In addition to the contact named above, Lawrence Cluff, Assistant Director; Tania Calhoun, Emily Chalmers, Nina Horowitz, Marc Molino, Donald Porteous, Melvin Thomas, and Patrick Ward made key contributions to this report.
Title insurance is a required element of almost all real estate purchases and is a significant cost for consumers. However, consumers generally do not have the knowledge needed to "shop around" for title insurance and usually rely on professionals involved in real estate--such as lenders, real estate agents, and attorneys--for advice in selecting a title insurer. Recent state and federal investigations into title insurance sales have identified practices that may have benefited these professionals and title insurance providers at the expense of consumers. At your request, GAO currently has work under way studying the title insurance industry, including pricing, competition, the size of the market, the roles of the various participants in the market, and how they are regulated. You asked GAO to identify and report on preliminary issues for further study. In so doing, this report focuses on: (1) the reasonableness of cost structures and agent practices common to the title insurance market that are not typical of other insurance markets; (2) the implications of activities identified in recent state and federal investigations that may have benefited real estate professionals rather than consumers; and (3) the potential need for regulatory changes that would affect the way that title insurance is sold. Some cost structures and agent practices that are common to the title insurance market are not typical of other lines of insurance and merit further study. First, the extent to which premium rates reflect underlying costs is not always clear. For example, most states do not consider title search and examination costs--insurers' largest expense--to be part of the premium, and do not review them. Second, while title agents play a key role in the underwriting process, the extent to which state insurance regulators review them is not clear. Few states regularly collect information on agents, and three states do not license them. 
Third, the extent to which a competitive environment exists within the title insurance market that benefits consumers is also not clear. Consumers generally lack the knowledge necessary to "shop around" for a title insurer and therefore often rely on the advice of real estate and mortgage professionals. As a result, title agents normally market their business to these professionals, creating a form of competition from which the benefit to consumers is not always clear. Fourth, real estate brokers and lenders are increasingly becoming full or part owners of title agencies, which may benefit consumers by allowing one-stop shopping, but may also create conflicts of interest. Finally, multiple regulators oversee the different entities involved in the title insurance industry, but the extent of involvement and coordination among these entities is not clear. Recent state and federal investigations have identified potentially illegal activities--mainly involving alleged kickbacks--that also merit further study. The investigations alleged instances of real estate agents, mortgage brokers, and lenders receiving referral fees or other inducements in return for steering business to title insurers or agents, activities that may have violated federal or state anti-kickback laws. Participants allegedly used several methods to convey the inducements, including captive reinsurance agreements, fraudulent business arrangements, and discounted business services. For example, investigators identified several "shell" title agencies created by a title agent and a real estate or mortgage broker that had no physical location or employees and did not perform any title business, allegedly serving only to obscure referral payments. Insurers and industry associations with whom we spoke said that they had begun to address such alleged activities but also said that current regulations needed clarification. 
In the past several years, regulators, industry groups, and others have suggested changes to the way title insurance is sold, and further study of these suggestions could be beneficial. For example, the Department of Housing and Urban Development announced in June 2005 that it was considering revisions to the regulations implementing the Real Estate Settlement Procedures Act. In addition, the National Association of Insurance Commissioners is considering changes to model laws for title insurers and title agents. Finally, at least one consumer advocate has suggested that requiring lenders to pay for the title policies from which they benefit might increase competition and ultimately lower consumers' costs.
Automation of the battlefield has been a long-term goal of the Army because of its promise as a force multiplier: it produces greater fighting effectiveness through better use of battlefield resources. Digitization of the battlefield is the Army’s latest effort to bring it closer to its long-term goal. Prior Army efforts focused on automating command and control at the corps and division levels whereas digitization extends this automation to the brigade and lower echelons, including individual weapons platforms. Digitization of the battlefield is part of a major effort to reshape the Army and, thus, it is one of the Army’s highest priorities. The Army hopes to identify how digitization will improve combat power and how to change its organizational structure, doctrine, and tactics to take advantage of digitization. Army battlefield digitization started in the 1980s with the development of five corps- and division-level command and control systems collectively known as the Army Tactical Command and Control System. Their development and fielding have been a struggle. Two systems were fielded in 1993 and 1994, with limited capabilities. Two other systems are scheduled to undergo operational testing in 1995 and in 1996. The fifth system is scheduled to undergo its second operational test in 1996. The Army’s strategy for digitizing the battlefield uses a bottom-up approach that experiments echelon by echelon with several digital systems simultaneously. It is a massive effort involving brigade-, division-, and corps-level experiments over the next 5 years. Advanced warfighting experiments were performed in 1993 at the company level, and in 1994 at the battalion level. Current plans call for a brigade experiment in February 1997, a division experiment in February 1998, and a corps experiment in April 1999. There are many digital systems to evaluate. For example, 25 unique digital systems and more than 120 items of equipment were evaluated in the battalion experiment. 
More than 40 digital systems, including potentially 1,200 appliques, may be evaluated during the brigade experiment. The applique, which began its development when a contract was awarded on January 6, 1995, will provide digital capability to weapon systems that do not have any. The major feature of the applique will be the situational awareness that it provides to its users. A digital map will display the locations of friendly and enemy forces and update their movement in near real time. This common picture will be provided simultaneously to all units in the brigade, from the command staff to the individual M1 tanks and other weapons platforms. The investment required for what the Army describes as the equivalent of the concept exploration and definition phase is $272 million through fiscal year 1997. For fiscal years 1998 and 1999, the equivalent of the demonstration and validation phase and the engineering and manufacturing development phase, the cost is expected to be $125 million, bringing the total development effort to $397 million. The cost primarily covers the development and acquisition of the applique and its integration onto many different vehicles, helicopters, and other platforms. It also covers the development of a digital radio and other related products. These research and development costs are relatively high because it is expensive to equip a battalion, a brigade, a division, and a corps with appliques for experiments. In the conventional concept exploration and definition phase, only a few prototypes of a system would be bought for experiments. The Army’s position is that, although these costs are relatively high, the resources are needed to demonstrate the utility of a digitized force. Through 2005, the Army estimates that $2.1 billion is needed to field and sustain Force Package I. About 77 percent of this amount is to equip the force with appliques. 
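The cost figures above admit a quick arithmetic check. The sketch below simply recomputes the totals; the dollar amounts are taken from this report, but the breakdown is a reader's illustration, not an official Army cost model:

```python
# Illustrative check of the development and fielding cost figures cited
# in the report. All inputs come from the report itself; this is a
# reader's sketch, not an Army cost estimate.

concept_phase = 272   # $ millions through FY 1997 (concept exploration/definition)
later_phases = 125    # $ millions for FY 1998-99 (dem/val and EMD equivalents)
total_development = concept_phase + later_phases
print(total_development)  # 397, matching the $397 million total

force_package_1 = 2_100   # $ millions to field and sustain Force Package I through 2005
applique_share = 0.77     # portion of that amount going to appliques
applique_cost = round(force_package_1 * applique_share)
print(applique_cost)      # 1617, i.e., roughly $1.6 billion for appliques
```

The check confirms that the two development-phase figures sum to the stated $397 million total and that the 77-percent applique share of the $2.1 billion Force Package I estimate comes to about $1.6 billion.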
The cost to equip the rest of the Army with appliques is not known, but according to Army officials, it could be $2 billion through 2011. This is in addition to funds already programmed for other digital battlefield efforts, such as the five systems that comprise the Army Tactical Command and Control System and the embedded systems whose costs are borne by the weapon systems themselves. The Army faces numerous technical, program, cost, and schedule risks in implementing its master plan for battlefield digitization. These risks involve integration, software development, hardware costs, unknown quantity requirements, communications, and interoperability with other command and control systems. The integration of the applique onto different platforms represents a technical risk. The underlying cause of this risk is that each platform is different and requires a separate solution in terms of installation kits. For example, the installation kit that works for a tank may not necessarily work for an infantry fighting vehicle or a helicopter. Software development is an additional technical, cost, and schedule risk, in our view, because no appliques have been delivered and tested. More will be known after a critical design review in August 1995 and after evaluations of interim software, currently scheduled for July, September, and December 1995 and for January and May 1996, have occurred. During this period, soldiers from Fort Knox will evaluate each version of the software. Implementing all software functions and requirements will require additional engineering; in fact, an estimated 30 percent of the applique software needed for the brigade experiment will be new code. 
The rest of the software is existing Brigade and Below Command and Control (B2C2) software and elements of the Forward Area Air Defense Command and Control, the Combat Service Support Control System, and the Enhanced Position Location Reporting System software, which have only been demonstrated separately and not as an integrated system. Applique hardware costs may be understated, depending on (1) how frequently hardware will be replaced, (2) what mix of computers will be used in future experiments and fieldings, and (3) whether higher-end machines with more memory and speed will be needed. The Army may be required to upgrade applique computers every 2 to 3 years or sooner to take advantage of industry’s technology advancements. The Army is still deciding on the proper mix of militarized, ruggedized, and commercial computers to be used for the brigade experiment. Currently, it is moving away from militarized toward ruggedized computers, which are less costly. However, commercial computers, which are the least costly of the three variants, may not be rugged enough for the job. If the brigade experiment shows that more militarized and ruggedized computers are needed, that finding would drive up the costs of future experiments and deployment. The brigade experiment may also show that the appliques cannot do the job in terms of memory and speed. If so, higher-end machines would be required, which would also increase costs. Cost risk is further aggravated by unknown quantity requirements for the applique. Because total quantity requirements are unknown, the total cost of the applique and the FBCB2 program is unknown. The 1997 brigade experiment may show that installing an applique in every tank, helicopter, and weapon system is useful but not affordable. Army officials have told us that having adequate communications is key to the 1997 brigade experiment; otherwise, it may have to be postponed. 
The Army is developing a tactical internet that increases the digital capacity and connectivity of three existing radio-based communications systems. However, the tactical internet is not expected to be delivered to the Army until May 1996, only 1 month before the start of training for the experiment. Consequently, it represents a significant schedule risk. If successful, the tactical internet will provide a short-term solution to meeting the Army’s data distribution needs. However, long-term needs will increase as the Army becomes dependent on automation and adds new digital systems to its inventory. Because of this, Army officials told us that they will require two new data distribution systems: one in the interim that would potentially be more capable but less costly than the current system, EPLRS, and another in the future to meet long-term needs. Developing an interim digital communications system for a 10-division Army could cost at least as much as EPLRS, or more than $900 million, and could take years to field. In our view, the data distribution issue is the weak link in the Army’s plan because a new, interim system will be needed to meet the increasing communications demands imposed by the digital battlefield in the next century. Until it is resolved, we do not believe the full potential of battlefield digitization or automation will be realized. A schedule risk is also posed because a number of systems must interoperate with the applique and be available for integration and testing prior to the 1997 brigade experiment. An example would be the five division- and corps-level systems that comprise the Army Tactical Command and Control System. Interoperability has been demonstrated only through a very limited number of messages being exchanged between these systems. However, database-to-database exchange, which is critical to providing commanders with an accurate, near real-time common picture of the battlefield, has not been achieved. 
In commenting on our report, the Army recognizes the risks that we discuss and believes that it has taken steps to mitigate them. These include (1) the establishment of the Army Digitization Office, which provides high-level oversight by reporting to the Chief of Staff of the Army; (2) the establishment of the Digital Integrated Laboratory to assess interoperability issues; (3) the establishment of a “user jury” to provide early assessments of applique performance; and (4) the development of a Risk Management Master Plan. While these efforts are commendable, we still believe that the risks are substantial in number and pose formidable obstacles to the success of battlefield digitization, and we will continue to monitor the program to determine whether these risk reduction efforts really work. The Army’s experimentation master plan states what experiments are to be performed through 1999, but it does not provide specific goals and clear criteria to support decisions to proceed with the experiments and buy additional appliques and other equipment. Thus, there are no criteria for measuring whether the experiments will be successful. As a result, the Army could continue to conduct large-scale, costly experiments at the brigade, division, and corps levels, regardless of the results. For example, the 1994 battalion-level experiment lacked specific goals and exit criteria. Despite poor results in that experiment, the Army is moving on to a larger-scale, brigade-level experiment in 1997, at a cost of $258 million. In addition, the Army’s experimentation approach lacks adequate instrumentation and data collection. Specific, measurable, and quantifiable goals are needed to evaluate program achievements and assure program success. The Army’s Operational Test and Evaluation Command (OPTEC) stated this requirement in its report on the 1994 experiment. 
Its recommendation was to “establish entrance criteria for hardware and software to ensure equipment used by the units is reliable and interoperable, and insights and data generated on force effectiveness meet established goals and expectations.” Although the experimentation plan identifies numerous goals, such as increased lethality, it does not say how much lethality is to be achieved from the battalion experiment to the brigade- and division-level experiments. Increased lethality is measured by many factors, such as the number of enemy troops, artillery pieces, and helicopters lost in battle. However, neither numeric criteria nor a baseline is given for these factors. The Army intends to determine effectiveness based on increasing trends in a series of simulations, technical tests, and field and subfield experiments over the next 5 years. The Army does not believe that either pass/fail criteria or a baseline is necessary at this stage since it is only experimenting. However, given that the experiment is expensive and important to its future, the Army should have measurable goals that it expects to achieve. Attainment or nonattainment of these goals, rather than subjective assessments alone, can best show the Army where it must direct its resources and whether it is appropriate to proceed to the next experiment. From April 10 to April 23, 1994, a battalion-level experiment was conducted at the National Training Center, Fort Irwin, California. It was the first experiment to use a digitized battalion task force. The experiment did not have (1) specific goals, (2) a specific way to measure success, or (3) a baseline against which to compare the digitized battalion’s performance. However, some Army leaders expected that the digitized “blue” force would defeat its nondigitized opponent, called the “red” force. This did not happen. 
In the absence of specific goals, thresholds for performance, and a baseline, the Army compared the outcomes of seven nondigitized units that participated in four training rotations against the same well-trained red force at about the same time. Four units were at the National Training Center prior to April 1994, one was there at the same time as the digitized battalion, and two were there after the digitized unit’s exercise. The comparison showed that the digitized blue force generally performed no better than the seven nondigitized blue forces against the red force. For example, the loss exchange ratio (the ratio of enemy losses to friendly losses) of the digitized blue force was about the same as that of the seven nondigitized blue forces in offensive and defensive engagements. The main reasons for these poor results were the immaturity of the B2C2 software, its lack of interoperability with the M1A2 tank’s command and control software, and a lack of hands-on training with the digital systems. Despite these poor results, the Army decided to proceed to the brigade-level experiment instead of redoing the battalion experiment, because redoing it would have slowed the digitization effort by a year and cost several million dollars. OPTEC’s report also noted that “. . . additional instrumentation at critical nodes would allow increased confidence in experiment outcomes. It would permit a determination of when systems are operational, when they are used, how much they are used, who is communicating with whom . . . and if the systems are down, is the cause hardware, software, radio propagation, or human error. . . . The lack of instrumentation does not provide system developers the kind of information they need to troubleshoot problems identified during the exercise and make needed fixes.” Objective data are vital in decisions to proceed to the next experiment and finally to full-rate production and deployment. 
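The loss exchange ratio used in the comparison above is simply enemy losses divided by friendly losses, so roughly equal ratios across units indicate comparable performance. A minimal sketch follows; the loss counts are hypothetical, since the report does not publish the underlying engagement data:

```python
def loss_exchange_ratio(enemy_losses: int, friendly_losses: int) -> float:
    """Ratio of enemy losses to friendly losses. Higher values favor
    the friendly force; ratios near each other across units suggest
    similar combat performance."""
    if friendly_losses == 0:
        raise ValueError("friendly losses must be nonzero")
    return enemy_losses / friendly_losses

# Hypothetical engagement tallies -- NOT data from the 1994 experiment.
digitized = loss_exchange_ratio(enemy_losses=30, friendly_losses=28)
nondigitized_avg = loss_exchange_ratio(enemy_losses=29, friendly_losses=27)
print(round(digitized, 2), round(nondigitized_avg, 2))  # 1.07 1.07
```

With these illustrative numbers, the digitized unit's ratio is essentially indistinguishable from the nondigitized average, which is the kind of result the comparison at the National Training Center produced.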
The Army is planning to provide a more controlled environment for data collection, with 100 instrumented vehicles, during a 9-month training period prior to the February 1997 brigade experiment. However, it is unclear whether this will be enough in terms of the number of vehicles and critical nodes covered. The Army, in conjunction with an independent test agency, needs to decide specifically what instrumentation is needed to provide sufficient objective data to support moving the experiment forward. Last year, Congress directed the Army to include the Marine Corps in its plans for the digital battlefield. This has been done. Also, in fiscal year 1995, the Army provided the Marine Corps with $429,000 to help it launch its digitization program. The Army will also provide the Marines—at a cost of about $2.3 million to the Army—with enough appliques and installation kits to equip a light-armored reconnaissance company to participate in the 1997 brigade experiment. Despite these efforts, the Marine Corps will have a $4.8 million shortfall in fiscal year 1996 research and development funds for equipment, engineering support, and operational demonstrations, which will affect its preparation for the Army’s 1997 brigade experiment. The Army says it cannot provide additional assistance to the Marines because it has no more resources. Thus, the Marines’ participation in the Army’s 1997 experiment appears to be uncertain. This situation illustrates that the Marine Corps needs assured funding to solidify its participation and success in all of the Army’s digital battlefield experiments. These experiments may show that the Marines need additional appliques and communications systems to ensure their interoperability with the Army in future joint combat operations. Thus, the Army; the Navy, which oversees Marine Corps funding; and DOD need to work together to produce a specific plan to create and assure Marine Corps funding. 
To help ensure that resources are directed appropriately and that the Army has the data it needs to determine whether it should (1) buy additional appliques and (2) proceed to the next level of experiments, we recommend that the Secretary of Defense require the Secretary of the Army to develop specific, measurable goals and exit criteria for each phase of digital battlefield experimentation. Further, the Secretary of Defense should independently verify the attainment of those goals. To carry out congressional direction, we also recommend that the Secretary of Defense ensure that the Secretary of the Navy and the Commandant of the Marine Corps identify resources to support the Marine Corps’ participation and success in the Army’s battlefield digitization effort. DOD partially concurred with the recommendations in our draft report. While the steps it plans to take on eventually establishing measurable goals substantially comply with our recommendation, we still have differences on the timing and specificity of the goals and the independent verification of the attainment of those goals. DOD believes that, while it is necessary to have some means to judge the outcome of these large-scale experiments, it is too early in the program to have specific goals and measurable standards with pass or fail criteria associated with them. We disagree and continue to maintain that specific, measurable goals are needed, even at this early stage, because of the expenses involved, the scale and progressive nature of the experiments, and their importance to the Army. By not establishing specific goals now, at this level of experimentation, DOD and the Army are escalating risk as each advanced warfighting experiment progresses from the brigade to the division and finally to the corps level. The DOD-supported Army approach continues the risk associated with acquiring millions of dollars’ worth of appliques and other related developments without knowing whether previous experiments were successful. 
Without some limits and controls, the Army could spend hundreds of millions of dollars on these experiments without having an adequate basis to judge whether it should continue them. DOD partially concurred with our recommendation that the attainment of these yet-to-be-established measurable goals be independently verified by DOD and points to the involvement of the Director, Operational Test and Evaluation (DOT&E). We acknowledge that DOT&E involvement is a very positive step in the direction we recommend. However, it is still unclear whether DOT&E will actually (1) approve specific, measurable goals early on, as we recommend, instead of the general ones that DOD and the Army advocate and (2) verify the attainment of those goals in each advanced warfighting experiment. DOD’s recognition of the Marine Corps’ funding issue and its statement that it is working with the services to resolve it essentially comply with the intent of our recommendation. We intend to monitor DOD’s implementation efforts. DOD’s comments are addressed in the body of this report where appropriate and are reprinted in their entirety in appendix I, along with our evaluation. We performed our review primarily at the Army Digitization Office in Washington, D.C., and the Program Executive Office for Command and Control Systems and the Program Executive Office for Communications Systems at Fort Monmouth, New Jersey. We also visited the Army’s Training and Doctrine Command at Fort Monroe, Virginia; the Armor Center at Fort Knox, Kentucky; the Combined Arms Center at Fort Leavenworth, Kansas; OPTEC, Arlington, Virginia; and the Program Executive Office for Aviation, St. Louis, Missouri. In addition, we contacted DOD’s DOT&E, Washington, D.C., and the U.S. Marine Corps Systems Command, Quantico, Virginia. We conducted our review between October 1994 and June 1995 in accordance with generally accepted government auditing standards. 
We are sending copies of this report to other appropriate congressional committees; the Director, Office of Management and Budget; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. Copies will also be made available to others upon request. Please contact me at (202) 512-6548 if you or your staff have any questions concerning this report. The major contributors to this report were William L. Wright, Donald F. Lopes, and Edwin B. Griffin. The following are GAO’s comments on the Department of Defense’s (DOD) letter dated August 25, 1995. 1. We have identified these efforts in the body of our report. We believe that the Army’s intentions are encouraging. However, we will continue to monitor the program to determine whether these risk reduction efforts actually work. We still believe that the risks are substantial in number and pose formidable obstacles to the success of the digitization of the battlefield. 2. The steps the Army plans to take on eventually establishing measurable goals substantially comply with our recommendation. We still have differences on the timing and specificity of the goals and the independent verification of their attainment. DOD believes that while it is necessary to have some means to judge the outcome of these large-scale experiments, it is too early in the program to have specific goals and measurable standards with pass or fail criteria associated with them. We disagree and continue to maintain that specific, measurable goals are needed, even at this early stage, because of the expenses involved, the scale and progressive nature of the experiments, and their importance to the Army. By not establishing specific goals now, at this level of experimentation, DOD and the Army are escalating risk as each advanced warfighting experiment progresses from the brigade to the division and finally to the corps level. 
The DOD-supported Army approach continues the risk of acquiring millions of dollars of appliques and other related developments without knowing whether previous experiments were successful. Without some limits and controls, the Army could spend hundreds of millions of dollars on these experiments without having an adequate basis to judge whether it should continue them. 3. We acknowledge that the involvement of the Director, Operational Test and Evaluation (DOT&E) is a very positive step in the direction we recommend. However, it is still unclear whether DOT&E will actually (1) approve specific, measurable goals early on, as we recommend, instead of the general ones that DOD and the Army advocate and (2) verify the attainment of those goals in each advanced warfighting experiment. 4. DOD’s recognition of the Marine Corps’ funding issue and its statement that it is working with the services to resolve it essentially comply with the intent of our recommendation. We will continue to monitor DOD’s implementation efforts.
Pursuant to a legislative requirement, GAO reviewed the Army's plans to digitize its battlefield operations. GAO found that: (1) as part of its battlefield digitization plan, the Army plans to conduct a series of costly experiments from 1995 to 1997 to demonstrate the utility of a digitized force; (2) risks that the Army faces in implementing its digitization plan include integration, software development, hardware costs, unknown quantity requirements, communications, and interoperability with other command and control systems; (3) specific and measurable goals are needed to evaluate the achievements of each experiment, and these goals should be met before proceeding to the next experiment; (4) the Army is risking investments of almost $400 million for digital systems needed to conduct increasingly larger scale experiments through fiscal year 1999; (5) the investment required to digitize a 10-division Army could be as high as $4 billion; and (6) since Congress has directed the Army to include the Marine Corps in its digitization plan, the Department of Defense must identify funding for the Marine Corps to ensure its participation and success in the digitization program.