Learning Objectives
By the end of this section, you will be able to:
• Explain overall equity market behavior over various historical periods.
• Explain different equity style and size behavior over various historical periods.
• Extract various equity market performance results from plots and charts.
Using Graphs and Charts to Plot Equity Market Behavior Stock Size Considerations
The Dow Jones Industrial Average (DJIA), also known as the Dow 30, and the S&P 500 Index are the most frequently quoted stock market indices among scholars, businesses, and the public in general. Both indices track the change in value of a group of large capitalization stocks. The changes in the two indices are highly correlated.
It may be fair to question if either index is a good representation of the value of equity and the changes in value in the market because there are over 6,000 publicly traded companies listed on organized exchanges and thousands of additional companies that trade only over the counter. As of year-end 2020, the S&P 500 firms had a combined market capitalization of \$33.4 trillion, about 66% of the estimated US equity market capitalization of \$50.8 trillion.20 It is widely agreed that the performance of the S&P 500 is a good representation of the broader market and more specifically of large capitalization firms.
Figure 12.13 provides a visualization of how S&P 500 stock returns have stacked up since 1900. This figure makes it clear that equity returns roughly follow a bell curve, or normal distribution. Thus, we are able to measure risk with standard deviation. A lower standard deviation of returns suggests less uncertainty of returns and therefore less risk.
Capital market history demonstrates that the average return to stocks has significantly outperformed other financial security classes, such as government bonds, corporate bonds, or the money market. Table 12.4 provides the return and standard deviation of several US investment classes over the 40-year period 1981–2020. As you can see, stocks outperformed bonds, bills, and inflation. This has led many investment advisers to emphasize asset allocation first and individual security selection second. The intuition is that the decision to invest in stocks rather than bonds has a greater long-run payoff than the change in performance resulting from the selection of any individual or group of stocks.
Figure 12.14 demonstrates the growth of a \$100 investment at the start of 1928. Note that the value of the large company portfolio is more than 50 times greater than the equal investment in long-term US government bonds. This supports the importance of thoughtful asset allocation.
Still, the size of a firm has a significant impact on how investors choose equity securities. Capital market history also shows that a portfolio of small company stocks has realized larger average annual returns, as well as greater variability, than a portfolio of large companies as represented by the S&P 500. Small-cap stock total returns ranged from a high of 142.9% in 1933 to a low of -58.0% in 1937.
More recently, the differential return between small- and large-cap stocks has not been as pronounced. From 1980 through 2020, the Wilshire US Small-Cap Index has averaged an annual compound return of 12.13% compared to the Wilshire US Large Cap Index average of 11.82% over the same period. The 31-basis-point premium is much smaller than that realized in the 1926–2019 period, which saw a small-cap average annual compounded return of 11.90% versus 10.14% for the large-cap portfolio.
Figure 12.13 The Pyramid of Equity Returns: Distribution of Annual Returns for the S&P 500 Index, 1928–2020 (data source: Aswath Damodaran Online)
Asset Class | Nominal Average Annual Returns, 1981–2020 | Standard Deviation of Returns, 1981–2020
Large company stocks | 12.64% | 16.06%
Baa bonds | 10.34% | 7.67%
10-year T-bonds | 8.21% | 9.92%
US T-bills | 3.94% | 3.39%
Inflation | 2.93% | 1.76%
Table 12.4 Arithmetic Average Annual Returns and Standard Deviation by Asset Class, 1981–2020 (source: Aswath Damodaran Online)
Figure 12.14 Growth of a \$100 Investment into Selected Asset Portfolios, 1928–2020 (data source: Aswath Damodaran Online)
Link to Learning
Would You Like to Research More Historical Returns?
This article on historical returns and risks contains calculators that can help you find returns over your selected periods for US stocks, bonds, and inflation. How have the markets done since you were born? This second article about global equity markets has a comparable calculator. You can also go to the Global Wealth Report, an annual publication by Credit Suisse, to dig more deeply.
Link to Learning
Does It Pay to Time the Market?
Over the period 1980 to mid-2020, an investment of \$10,000 into an S&P 500 index fund would have yielded the investor \$697,421. However, missing the 5 best-performing days in the market would have reduced the final portfolio balance to \$432,411. Stay out of the market on the 10 best days, and the balance would have ended at only \$313,377, or less than half of the return earned in the full time period. Watch this Wall Street Journal video on the DJIA to learn more.
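As a rough check on these figures, we can back out the implied compound annual growth rates. The sketch below uses Python (the text itself contains no code), and the 40.5-year horizon for "1980 to mid-2020" is our assumption.

```python
# Back out the implied compound annual growth rate (CAGR) from the
# figures quoted above: $10,000 growing to $697,421 when fully invested.
# The 40.5-year horizon (start of 1980 to mid-2020) is an assumption.
initial, final, years = 10_000, 697_421, 40.5

cagr = (final / initial) ** (1 / years) - 1
print(f"Fully invested: {cagr:.2%} per year")

# The same arithmetic shows the cost of missing the 10 best days:
missed_10 = 313_377
cagr_missed = (missed_10 / initial) ** (1 / years) - 1
print(f"Missing the 10 best days: {cagr_missed:.2%} per year")
```

A gap of roughly two percentage points per year in the annualized rate, compounded over four decades, is what cuts the ending balance by more than half.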
Concepts In Practice
Warren Buffett
Figure 12.15 Warren Buffett (credit: “Warren Buffet at the 2015 Select USA Investment Summit.” USA International Trade Administration/Wikimedia Commons, CC Public Domain Mark)
Warren Buffett has not always been one of the richest people in the world, but he has always been one of the hardest workers. An entrepreneur from an early age, Buffett’s yearbook photo caption noted that he “likes math: a future stockbroker.” Before leaving high school, Buffett had already earned thousands of dollars running a paper route and through one of his start-up businesses of installing and maintaining pinball machines in barbershops.
As they say in Nebraska, “you need to make hay while the sun shines,” and Buffett has made his share of hay, so to speak. In his career, Buffett has accumulated enough hay to be one of the wealthiest people in the world, with a net worth of over \$80 billion by the end of 2020.
The “Oracle of Omaha,” as Buffett is known, grew his fortune through investing partnerships and most notably as the chairman, president, CEO, and largest stockholder of Berkshire Hathaway (BRK). Berkshire Hathaway was a New England textile manufacturer when Buffett and his investment partners began buying shares in the 1960s. By 1966, after a dispute with the then CEO of Berkshire, Buffett assumed control of the company and fired the CEO. Soon, Buffett’s partnerships merged into Berkshire and moved the business away from textiles; it eventually became the largest financial services company in the world, including total ownership of the Geico Insurance Company.
Buffett’s career is notable for how he developed his fortune, how he explained his philosophy, and for his current and future plans. Buffett followed the method of Benjamin Graham, the famous value investor and author of Security Analysis and The Intelligent Investor. However, Buffett expanded beyond Graham’s analysis of financial statements and intrinsic value to examine the character of executive management. He applied the same criteria to hiring employees as well. Buffett once noted, “We look for three things when we hire people. We look for intelligence, we look for initiative or energy, and we look for integrity. And if they don’t have the latter, the first two will kill you, because if you’re going to get someone without integrity, you want them lazy and dumb.” When speaking of integrity, Buffett went on to say, “Only when the tide goes out do you discover who’s been swimming naked.”
Buffett’s folksy way of making his point will undoubtedly be another of his legacies. When asked repeatedly about how he managed to be such a successful investor, Buffett replied, “Never invest in a business you can’t understand.” Never was this truer than in the late 1990s and 2000, when the dot-com craze fueled the stock market with technology firms enjoying tremendous price increases without the corresponding earnings. Buffett’s value investing lagged until the bubble burst, and suddenly he was back on top. When asked about his change in fortune he replied, “In the business world, the rearview mirror is always clearer than the windshield.”
Buffett believes in long-term rather than short-term investing. He once remarked that “Someone’s sitting in the shade today because someone planted a tree a long time ago” and “If you aren’t willing to own a stock for 10 years, don’t even think about owning it for 10 minutes.”
The third aspect of Buffett’s legacy will be how his money works now and after he is gone. With Bill and Melinda Gates, Buffett started the Giving Pledge, and to date they have gathered the pledge of over 200 billionaires to give away half or more of their fortune during and after their lifetimes. Buffett states, “If you’re in the luckiest 1% of humanity, you owe it to the rest of humanity to think about the other 99%.” Buffett has begun the process to give away most of his fortune, but he has left this pearl of wisdom for his own children: “A very rich person should leave his kids enough to do anything, but not enough to do nothing.”
(Sources: Joshua Kennon. “How Warren Buffett Became One of the Wealthiest People in America.” The Balance. May 4, 2021. https://www.thebalance.com/warren-Bu...imeline-356439; Ty Haqqi. “Five Largest Financial Services Companies in the World.” Insider Monkey. November 26, 2020. https://www.insidermonkey.com/blog/5...orld-891348/2/; Mohit Oberoi. “Warren Buffett: Growth Stocks Look Like Dot-Com Bubble.” Market Realist. September 4, 2020. https://marketrealist.com/2020/07/wa...ot-com-bubble/)
Link to Learning
Does It Pay to Invest Globally?
On a global basis, US equity markets have been among the highest performing since 1900. Only Australia shows a higher average annual return over the 121-year period. Many factors contribute to long-run stock performance in any given country. However, over the period studied, the United States benefited from an entrepreneurial spirit and distance from the center of two world wars. An article reviewing global stock market returns from 1900 to 2020 summarizes and analyzes global market return information created by researchers Elroy Dimson, Paul Marsh, and Mike Staunton for Credit Suisse.21
While most established economies have not generated higher returns than the US equity markets, they do offer the benefits of diversification. Further, the greatest return potential—and the greatest risk of loss—may lie in developing economies. Investing experts are not in complete agreement about the advantages and disadvantages of investing in foreign equity markets. This article provides a framework for analysis and tools for comparing equity returns by country from 1970 through 2020. Has Australia continued to be the top-performing equity market since 1970? Have equity markets performed as well or better in the last 21 years compared to the 121-year period? Does the article encourage you to diversify internationally or focus only on domestic securities?
12.1 Overview of US Financial Markets
One way to parse financial markets is by the maturity of financial instruments. With this dichotomy, we explored the money market and the capital market. The money market consists of short-term securities and the capital market of longer-term securities. The capital market discussion focused on debt and equity as financial instruments used to finance longer-term capital financing needs. IPOs or SPACs are vehicles for raising new equity. Most trading on organized exchanges or over-the-counter markets is for used, or secondary, securities.
12.2 Historical Picture of Inflation
The Federal Reserve considers moderate inflation rates optimal in their oversight of the US economy. We measure inflation by comparing the price of a bundle or basket of goods over time and documenting how prices change. Since not everyone consumes similar baskets of goods, we calculate several different measures of inflation. The most commonly quoted measure of inflation uses changes in the Consumer Price Index (CPI).
12.3 Historical Picture of Returns to Bonds
Historical bond yields are published going back hundreds of years but are only reliably available for the last 100 years or so. In large part, the returns realized on portfolios of bonds have been smaller and less variable than the returns realized for equities.
12.4 Historical Picture of Returns to Stocks
Stocks have produced the greatest average annual rates of return of the money and capital market assets discussed in this chapter. Stockholders bear more risk than bondholders or money market investors and receive on average higher average annual returns. Despite the relatively high average annual rate of return for portfolios of stock, history shows that the equity markets earn negative annual returns about 25% of the time. The negative returns realized by equities occur far more often than the negative results realized by money market or debt market instruments.
12.07: Key Terms
bond returns
sums the periodic interest payments and the change in bond price in a given period and divides by the bond price at the beginning of the period
commercial paper (CP)
a short-term, unsecured security issued by corporations and financial institutions to meet short-term financing needs such as inventory and receivables
debenture
a common type of unsecured bond issued by a corporation
indenture
legal term for a bond contract
inflation
a general increase in prices and a reduction in purchasing power; expected rate is a key component of interest rates
initial public offering (IPO)
the first time a firm offers stock to the public
mortgage bond
bond issued by a corporation using a real asset, such as property or buildings, to guarantee it
municipal bonds (munis)
bonds issued by a local government, territory, or agency; generally used to finance infrastructure projects
negotiable certificates of deposit (NCDs)
large CDs issued by financial institutions; redeemable at maturity but can trade prior to maturity in a broad secondary market
primary market
market for new securities
seasoned equity offering (SEO)
a method used by firms that are already publicly traded to raise additional capital by offering additional shares of stock to the public
secondary market
market for used securities
shelf registration
part of Securities and Exchange Commission (SEC) Rule 415; allows a company to register new shares with the SEC and then wait up to two years before issuing them
special purpose acquisition companies (SPACs)
a special form of IPO
stock returns
sums the periodic dividend payments plus the change in stock price in a given period divided by the stock price at the beginning of the period
total returns
the sum of all cash flows received from an investment; includes periodic cash flows plus price appreciation or price depreciation
Treasury bills (T-bills)
short-term debt instruments issued by the federal government and maturing in a year or less
Treasury bonds
government debt instruments with maturities of 20 or 30 years
Treasury notes (T-notes)
government debt instruments with maturities of 2, 3, 5, 7, or 10 years
12.08: Multiple Choice
1.
Which of the following statements about Treasury bills is false?
1. T-bills sell at a discount from face value and pay the face value at maturity.
2. T-bills have maturities of 2, 3, 5, 7, or 10 years.
3. T-bill auctions take place weekly.
4. T-bill denominations are relatively small compared to other money market instruments, with initial auction sizes of as little as \$10,000 per T-bill.
2.
If an investor wishes to simply execute a stock trade at the current market price, they should issue a ________.
1. limit order
2. stop loss order
3. market order
4. hedge order
3.
Based on nominal average annual returns over the period 1980–2020, list the order of returns by asset class from highest to lowest.
1. large company stocks, Baa bonds, small company stocks, T-bills
2. small company stocks, large company stocks, Baa bonds, T-bills
3. T-bills, Baa bonds, small company stocks, large company stocks
4. small company stocks, large company stocks, T-bills, Baa bonds
4.
A \$1 investment in a portfolio of small company stocks in 1928 would have grown to over ________ by mid-2019.
1. \$35,000
2. \$8,000
3. \$800
4. \$80
5.
Since 1980, the compound average annual growth rate for large company stocks has been ________.
1. greater than Baa bonds but less than small company stocks
2. greater than small company stocks but less than Baa bonds
3. greater than Baa bonds and small company stocks
4. less than Baa bonds and small company stocks
1.
Define the competitive and noncompetitive bid process for US Treasury bills.
2.
How does a negotiable certificate of deposit (NCD) differ from the typical certificate of deposit you may see advertised by your local bank?
3.
If you are an investor concerned about unexpected inflation in the coming years, which of the following investments offers the greatest protection against inflation, and why: T-notes, T-bonds, or TIPS?
4.
Debentures are more common than mortgage bonds issued by corporations. Why do you think debentures are more popular with investors? Be sure to define each bond contract in your discussion.
5.
Market capitalization is a common way to rank firm size. Search the internet to identify and define at least two other ways to rank firms based on size. Identify at least one reason you prefer market capitalization as the method of choice to rank firm size.
6.
Compare and contrast an SEO, IPO, and SPAC. If Ford Motor Company wished to raise new equity capital, which of these vehicles would they use?
7.
Compared to a “best efforts” form of underwriting, how does “firm commitment” underwriting transfer risk from the issuing firm to the underwriter?
8.
How would a decrease in inflation affect the interest rate on an adjustable-rate debenture?
9.
If inflation unexpectedly rises by 3%, would a corporation that had recently borrowed money by issuing fixed-rate bonds to pay for a new investment benefit or lose?
10.
If wages on average rise at least as fast as inflation, why do people worry about how inflation affects incomes?
11.
Identify at least one item that you use regularly whose price has changed significantly.
12.
What has been the average annual rate of inflation between 1985 and 2020? What is the long-run average annual rate of inflation over the last century?
13.
Between 1985 and 2020, what year had the lowest realized annual rate of inflation in the United States? Why do you think inflation was so low in this particular year?
14.
Go to https://www.usinflationcalculator.com/. How much money would it take today to purchase what one dollar would have bought in 1950, in 1975, and in your birth year?
15.
Are US Treasury bonds truly risk free?
16.
At the end of 2020 and the beginning of 2021, coupon rates on long-term T-notes and T-bonds were near historic lows. Further, the federal government was running a historically large budget deficit in an effort to stimulate an economy battered by COVID-19 and to support millions of unemployed workers. Some investment advisers warned that this could be a particularly bad time to invest in government bonds or bonds in general. Why?
17.
Which group of securities earned a higher average annual return from 2000 to 2020, T-bonds or Baa bonds? Why do you think this was so?
18.
Which earned a higher average annual return, a portfolio of T-bonds from 1980 to 2000 or from 2000 to 2020? Why do you think this was so?
19.
Why is standard deviation of returns a reasonable measure of risk for a portfolio of equity securities?
20.
Many popular-press articles claim that growth investing is “clearly better” than value investing or that value investing is “dead.” How would you respond to proponents of growth investing after observing Figure 12.15?
21.
Over the last 120 years, few countries have achieved the realized rate of returns enjoyed by US equity markets. Does this mean investors should ignore international investments and focus only on domestic markets in an effort to maximize returns?
12.10: Video Activity
How Private Companies Are Bypassing the IPO Process
1.
Can you identify three apparent advantages and three disadvantages for investors in a SPAC (special purpose acquisition company) versus a traditional IPO process?
2.
The video concludes with a question about whether SPACs are a current fad doomed to fade away or a new and growing method of publicly financing firms. What do you think? Search for information related to SPACs and proposals for SPAC regulation, and report your conclusion.
A Secret Meeting and the Birth of the Federal Reserve
3.
How can an institution like the United States Federal Reserve System prevent bank runs?
4.
After the passage of the Federal Reserve Act in 1913, the United States has suffered through three great global financial crises: the Great Depression of the 1930s, the Great Recession of 2007–2009, and the COVID-19 pandemic of 2020–2021. Research one of the latter two crises, and identify and discuss some of the tools used by the Fed to lessen the length and economic severity of the economic hardships. In what ways has this research project supported or changed your opinion about the United States having the Federal Reserve System?
Figure 13.1 Graphical displays are used extensively in the finance field. (credit: modification of "Analysing target market" by Marco Verch/flickr CC BY 2.0)
Statistical analysis is used extensively in finance, with applications ranging from consumer concerns such as credit scores, retirement planning, and insurance to business concerns such as assessing stock market volatility and predicting inflation rates. As a consumer, you will make many financial decisions throughout your life, and many of these decisions will be guided by statistical analysis. For example, what is the probability that interest rates will rise over the next year, and how will that affect your decision on whether to refinance a mortgage? In your retirement planning, how should the investment mix be allocated among stocks and bonds to minimize volatility and ensure a high probability for a secure retirement? When running a business, how can statistical quality control methods be used to maintain high quality levels and minimize waste? Should a business make use of consumer focus groups or customer surveys to obtain business intelligence data to improve service levels? These questions and more can benefit from the use and application of statistical methods.
Running a business and tracking its finances is a complex process. From day-to-day activities such as managing inventory levels to longer-range activities such as developing new products or expanding a customer base, statistical methods are a key to business success. For finance considerations, a business must manage risk versus return and optimize investments to ensure shareholder value. Business managers employ a wide range of statistical processes and tools to accomplish these goals. Increasingly, companies are also interested in data analytics to optimize the value gleaned from business- and consumer-related data, and statistical analysis forms the core of such analytics.
13.02: Measures of Center
Learning Objectives
By the end of this section, you will be able to:
• Calculate various measures of the average of a data set, such as mean, median, mode, and geometric mean.
• Recognize when a certain measure of center is more appropriate to use, such as weighted mean.
• Distinguish among arithmetic mean, geometric mean, and weighted mean.
Arithmetic Mean
The average of a data set is a way of describing location. The most widely used measures of the center of a data set are the mean (average), median, and mode. The arithmetic mean is the most common measure of the average. We will discuss the geometric mean later.
Note that the words mean and average are often used interchangeably. The substitution of one word for the other is common practice. The technical term is arithmetic mean, and average technically refers only to a center location. Formally, the arithmetic mean is called the first moment of the distribution by mathematicians. However, in practice among non-statisticians, average is commonly accepted as a synonym for arithmetic mean.
To calculate the arithmetic mean value of 50 stock portfolios, add the 50 portfolio dollar values together and divide the sum by 50. To calculate the arithmetic mean for a set of numbers, add the numbers together and then divide by the number of data values.
In statistical analysis, you will encounter two types of data sets: sample data and population data. Population data represents all the outcomes or measurements that are of interest. Sample data represents outcomes or measurements collected from a subset, or part, of the population of interest.
The notation $\bar{x}$ is used to indicate the sample mean, where the arithmetic mean is calculated based on data taken from a sample. The notation $\sum x$ is used to denote the sum of the data values, and $n$ is used to indicate the number of data values in the sample, also known as the sample size.
The sample mean can be calculated using the following formula:
$\bar{x} = \frac{\sum x}{n}$
13.1
Finance professionals often rely on averages of Treasury bill auction amounts to determine their value. Table 13.1 lists the Treasury bill auction amounts for a sample of auctions from December 2020.
Maturity | Amount (\$ Billions)
4-week T-bills | \$32.9
8-week T-bills | 38.4
13-week T-bills | 63.1
26-week T-bills | 59.6
52-week T-bills | 39.7
Total | \$233.7
Table 13.1 United States Treasury Bill Auctions, December 22 and 24, 2020 (source: Treasury Direct)
To calculate the arithmetic mean of the amount paid for Treasury bills at auction, in billions of dollars, we use the following formula:
$\bar{x} = \frac{\sum x}{n} = \frac{233.7}{5} = 46.74$
13.2
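The text itself does not show code, but as a sketch, the same calculation can be done with Python's standard library `statistics` module:

```python
from statistics import mean

# Treasury bill auction amounts from Table 13.1, in $ billions
amounts = [32.9, 38.4, 63.1, 59.6, 39.7]

# mean() implements the arithmetic mean: sum(amounts) / len(amounts)
x_bar = mean(amounts)
print(f"Arithmetic mean: {x_bar:.2f}")  # 46.74
```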
Median
To determine the median of a data set, order the data from smallest to largest, and then find the middle value in the ordered data set. For example, to find the median value of 50 portfolios, find the number that splits the data into two equal parts. The portfolio values owned by 25 people will be below the median, and 25 people will have portfolio values above the median. The median is generally a better measure of the average when there are extreme values or outliers in the data set.
An outlier or extreme value is a data value that is significantly different from the other data values in a data set. The median is preferred when outliers are present because the median is not affected by the numerical values of the outliers.
The ordered data set from Table 13.1 appears as follows:
$32.9, 38.4, 39.7, 59.6, 63.1$
13.3
The middle value in this ordered data set is the third data value, which is 39.7. Thus, the median is \$39.7 billion.
You can quickly find the location of the median by using the expression $\frac{n + 1}{2}$. The variable n represents the total number of data values in the sample. If n is an odd number, the median is the middle value of the data values when ordered from smallest to largest. If n is an even number, the median is equal to the two middle values of the ordered data values added together and divided by 2. In the example from Table 13.1, there are five data values, so n = 5. To identify the position of the median, calculate $\frac{n + 1}{2} = \frac{5 + 1}{2} = 3$. This indicates that the median is located in the third data position, which corresponds to the value 39.7.
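As a sketch (the language choice is ours, not the text's), Python's `statistics.median` performs the ordering and middle-value selection automatically:

```python
from statistics import median

# Treasury bill auction amounts from Table 13.1, in $ billions
amounts = [32.9, 38.4, 63.1, 59.6, 39.7]

# median() sorts the data internally, then returns the middle value
# (or the average of the two middle values when n is even).
print(median(amounts))  # 39.7

# Position of the median for this odd-sized sample, matching (n + 1) / 2:
n = len(amounts)
position = (n + 1) / 2  # 3.0 -> the third value in the ordered data
```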
As mentioned earlier, when outliers are present in a data set, the mean can be nonrepresentative of the center of the data set, and the median will provide a better measure of center. The following Think It Through example illustrates this point.
Think It Through
Finding the Measure of Center
Suppose that in a small village of 50 people, one person earns a salary of \$5 million per year, and the other 49 individuals each earn \$30,000. Which is the better measure of center: the mean or the median?
The mean would be (49 × \$30,000 + \$5,000,000) / 50 = \$129,400, which is far more than what any typical villager earns. However, the median would be \$30,000. There are 49 people who earn \$30,000 and one person who earns \$5,000,000.
The median is a better measure of the “average” than the mean because 49 of the values are \$30,000 and one is \$5,000,000. The \$5,000,000 is an outlier. The \$30,000 gives us a better sense of the middle of the data set.
Mode
Another measure of center is the mode. The mode is the most frequent value. There can be more than one mode in a data set as long as those values have the same frequency and that frequency is the highest. A data set with two modes is called bimodal. For example, assume that the weekly closing stock price for a technology stock, in dollars, is recorded for 20 consecutive weeks as follows:
$50, 53, 59, 59, 63, 63, 72, 72, 72, 72, 72, 76, 78, 81, 83, 84, 84, 84, 90, 93$
13.5
To find the mode, determine the most frequent score, which is 72. It occurs five times. Thus, the mode of this data set is 72. It is helpful to know that the most common closing price of this particular stock over the past 20 weeks has been \$72.00.
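A quick sketch in Python (an illustration, not part of the text): the standard library's `statistics` module provides both `mode` and, for data sets that may be bimodal, `multimode`.

```python
from statistics import mode, multimode

# Weekly closing prices for the technology stock, in dollars
prices = [50, 53, 59, 59, 63, 63, 72, 72, 72, 72, 72,
          76, 78, 81, 83, 84, 84, 84, 90, 93]

print(mode(prices))       # 72, the most frequent value
print(multimode(prices))  # [72]; would list every tied value for a bimodal set
```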
Geometric Mean
The arithmetic mean, median, and mode are all measures of the center of a data set, or the average. They are all, in their own way, trying to measure the common point within the data—that which is “normal.” In the case of the arithmetic mean, this is accomplished by finding the value from which all points are equal linear distances. We can imagine that all the data values are combined through addition and then distributed back to each data point in equal amounts.
The geometric mean redistributes not the sum of the values but their product. It is calculated by multiplying all the individual values and then redistributing them in equal portions such that the total product remains the same. This can be seen from the formula for the geometric mean, x̃ (pronounced x-tilde):
$\tilde{x} = \sqrt[n]{x_1 \cdot x_2 \cdots x_n}$
13.6
The geometric mean is relevant in economics and finance for dealing with growth—of markets, in investments, and so on. For an example of a finance application, assume we would like to know the equivalent percentage growth rate over a five-year period, given the yearly growth rates for the investment.
For a five-year period, the annual rate of return for a certificate of deposit (CD) investment is as follows:
3.21%, 2.79%, 1.88%, 1.42%, 1.17%. Find the single percentage growth rate that is equivalent to these five annual consecutive rates of return. The geometric mean of these five rates of return will provide the solution. To calculate the geometric mean for these values (which must all be positive), first multiply1 the rates of return together—after adding 1 to the decimal equivalent of each interest rate—and then take the nth root of the product. We are interested in calculating the equivalent overall rate of return for the yearly rates of return, which can be expressed as 1.0321, 1.0279, 1.0188, 1.0142, and 1.0117:
$\tilde{x} = \sqrt[5]{1.0321 \cdot 1.0279 \cdot 1.0188 \cdot 1.0142 \cdot 1.0117} = 1.0209$
13.7
Based on the geometric mean, the equivalent annual rate of return for this time period is 2.09%.
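As a sketch of the same computation in Python (our addition, not the text's), `statistics.geometric_mean` takes the nth root of the product of the growth factors directly:

```python
from statistics import geometric_mean

# Annual growth factors (1 + rate of return) for the five-year CD
factors = [1.0321, 1.0279, 1.0188, 1.0142, 1.0117]

# geometric_mean() returns the fifth root of the product of the factors
g = geometric_mean(factors)
print(f"Equivalent annual growth factor: {g:.4f}")   # 1.0209
print(f"Equivalent annual rate of return: {g - 1:.2%}")  # 2.09%
```

Note that the geometric mean is applied to the growth factors (1 plus each rate), never to the raw percentage returns, which could be zero or negative.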
Link to Learning
Arithmetic versus Geometric Means
In this video on arithmetic versus geometric means, the returns of the S&P 500 are tracked using an arithmetic mean versus a geometric mean, and the difference between these two measurements is discussed.
Weighted Mean
A weighted mean is a measure of the center, or average, of a data set where each data value is assigned a corresponding weight. A common financial application of a weighted mean is in determining the average price per share for a certain stock when the stock has been purchased at different points in time and at different share prices.
To calculate a weighted mean, create a table with the data values in one column and the weights in a second column. Then create a third column in which each data value is multiplied by each weight on a row-by-row basis. Then, the weighted mean is calculated as the sum of the results from the third column divided by the sum of the weights.
Think It Through
Calculating the Weighted Mean
Assume your portfolio contains 1,000 shares of XYZ Corporation, purchased on three different dates, as shown in Table 13.2. Calculate the weighted mean of the purchase price for the 1,000 shares.
Date Purchased Purchase Price (\$) Number of Shares Purchased Price (\$) × Number of Shares
January 17 78 200 15,600
February 10 122 300 36,600
March 23 131 500 65,500
Total NA 1,000 117,700
Table 13.2 1,000 Shares of XYZ Corporation
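The weighted mean described above can be computed directly from Table 13.2, with the purchase prices as the data values and the numbers of shares as the weights. A minimal Python sketch:

```python
# Weighted mean purchase price for the 1,000 XYZ shares in Table 13.2.
purchases = [(78, 200), (122, 300), (131, 500)]  # (price, shares)

weighted_sum = sum(price * shares for price, shares in purchases)  # 117,700
total_shares = sum(shares for _, shares in purchases)              # 1,000

weighted_mean = weighted_sum / total_shares
print(weighted_mean)  # → 117.7 dollars per share
```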
Footnotes
• 1 In this chapter, the interpunct dot will be used to indicate the multiplication operation in formulas.
Learning Objectives
By the end of this section, you will be able to:
• Define and calculate standard deviation for a data set.
• Define and calculate variance for a data set.
• Explain the relationship between standard deviation and variance.
Standard Deviation
An important characteristic of any set of data is the variation in the data. In some data sets, the data values are concentrated close to the mean; in other data sets, the data values are more widely spread out. For example, an investor might examine the yearly returns for Stock A, which are 1%, 2%, -1%, 0%, and 3%, and compare them to the yearly returns for Stock B, which are -9%, 2%, 15%, -5%, and 0%.
Notice that Stock B exhibits more volatility in yearly returns than Stock A. The investor may want to quantify this variation in order to make the best investment decisions for a particular investment objective.
The most common measure of variation, or spread, is standard deviation. The standard deviation of a data set is a measure of how far the data values are from their mean. A standard deviation
• provides a numerical measure of the overall amount of variation in a data set; and
• can be used to determine whether a particular data value is close to or far from the mean.
The standard deviation provides a measure of the overall variation in a data set. The standard deviation is always positive or zero. It is small when the data values are all concentrated close to the mean, exhibiting little variation or spread. It is larger when the data values are more spread out from the mean, exhibiting more variation.
Suppose that we are studying the variability of two different stocks, Stock A and Stock B. The average stock price for both stocks is \$5. For Stock A, the standard deviation of the stock price is 2, whereas the standard deviation for Stock B is 4. Because Stock B has a higher standard deviation, we know that there is more variation in the stock price for Stock B than in the price for Stock A.
There are two different formulas for calculating standard deviation. Which formula to use depends on whether the data represents a sample or a population. The notation s is used to represent the sample standard deviation, and the notation $σ$ is used to represent the population standard deviation. In the formulas shown below, $\bar{x}$ is the sample mean, $μ$ is the population mean, n is the sample size, and N is the population size.
Formula for the sample standard deviation:
$s = \sqrt{\frac{\sum (x - \bar{x})^2}{n - 1}}$
13.8
Formula for the population standard deviation:
$σ = \sqrt{\frac{\sum (x - μ)^2}{N}}$
13.9
Variance
Variance also provides a measure of the spread of data values. The variance of a data set measures the extent to which each data value differs from the mean. The more the individual data values differ from the mean, the larger the variance. Both the standard deviation and the variance provide similar information.
In a finance application, variance can be used to determine the volatility of an investment and therefore to help guide financial decisions. For example, a more cautious investor might opt for investments with low volatility.
Similar to standard deviation, the formula used to calculate variance also depends on whether the data is collected from a sample or a population. The notation $s^2$ is used to represent the sample variance, and the notation $σ^2$ is used to represent the population variance.
Formula for the sample variance:
$s^2 = \frac{\sum (x - \bar{x})^2}{n - 1}$
13.10
Formula for the population variance:
$σ^2 = \frac{\sum (x - μ)^2}{N}$
13.11
This is the method to calculate standard deviation and variance for a sample:
1. First, find the mean $\bar{x}$ of the data set by adding the data values and dividing the sum by the number of data values.
2. Set up a table with three columns, and in the first column, list the data values in the data set.
3. For each row, subtract the mean from the data value, $(x - \bar{x})$, and enter the difference in the second column. Note that the values in this column may be positive or negative. The sum of the values in this column will be zero.
4. In the third column, for each row, square the value in the second column. So this third column will contain the quantity (Data Value – Mean)2 for each row. We can write this quantity as $(x - \bar{x})^2$. Note that the values in this third column will always be positive because they represent a squared quantity.
5. Add up all the values in the third column. This sum can be written as $\sum (x - \bar{x})^2$.
6. Divide this sum by the quantity (n – 1), where n is the number of data points. We can write this as $\frac{\sum (x - \bar{x})^2}{n - 1}$.
7. This result is called the sample variance, denoted by s2. Thus, the formula for the sample variance is $s^2 = \frac{\sum (x - \bar{x})^2}{n - 1}$.
8. Now take the square root of the sample variance. This value is the sample standard deviation, called s. Thus, the formula for the sample standard deviation is $s = \sqrt{\frac{\sum (x - \bar{x})^2}{n - 1}}$.
9. Round-off rule: The sample variance and sample standard deviation are typically rounded to one more decimal place than the data values themselves.
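The steps above can be sketched in Python. (This is an illustrative sketch, not part of the original text; it uses the yearly returns for Stock A from earlier in this section as the sample.)

```python
import math

# Sample variance and standard deviation following steps 1-8 above,
# using Stock A's yearly returns (in percent): 1%, 2%, -1%, 0%, 3%.
data = [1, 2, -1, 0, 3]

mean = sum(data) / len(data)                    # step 1: sample mean
squared_devs = [(x - mean) ** 2 for x in data]  # steps 3-4: (x - mean)^2
variance = sum(squared_devs) / (len(data) - 1)  # steps 5-7: s^2
std_dev = math.sqrt(variance)                   # step 8: s

print(variance, round(std_dev, 2))  # → 2.5 1.58
```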
Think It Through
Finding Standard Deviation and Variance
A brokerage firm advertises a new financial analyst position and receives 210 applications. The ages of a sample of 10 applicants for the position are as follows:
40, 36, 44, 51, 54, 55, 39, 47, 44, 50
13.12
The brokerage firm is interested in determining the standard deviation and variance for this sample of 10 ages.
Learning Objectives
By the end of this section, you will be able to:
• Define and calculate z-scores for a measurement.
• Define and calculate quartiles and percentiles for a data set.
• Use quartiles as a method to detect outliers in a data set.
z-Scores
A z-score, also called a z-value, is a measure of the position of an entry in a data set. It represents the number of standard deviations by which a data value differs from the mean. For example, suppose that in a certain year, the rates of return for various technology-focused mutual funds are examined, and the mean return is 7.8% with a standard deviation of 2.3%. A certain mutual fund publishes its rate of return as 12.4%. Based on this rate of return of 12.4%, we can calculate the relative standing of this mutual fund compared to the other technology-focused mutual funds. The corresponding z-score of a measurement considers the given measurement in relation to the mean and standard deviation for the entire population.
The formula for a z-score calculation is as follows:
$z = \frac{x - μ}{σ}$
13.15
where x is the measurement, $μ$ is the mean, and $σ$ is the standard deviation.
Think It Through
Interpreting a z-Score
A certain technology-based mutual fund reports a rate of return of 12.4% for a certain year, while the mean rate of return for comparable funds is 7.8% and the standard deviation is 2.3%. Calculate and interpret the z-score for this particular mutual fund.
Using the z-score formula, z = (12.4 - 7.8)/2.3 = 2.0. The resulting z-score indicates the number of standard deviations by which a particular measurement is above or below the mean. In this example, the rate of return for this particular mutual fund is 2 standard deviations above the mean, indicating that this mutual fund generated a significantly better rate of return than most other technology-based mutual funds for the same time period.
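The z-score formula translates directly into code. A short sketch for the mutual fund example (illustrative; the chapter itself works this by hand):

```python
# z-score for the mutual fund's 12.4% return, given a peer-group mean
# of 7.8% and a standard deviation of 2.3%.
x, mu, sigma = 12.4, 7.8, 2.3

z = (x - mu) / sigma
print(round(z, 2))  # → 2.0: the fund is 2 standard deviations above the mean
```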
Quartiles and Percentiles
If a person takes an IQ test, their resulting score might be reported as in the 87th percentile. This percentile indicates the person’s relative performance compared to others taking the IQ test. A person scoring in the 87th percentile has an IQ score higher than 87% of all others taking the test. This is the same as saying that the person is in the top 13% of all people taking the IQ test.
Common measures of location are quartiles and percentiles. Quartiles are special percentiles. The first quartile, Q1, is the same as the 25th percentile, and the third quartile, Q3, is the same as the 75th percentile. The median, M, is called both the second quartile and the 50th percentile.
To calculate quartiles and percentiles, the data must be ordered from smallest to largest. Quartiles divide ordered data into quarters. Percentiles divide ordered data into hundredths. If you score in the 90th percentile of an exam, that does not necessarily mean that you receive 90% on the test. It means that 90% of the test scores are the same as or less than your score and the remaining 10% of the scores are the same as or greater than your score.
Percentiles are useful for comparing values. In a finance example, a mutual fund might report that the performance for the fund over the past year was in the 80th percentile of all mutual funds in the peer group. This indicates that the fund performed better than 80% of all other funds in the peer group. This also indicates that 20% of the funds performed better than this particular fund.
Quartiles are values that separate the data into quarters. Quartiles may or may not be part of the data. To find the quartiles, first find the median, or second quartile. The first quartile, Q1, is the middle value, or median, of the lower half of the data, and the third quartile, Q3, is the middle value of the upper half of the data. As an example, consider the following ordered data set, which represents the rates of return for a group of technology-based mutual funds in a certain year:
5.4, 6.0, 6.3, 6.8, 7.1, 7.2, 7.4, 7.5, 7.9, 8.2, 8.7
13.17
The median, or second quartile, is the middle value in this data set, which is 7.2. Notice that 50% of the data values are below the median, and 50% of the data values are above the median. The lower half of the data values are 5.4, 6.0, 6.3, 6.8, and 7.1; these are the data values below the median. The upper half of the data values are 7.4, 7.5, 7.9, 8.2, and 8.7; these are the data values above the median.
To find the first quartile, Q1, locate the middle value of the lower half of the data. The middle value of the lower half of the data set is 6.3. Notice that one-fourth, or 25%, of the data values are below this first quartile, and 75% of the data values are above this first quartile.
To find the third quartile, Q3, locate the middle value of the upper half of the data. The middle value of the upper half of the data set is 7.9. Notice that one-fourth, or 25%, of the data values are above this third quartile, and 75% of the data values are below this third quartile.
The interquartile range (IQR) is a number that indicates the spread of the middle half, or the middle 50%, of the data. It is the difference between the third quartile, Q3, and the first quartile, Q1.
$\text{IQR} = Q_3 - Q_1$
13.18
In the above example, the IQR can be calculated as
$\text{IQR} = Q_3 - Q_1 = 7.9 - 6.3 = 1.6$
13.19
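The median-of-halves method described above can be coded directly. This sketch (not part of the original text) reproduces the quartiles and IQR for the 11 mutual fund returns; note that library routines such as `statistics.quantiles` use a different interpolation method, so the quartiles are computed by hand here to match the text.

```python
# Quartiles and IQR via the median-of-halves method for the
# 11 mutual fund returns in the example.
def median(values):
    values = sorted(values)
    n = len(values)
    mid = n // 2
    return values[mid] if n % 2 else (values[mid - 1] + values[mid]) / 2

data = [5.4, 6.0, 6.3, 6.8, 7.1, 7.2, 7.4, 7.5, 7.9, 8.2, 8.7]
n = len(data)
q2 = median(data)              # the median of the full data set
q1 = median(data[: n // 2])    # median of the lower half
q3 = median(data[-(n // 2):])  # median of the upper half
iqr = q3 - q1

print(q1, q2, q3, round(iqr, 1))  # → 6.3 7.2 7.9 1.6
```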
Outlier Detection
Quartiles and the IQR can be used to flag possible outliers in a data set. For example, if most employees at a company earn about \$50,000 and the CEO of the company earns \$2.5 million, then we consider the CEO's salary to be an outlier data value because it is significantly different from all the other salaries in the data set. An outlier data value can also be a value much lower than the other data values, so if one employee makes only \$15,000, then this employee's low salary might also be considered an outlier.
To detect outliers, use the quartiles and the IQR to calculate a lower and an upper bound for outliers. Then any data values below the lower bound or above the upper bound will be flagged as outliers. These data values should be further investigated to determine the nature of the outlier condition.
To calculate the lower and upper bounds for outliers, use the following formulas:
$\text{Lower Bound for Outliers} = Q_1 - (1.5 \cdot \text{IQR})$
13.20
$\text{Upper Bound for Outliers} = Q_3 + (1.5 \cdot \text{IQR})$
13.21
Think It Through
Calculating the IQR
Calculate the IQR for the following 13 portfolio values, and determine if any of the portfolio values are potential outliers. Data values are in dollars.
389,950; 230,500; 158,000; 479,000; 639,000; 114,950; 5,500,000; 387,000; 659,000; 529,000; 575,000; 488,800; 1,095,000
$Q_1 = \frac{230{,}500 + 387{,}000}{2} = 308{,}750$
13.23
$Q_3 = \frac{639{,}000 + 659{,}000}{2} = 649{,}000$
13.24
$\text{IQR} = 649{,}000 - 308{,}750 = 340{,}250$
13.25
$(1.5)(\text{IQR}) = (1.5)(340{,}250) = 510{,}375$
13.26
$\text{Lower Bound} = Q_1 - (1.5)(\text{IQR}) = 308{,}750 - 510{,}375 = -201{,}625$
13.27
$\text{Upper Bound} = Q_3 + (1.5)(\text{IQR}) = 649{,}000 + 510{,}375 = 1{,}159{,}375$
13.28
No portfolio value is less than -201,625. However, 5,500,000 is more than 1,159,375. Therefore, the portfolio value of 5,500,000 is a potential outlier. This is important because the presence of outliers could potentially indicate data errors or some other anomalies in the data set that should be investigated.
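The whole outlier check can be automated. This sketch (an illustrative addition, using the same hand-rolled median-of-halves quartiles as the worked solution) flags any of the 13 portfolio values outside the 1.5-IQR bounds:

```python
# 1.5-IQR outlier bounds for the 13 portfolio values in the example.
def median(values):
    n = len(values)
    mid = n // 2
    return values[mid] if n % 2 else (values[mid - 1] + values[mid]) / 2

data = sorted([389950, 230500, 158000, 479000, 639000, 114950, 5500000,
               387000, 659000, 529000, 575000, 488800, 1095000])

n = len(data)
q1 = median(data[: n // 2])    # 308,750
q3 = median(data[-(n // 2):])  # 649,000
iqr = q3 - q1                  # 340,250

lower = q1 - 1.5 * iqr         # -201,625
upper = q3 + 1.5 * iqr         # 1,159,375
outliers = [x for x in data if x < lower or x > upper]
print(outliers)  # → [5500000]
```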
Learning Objectives
By the end of this section, you will be able to:
• Construct and interpret a frequency distribution.
• Apply and evaluate probabilities using the normal distribution.
• Apply and evaluate probabilities using the exponential distribution.
Frequency Distributions
A frequency distribution provides a method to organize and summarize a data set. For example, we might be interested in the spread, center, and shape of the data set’s distribution. When a data set has many data values, it can be difficult to see patterns and come to conclusions about important characteristics of the data. A frequency distribution allows us to organize and tabulate the data in a summarized way and also to create graphs to help facilitate an interpretation of the data set.
To create a basic frequency distribution, set up a table with three columns. The first column will show the intervals for the data, and the second column will show the frequency of the data values, or the count of how many data values fall within each interval. A third column can be added to include the relative frequency for each row, which is calculated by taking the frequency for that row and dividing it by the sum of all the frequencies in the table.
Think It Through
Graphing Demand and Supply
A financial consultant at a brokerage firm records the portfolio values for 20 clients, as shown in Table 13.5, where the portfolio values are shown in thousands of dollars.
278 318 422 577 618
735 798 864 903 944
1,052 1,099 1,132 1,180 1,279
1,365 1,471 1,572 1,787 1,905
Table 13.5 Portfolio Values for 20 Clients at a Brokerage Firm (\$000s)
Create a frequency distribution table using the following intervals for the portfolio values:
0–299
13.29
300–599
13.30
600–899
13.31
900–1,199
13.32
1,200–1,499
13.33
1,500–1,799
13.34
1,800–2,099
13.35
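The frequency and relative frequency columns can be tallied programmatically. A sketch (not part of the original text) for the 20 portfolio values in Table 13.5, using the intervals listed above:

```python
# Frequency and relative frequency for the 20 portfolio values ($000s)
# in Table 13.5, bucketed into the seven intervals of width 300.
values = [278, 318, 422, 577, 618, 735, 798, 864, 903, 944,
          1052, 1099, 1132, 1180, 1279, 1365, 1471, 1572, 1787, 1905]

intervals = [(300 * i, 300 * i + 299) for i in range(7)]  # 0-299 ... 1800-2099
freqs = [sum(1 for v in values if low <= v <= high) for low, high in intervals]

for (low, high), freq in zip(intervals, freqs):
    print(f"{low}-{high}: {freq} ({freq / len(values):.2f})")
# → frequencies 1, 3, 4, 6, 3, 2, 1; relative frequencies sum to 1.00
```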
Normal Distribution
The normal probability density function, a continuous distribution, is the most important of all the distributions. The normal distribution is applicable when the frequency of data values decreases with each class above and below the mean. The normal distribution can be applied to many examples from the finance industry, including average returns for mutual funds over a certain time period, portfolio values, and others. The normal distribution has two parameters, or numerical descriptive measures: the mean, $μ$, and the standard deviation, $σ$. The variable x represents the quantity being measured whose data values have a normal distribution.
Figure 13.3 Graph of the Normal Distribution
The curve in Figure 13.3 is symmetric about a vertical line drawn through the mean, $μ$. The mean is the same as the median, which is the same as the mode, because the graph is symmetric about $μ$. As the notation indicates, the normal distribution depends only on the mean and the standard deviation. Because the area under the curve must equal 1, a change in the standard deviation, $σ$, causes a change in the shape of the normal curve; the curve becomes fatter and wider or skinnier and taller depending on $σ$. A change in $μ$ causes the graph to shift to the left or right. This means there are an infinite number of normal probability distributions.
To determine probabilities associated with the normal distribution, we find specific areas under the normal curve, and this is further discussed in Apply the Normal Distribution in Financial Contexts. For example, suppose that at a financial consulting company, the mean employee salary is \$60,000 with a standard deviation of \$7,500. A normal curve can be drawn to represent this scenario, in which the mean of \$60,000 would be plotted on the horizontal axis, corresponding to the peak of the curve. Then, to find the probability that an employee earns more than \$75,000, you would calculate the area under the normal curve to the right of the data value \$75,000.
Excel uses the following command to find the area under the normal curve to the left of a specified value:
`=NORM.DIST(XVALUE, MEAN, STANDARD_DEV, TRUE)`
For example, at the financial consulting company mentioned above, the mean employee salary is \$60,000 with a standard deviation of \$7,500. To find the probability that a random employee’s salary is less than \$55,000 using Excel, this is the command you would use:
`=NORM.DIST(55000, 60000, 7500, TRUE)`
`Result: 0.25249`
Thus, there is a probability of about 25% that a random employee has a salary less than \$55,000.
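The same area can be found without Excel using Python's standard library `statistics.NormalDist`, which provides the normal cumulative distribution function (an illustrative alternative to the `NORM.DIST` command shown above):

```python
from statistics import NormalDist

# Probability that a salary is below $55,000 when salaries follow a
# normal distribution with mean $60,000 and standard deviation $7,500.
salary_dist = NormalDist(mu=60000, sigma=7500)

p = salary_dist.cdf(55000)  # area under the curve to the left of 55,000
print(round(p, 5))  # → 0.25249, matching the Excel result
```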
Exponential Distribution
The exponential distribution is often concerned with the amount of time until some specific event occurs. For example, a finance professional might want to model the time to default on payments for company debt holders.
An exponential distribution is one in which there are fewer large values and more small values. For example, marketing studies have shown that the amount of money customers spend in a store follows an exponential distribution. There are more people who spend small amounts of money and fewer people who spend large amounts of money.
Exponential distributions are commonly used in calculations of product reliability, or the length of time a product lasts. The random variable for the exponential distribution is continuous and often measures a passage of time, although it can be used in other applications. Typical questions may be, What is the probability that some event will occur between x1 hours and x2 hours? or What is the probability that the event will take more than x1 hours to perform? In these examples, the random variable x equals either the time between events or the passage of time to complete an action (e.g., wait on a customer). The probability density function is given by
$f(x) = \frac{1}{μ}e^{-\frac{1}{μ}x}$
13.36
where $μ$ is the historical average of the values of the random variable (e.g., the historical average waiting time). With this parameterization, the probability density function has a mean of $μ$ and a standard deviation of $μ$.
To determine probabilities associated with the exponential distribution, we find specific areas under the exponential distribution curve. The following formula can be used to calculate the area under the exponential curve to the left of a certain value:
$F(x) = 1 - e^{-\frac{1}{μ}x}$
13.37
Think It Through
Calculating Probability
At a financial company, the mean time between incoming phone calls is 45 seconds, and the time between phone calls follows an exponential distribution, where the time is measured in minutes. Calculate the probability of having 2 minutes or less between phone calls.
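Applying the exponential cumulative distribution formula to this scenario is a one-line computation. A sketch (illustrative; 45 seconds is expressed as 0.75 minutes to match the problem's units):

```python
import math

# P(time between calls <= 2 minutes) when the mean gap is 45 seconds
# (0.75 minutes), using F(x) = 1 - e^(-x/mu).
mu = 0.75  # mean time between calls, in minutes
x = 2      # interval of interest, in minutes

p = 1 - math.exp(-x / mu)
print(round(p, 4))  # → 0.9305
```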
Learning Objectives
By the end of this section, you will be able to:
• Calculate portfolio weights in an investment.
• Calculate and interpret the expected values.
• Apply the normal distribution to characterize average and standard deviation in financial contexts.
Calculate Portfolio Weights
In many financial analyses, the weightings by asset category in a portfolio are a key index used to assess if the portfolio is meeting allocation metrics. For example, an investor approaching retirement age may wish to shift assets in a portfolio to more conservative and lower-volatility investments. Weightings can be calculated in several different ways—for example, based on individual stocks in a portfolio or on various sectors in a portfolio. Weightings can also be calculated based on number of shares or the value of shares of a stock.
To calculate a weighting in a portfolio based on value, take the value of the particular investment and divide it by the total value of the overall portfolio. As an example, consider an individual’s retirement account for which the desired portfolio weighting is determined to be 40% stocks, 50% bonds, and 10% cash equivalents. Table 13.7 shows the current assets in the individual’s portfolio, broken out according to stocks, bonds, and cash equivalents.
Asset Value (\$)
Stock A 134,000
Stock B 172,000
Bond C 38,000
Bond D 102,000
Bond E 96,000
Cash in CDs 35,700
Cash in savings 22,500
Total Value 600,200
Table 13.7 Portfolio Assets in Stocks, Bonds, and Cash Equivalents
To determine the weighting in this portfolio for stocks, bonds, and cash, take the total value for each category and divide it by the total value of the entire portfolio. These results are summarized in Table 13.8. Notice that the portfolio weightings shown in the table do not match the target, or desired, allocation weightings of 40% stocks, 50% bonds, and 10% cash equivalents.
Asset Category Category Value (\$) Portfolio Weighting
Stocks 306,000 306,000 / 600,200 = 0.51
Bonds 236,000 236,000 / 600,200 = 0.39
Cash 58,200 58,200 / 600,200 = 0.10
Total Value 600,200
Table 13.8 Portfolio Weightings for Stocks, Bonds, and Cash Equivalents
Portfolio rebalancing is a process whereby the investor buys or sells assets to achieve the desired portfolio weightings. In this example, the investor could sell approximately 10% of the stock assets and purchase bonds with the proceeds to align the asset categories to the desired portfolio weightings.
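The category weightings in Table 13.8 can be computed from the asset values in Table 13.7 with a short Python sketch (an illustrative addition to the text):

```python
# Portfolio weightings by asset category, from Table 13.7.
portfolio = {
    "Stocks": 134_000 + 172_000,         # Stock A + Stock B
    "Bonds": 38_000 + 102_000 + 96_000,  # Bonds C, D, E
    "Cash": 35_700 + 22_500,             # CDs + savings
}

total = sum(portfolio.values())  # 600,200
weights = {cat: value / total for cat, value in portfolio.items()}

for cat, w in weights.items():
    print(f"{cat}: {w:.2f}")  # → Stocks: 0.51, Bonds: 0.39, Cash: 0.10
```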
Calculate and Interpret Expected Values
A probability distribution is a mathematical function that assigns probabilities to various outcomes. For example, we can assign a probability to the outcome of a certain stock increasing in value or decreasing in value. One application of a probability distribution function is determining expected value.
In many financial situations, we are interested in determining the expected value of the return on a particular investment or the expected return on a portfolio of multiple investments. To calculate expected returns, we formulate a probability distribution and then use the following formula to calculate expected value:
$\text{Expected Value} = P_1 \cdot R_1 + P_2 \cdot R_2 + P_3 \cdot R_3 + \ldots + P_n \cdot R_n$
13.39
where $P_1, P_2, P_3, \ldots, P_n$ are the probabilities of the various returns and $R_1, R_2, R_3, \ldots, R_n$ are the various rates of return.
In essence, expected value is a weighted mean where the probabilities form the weights. Typically, these values for Pn and Rn are derived from historical data. As an example, consider a probability distribution for potential returns for United Airlines common stock. Assume that from historical data gathered over a certain time period, there is a 15% probability of generating a 12% return on investment for this stock, a 35% probability of generating a 5% return, a 25% probability of generating a 2% return, a 14% probability of generating a 5% loss, and an 11% probability of resulting in a 10% loss. This data can be organized into a probability distribution table as seen in Table 13.9.
Using the expected value formula, the expected return of United Airlines stock over an extended period of time follows:
$\text{Expected Value} = P_1 \cdot R_1 + P_2 \cdot R_2 + P_3 \cdot R_3 + \ldots + P_n \cdot R_n$
13.40
$\text{Expected Value} = (0.15)(0.12) + (0.35)(0.05) + (0.25)(0.02) + (0.14)(-0.05) + (0.11)(-0.10) = 0.0225$
13.41
Based on the probability distribution, the expected value of the rate of return for United Airlines common stock over an extended period of time is 2.25%.
Historical Return (%) Associated Probability (%)
12 15
5 35
2 25
-5 14
-10 11
Table 13.9 Probability Distribution for Historical Returns on United Airlines Stock
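Because expected value is just a probability-weighted sum, the United Airlines example reduces to one line of code. A sketch (illustrative, with the probabilities and returns from Table 13.9 expressed as decimals):

```python
# Expected return from the probability distribution in Table 13.9.
distribution = [(0.15, 0.12), (0.35, 0.05), (0.25, 0.02),
                (0.14, -0.05), (0.11, -0.10)]  # (probability, return)

expected_value = sum(p * r for p, r in distribution)
print(round(expected_value, 4))  # → 0.0225, i.e., 2.25%
```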
We can extend this analysis to evaluate the expected return for an investment portfolio consisting of various asset categories, such as stocks, bonds, and cash equivalents, where the probabilities are associated with the weighting of each category relative to the total value of the portfolio. Using historical return data for each of the asset categories, the expected return of the overall portfolio can be calculated using the expected value formula.
Assume an investor has assets in stocks, bonds, and cash equivalents as shown in Table 13.10.
Asset Category Value (\$) Portfolio Weighting Historical Return (%)
Stocks 306,000 306,000 / 600,200 = 0.51 13.0
Bonds 236,000 236,000 / 600,200 = 0.39 4.0
Cash 58,200 58,200 / 600,200 = 0.10 2.5
Total Value 600,200
Table 13.10 Portfolio Weightings and Historical Returns for Various Asset Categories
$\text{Expected Value} = P_1 \cdot R_1 + P_2 \cdot R_2 + P_3 \cdot R_3 + \ldots + P_n \cdot R_n$
13.42
$\text{Expected Value} = (0.51)(0.130) + (0.39)(0.040) + (0.10)(0.025) = 0.0844$
13.43
Based on the probability distribution, the expected value of the rate of return for this portfolio over an extended period of time is 8.44%.
Apply the Normal Distribution in Financial Contexts
The normal, or bell-shaped, distribution can be utilized in many applications, including financial contexts. Remember that the normal distribution has two parameters: the mean, which is the center of the distribution, and the standard deviation, which measures the spread of the distribution. Here are several examples of applications of the normal distribution:
• IQ scores follow a normal distribution, with a mean IQ score of 100 and a standard deviation of 15.
• Salaries at a certain company follow a normal distribution, with a mean salary of \$52,000 and a standard deviation of \$4,800.
• Grade point averages (GPAs) at a certain college follow a normal distribution, with a mean GPA of 3.27 and a standard deviation of 0.24.
• The average annual gain of the Dow Jones Industrial Average (DJIA) over a 40-year time period follows a normal distribution, with a mean gain of 485 points and a standard deviation of 1,065 points.
• The average annual return on the S&P 500 over a 50-year time period follows a normal distribution, with a mean rate of return of 10.5% and a standard deviation of 14.3%.
• The average annual return on mid-cap stock funds over the five-year period from 2010 to 2015 follows a normal distribution, with a mean rate of return of 8.9% and a standard deviation of 3.7%.
When analyzing data sets that follow a normal distribution, probabilities can be calculated by finding areas under the normal curve. To find the probability that a measurement is within a specific interval, we can compute the area under the normal curve corresponding to the interval of interest.
Areas under the normal curve are available in tables, and Excel also provides a method to find these areas. The empirical rule is one method for determining areas under the normal curve that fall within a certain number of standard deviations of the mean (see Figure 13.4).
Figure 13.4 Normal Distribution Showing Mean and Increments of Standard Deviation
If x is a random variable and has a normal distribution with mean $µ$ and standard deviation $σ$, then the empirical rule states the following:
• About 68% of the x-values lie between $-1σ$ and $+1σ$ units from the mean $µ$ (within one standard deviation of the mean).
• About 95% of the x-values lie between $-2σ$ and $+2σ$ units from the mean $µ$ (within two standard deviations of the mean).
• About 99.7% of the x-values lie between $-3σ$ and $+3σ$ units from the mean $µ$ (within three standard deviations of the mean). Notice that almost all the x-values lie within three standard deviations of the mean.
• The z-scores for $+1σ$ and $-1σ$ are $+1$ and $-1$, respectively.
• The z-scores for $+2σ$ and $-2σ$ are $+2$ and $-2$, respectively.
• The z-scores for $+3σ$ and $-3σ$ are $+3$ and $-3$, respectively.
As an example of using the empirical rule, suppose we know that the average annual return for mid-cap stock funds over the five-year period from 2010 to 2015 follows a normal distribution, with a mean rate of return of 8.9% and a standard deviation of 3.7%. We are interested in knowing the likelihood that a randomly selected mid-cap stock fund provides a rate of return that falls within one standard deviation of the mean, which implies a rate of return between 5.2% and 12.6%. Using the empirical rule, the area under the normal curve within one standard deviation of the mean is 68%. Thus, there is a probability, or likelihood, of 0.68 that a mid-cap stock fund will provide a rate of return between 5.2% and 12.6%.
If the interval of interest is extended to two standard deviations from the mean (a rate of return between 1.5% and 16.3%), using the empirical rule, we can determine that the area under the normal curve within two standard deviations of the mean is 95%. Thus, there is a probability, or likelihood, of 0.95 that a mid-cap stock fund will provide a rate of return between 1.5% and 16.3%.
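The empirical rule's 68%, 95%, and 99.7% figures are rounded versions of exact areas under the normal curve, which can be verified with `statistics.NormalDist` (an illustrative check, not part of the original text):

```python
from statistics import NormalDist

# Exact areas under the standard normal curve within 1, 2, and 3
# standard deviations of the mean; the empirical rule rounds these
# to 68%, 95%, and 99.7%.
std_normal = NormalDist()  # mean 0, standard deviation 1

for k in (1, 2, 3):
    area = std_normal.cdf(k) - std_normal.cdf(-k)
    print(f"within {k} sd: {area:.4f}")
# → within 1 sd: 0.6827, within 2 sd: 0.9545, within 3 sd: 0.9973
```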
Learning Objectives
By the end of this section, you will be able to:
• Determine appropriate graphs to use for various types of data.
• Create and interpret univariate graphs such as bar graphs and histograms.
• Create and interpret bivariate graphs such as time series graphs and scatter plot graphs.
Graphing Univariate Data
Data visualization refers to the use of graphical displays to summarize data to help to interpret patterns and trends in the data. Univariate data refers to observations recorded for a single characteristic or attribute, such as salaries or blood pressure measurements. When graphing univariate data, we can choose from among several types of graphs, such as bar graphs, time series graphs, and so on.
The most effective type of graph to use for a certain data set will depend on the nature of the data and the purpose of the graph. For example, a time series graph is typically used to show how a measurement is changing over time and to identify patterns or trends over time.
Below are some examples of typical applications for various graphs and displays.
Graphs used to show the distribution of data:
• Bar chart: used to show frequency or relative frequency distributions for categorical data
• Histogram: used to show frequency or relative frequency distributions for continuous data
Graphs used to show relationships between data points:
• Time series graph: used to show measurement data plotted against time, where time is displayed on the horizontal axis
• Scatter plot: used to show the relationship between a dependent variable and an independent variable
Bar Graphs
A bar graph consists of bars that are separated from each other, with bar heights representing frequencies or percentages. The bars can be rectangles, or they can be rectangular boxes (used in three-dimensional plots), and they can be vertical or horizontal. The bar graph shown in the example below has age groups represented on the x-axis and proportions on the y-axis.
By the end of 2021, a certain social media site had over 146 million users in the United States. Table 13.11 shows three age groups, the number of users in each age group, and the proportion (%) of users in each age group. A bar graph using this data is shown in Figure 13.5.
Age Groups Number of Site Users Percent of Site Users
13–25 65,082,280 45
26–44 53,300,200 36
45–64 27,885,100 19
Table 13.11 Data for Bar Graph of Age Groups
Figure 13.5 Bar Graph of Age Groups
Histograms
A histogram is a bar graph that is used for continuous numeric data, such as salaries, blood pressures, heights, and so on. One advantage of a histogram is that it can readily display large data sets. A rule of thumb is to use a histogram when the data set consists of 100 values or more.
A histogram consists of contiguous (adjoining) boxes. It has both a horizontal axis and a vertical axis. The horizontal axis is labeled with what the data represents (for instance, distance from your home to school). The vertical axis is labeled either Frequency or Relative Frequency (or Percent Frequency or Probability). The graph will have the same shape regardless of the label on the vertical axis. A histogram, like a stem-and-leaf plot, can give you the shape of the data, the center, and the spread of the data.
The relative frequency is equal to the frequency of an observed data value divided by the total number of data values in the sample. Remember, frequency is defined as the number of times a data value occurs. Relative frequency is calculated using the formula
$RF = \frac{f}{n}$
13.44
where f = frequency, n = the total number of data values (or the sum of the individual frequencies), and RF = relative frequency.
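For a small data set, the formula $RF = f/n$ can be applied value by value. A minimal Python sketch, using an invented sample purely for illustration:

```python
from collections import Counter

data = [2, 3, 3, 5, 5, 5, 7]   # hypothetical sample of n = 7 values
n = len(data)

freq = Counter(data)            # f: number of times each value occurs
rel_freq = {value: f / n for value, f in freq.items()}  # RF = f / n

print(rel_freq[5])              # 3 of the 7 values equal 5, so RF = 3/7 ≈ 0.4286
print(sum(rel_freq.values()))   # relative frequencies always sum to 1
```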
To construct a histogram, first decide how many bars or intervals, also called classes, will represent the data. Many histograms consist of 5 to 15 bars or classes for clarity, and the number of bars must be chosen. Choose a starting point for the first interval that is less than the smallest data value. A convenient starting point is a lower value carried out to one more decimal place than the value with the most decimal places. For example, if the value with the most decimal places is 6.1, and if this is the smallest value, a convenient starting point is 6.05 (because $6.1 - 0.05 = 6.05$). We say that 6.05 has more precision. If the value with the most decimal places is 2.23 and the lowest value is 1.5, a convenient starting point is 1.495 ($1.5 - 0.005 = 1.495$). If the value with the most decimal places is 3.234 and the lowest value is 1.0, a convenient starting point is 0.9995 ($1.0 - 0.0005 = 0.9995$). If all the data values happen to be integers and the smallest value is 2, then a convenient starting point is 1.5 ($2 - 0.5 = 1.5$). Also, when the starting point and other boundaries are carried to one additional decimal place, no data value will fall on a boundary. The example below goes into detail about how to construct a histogram using continuous data.
Example: The following data values are the portfolio values, in thousands of dollars, for 100 investors.
60, 60.5, 61, 61, 61.5
63.5, 63.5, 63.5
64, 64, 64, 64, 64, 64, 64, 64.5, 64.5, 64.5, 64.5, 64.5, 64.5, 64.5, 64.5
66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66.5, 66.5, 66.5, 66.5, 66.5, 66.5, 66.5, 66.5, 66.5, 66.5, 66.5, 67, 67, 67, 67, 67, 67, 67, 67, 67, 67, 67, 67, 67.5, 67.5, 67.5, 67.5, 67.5, 67.5, 67.5
68, 68, 69, 69, 69, 69, 69, 69, 69, 69, 69, 69, 69.5, 69.5, 69.5, 69.5, 69.5
70, 70, 70, 70, 70, 70, 70.5, 70.5, 70.5, 71, 71, 71
72, 72, 72, 72.5, 72.5, 73, 73.5
74
The smallest data value is 60. Because the data values with the most decimal places have one decimal place (for instance, 61.5), we want our starting point to have two decimal places. Because the numbers 0.5, 0.05, 0.005, and so on are convenient numbers, use 0.05 and subtract it from 60, the smallest value, to get a convenient starting point: $60 - 0.05 = 59.95$, which is more precise than, say, 61.5 by one decimal place. Thus, the starting point is 59.95. The largest value is 74, and $74 + 0.05 = 74.05$, so 74.05 is the ending value.
Next, calculate the width of each bar or class interval. To calculate this width, subtract the starting point from the ending value and divide the result by the number of bars (you must choose the number of bars you desire). Suppose you choose eight bars. The interval width is calculated as follows:
$\frac{74.05 - 59.95}{8} = 1.76$
13.45
We will round up to 2 and make each bar or class interval 2 units wide. Rounding up to 2 is one way to prevent a value from falling on a boundary. Rounding to the next number is often necessary, even if it goes against the standard rules of rounding. For this example, using 1.76 as the width would also work. A guideline that is followed by some for the width of a bar or class interval is to take the square root of the number of data values and then round to the nearest whole number if necessary. For example, if there are 150 data values, take the square root of 150 and round to 12 bars or intervals. The boundaries are as follows:
$59.95$
$59.95 + 2 = 61.95$
$61.95 + 2 = 63.95$
$63.95 + 2 = 65.95$
$65.95 + 2 = 67.95$
$67.95 + 2 = 69.95$
$69.95 + 2 = 71.95$
$71.95 + 2 = 73.95$
$73.95 + 2 = 75.95$
13.46
The data values 60 through 61.5 are in the interval 59.95–61.95. The data values of 63.5 are in the interval 61.95–63.95. The data values of 64 and 64.5 are in the interval 63.95–65.95. The data values 66 through 67.5 are in the interval 65.95–67.95. The data values 68 through 69.5 are in the interval 67.95–69.95. The data values 70 through 71 are in the interval 69.95–71.95. The data values 72 through 73.5 are in the interval 71.95–73.95. The data value 74 is in the interval 73.95–75.95. The histogram shown in Figure 13.6 displays the portfolio values on the x-axis and relative frequency on the y-axis.
Figure 13.6 Histogram of Portfolio Values
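The boundary and counting steps of the example can be reproduced programmatically. The Python sketch below uses the bin width of 2 and starting point of 59.95 derived in the text; for brevity, `values` holds only a short stand-in subset rather than all 100 portfolio values.

```python
# Class boundaries: width 2 starting at 59.95, eight classes, as in the text.
start, width, bars = 59.95, 2, 8
boundaries = [round(start + i * width, 2) for i in range(bars + 1)]
print(boundaries)  # nine boundaries running from 59.95 to 75.95

# A short stand-in subset of the 100 portfolio values listed above
values = [60, 61.5, 63.5, 64, 66, 68, 70, 72, 74]

counts = [0] * bars
for v in values:
    i = int((v - start) // width)   # index of the class interval containing v
    counts[i] += 1
rel_freq = [c / len(values) for c in counts]  # heights for a relative-frequency histogram
```

Because the boundaries carry one more decimal place than the data, no value falls exactly on a boundary, so every value lands unambiguously in one class.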
Graphing Bivariate Data
Bivariate data refers to paired data, where each value of one variable is paired with a value of a second variable. An example of paired data would be if data were collected on employees’ years of experience and their corresponding salaries. Typically, it is of interest to investigate possible associations or correlations between the two variables under analysis.
Time Series Graphs
Suppose that we want to track the consumer price index (CPI) over the past 10 years. One feature of the data that we may want to consider is the element of time. Because each year is paired with the CPI value for that year, we do not have to think of the data as being random. We can instead use the years given to impose a chronological order on the data. A graph that recognizes this ordering and displays the changing CPI value as the decade progresses is called a time series graph.
To construct a time series graph, we must look at both pieces of our paired data set. We start with a standard Cartesian coordinate system. The horizontal axis is used to plot the time increments, and the vertical axis is used to plot the values of the variable that we are measuring. By doing this, we make each point on the graph correspond to a point in time and a measured quantity. The points on the graph are typically connected by straight lines in the order in which they occur.
Example: The following data set shows the annual CPI for 10 years. We need to construct a time series graph for the (rounded) annual CPI data (see Table 13.12). The time series graph is shown in Figure 13.7.
Year CPI
2012 226.65
2013 230.28
2014 233.91
2015 233.70
2016 236.91
2017 242.84
2018 247.87
2019 251.71
2020 257.97
2021 261.58
Table 13.12 Data for Time Series Graph of Annual CPI, 2012–2021 (source: US Bureau of Labor Statistics)
Figure 13.7 Time Series Graph of Annual CPI, 2012–2021
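The paired year–CPI data behind the time series graph also supports simple numeric summaries. As one illustration, the Python sketch below computes the year-over-year percentage change in CPI (the annual inflation rate) from the values in Table 13.12; the trend these changes describe is exactly what the time series graph makes visible.

```python
years = list(range(2012, 2022))
cpi = [226.65, 230.28, 233.91, 233.70, 236.91,
       242.84, 247.87, 251.71, 257.97, 261.58]   # values from Table 13.12

# Year-over-year percentage change in CPI
pct_change = [round(100 * (cpi[i] - cpi[i - 1]) / cpi[i - 1], 2)
              for i in range(1, len(cpi))]
print(pct_change[0])   # CPI rose about 1.6% from 2012 to 2013
```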
Scatter Plots
A scatter plot, or scatter diagram, is a graphical display intended to show the relationship between two variables. The setup of the scatter plot is that one variable is plotted on the horizontal axis and the other variable is plotted on the vertical axis. Then each pair of data values is considered as an (x, y) point, and the various points are plotted on the diagram. A visual inspection of the plot is then made to detect any patterns or trends. Additional statistical analysis can be conducted to determine if there is a correlation or other statistically significant relationship between the two variables.
Assume we are interested in tracking the closing price of Nike stock over the one-year time period from April 2020 to March 2021. We would also like to know if there is a correlation or relationship between the price of Nike stock and the value of the S&P 500 over the same time period. To visualize this relationship, we can create a scatter plot based on the (x, y) data shown in Table 13.13. The resulting scatter plot is shown in Figure 13.8.
Date S&P 500 Nike Stock Price (\$)
4/1/2020 2,912.43 87.18
5/1/2020 3,044.31 98.58
6/1/2020 3,100.29 98.05
7/1/2020 3,271.12 97.61
8/1/2020 3,500.31 111.89
9/1/2020 3,363.00 125.54
10/1/2020 3,269.96 120.08
11/1/2020 3,621.63 134.70
12/1/2020 3,756.07 141.47
1/1/2021 3,714.24 133.59
2/1/2021 3,811.15 134.78
3/1/2021 3,943.34 140.45
3/12/2021 3,943.34 140.45
Table 13.13 Data for S&P 500 and Nike Stock Price over a 12-Month Period (source: Yahoo! Finance)
Figure 13.8 Scatter Plot of Nike Stock Price versus S&P 500
Note the linear pattern of the points on the scatter plot. Because the data points generally align along a straight line, this provides an indication of a linear correlation between the price of Nike stock and the value of the S&P 500 over this one-year time period.
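The linear pattern seen in the scatter plot can be quantified with the Pearson correlation coefficient. A standard-library Python sketch, using the paired data from Table 13.13 (an alternative to visual inspection or the Excel steps that follow):

```python
sp500 = [2912.43, 3044.31, 3100.29, 3271.12, 3500.31, 3363.00,
         3269.96, 3621.63, 3756.07, 3714.24, 3811.15, 3943.34, 3943.34]
nike = [87.18, 98.58, 98.05, 97.61, 111.89, 125.54,
        120.08, 134.70, 141.47, 133.59, 134.78, 140.45, 140.45]

n = len(sp500)
mean_x = sum(sp500) / n
mean_y = sum(nike) / n

# Pearson r = covariance / (product of standard deviations)
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(sp500, nike))
var_x = sum((x - mean_x) ** 2 for x in sp500)
var_y = sum((y - mean_y) ** 2 for y in nike)
r = cov / (var_x * var_y) ** 0.5
print(round(r, 3))  # close to +1, consistent with the linear pattern in the plot
```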
The scatter plot can be generated using Excel as follows:
1. Enter the x-data in column A of a spreadsheet.
2. Enter the y-data in column B.
3. Highlight the data with your mouse.
4. Go to the Insert menu and select the icon for a scatter plot, as shown in Figure 13.9.
Figure 13.9 Excel Menu Showing the Scatter Plot Icon | textbooks/biz/Finance/Principles_of_Finance_(OpenStax)/13%3A_Statistical_Analysis_in_Finance/13.07%3A_Data_Visualization_and_Graphical_Displays.txt |
Learning Objectives
By the end of this section, you will be able to:
• Create a vector of data values for the R statistical analysis tool.
• Write basic statistical commands using the R statistical analysis tool.
Commands and Vectors in R
R is a statistical analysis tool that is widely used in the finance industry. It is available as a free program and provides an integrated suite of functions for data analysis, graphing, and statistical programming. R is increasingly being used as a data analysis and statistical tool as it is an open-source language and additional features are constantly being added by the user community. The tool can be used on many different computing platforms and can be downloaded at the R Project website.
Link to Learning
Using the R Statistical Tool
There are many resources for learning and using the R statistical tool, including the following:
How to install R on different computer operating systems
Introduction to using R
How to import and export data using R
Frequently asked questions (FAQ) on using R
Once you have installed and started R on your computer, at the bottom of the R console, you should see the symbol >, which indicates that R is ready to accept commands.
```
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
>
```
R is a command-driven language, meaning that the user enters commands at the prompt, which R then executes one at a time. R can also execute a program containing multiple commands. There are ways to add a graphic user interface (GUI) to R. An example of a GUI tool for R is RStudio.
The R command line can be used to perform any numeric calculation, similar to a handheld calculator. For example, to evaluate the expression $10 + 3 \cdot 7$, enter the following expression at the command line prompt and hit return:
``` > 10+3*7
[1] 31
```
Most calculations in R are handled via functions. For statistical analysis, there are many preestablished functions in R to calculate mean, median, standard deviation, quartiles, and so on. Variables can be named and assigned values using the assignment operator <-. For example, the following R commands assign the value of 20 to the variable named x and assign the value of 30 to the variable named y:
``` > x <- 20
> y <- 30
```
These variable names can be used in any calculation, such as multiplying x by y to produce the result 600:
``` > x*y
[1] 600
```
The typical method for using functions in statistical applications is to first create a vector of data values. There are several ways to create vectors in R. For example, the c function is often used to combine values into a vector. The following R command will generate a vector called salaries that contains the data values 40,000, 50,000, 75,000, and 92,000:
``` > salaries <- c(40000, 50000, 75000, 92000)
```
This vector salaries can then be used in statistical functions such as mean, median, min, max, and so on, as shown:
``` > mean(salaries)
[1] 64250
> median(salaries)
[1] 62500
> min(salaries)
[1] 40000
> max(salaries)
[1] 92000
```
Another option for generating a vector in R is to use the seq function, which will automatically generate a sequence of numbers. For example, we can generate a sequence of numbers from 1 to 5, incremented by 0.5, and call this vector example1, as follows:
``` > example1 <- seq(1, 5, by=0.5)
```
If we then type the name of the vector and hit enter, R will provide a listing of numeric values for that vector name.
``` > salaries
[1] 40000 50000 75000 92000
> example1
[1] 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0
```
Often, we are interested in generating a quick statistical summary of a data set in the form of its mean, median, quartiles, min, and max. The R command called summary provides these results.
``` > summary(salaries)
Min. 1st Qu. Median Mean 3rd Qu. Max.
40000 47500 62500 64250 79250 92000
```
For measures of spread, R includes a command for standard deviation, called sd, and a command for variance, called var. The standard deviation and variance are calculated with the assumption that the data set was collected from a sample.
``` > sd(salaries)
[1] 23641.42
> var(salaries)
[1] 558916667
```
To calculate a weighted mean in R, create two vectors, one of which contains the data values and the other of which contains the associated weights. Then enter the R command weighted.mean(values, weights).
The following is an example of a weighted mean calculation in R:
Assume your portfolio contains 1,000 shares of XYZ Corporation, purchased on three different dates, as shown in Table 13.14. Calculate the weighted mean of the purchase price, where the weights are based on the number of shares in the portfolio.
Date Purchased Purchase Price (\$) Number of Shares Purchased
January 17 78 200
February 10 122 300
March 23 131 500
Total 1,000
Table 13.14 Portfolio of XYZ Shares
Here is how you would create two vectors in R: the price vector will contain the purchase price, and the shares vector will contain the number of shares. Then execute the R command weighted.mean(price, shares), as follows:
``` > price <- c(78, 122, 131)
> shares <- c(200, 300, 500)
> weighted.mean(price, shares)
[1] 117.7
```
A list of common R statistical commands appears in Table 13.15.
R Command Result
mean( ) Calculates the arithmetic mean
median( ) Calculates the median
min( ) Calculates the minimum value
max( ) Calculates the maximum value
weighted.mean( ) Calculates the weighted mean
sum( ) Calculates the sum of values
summary( ) Calculates the mean, median, quartiles, min, and max
sd( ) Calculates the sample standard deviation
var( ) Calculates the sample variance
IQR( ) Calculates the interquartile range
barplot( ) Plots a bar chart of non-numeric data
boxplot( ) Plots a boxplot of numeric data
hist( ) Plots a histogram of numeric data
plot( ) Plots various graphs, including a scatter plot
freq( ) Creates a frequency distribution table (provided by add-on packages such as descr rather than base R)
Table 13.15 List of Common R Statistical Commands
Graphing in R
There are many statistical applications in R, and many graphical representations are possible, such as bar graphs, histograms, time series plots, scatter plots, and others. The basic command to create a plot in R is the plot command, plot(x, y), where x is a vector containing the x-values of the data set and y is a vector containing the y-values of the data set.
The general format of the command is as follows:
``` >plot(x, y, main="text for title of graph", xlab="text for x-axis label", ylab="text for y-axis label")
```
For example, we are interested in creating a scatter plot to examine the correlation between the value of the S&P 500 and Nike stock prices. Assume we have the data shown in Table 13.13, collected over a one-year time period.
Note that data can be read into R from a text file or Excel file or from the clipboard by using various R commands. Assume the values of the S&P 500 have been loaded into the vector SP500 and the values of Nike stock prices have been loaded into the vector Nike. Then, to generate the scatter plot, we can use the following R command:
``` >plot(SP500, Nike, main="Scatter Plot of Nike Stock Price vs. S&P 500", xlab="S&P 500", ylab="Nike Stock Price")
```
As a result of these commands, R provides the scatter plot shown in Figure 13.10. This is the same data that was used to generate the scatter plot in Figure 13.8 in Excel.
Figure 13.10 Scatter Plot Generated by R for Nike Stock Price versus S&P 500
13.1 Measures of Center
Several measurements are used to provide the average of a data set, including mean, median, and mode. The terms mean and average are often used interchangeably. To calculate the mean for a set of numbers, add the numbers together and then divide the sum by the number of data values. The geometric mean redistributes not the sum of the values but their product: multiply all of the individual values together and then take the nth root of the product, where n is the number of data values, so that the total product remains the same. To calculate the median for a set of numbers, order the data from smallest to largest and identify the middle data value in the ordered data set.
13.2 Measures of Spread
The standard deviation and variance are measures of the spread of a data set. The standard deviation is small when the data values are all concentrated close to the mean, exhibiting little variation or spread. The standard deviation is larger when the data values are more spread out from the mean, exhibiting more variation. The formula used to calculate the standard deviation depends on whether the data represents a sample or a population, as the formulas for the sample standard deviation and the population standard deviation are slightly different.
13.3 Measures of Position
Several measures are used to indicate the position of a data value in a data set. One measure of position is the z-score for a particular measurement. The z-score indicates how many standard deviations a particular measurement is above or below the mean. Other measures of position include quartiles and percentiles. Quartiles are special percentiles. The first quartile, Q1, is the same as the 25th percentile, and the third quartile, Q3, is the same as the 75th percentile. The median, M, is called both the second quartile and the 50th percentile. To calculate quartiles and percentiles, the data must be ordered from smallest to largest. Quartiles divide ordered data into quarters. Percentiles divide ordered data into hundredths.
13.4 Statistical Distributions
A frequency distribution provides a method of organizing and summarizing a data set by tabulating how often each value in the data set occurs. Once a frequency distribution is generated, it can be used to create graphs to help facilitate an interpretation of the data set. The normal distribution has two parameters, or numerical descriptive measures: the mean, $\mu$, and the standard deviation, $\sigma$. The exponential distribution is often concerned with the amount of time until some specific event occurs.
13.5 Probability Distributions
A probability distribution is a mathematical function that assigns probabilities to various outcomes. In many financial situations, we are interested in determining the expected value of the return on a particular investment or the expected return on a portfolio of multiple investments. When analyzing distributions that follow a normal distribution, probabilities can be calculated by finding the area under the graph of the normal curve.
13.6 Data Visualization and Graphical Displays
Data visualization refers to the use of graphical displays to summarize a data set to help to interpret patterns and trends in the data. Univariate data refers to observations recorded for a single characteristic or attribute, such as salaries or blood pressure measurements. When graphing univariate data, we can choose from among several types of graphs. The type of graph to be used for a certain data set will depend on the nature of the data and the purpose of the graph. Examples of graphs for univariate data include line graphs, bar graphs, and histograms. Bivariate data refers to paired data where each value of one variable is paired with a value of a second variable. Examples of graphs for bivariate data include time series graphs and scatter plots.
13.7 The R Statistical Analysis Tool
R is an open-source statistical analysis tool that is widely used in the finance industry. It provides an integrated suite of functions for data analysis, graphing, and statistical programming. R is increasingly being used as a data analysis and statistical tool as it is an open-source language, and additional features are constantly being added by the user community. This tool can be used on many different computing platforms and can be downloaded at The R Project for Statistical Computing.
arithmetic mean
a measure of center of a data set, calculated by adding up the data values and dividing the sum by the number of data values
bar graph
a chart that presents categorical data in a summarized form based on frequency or relative frequency
bivariate data
paired data in which each value of one variable is paired with a value of a second variable
data visualization
the use of graphical displays, such as bar charts, histograms, and scatter plots, to help interpret patterns and trends in a data set
empirical rule
a rule that provides the percentages of data values falling within one, two, and three standard deviations from the mean for a bell-shaped (normal) distribution
expected value
a weighted average of the values of a variable where the weights are the associated probabilities
exponential distribution
a continuous probability distribution that is useful for calculating probabilities within the time between events
frequency distribution
a method of organizing and summarizing a data set that provides the frequency with which each value in the data set occurs
geometric mean
a measure of center of a data set, calculated by multiplying the data values and then raising the product to the exponent $\frac{1}{n}$, where n is the number of data values
histogram
a graphical display of continuous data showing class intervals on the horizontal axis and frequency or relative frequency on the vertical axis
interquartile range (IQR)
a number that indicates the spread of the middle half, or middle 50%, of the data; the difference between the third quartile (Q3) and the first quartile (Q1)
median
the middle value in an ordered data set
mode
the most frequently occurring data value in a data set
normal distribution
a bell-shaped distribution curve that is used to model many measurements, including IQ scores, salaries, heights, weights, blood pressures, etc.
outliers
data values that are significantly different from the other data values in a data set
percentiles
numbers that divide an ordered data set into hundredths; often used to indicate position of a data value in a data set
population data
data representing all the outcomes or measurements that are of interest
portfolio
a collection of financial investments, such as stocks, bonds, mutual funds, certificates of deposit, etc.
probability distribution
a mathematical function that assigns probabilities to various outcomes
quartiles
numbers that divide an ordered data set into quarters; the second quartile is the same as the median
sample data
data representing outcomes or measurements collected from a subset or part of a population
scatter plot (or scatter diagram)
a graphical display that shows the relationship between a dependent variable and an independent variable
standard deviation
a measure of the spread of a data set that indicates how far a typical data value is from the mean
time series graph
a graphical display used to show measurement data plotted versus time, where time is displayed on the horizontal axis
variance
the measure of the spread of data values calculated as the square of the standard deviation
weighted mean
a measure of center of a data set in which each data value has a corresponding weighting
x-axis
the horizontal axis in a rectangular coordinate system
y-axis
the vertical axis in a rectangular coordinate system
z-score (or z-value)
a measure of the position of a data value in the data set, calculated by subtracting the mean from the data value and then dividing the difference by the standard deviation
13.11: CFA Institute
This chapter supports some of the Learning Outcome Statements (LOS) in this CFA® Level I Study Session. Reference with permission of CFA Institute.
1.
A data set of salaries contains an outlier salary. The best measure of center to use for this data set is the ________.
1. mean
2. median
3. mode
4. standard deviation
2.
A portfolio includes shares of United Airlines stock that were purchased at different times and different prices. Which measure is best to determine the average cost of the shares of the stock?
1. mean
2. median
3. weighted mean
4. standard deviation
3.
Standard deviation is a measure of the ________.
1. center of a data set
2. position of a data value in a data set
3. area under a normal curve
4. spread of a distribution
4.
How are standard deviation and variance related?
1. The two measures are equal to one another.
2. Variance is the square root of the standard deviation.
3. Standard deviation is the square root of the variance.
4. The two squared measures are equal to one another.
5.
Which of the following is the best definition of a z-score?
1. the distance of a data value from the mean
2. the number of standard deviations that a data value is from the mean
3. the distance of a data value from the mean divided by the sample size
4. the number of quartiles that a data value is from the mean
6.
The results of a standardized test indicate that you are in the 85th percentile. What is the best interpretation of this result?
1. You scored in the top 85% of all students taking the test.
2. You scored in the top 15% of all students taking the test.
3. Your score on the test is 85 when measured on a scale from 0 to 100.
4. You scored in the bottom 15% of all students taking the test.
7.
The interquartile range is ________.
1. the middle 50% of a data set
2. the upper 50% of a data set
3. the lower 50% of a data set
4. equal to the median
8.
In a frequency distribution table, the sum of the relative frequencies must be equal to ________.
1. the sample size
2. 1, or 100%
3. zero
4. the standard deviation of the distribution
9.
A change in the standard deviation of a normal distribution will result in ________.
1. a change in the location of the peak of the curve
2. a change in the area under the curve
3. a change in the shape of the curve
4. a change that shifts the graph to the left or the right
10.
When calculating an expected value, ________.
1. the result should always be 1
2. the result should always be a positive value
3. the result should always be a negative value
4. the result can be a positive or negative value
11.
The area under a normal curve between a z-score of -2 and a z-score of +2 is ________.
1. 0.68
2. 0.95
3. 0.997
4. dependent on the mean and standard deviation
12.
A scatter plot is a visualization for ________.
1. univariate data only
2. bivariate data only
3. either univariate or bivariate data
4. test scores
13.
Which of the following is NOT a benefit of using the R statistical analysis tool?
1. Additional features are constantly being added by the user community.
2. It can be used on many computer platforms, including Mac, Windows, and Linux.
3. It is free to download.
4. Users pay an annual subscription fee.
13.13: Review Questions
1.
Explain the considerations that determine whether the mean or the median is the best measure of central tendency for a data set.
2.
Explain the difference between a mean and a weighted mean.
3.
Explain why the standard deviation of a data set cannot be a negative value.
4.
Explain what a negative z-score, a positive z-score, and a z-score of zero imply.
5.
Explain how quartiles can be used to detect outliers in a data set.
13.14: Problems
1.
You purchased 1,000 shares of a stock for \$12 per share. Then, two months later, you purchased an additional 500 shares of the same stock at \$9 per share. Calculate the weighted mean of the purchase price for the total of 1,500 shares.
2.
You score a 60 on a biology test. The mean test grade is 70, and the standard deviation is 5. Calculate and interpret your corresponding z-score.
3.
You score a 60 on a biology test. The mean test grade is 70, and the standard deviation is 5. What percentile does your grade correspond to?
4.
A fast food restaurant has measured service time for customers waiting in line, and the service time follows an exponential distribution with a mean waiting time of 1.9 minutes. The restaurant has a guarantee that if customers wait in line for more than 5 minutes, their meal is free. What is the probability that a customer will receive a free meal?
5.
The total value of your portfolio consists of approximately 65% stock assets, 25% bonds, and 10% cash equivalents. Historical returns have shown that stocks provide a return of 12%, bonds provide a return of 3.5%, and cash savings provide a return of 1.5%. What is the expected value of the return on this portfolio?
6.
The distribution of the average annual return of the S&P 500 over a 50-year time period follows a normal distribution with a mean rate of return of 10.5% and a standard deviation of 14.3%. What is the probability that an average annual return will fall between -3.8% and 24.8%?
7.
Write a short R program to find the expected return for the data set in the table below.
Historical Return on United Airlines Stock Associated Probability
12% 15%
5% 35%
2% 25%
-5% 14%
-10% 11%
Table 13.16
13.15: Video Activity
Normal Distribution Stock Return Calculations
1.
Assume the return on stocks follows a normal distribution. Is it more likely that a stock will return between -1 and +1 standard deviations from the mean or between -2 and +2 standard deviations from the mean? Why?
2.
Would an investor be likely to prefer a stock that has a smaller standard deviation for annual stock returns or one with a larger standard deviation for annual stock returns? Why?
Portfolio Weights
3.
What are the reasons for calculating portfolio weights? What useful information does this provide to the investor?
4.
What are the advantages and disadvantages of the equal weighting approach and the market cap weighting approach for portfolio allocation strategy? | textbooks/biz/Finance/Principles_of_Finance_(OpenStax)/13%3A_Statistical_Analysis_in_Finance/13.12%3A_Multiple_Choice.txt |
Figure 14.1 Regression analysis is used in financial decision-making. (credit: modification of “Stock exchange” by Jack Sem/flickr, CC BY 2.0)
Correlation and regression analysis are used extensively in finance applications. Correlation analysis allows the determination of a statistical relationship between two numeric quantities. Regression analysis can be used to predict one quantity based on a second quantity, assuming there is a significant correlation between the two quantities. For example, in finance, we use regression analysis to calculate the beta coefficient of a stock, which represents the volatility of the stock versus overall market volatility, with volatility being a measure of risk.
A business may want to establish a correlation between the amount the company spent on advertising versus its recorded sales. If a strong enough correlation is established, then the business manager can predict sales based on the amount spent on advertising for a given time period.
Finance professionals often use correlation analysis to predict future trends and mitigate risk in a stock portfolio. For example, if two investments are strongly correlated, an investor might not want to have both investments in a certain portfolio since the two investments would tend to move in the same directions during up markets or down markets. To diversify a portfolio, an investor might seek investments that are not strongly correlated with one another.
Regression analysis can be used to establish a mathematical equation that relates a dependent variable (such as sales) to an independent variable (such as advertising expenditure). In this discussion, the focus will be on analyzing the relationship between one dependent variable and one independent variable, where the relationship can be modeled using a linear equation. This type of analysis is called linear regression.
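The beta calculation mentioned above can be illustrated with a short Python sketch (not part of the original text; the return figures are invented for illustration). Beta is the least-squares slope of a regression of stock returns on market returns, which equals the covariance of the two return series divided by the variance of the market returns.

```python
# Illustrative sketch: beta as the least-squares slope of stock returns
# regressed on market returns. The return values below are hypothetical.

def beta(market, stock):
    """Least-squares slope of stock returns vs. market returns."""
    n = len(market)
    mean_m = sum(market) / n
    mean_s = sum(stock) / n
    cov = sum((m - mean_m) * (s - mean_s) for m, s in zip(market, stock))
    var = sum((m - mean_m) ** 2 for m in market)
    return cov / var

market_returns = [0.01, 0.02, -0.01, 0.03]         # hypothetical monthly returns
stock_returns = [1.5 * m for m in market_returns]  # stock moves 1.5x the market

print(beta(market_returns, stock_returns))  # ≈ 1.5
```

Because the hypothetical stock moves exactly 1.5 times the market in every period, the regression slope (beta) comes out to 1.5, indicating a stock more volatile than the overall market.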
14.02: Correlation Analysis
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Calculate a correlation coefficient.
• Interpret a correlation coefficient.
• Test for the significance of a correlation coefficient.
Calculate a Correlation Coefficient
In correlation analysis, we study the relationship between bivariate data, which is data collected on two variables where the data values are paired with one another.
Correlation is the measure of association between two numeric variables. For example, we may be interested to know if there is a correlation between bond prices and interest rates or between the age of a car and the value of the car. To investigate the correlation between two numeric quantities, the first step is to create a scatter plot that will graph the (x, y) ordered pairs. The independent, or explanatory, quantity is labeled as the x-variable, and the dependent, or response, quantity is labeled as the y-variable.
For example, we may be interested to know if the price of Nike stock is correlated with the value of the S&P 500 (Standard & Poor’s 500 stock market index). To investigate this, monthly data can be collected for Nike stock prices and value of the S&P 500 for a period of time, and a scatter plot can be created and examined. A scatter plot, or scatter diagram, is a graphical display intended to show the relationship between two variables. The setup of the scatter plot is that one variable is plotted on the horizontal axis and the other variable is plotted on the vertical axis. Each pair of data values is considered as an (x, y) point, and the various points are plotted on the diagram. A visual inspection of the plot is then made to detect any patterns or trends on the scatter diagram. Table 14.1 shows the Nike stock price and the value of the S&P 500 over a one-year time period.
To assess linear correlation, the graphical trend of the data points is examined on the scatter plot to determine if a straight-line pattern exists. If a linear pattern exists, the correlation may indicate either a positive or a negative correlation. A positive correlation indicates that as the independent variable increases, the dependent variable tends to increase as well, or, as the independent variable decreases, the dependent variable tends to decrease (the two quantities move in the same direction). A negative correlation indicates that as the independent variable increases, the dependent variable decreases, or, as the independent variable decreases, the dependent variable increases (the two quantities move in opposite directions). If there is no relationship or association between the two quantities, where one quantity changing does not affect the other quantity, we conclude that there is no correlation between the two variables.
Date S&P 500 Nike Stock Price
4/1/2020 2,912.43 87.18
5/1/2020 3,044.31 98.58
6/1/2020 3,100.29 98.05
7/1/2020 3,271.12 97.61
8/1/2020 3,500.31 111.89
9/1/2020 3,363.00 125.54
10/1/2020 3,269.96 120.08
11/1/2020 3,621.63 134.70
12/1/2020 3,756.07 141.47
1/1/2021 3,714.24 133.59
2/1/2021 3,811.15 134.78
3/1/2021 3,943.34 140.45
3/12/2021 3,943.34 140.45
Table 14.1 Nike Stock Price (\$) and Value of S&P 500 over a One-Year Time Period (source: Yahoo! Finance)
From the scatter plot in the Nike stock versus S&P 500 example (see Figure 14.2), we note that the trend reflects a positive correlation in that as the value of the S&P 500 increases, the price of Nike stock tends to increase as well.
Figure 14.2 Scatter Plot of Nike Stock Price (\$) and Value of S&P 500 (data source: Yahoo! Finance)
When inspecting a scatter plot, it may be difficult to assess a correlation based on a visual inspection of the graph alone. A more precise assessment of the correlation between the two quantities can be obtained by calculating the numeric correlation coefficient (referred to using the symbol r).
The correlation coefficient, which was developed by statistician Karl Pearson in the early 1900s, is a measure of the strength and direction of the correlation between the independent variable x and the dependent variable y.
The formula for r is shown below; however, technology, such as Excel or the statistical analysis program R, is typically used to calculate the correlation coefficient.
$r=\frac{n\sum xy-\left(\sum x\right)\left(\sum y\right)}{\sqrt{\left[n\sum x^{2}-\left(\sum x\right)^{2}\right]\left[n\sum y^{2}-\left(\sum y\right)^{2}\right]}}$
14.1
where n refers to the number of data pairs and the symbol $\sum x$ indicates to sum the x-values.
Table 14.2 provides a step-by-step procedure on how to calculate the correlation coefficient r.
Step Representation in Symbols
1. Calculate the sum of the x-values. $\sum x$
2. Calculate the sum of the y-values. $\sum y$
3. Multiply each x-value by the corresponding y-value and calculate the sum of these xy products. $\sum xy$
4. Square each x-value and then calculate the sum of these squared values. $\sum x^{2}$
5. Square each y-value and then calculate the sum of these squared values. $\sum y^{2}$
6. Determine the value of n, which is the number of data pairs. n
7. Use these results to then substitute into the formula for the correlation coefficient. $r=\frac{n\sum xy-\left(\sum x\right)\left(\sum y\right)}{\sqrt{\left[n\sum x^{2}-\left(\sum x\right)^{2}\right]\left[n\sum y^{2}-\left(\sum y\right)^{2}\right]}}$
Table 14.2 Steps for Calculating the Correlation Coefficient
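The steps in Table 14.2 can be sketched in a short Python program (a sketch, not part of the original text), applied here to the S&P 500 and Nike stock price data from Table 14.1. The result agrees with the value reported by Excel later in this section.

```python
from math import sqrt

# Step-by-step correlation coefficient (Table 14.2), applied to the
# S&P 500 (x) and Nike stock price (y) data from Table 14.1.
x = [2912.43, 3044.31, 3100.29, 3271.12, 3500.31, 3363.00, 3269.96,
     3621.63, 3756.07, 3714.24, 3811.15, 3943.34, 3943.34]
y = [87.18, 98.58, 98.05, 97.61, 111.89, 125.54, 120.08,
     134.70, 141.47, 133.59, 134.78, 140.45, 140.45]

n = len(x)                                  # Step 6: number of data pairs
sum_x = sum(x)                              # Step 1: sum of x-values
sum_y = sum(y)                              # Step 2: sum of y-values
sum_xy = sum(a * b for a, b in zip(x, y))   # Step 3: sum of xy products
sum_x2 = sum(a ** 2 for a in x)             # Step 4: sum of squared x-values
sum_y2 = sum(b ** 2 for b in y)             # Step 5: sum of squared y-values

# Step 7: substitute into the formula for r
r = (n * sum_xy - sum_x * sum_y) / sqrt(
    (n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))

print(round(r, 3))  # ≈ 0.928, a strong positive correlation
```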
Note that since r is calculated using sample data, r is considered a sample statistic used to measure the strength of the correlation for the two population variables. Sample data indicates data based on a subset of the entire population.
Given the complexity of this calculation, Excel or other software is typically used to calculate the correlation coefficient.
The Excel command to calculate the correlation coefficient uses the following format:
`=CORREL(A1:A10, B1:B10)`
where A1:A10 are the cells containing the x-values and B1:B10 are the cells containing the y-values.
Download the spreadsheet file containing key Chapter 14 Excel exhibits.
Interpret a Correlation Coefficient
Once the value of r is calculated, this measurement provides two indicators for the correlation:
1. the strength of the correlation based on the value of r
2. the direction of the correlation based on the sign of r
The value of r gives us this information:
• The value of r is always between $-1$ and $+1$: $-1 \le r \le 1$.
• The size of the correlation r indicates the strength of the linear relationship between the two variables. Values of r close to $-1$ or to $+1$ indicate a stronger linear relationship.
• If $r=0$, there is no linear relationship between the two variables (no linear correlation).
• If $r=1$, there is perfect positive correlation.
• If $r=-1$, there is perfect negative correlation. In both of these cases, all the original data points lie on a straight line.
The sign of r gives us this information:
• A positive value of r means that when x increases, y tends to increase, and when x decreases, y tends to decrease (positive correlation).
• A negative value of r means that when x increases, y tends to decrease, and when x decreases, y tends to increase (negative correlation).
Link to Learning
Correlation in Finance Applications
This video on correlation concepts discusses them with a specific focus on finance applications.
The Excel command used to find the value of the correlation coefficient for the Nike stock versus S&P 500 example (refer back to Table 14.1) is
`=CORREL(B2:B14,C2:C14)`
In this example, the value of $r$ is calculated by Excel to be $r=0.928$.
Since this is a positive value close to 1, we conclude that the relationship between Nike stock and the value of the S&P 500 over this time period represents a strong, positive correlation.
The correlation coefficient r can also be determined using the statistical capability on the financial calculator:
• Step 1 is to enter the data in the calculator (using the [DATA] function that is located above the 7 key).
• Step 2 is to access the statistical results provided by the calculator (using the [STAT] function that is located above the 8 key) and scroll to the correlation coefficient results.
Follow the steps in Table 14.3 for calculating the correlation data for the data set of Nike stock price and value of the S&P 500 shown previously.
Step Description Enter Display
1 Enter [DATA] entry mode 2ND [DATA] X01 0.00
2 Clear any previous data 2ND [CLR WORK] X01 0.00
3 Enter first x-value of 2912.43 2912.43 ENTER X01 = 2,912.43
4 Move to next data entry Y01 = 1.00
5 Enter first y-value of 87.18 87.18 ENTER Y01 = 87.18
6 Move to next data entry X02 0.00
7 Enter second x-value of 3044.31 3044.31 ENTER X02 = 3,044.31
8 Move to next data entry Y02 = 1.00
9 Enter second y-value of 98.58 98.58 ENTER Y02 = 98.58
10 Move to next data entry X03 0.00
11 Continue to enter remaining data values
12 Enter [STAT] mode 2ND [STAT]
13 Press [SET] until LIN appears 2ND [SET] LIN
14 Move to 1st statistical result $n=$ 13.00
15 Move to next statistical result $\bar{x}=$ 3,480.86
16 Continue to scroll down until the value of r is displayed $r=$ 0.93
Table 14.3 Calculator Steps for Finding the Relationship between Nike Stock Price and Value of S&P 500¹
From the statistical results shown on the calculator display, the correlation coefficient r is 0.93, which indicates that the relationship between Nike stock and the value of the S&P 500 over this time period represents a strong, positive correlation.
Note: A strong correlation does not suggest that x causes y or y causes x. We must remember that correlation does not imply causation.
Test a Correlation Coefficient for Significance
The correlation coefficient, r, tells us about the strength and direction of the linear relationship between x and y. The sample data are used to compute r, the correlation coefficient for the sample. If we had data for the entire population (that is, all measurements of interest), we could find the population correlation coefficient, which is labeled as the Greek letter ρ (pronounced “rho”). But because we have only sample data, we cannot calculate the population correlation coefficient. The sample correlation coefficient, r, is our estimate of the unknown population correlation coefficient.
• ρ = population correlation coefficient (unknown)
• r = sample correlation coefficient (known; calculated from sample data)
An important step in the correlation analysis is to determine if the correlation is significant. By this, we are asking if the correlation is strong enough to allow meaningful predictions for y based on values of x. One method to test the significance of the correlation is to employ a hypothesis test. The hypothesis test lets us decide whether the value of the population correlation coefficient ρ is close to zero or significantly different from zero. We decide this based on the sample correlation coefficient r and the sample size n.
If the test concludes that the correlation coefficient is significantly different from zero, we say that the correlation coefficient is significant.
• Conclusion: There is sufficient evidence to conclude that there is a significant linear relationship between x and y variables because the correlation coefficient is significantly different from zero.
• What the conclusion means: There is a significant linear relationship between the x and y variables.
If the test concludes that the correlation coefficient is not significantly different from zero (it is close to zero), we say that the correlation coefficient is not significant.
A hypothesis test can be performed to test if the correlation is significant. A hypothesis test is a statistical method that uses sample data to test a claim regarding the value of a population parameter. In this case, the hypothesis test will be used to test the claim that the population correlation coefficient ρ is equal to zero.
Use these hypotheses when performing the hypothesis test:
• Null hypothesis: $H_0: \rho=0$
• Alternate hypothesis: $H_a: \rho\ne0$
The hypotheses can be stated in words as follows:
• Null hypothesis $H_0$: The population correlation coefficient is not significantly different from zero. There is not a significant linear relationship (correlation) between x and y in the population.
• Alternate hypothesis $H_a$: The population correlation coefficient is significantly different from zero. There is a significant linear relationship (correlation) between x and y in the population.
A quick shorthand way to test correlations is the relationship between the sample size and the correlation. If $r \ge \frac{2}{\sqrt{n}}$, then this implies that the correlation between the two variables demonstrates that a linear relationship exists and is statistically significant at approximately the 0.05 level of significance. As the formula indicates, there is an inverse relationship between the sample size and the required correlation for significance of a linear relationship. With only 10 observations, the required correlation for significance is 0.6325; for 30 observations, the required correlation for significance decreases to 0.3651; and at 100 observations, the required level is only 0.2000.
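The shorthand rule can be checked with a few lines of Python (a sketch, not part of the original text):

```python
from math import sqrt

# Approximate minimum |r| for significance at roughly the 0.05 level:
# the correlation is judged significant when |r| >= 2 / sqrt(n).
def required_r(n):
    return 2 / sqrt(n)

for n in (10, 30, 100):
    print(n, round(required_r(n), 4))
# 10 0.6325
# 30 0.3651
# 100 0.2
```

Applied to the Think It Through example later in this section, a sample of n = 10 requires a correlation of at least 0.6325, so a coefficient of −0.68 (|r| = 0.68) would be judged significant.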
NOTE:
• If r is significant and the scatter plot shows a linear trend, the line can be used to predict the value of y for values of x that are within the domain of observed x-values.
• If r is not significant OR if the scatter plot does not show a linear trend, the line should not be used for prediction.
• If r is significant and the scatter plot shows a linear trend, the line may not be appropriate or reliable for prediction outside the domain of observed x-values in the data.
Think It Through
Determining If a Correlation Is Significant
Suppose that the chief financial officer (CFO) of a corporation is investigating the correlation between stock prices and unemployment rate over a period of 10 years and finds the correlation coefficient to be -0.68. There are 10 (x, y) data points in the data set. Should the CFO conclude that the correlation is significant for the relationship between stock prices and unemployment rate based on a level of significance of 0.05?
Correlations may be helpful in visualizing the data, but they are not appropriately used to explain a relationship between two variables. Perhaps no single statistic is more misused than the correlation coefficient. Citing correlations between health conditions and everything from place of residence to eye color has the effect of implying a cause-and-effect relationship. This simply cannot be accomplished with a correlation coefficient. The correlation coefficient is, of course, innocent of this misinterpretation. It is the duty of analysts to use a statistic that is designed to test for cause-and-effect relationships and to report only those results, if they intend to make such a claim. The problem is that passing this more rigorous test is difficult, so lazy or unscrupulous researchers fall back on correlations when they cannot make their case legitimately.
Footnotes
• 1The specific financial calculator in these examples is the Texas Instruments BA II Plus™ Professional model, but you can use other financial calculators for these types of calculations.
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Analyze a regression using the method of least squares and residuals.
• Test the assumptions for linear regression.
Method of Least Squares and Residuals
Once the correlation coefficient has been calculated and a determination has been made that the correlation is significant, typically a regression model is then developed. In this discussion we will focus on linear regression, where a straight line is used to model the relationship between the two variables. Once a straight-line model is developed, this model can then be used to predict the value of the dependent variable for a specific value of the independent variable.
Recall from algebra that the equation of a straight line is given by
$y=mx+b$
14.2
where m is the slope of the line and b is the y-intercept of the line.
The slope measures the steepness of the line, and the y-intercept is that point on the y-axis where the graph crosses, or intercepts, the y-axis.
In linear regression analysis, the equation of the straight line is written in a slightly different way using the model
$\hat{y}=a+bx$
14.3
In this format, b is the slope of the line, and a is the y-intercept. The notation $\hat{y}$ is called y-hat and is used to indicate a predicted value of the dependent variable y for a certain value of the independent variable x.
If a line extends uphill from left to right, the slope is a positive value, and if the line extends downhill from left to right, the slope is a negative value. Refer to Figure 14.3.
Figure 14.3 Three Possible Graphs of $\hat{y}=a+bx$ (a) If $b>0$, the line slopes upward to the right. (b) If $b=0$, the line is horizontal. (c) If $b<0$, the line slopes downward to the right.
When generating the equation of a line in algebra using $y=mx+b$, two (x, y) points were required to generate the equation. However, in regression analysis, all (x, y) points in the data set will be utilized to develop the linear regression model.
The first step in any regression analysis is to create the scatter plot. Then proceed to calculate the correlation coefficient r, and check this value for significance. If we think that the points show a linear relationship, we would like to draw a line on the scatter plot. This line can be calculated through a process called linear regression. However, we only calculate a regression line if one of the variables helps to explain or predict the other variable. If x is the independent variable and y the dependent variable, then we can use a regression line to predict y for a given value of x.
As an example of a regression equation, assume that a correlation exists between the monthly amount spent on advertising and the monthly revenue for a Fortune 500 company. After collecting (x, y) data for a certain time period, the company determines the regression equation is of the form
$\hat{y}=9,376.7+61.8x$
14.4
where x represents the monthly amount spent on advertising (in thousands of dollars) and $\hat{y}$ represents the monthly revenues for the company (in thousands of dollars).
A scatter plot of the (x, y) data is shown in Figure 14.4.
Figure 14.4 Scatter Plot of Revenue versus Advertising for a Fortune 500 Company (\$000s)
The Fortune 500 company would like to predict the monthly revenue if its executives decide to spend \$150,000 in advertising next month. To determine the estimate of monthly revenue, let $x=150$ in the regression equation and calculate a corresponding value for $\hat{y}$:
$\hat{y}=9,376.7+61.8x=9,376.7+61.8(150)=18,646.7$
14.5
This predicted value of y indicates that the anticipated revenue would be \$18,646,700, given the advertising spend of \$150,000.
Notice that from past data, there may have been a month where the company actually did spend \$150,000 on advertising, and thus the company may have an actual result for the monthly revenue. This actual, or observed, amount can be compared to the prediction from the linear regression model to calculate a residual.
A residual is the difference between an observed y-value and the predicted y-value obtained from the linear regression equation. As an example, assume that in a previous month, the actual monthly revenue for an advertising spend of \$150,000 was \$19,200,000, and thus $y=19,200$. The residual for this data point can be calculated as follows:
$\text{Residual}=\text{observed }y\text{-value}-\text{predicted }y\text{-value}=y-\hat{y}=19,200-18,646.7=553.3$
14.6
Notice that residuals can be positive, negative, or zero. If the observed y-value exactly matches the predicted y-value, then the residual will be zero. If the observed y-value is greater than the predicted y-value, then the residual will be a positive value. If the observed y-value is less than the predicted y-value, then the residual will be a negative value.
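The prediction and residual calculation above can be sketched in Python (not part of the original text), using the intercept and slope from the regression equation in the text:

```python
# Predicted revenue from the regression model y-hat = 9,376.7 + 61.8x,
# compared with an observed revenue to get the residual (values in $000s).
a, b = 9376.7, 61.8        # intercept and slope from the text
x = 150                    # advertising spend ($000s)
observed_y = 19200         # actual revenue for that month ($000s)

predicted_y = a + b * x    # 9,376.7 + 61.8(150) = 18,646.7
residual = observed_y - predicted_y

print(predicted_y, residual)  # 18646.7 and 553.3 (to floating-point rounding)
```

Since the observed revenue exceeds the predicted revenue, the residual is positive, consistent with the rule stated above.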
When formulating the linear regression line of best fit to the points on the scatter plot, the mathematical analysis generates a linear equation where the sum of the squared residuals is minimized. This analysis is referred to as the method of least squares. The result is that the analysis generates a linear equation that is the “best fit” to the points on the scatter plot, in the sense that the line minimizes the differences between the predicted values and observed values for y.
Think It Through
Calculating a Residual
Suppose that the chief financial officer of a corporation has created a linear model for the relationship between the company stock and interest rates. When interest rates are at 5%, the company stock has a value of \$94. Using the linear model, when interest rates are at 5%, the model predicts the value of the company stock to be \$99. Calculate the residual for this data point.
The goal in the regression analysis is to determine the coefficients a and b in the following regression equation:
$\hat{y}=a+bx$
14.8
Once the (x, y) has been collected, the slope (b) and y-intercept (a) can be calculated using the following formulas:
$b=\frac{n\sum xy-\left(\sum x\right)\left(\sum y\right)}{n\sum x^{2}-\left(\sum x\right)^{2}},\quad a=\frac{\sum y}{n}-b\,\frac{\sum x}{n}$
14.9
where n refers to the number of data pairs and $\sum x$ indicates the sum of the x-values.
Notice that the formula for the y-intercept requires the use of the slope result (b), and thus the slope should be calculated first and the y-intercept should be calculated second.
When making predictions for y, it is always important to plot a scatter diagram first. If the scatter plot indicates that there is a linear relationship between the variables, then it is reasonable to use a best-fit line to make predictions for y, given x within the domain of x-values in the sample data, but not necessarily for x-values outside that domain.
Note: Computer spreadsheets, statistical software, and many calculators can quickly calculate the best-fit line and create the graphs. The calculations tend to be tedious if done by hand.
Assumptions for Linear Regression
Testing the significance of the correlation coefficient requires that certain assumptions about the data are satisfied. The premise of this test is that the data are a sample of observed points taken from a larger population. We have not examined the entire population because it is not possible or feasible to do so. We are examining the sample to draw a conclusion about whether the linear relationship that we see between x and y in the sample data provides strong enough evidence that we can conclude that there is a linear relationship between x and y in the population.
The regression line equation that we calculate from the sample data gives the best-fit line for our particular sample. We want to use this best-fit line for the sample as an estimate of the best-fit line for the population (Figure 14.5). Examining the scatter plot and testing the significance of the correlation coefficient helps us determine if it is appropriate to do this.
These are the assumptions underlying the test of significance:
1. There is a linear relationship in the population that models the average value of y for varying values of x. In other words, the expected value of y for each particular value lies on a straight line in the population. (We do not know the equation for the line for the population. Our regression line from the sample is our best estimate of this line in the population.)
2. The y-values for any particular x-value are normally distributed about the line. This implies that there are more y-values scattered closer to the line than are scattered farther away. Assumption (1) implies that these normal distributions are centered on the line: the means of these normal distributions of y-values lie on the line.
3. The standard deviations of the population y-values about the line are equal for each value of x. In other words, each of these normal distributions of y-values has the same shape and spread about the line.
4. The residual errors are mutually independent (no pattern).
5. The data are produced from a well-designed, random sample or randomized experiment.
Figure 14.5 Best-Fit Line The y-values for each x-value are normally distributed about the line with the same standard deviation. For each x-value, the mean of the y-values lies on the regression line. More y-values lie near the line than are scattered further away from the line. | textbooks/biz/Finance/Principles_of_Finance_(OpenStax)/14%3A_Regression_Analysis_in_Finance/14.03%3A_Linear_Regression_Analysis.txt |
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Calculate the slope and y-intercept for a linear regression model using technology.
• Interpret and apply the slope and y-intercepts.
Calculate the Slope and y-Intercept for a Linear Regression Model Using Technology
Once a correlation has been deemed as significant, a best-fit linear regression model is developed. The goal in the regression analysis is to determine the coefficients a and b in the following regression equation:
$\hat{y}=a+bx$
14.10
The slope (b) and y-intercept (a) can be calculated using the following formulas:
$b=\frac{n\sum xy-\left(\sum x\right)\left(\sum y\right)}{n\sum x^{2}-\left(\sum x\right)^{2}},\quad a=\frac{\sum y}{n}-b\,\frac{\sum x}{n}$
14.11
These formulas can be quite cumbersome, especially for a significant number of data pairs, and thus technology is often used (such as Excel, a calculator, R statistical software, etc.).
Using Excel: To calculate the slope and y-intercept of the linear model, start by entering the (x, y) data in two columns in Excel. Then the Excel commands =SLOPE and =INTERCEPT can be used to calculate the slope and intercept, respectively.
The following data set will be used as an example: the monthly amount spent on advertising and the monthly revenue for a Fortune 500 company for 12 months (data is shown in Table 14.4).
Month Advertising Expenditure Revenue
Jan 49 12,210
Feb 145 17,590
Mar 57 13,215
Apr 153 19,200
May 92 14,600
Jun 83 14,100
Jul 117 17,100
Aug 142 18,400
Sep 69 14,100
Oct 106 15,500
Nov 109 16,300
Dec 121 17,020
Table 14.4 Revenue versus Advertising for Fortune 500 Company (\$000s)
To calculate the slope of the regression model, use the Excel command
`=SLOPE(y-data range, x-data range)`
It’s important to note that this Excel command expects that the y-data range is entered first and the x-data range is entered second. Since revenue depends on amount spent on advertising, revenue is considered the y-variable and amount spent on advertising is considered the x-variable. Notice the y-data is contained in cells C2 through C13 and the x-data is contained in cells B2 through B13. Thus the Excel command for slope would be entered as
`=SLOPE(C2:C13, B2:B13)`
In the same way, the Excel command to calculate the y-intercept of the regression model is
`=INTERCEPT(y-data range, x-data range)`
For the data set shown in the above table, the Excel command would be
`=INTERCEPT(C2:C13, B2:B13)`
The results are shown in Figure 14.6, where
$\text{slope } b=61.8,\quad \text{intercept } a=9,376.7$
14.12
Figure 14.6 Revenue versus Advertising for Fortune 500 Company (\$000s) Showing Slope and y-Intercept Calculation in Excel
Based on this, the regression equation can be written as
$\hat{y}=a+bx=9,376.7+61.8x$
14.13
where x represents the amount spent on advertising (in thousands of dollars) and y represents the amount of revenue (in thousands of dollars).
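As a cross-check on the Excel output, the least-squares formulas can be applied directly. The sketch below (Python, not part of the original text) uses the advertising and revenue data from Table 14.4 and reproduces the slope and intercept reported above.

```python
# Least-squares slope (b) and y-intercept (a) for the Table 14.4 data:
# x = monthly advertising expenditure ($000s), y = monthly revenue ($000s).
x = [49, 145, 57, 153, 92, 83, 117, 142, 69, 106, 109, 121]
y = [12210, 17590, 13215, 19200, 14600, 14100, 17100, 18400,
     14100, 15500, 16300, 17020]

n = len(x)
sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_x2 = sum(xi ** 2 for xi in x)

b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)  # slope
a = sum_y / n - b * sum_x / n                                 # y-intercept

print(round(b, 1), round(a, 1))  # 61.8 9376.7, matching =SLOPE and =INTERCEPT
```

Note that the slope must be computed first, since the y-intercept formula uses the slope result, exactly as the text points out.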
Using a Financial Calculator
The financial calculator provides the slope and y-intercept for the linear regression model once the (x, y) data is inputted into the calculator.
Follow the steps in Table 14.5 for calculating the slope and y-intercept for the data set of amounts spent on advertising and revenue shown previously.
Step Description Enter Display
1 Enter [DATA] entry mode 2ND [DATA] X01 0.00
2 Clear any previous data 2ND [CLR WORK] X01 0.00
3 Enter first x-value of 49 49 ENTER X01 = 49.00
4 Move to next data entry Y01 = 1.00
5 Enter first y-value of 12210 12210 ENTER Y01 = 12,210.00
6 Move to next data entry X02 0.00
7 Enter second x-value of 145 145 ENTER X02 = 145.00
8 Move to next data entry Y02 = 1.00
9 Enter second y-value of 17590 17590 ENTER Y02 = 17,590.00
10 Move to next data entry X03 0.00
11 Continue to enter remaining data values
12 Enter [STAT] mode 2ND [STAT]
13 Press [SET] until LIN appears 2ND [SET] LIN
14 Move to 1st statistical result $n=$ 12.00
15 Move to next statistical result $\bar{x}=$ 103.58
16 Continue to scroll down until the value of a is displayed $a=$ 9,376.70
17 Continue to scroll down until the value of b is displayed $b=$ 61.80
Table 14.5 Calculator Steps for the Slope and y-Intercept
From the statistical results shown on the calculator display, the slope b is 61.8 and the y-intercept a is 9,376.7.
Based on this, the regression equation can be written as
$\hat{y} = a + bx = 9,376.7 + 61.8x$
14.14
Interpret and Apply the Slope and y-Intercept
The slope of the line, b, describes how changes in the variables are related. It is important to interpret the slope of the line in the context of the situation represented by the data. You should be able to write a sentence interpreting the slope in plain English.
Interpretation of the Slope
The slope of the best-fit line tells us how the dependent variable (y) changes for every one unit increase in the independent (x) variable, on average.
In the previous example, the linear regression model for the monthly amount spent on advertising and the monthly revenue for a Fortune 500 company for 12 months was generated as follows:
$\hat{y} = a + bx = 9,376.7 + 61.8x$
14.15
Since the slope was determined to be 61.8, the company can interpret this to mean that every \$1,000 spent on advertising will, on average, result in an increase in revenue of \$61,800.
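This interpretation can be confirmed numerically: moving one unit along x always changes the predicted y by exactly the slope, regardless of the starting point. A small Python check, using the fitted coefficients from the text (Python is used here only for illustration):

```python
# Fitted model (in $000s): y_hat = 9,376.7 + 61.8x
a, b = 9376.7, 61.8

def predict(x):
    return a + b * x

# A one-unit increase in x (an extra $1,000 of advertising) raises the
# predicted revenue by the slope b -- 61.8 thousand dollars, i.e. $61,800 --
# no matter where on the line we start.
for x in (49, 80, 153):
    print(round(predict(x + 1) - predict(x), 1))  # 61.8 each time
```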
The intercept of the regression equation is the corresponding y-value when $x = 0$.
Interpretation of the Intercept
The intercept of the best-fit line tells us the expected mean value of y in the case where the x-variable is equal to zero.
However, in many scenarios it may not make sense to have the x-variable equal zero, and in these cases, the intercept does not have any meaning in the context of the problem. In other examples, the x-value of zero is outside the range of the x-data that was collected. In this case, we should not assign any interpretation to the y-intercept.
In the previous example, the range of data collected for the x-variable was from \$49 to \$153 spent per month on advertising. Since this interval does not include an x-value of zero, we would not provide any interpretation for the intercept.
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Calculate the regression model for a single independent variable as applied to financial forecasting.
• Extract measures of slope and intercept from regression analysis in financial applications.
Regression Model for a Single Independent Variable
Regression analysis is used extensively in finance-related applications. Many typical applications involve determining if there is a correlation between various stock market indices such as the S&P 500, the Dow Jones Industrial Average (DJIA), and the Russell 2000 index.
As an example, suppose we would like to determine if there is a correlation between the Russell 2000 index and the DJIA. Does the value of the Russell 2000 index depend on the value of the DJIA? Is it possible to predict the value of the Russell 2000 index for a certain value of the DJIA? We can explore these questions using regression analysis.
Table 14.6 shows a summary of monthly closing prices of the DJIA and the Russell 2000 for a 12-month time period. We consider the DJIA to be the independent variable and the Russell 2000 index to be the dependent variable.
Monthly Close DJIA Russell 2000
1-Apr-21 34,200.67 2,262.67
1-Mar-21 32,981.55 2,220.52
1-Feb-21 30,932.37 2,201.05
1-Jan-21 29,982.62 2,073.64
1-Dec-20 30,606.48 1,974.86
1-Nov-20 29,638.64 1,819.82
1-Oct-20 26,501.60 1,538.48
1-Sep-20 27,781.70 1,507.69
1-Aug-20 28,430.05 1,561.88
1-Jul-20 26,428.32 1,480.43
1-Jun-20 25,812.88 1,441.37
1-May-20 25,383.11 1,394.04
Table 14.6 Monthly Closing Prices of the DJIA and the Russell 2000 for a 12-Month Time Period (source: Yahoo! Finance)
The first step is to create a scatter plot to determine if the data points appear to follow a linear pattern. The scatter plot is shown in Figure 14.7. The scatter plot clearly shows a linear pattern; the next step is to calculate the correlation coefficient and determine if the correlation is significant.
• Using the Excel command =CORREL, the correlation coefficient is calculated to be 0.947. This value of the correlation coefficient is significant using the test for significance referenced earlier in Correlation Analysis.
• Using the Excel commands =SLOPE and =INTERCEPT, the values of the slope and y-intercept are calculated as 0.11 and $-1,496.34$, respectively, when rounded to two decimal places.
The Excel output is shown below:
`=CORREL(C3:C14,B3:B14): 0.947`
`=SLOPE(C3:C14,B3:B14): 0.113`
`=INTERCEPT(C3:C14,B3:B14): -1,496.340`
Figure 14.7 Scatter Plot for Monthly Closing Prices of the DJIA versus the Russell 2000 for a 12-Month Time Period (data source: Yahoo! Finance)
Based on these results, the corresponding linear regression model is
$\hat{y} = a + bx = -1,496.34 + 0.11x$
14.16
Assume the DJIA has reached a value of 32,000. Predict the corresponding value of the Russell 2000 index. To determine this, substitute the value of the independent variable, $x = 32,000$ (this is the given value of the DJIA), and calculate the corresponding value for the dependent variable, which is the predicted value for the Russell 2000 index:
$\hat{y} = -1,496.34 + 0.11(32,000) = 2,023.66$
14.17
Thus the predicted value for the Russell 2000 index is approximately 2,024 when the DJIA reached a value of 32,000.
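The substitution above can be verified with a one-line function (Python is used here only for illustration):

```python
# Fitted model relating the Russell 2000 (y) to the DJIA (x):
# y_hat = -1,496.34 + 0.11x
def predict_russell(djia):
    return -1496.34 + 0.11 * djia

print(round(predict_russell(32000), 2))  # 2023.66
```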
Measures of Slope and Intercept from Regression Analysis
An important application of regression analysis is to determine the systematic risk for a particular stock, which is referred to as beta. A stock’s beta is a measure of the volatility of the stock compared to a benchmark such as the S&P 500 index. If a stock has more volatility compared to the benchmark, then the stock will have a beta greater than 1.0. If a stock has less volatility compared to the benchmark, then the stock will have a beta less than 1.0.
Beta can be determined as the slope of the regression line when the stock returns are plotted versus the returns for the benchmark, such as the S&P 500. As an example, consider the calculation for beta of Nike stock based on monthly returns of Nike stock versus monthly returns for the S&P 500 over the time period from May 2020 to March 2021. The monthly return data is shown in Table 14.7.
Date S&P 500 S&P Monthly Return (%) Nike Stock Price (\$) Nike Monthly Return (%)
4/1/2020 2,912.43 N/A 87.18 N/A
5/1/2020 3,044.31 0.05 98.58 0.13
6/1/2020 3,100.29 0.02 98.05 -0.01
7/1/2020 3,271.12 0.06 97.61 0.00
8/1/2020 3,500.31 0.07 111.89 0.15
9/1/2020 3,363.00 -0.04 125.54 0.12
10/1/2020 3,269.96 -0.03 120.08 -0.04
11/1/2020 3,621.63 0.11 134.70 0.12
12/1/2020 3,756.07 0.04 141.47 0.05
1/1/2021 3,714.24 -0.01 133.59 -0.06
2/1/2021 3,811.15 0.03 134.78 0.01
3/1/2021 3,943.34 0.03 140.45 0.04
3/12/2021 3,943.34 0.00 140.45 0.00
Table 14.7 Monthly Returns of Nike Stock versus Monthly Returns for the S&P 500 (source: Yahoo! Finance)
The scatter plot that graphs S&P monthly return versus Nike monthly return is shown in Figure 14.8.
Figure 14.8 Scatter Plot of Monthly Returns of Nike Stock versus Monthly Returns for the S&P 500 (\$) (data source: Yahoo! Finance)
The slope of the regression line is 0.83, obtained by using the =SLOPE command in Excel.
`=SLOPE(E4:E15,C4:C15)`
`=0.830681658`
This indicates that the value of beta for Nike stock is 0.83, meaning that Nike stock had lower volatility than the S&P 500 over the time period of interest.
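Beta can also be reproduced outside Excel from the least-squares slope formula. Python is used below only for illustration; note that Table 14.7 lists returns rounded to two decimal places, so the slope computed from those rounded values (roughly 0.81) differs slightly from the 0.83 that Excel obtains from the unrounded returns.

```python
# Beta as the least-squares slope of Nike monthly returns on S&P 500 monthly
# returns, using the rounded values from Table 14.7. (Excel's 0.83 was
# computed from unrounded returns, so this estimate differs slightly.)
sp500 = [0.05, 0.02, 0.06, 0.07, -0.04, -0.03, 0.11, 0.04, -0.01, 0.03, 0.03, 0.00]
nike  = [0.13, -0.01, 0.00, 0.15, 0.12, -0.04, 0.12, 0.05, -0.06, 0.01, 0.04, 0.00]

n = len(sp500)
x_bar = sum(sp500) / n
y_bar = sum(nike) / n
beta = (sum((x - x_bar) * (y - y_bar) for x, y in zip(sp500, nike))
        / sum((x - x_bar) ** 2 for x in sp500))
print(round(beta, 2))  # roughly 0.81 with these rounded inputs
```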
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Calculate predictions for the dependent variable using the regression model.
• Generate prediction intervals based on a prediction for the dependent variable.
Predicting the Dependent Variable Using the Regression Model
A key aspect of generating the linear regression model is to use the model for predictions, provided the correlation is significant. To generate predictions or forecasts using the linear regression model, substitute the value of the independent variable (x) in the regression equation and solve the equation for the dependent variable (y).
In a previous example, the linear regression equation was generated to relate the amount of monthly revenue for a Fortune 500 company to the amount of monthly advertising spend. From the previous example, it was determined that the regression equation can be written as
$\hat{y} = a + bx = 9,376.7 + 61.8x$
14.18
where x represents the amount spent on advertising (in thousands of dollars) and y represents the amount of revenue (in thousands of dollars).
Let’s assume the Fortune 500 company would like to predict the monthly revenue for a month where it plans to spend \$80,000 for advertising. To determine the estimate of monthly revenue, let $x = 80$ in the regression equation and calculate a corresponding value for ŷ:
$\hat{y} = 9,376.7 + 61.8(80) = 14,320.70$
14.19
This predicted value of y indicates that the forecasted revenue would be \$14,320,700, assuming an advertising spend of \$80,000.
• Excel can provide this forecasted value directly using the =FORECAST command.
• To use this command, enter the value of the independent variable x, followed by the cell range for the y-data and the cell range for the x-data, as follows: `=FORECAST(X_VALUE, Range of Y-DATA, Range of X-DATA)`
• Using this Excel command, the forecasted value for the revenue is \$14,320.52 when the advertising spend is \$80 (in thousands of dollars) (see Figure 14.9). (Note: The discrepancy in the more precise Excel result and the formula result is due to rounding in interim calculations.)
Figure 14.9 Revenue versus Advertising for Fortune 500 Company (\$000s) Showing FORECAST Command in Excel
A word of caution when predicting values for y: it is generally recommended to only predict values for y using values of x that are in the original range of the data collection.
As an example, assume we have developed a linear model to predict the height of male children based on their age. We have collected data for the age range from $x = 3$ years old to $x = 10$ years old, and we have confirmed that the scatter plot shows a linear trend and that the correlation is significant.
It would be erroneous to use this model to predict the height of a 25-year-old male since $x = 25$ is outside the range of the x-data, which was from 3 to 10 years old. The reason this is not recommended is that a linear pattern cannot be assumed to continue beyond the x-value of 10 years old unless some data collection has occurred at ages greater than 10 to confirm that the linear pattern is consistent for x-values beyond 10 years old.
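This caution can be encoded directly in a forecasting helper that refuses to extrapolate beyond the observed x-range. A minimal sketch, assuming the advertising/revenue model fitted earlier (observed x ranged from 49 to 153, in thousands of dollars); the guard-rail design is an illustration, not part of the text's method:

```python
# Prediction helper that raises an error when asked to extrapolate beyond
# the range of x-values used to fit the model (here 49 to 153, in $000s).
X_MIN, X_MAX = 49, 153
A, B = 9376.7, 61.8  # intercept and slope of the fitted model

def predict_revenue(x):
    if not (X_MIN <= x <= X_MAX):
        raise ValueError(f"x={x} is outside the fitted range [{X_MIN}, {X_MAX}]; "
                         "the linear pattern is not confirmed there")
    return A + B * x

print(round(predict_revenue(80), 1))  # 14320.7 -- interpolation is fine
try:
    predict_revenue(250)              # extrapolation is rejected
except ValueError as err:
    print("refused:", err)
```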
Generating Prediction Intervals
One important value of an estimated regression equation is its ability to predict the effects on y of a change in one or more values of the independent variables. The value of this is obvious. Careful policy cannot be made without estimates of the effects that may result. Indeed, it is the desire for particular results that drive the formation of most policy. Regression models can be, and have been, invaluable aids in forming such policies.
Remember that point estimates do not carry a particular level of probability, or level of confidence, because points have no “width” above which there is an area to measure. There are actually two different approaches to the issue of developing estimates of changes in the independent variable (or variables) on the dependent variable. The first approach wishes to measure the expected mean value of y from a specific change in the value of x.
The second approach to estimate the effect of a specific value of x on y treats the event as a single experiment: you choose x and multiply it times the coefficient, and that provides a single estimate of y. Because this approach acts as if there were a single experiment, the variance that exists in the parameter estimate is larger than the variance associated with the expected value approach.
The conclusion is that we have two different ways to predict the effect of values of the independent variable(s) on the dependent variable, and thus we have two different intervals. Both are correct answers to the question being asked, but there are two different questions. To avoid confusion, the first case where we are asking for the expected value of the mean of the estimated y is called a confidence interval. The second case, where we are asking for the estimate of the impact on the dependent variable y of a single experiment using a value of x, is called the prediction interval.
The prediction interval for an individual y for $x = x_p$ can be calculated as
$\hat{y} \pm t_{\alpha/2} \, s_e \sqrt{1 + \frac{1}{n} + \frac{(x_p - \bar{x})^2}{s_x}}$
14.20
where $s_e$ is the standard deviation of the error term, $s_x$ is the standard deviation of the x-variable, and $t_{\alpha/2}$ is the critical value of the t-distribution at the $1 - \alpha$ confidence level.
Tabulated values of the t-distribution are available in online references such as the Engineering Statistics Handbook. The mathematical computations for prediction intervals are complex, and usually the calculations are performed using software. The formula above can be implemented in Excel to create a 95% prediction interval for the forecast for monthly revenue when \$80,000 ($x = 80$, in thousands of dollars) is spent on monthly advertising. Figure 14.10 shows the detailed calculations in Excel to arrive at a 95% prediction interval of (13,270.95, 15,370.09), in thousands of dollars, for the monthly revenue. (The commands refer to the Excel data table shown in Figure 14.9.)
Figure 14.10 Calculations for 95% Prediction Interval for Monthly Revenue
This prediction interval can be interpreted as follows: there is 95% confidence that when the amount spent on monthly advertising is \$80,000, the corresponding monthly revenue will be between \$13,270,950 and \$15,370,090.
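The interval formula can be sketched outside Excel as well. In the Python sketch below, the inputs `s_e` and `s_x` are hypothetical placeholders (the worksheet values are not reproduced in the text); only the t-critical value of 2.228 for a 95% interval with $n - 2 = 10$ degrees of freedom is a standard tabulated figure.

```python
from math import sqrt

def prediction_interval(y_hat, t_crit, s_e, n, x_p, x_bar, s_x):
    # Margin of error following the formula in the text:
    # t * s_e * sqrt(1 + 1/n + (x_p - x_bar)^2 / s_x)
    margin = t_crit * s_e * sqrt(1 + 1 / n + (x_p - x_bar) ** 2 / s_x)
    return (y_hat - margin, y_hat + margin)

# Hypothetical inputs for illustration only (s_e and s_x are placeholders,
# not the values from the Excel worksheet in Figure 14.10).
lo, hi = prediction_interval(y_hat=14320.7, t_crit=2.228, s_e=450.0,
                             n=12, x_p=80, x_bar=103.58, s_x=13000.0)
print(round(lo, 1), round(hi, 1))
```

The interval is always symmetric about the point forecast, and its width grows as $x_p$ moves away from $\bar{x}$, which is exactly the behavior discussed below.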
Various computer regression software packages provide programs within the regression functions to provide answers to inquiries of estimated predicted values of y given various values chosen for the x-variable(s). For example, the statistical program R provides these prediction intervals directly. It is important to know just which interval is being tested in the computer package because the difference in the size of the standard deviations will change the size of the interval estimated. This is shown in Figure 14.11.
Figure 14.11 Prediction and Confidence Intervals for Regression Equation at 95% Confidence Level
Figure 14.11 shows visually the difference the standard deviation makes in the size of the estimated intervals. The confidence interval, measuring the expected value of the dependent variable, is smaller than the prediction interval for the same level of confidence. The expected value method assumes that the experiment is conducted multiple times rather than just once, as in the other method. The logic here is similar, although not identical, to that discussed when developing the relationship between the sample size and the confidence interval using the central limit theorem. There, as the number of experiments increased, the distribution narrowed, and the confidence interval became tighter around the expected value of the mean.
It is also important to note that the intervals around a point estimate are highly dependent upon the range of data used to estimate the equation, regardless of which approach is being used for prediction. Remember that all regression equations go through the point of means—that is, the mean value of y and the mean values of all independent variables in the equation. As the value of x gets further and further from the (x, y) point corresponding to the mean value of x and the mean value of y, the width of the estimated interval around the point estimate increases. Choosing values of x beyond the range of the data used to estimate the equation poses an even greater danger of creating estimates with little use, very large intervals, and risk of error. Figure 14.12 shows this relationship.
Figure 14.12 Confidence Interval for an Individual Value of x, $X_p$, at 95% Confidence Level
Figure 14.12 demonstrates the concern for the quality of the estimated interval, whether it is a prediction interval or a confidence interval. As the value chosen to predict y, $X_p$ in the graph, is further from the central weight of the data, $\bar{X}$, we see the interval expand in width even while holding constant the level of confidence. This shows that the precision of any estimate will diminish as one tries to predict beyond the largest weight of the data and most certainly will degrade rapidly for predictions beyond the range of the data. Unfortunately, this is just where most predictions are desired. They can be made, but the width of the confidence interval may be so large as to render the prediction useless.
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Generate correlation coefficients using the R statistical tool.
• Generate linear regression models using the R statistical tool.
Generate Correlation Coefficients Using the R Statistical Tool
R is an open-source statistical analysis tool that is widely used in the finance industry. R is available as a free program and provides an integrated suite of functions for data analysis, graphing, and statistical programming. R provides many functions and capabilities for regression analysis.
Recall that most calculations in R are handled via functions.
The typical method for using functions in statistical applications is to first create a vector of data values. There are several ways to create vectors in R; the c function, which combines values into a vector, is the most common. For example, this R command will generate a vector called salaries, containing the data values 40,000, 50,000, 75,000, and 92,000:
``` > salaries <- c(40000, 50000, 75000, 92000)
```
To calculate the correlation coefficient r, we use the R command called cor.
As an example, consider the data set in Table 14.8, which tracks the return on the S&P 500 versus return on Coca-Cola stock for a seven-month time period.
Month S&P 500 Monthly Return (%) Coca-Cola Monthly Return (%)
Jan 8 6
Feb 1 0
Mar 0 -2
Apr 2 1
May -3 -1
Jun 7 8
Jul 4 2
Table 14.8 Monthly Returns of Coca-Cola Stock versus Monthly Returns for the S&P 500
Create two vectors in R, one vector for the S&P 500 returns and a second vector for Coca-Cola returns:
``` > SP500 <- c(8,1,0,2,-3,7,4)
> CocaCola <- c(6,0,-2,1,-1,8,2)
```
The R command called cor returns the correlation coefficient for the x-data vector and y-data vector:
``` > cor(SP500, CocaCola)
```
Generate Linear Regression Models Using the R Statistical Tool
To create a linear model in R, assuming the correlation is significant, the command lm (for linear model) will provide the slope and y-intercept for the linear regression equation.
The format of the R command is
``` lm(dependent_variable_vector ~ independent_variable_vector)
```
Notice the use of the tilde symbol as the separator between the dependent variable vector and the independent variable vector.
We use the returns on Coca-Cola stock as the dependent variable and the returns on the S&P 500 as the independent variable, and thus the R command would be
``` > lm(CocaCola ~ SP500)
Call:
lm(formula = CocaCola ~ SP500)
Coefficients:
(Intercept) SP500
-0.3453 0.8641
```
The R output provides the value of the y-intercept as $-0.3453-0.3453$ and the value of the slope as 0.8641. Based on this, the linear model would be
$\hat{y} = a + bx = -0.3453 + 0.8641x$
14.21
where x represents the value of S&P 500 return and y represents the value of Coca-Cola stock return.
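For readers who want to verify R's output independently, the correlation coefficient, slope, and intercept for the Table 14.8 data can be recomputed from the least-squares formulas. Python is used here purely as a cross-check of the R results:

```python
# Cross-check of R's cor() and lm() output for the Table 14.8 data,
# computed directly from the least-squares formulas.
from math import sqrt

sp500    = [8, 1, 0, 2, -3, 7, 4]
cocacola = [6, 0, -2, 1, -1, 8, 2]

n = len(sp500)
x_bar, y_bar = sum(sp500) / n, sum(cocacola) / n
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(sp500, cocacola))
sxx = sum((x - x_bar) ** 2 for x in sp500)
syy = sum((y - y_bar) ** 2 for y in cocacola)

r = sxy / sqrt(sxx * syy)   # correlation coefficient
b = sxy / sxx               # slope
a = y_bar - b * x_bar       # y-intercept

print(round(r, 3), round(b, 4), round(a, 4))  # 0.912 0.8641 -0.3453
```

Note that $r^2 \approx 0.912^2 \approx 0.832$, matching the Multiple R-squared value of 0.8325 reported by R's summary() output below.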
The results can also be saved as a formula and called “model” using the following R command. To obtain more detailed results for the linear regression, the summary command can be used, as follows:
``` > model <- lm(CocaCola ~ SP500)
> summary(model)
Call:
lm(formula = CocaCola ~ SP500)
Residuals:
1 2 3 4 5 6 7
-0.5672 -0.5188 -1.6547 -0.3828 1.9375 2.2969 -1.1109
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.3453 0.7836 -0.441 0.67783
SP500 0.8641 0.1734 4.984 0.00416 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.658 on 5 degrees of freedom
Multiple R-squared: 0.8325, Adjusted R-squared: 0.7989
F-statistic: 24.84 on 1 and 5 DF, p-value: 0.004161
```
In this output, the y-intercept and slope is given, as well as the residuals for each x-value. The output includes additional statistical details regarding the regression analysis.
Predicted values and prediction intervals can also be generated within R.
First, we can create a structure in R called a data frame to hold the values of the independent variable for which we want to generate a prediction. For example, we would like to generate the predicted return for Coca-Cola stock, given that the return for the S&P 500 is 6.
We use the R command called predict.
To generate a prediction for the linear regression equation called model, using the data frame where the value of the S&P 500 is 6, the R commands will be
``` > a <- data.frame(SP500=6)
> predict(model, a)
1
4.839062
```
The output from the predict command indicates that the predicted return for Coca-Cola stock will be 4.8% when the return for the S&P 500 is 6%.
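The arithmetic behind predict is simply substitution into the fitted line. A quick check (Python for illustration; the coefficients below are the unrounded least-squares values for the Table 14.8 data, which R displays rounded to $-0.3453$ and 0.8641):

```python
# Predicted Coca-Cola return for an S&P 500 return of 6%, matching R's
# predict() output of 4.839062.
a, b = -0.3453125, 0.8640625   # unrounded least-squares coefficients

prediction = a + b * 6
print(round(prediction, 3))  # 4.839, i.e. about a 4.8% return
```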
We can extend this analysis to generate a 95% prediction interval for this result by using the following R command, which adds an option to the predict command to generate a prediction interval:
``` > predict(model,a, interval="predict")
fit lwr upr
1 4.839062 0.05417466 9.62395
```
Thus the 95% prediction interval for Coca-Cola return is (0.05%, 9.62%) when the return for the S&P 500 is 6%.
14.1 Correlation Analysis
Correlation is the measure of association between two numeric variables. A correlation coefficient called r is used to assess the strength and direction of the correlation. The value of r is always between $-1$ and $+1$. The size of the correlation r indicates the strength of the linear relationship between the two variables. Values of r close to $-1$ or to $+1$ indicate a stronger linear relationship. A positive value of r means that when x increases, y tends to increase and when x decreases, y tends to decrease (positive correlation). A negative value of r means that when x increases, y tends to decrease and when x decreases, y tends to increase (negative correlation).
14.2 Linear Regression Analysis
Linear regression analysis uses a straight-line fit to model the relationship between the two variables. Once a straight-line model is developed, this model can then be used to predict the value of the dependent variable for a specific value of the independent variable. Two parameters are calculated for the linear model, the slope of the best-fit line and the y-intercept of the best-fit line. The method of least squares is used to generate these parameters; this method is based on minimizing the squared differences between the predicted values and observed values for y.
14.3 Best-Fit Linear Model
Once a correlation has been deemed significant, a linear regression model is developed. The goal in the regression analysis is to determine the coefficients a and b in the following regression equation: $\hat{y} = a + bx$. Typically some technology, such as Excel, the R statistical tool, or a calculator, is used to generate the coefficients a and b since manual calculations are cumbersome.
14.4 Regression Applications in Finance
Regression analysis is used extensively in finance-related applications. Many typical applications involve determining if there is a correlation between various stock market indices such as the S&P 500, the DJIA, and the Russell 2000 index. The procedure is to first generate a scatter plot to determine if a visual trend is observed, then calculate a correlation coefficient and check for significance. If the correlation coefficient is significant, a linear model can then be generated and used for predictions.
14.5 Predictions and Prediction Intervals
A key aspect of generating the linear regression model is to then use the model for predictions, provided that the correlation is significant. To generate predictions or forecasts using the linear regression model, substitute the value of the independent variable (x) in the regression equation and solve the equation for the dependent variable (y). When making predictions using the linear model, it is generally recommended to only predict values for y using values of x that are in the original range of the data collection.
14.6 Use of R Statistical Analysis Tool for Regression Analysis
R is an open-source statistical analysis tool that is widely used in the finance industry and can be found online. R provides an integrated suite of functions for data analysis, graphing, and correlation and regression analysis. R is increasingly being used as a data analysis and statistical tool because it is an open-source language and additional features are constantly being added by the user community. The tool can be used on many different computing platforms.
14.09: Key Terms
best-fit linear regression model
an equation of the form $\hat{y} = a + bx$ that provides the best-fit straight line to the (x, y) data points
beta
the measure of the volatility of a stock as compared to a benchmark such as the S&P 500 index
correlation
the measure of association between two numeric variables
correlation coefficient
a measure of the strength and direction of the linear relationship between two variables
linear correlation
a measure of the association between two variables that exhibit an approximate straight-line fit when plotted on a scatter plot
method of least squares
a mathematical method to generate a linear equation that is the “best fit” to the points on the scatter plot in the sense that the line minimizes the differences between the predicted values and observed values for y
prediction
a forecast for the dependent variable based on a specific value of the independent variable generated using the linear model
residual
the difference between an observed y-value and the predicted y-value obtained from the linear regression equation
scatter plot (scatter diagram)
graphical display that shows values of the independent variable plotted on the x-axis and values of the dependent variable plotted on the y-axis
1.
Two correlation coefficients are compared: Correlation Coefficient A is 0.83. Correlation Coefficient B is $-0.91$. Which correlation coefficient represents the stronger linear relationship?
1. Correlation Coefficient A
2. Correlation Coefficient B
3. equal strength
4. not enough information to determine
2.
A data set containing 10 pairs of (x, y) data points is analyzed, and the correlation coefficient is calculated to be 0.58. Does this value of $r = 0.58$ indicate a significant or nonsignificant correlation?
1. significant
2. nonsignificant
3. neither significant nor nonsignificant
4. not enough information to determine
3.
A linear regression model is developed, and for $x = 10$, the corresponding predicted y-value is 22.7. The actual observed value for $x = 10$ is $y = 31.3$. Is the residual for this data point positive, negative, or zero?
1. positive
2. negative
3. zero
4. not enough information to determine
4.
A linear model is developed for the relationship between salary of finance professionals and years of experience. The data was collected based on years of experience ranging from 1 to 15. Assuming the correlation is significant, should the linear model be used to predict the salary for a person with 25 years of experience?
1. It is acceptable to predict the salary for a person with 25 years of experience.
2. A linear model cannot be created for these two variables.
3. It is not recommended to predict the salary for a person with 25 years of experience.
4. There is not enough information to determine the answer.
5.
Which of the following is the best interpretation for the slope of the linear regression model?
1. The slope is the expected mean value of y when the x-variable is equal to zero.
2. The slope indicates the change in y for every unit increase in x.
3. The slope indicates the strength of the linear relationship between x and y.
4. The slope indicates the direction of the linear relationship between x and y.
6.
A linear model is developed for the relationship between the annual salary of finance professionals and years of experience, and the following is the linear model: $\hat{y} = 55,000 + 1,000x$. Which is the correct interpretation of this linear model?
1. slope = 55,000; y-intercept = 1,000
2. slope = 55; y-intercept = 1,000
3. slope = 1,000; y-intercept = 55
4. slope = 1,000; y-intercept = 55,000
7.
Which of the following is the correct sequence of steps needed to create a linear regression model?
1. create scatter plot, calculate correlation coefficient, check for significance, create linear model
2. create linear model, calculate correlation coefficient, check for significance, create scatter plot
3. check for significance, create linear model, calculate correlation coefficient, create scatter plot
4. create scatter plot, check for significance, create linear model, calculate correlation coefficient
8.
A linear model is developed for the relationship between the annual salary of finance professionals and years of experience, and the linear model is $\hat{y} = 55,000 + 1,000x$. The correlation is determined to be significant. Predict the salary for a finance professional with 7 years of experience.
1. \$55,010
2. \$60,000
3. \$62,000
4. \$125,000
9.
As predictions are made for x-values that are further and further away from the mean of x, which is true about the prediction intervals for these x-values?
1. The prediction intervals will become smaller.
2. The prediction intervals will become larger.
3. The prediction intervals will remain the same.
4. There is not enough information to determine the answer.
10.
Which of the following is the R command to calculate the correlation coefficient r?
1. correl
2. cor
3. slope
4. lm
11.
Which of the following is the R command to calculate the slope and y-intercept for a linear regression model?
1. cor
2. slope
3. lm
4. intercept
1.
A correlation coefficient is calculated as $-0.92$. Provide an interpretation for this correlation coefficient.
2.
Explain what a residual is and how this relates to the best-fit regression model.
3.
Explain how to interpret the slope of the best-fit line.
4.
Explain how to generate a prediction using a linear regression model.
5.
Will the sign of the correlation coefficient always be the same as the sign of the slope of the best-fit linear regression model?
14.12: Problems
1.
A Fortune 500 company is tracking revenues versus cash flow for recent years, and the data is shown in the table below. Consider cash flow to be the dependent variable. Create a scatter plot of the data set and comment on the correlation between these two variables (all dollar amounts are in thousands).
Revenues (\$000s) Cash Flow (\$000s)
237 82
241 86
229 77
284 94
307 93
Table 14.9
2.
A Fortune 500 company is tracking revenues versus cash flow for recent years, and the data is shown in the table below. Consider cash flow to be the dependent variable. Calculate the correlation coefficient for this data (all dollar amounts are in thousands).
Revenues (\$000s) Cash Flow (\$000s)
237 82
241 86
229 77
284 94
307 93
Table 14.10
3.
A chief financial officer calculates the correlation coefficient for bond prices versus interest rate as -0.71. The data set contained nine (x, y) data points. Determine if the correlation is significant or not significant at the 0.05 level of significance.
4.
A Fortune 500 company is tracking revenues versus cash flow for recent years, and the data is shown in the table below. Consider cash flow to be the dependent variable. Determine the best-fit linear regression equation for this data set (all dollar amounts are in thousands).
Revenues (\$000s) Cash Flow (\$000s)
237 82
241 86
229 77
284 94
307 93
Table 14.11
5.
A Fortune 500 company is tracking revenues versus cash flow for recent years, and the data is shown in the table below. Consider cash flow to be the dependent variable. Assume the correlation is significant. Predict the cash flow for company revenues of \$250,000 (all dollar amounts are in thousands).
Revenues (\$000s) Cash Flow (\$000s)
237 82
241 86
229 77
284 94
307 93
Table 14.12
6.
A Fortune 500 company is tracking revenues versus cash flow for recent years, and the data is shown in the table below. Consider cash flow to be the dependent variable. Assume the correlation is significant. Predict the cash flow for company revenues of \$750,000 (all dollar amounts are in thousands).
Revenues (\$000s) Cash Flow (\$000s)
237 82
241 86
229 77
284 94
307 93
Table 14.13
7.
A Fortune 500 company is tracking revenues versus cash flow for recent years, and the data is shown in the table below. Consider cash flow to be the dependent variable. Calculate the residual for the revenue value of \$284,000 (all dollar amounts are in thousands):
Revenues (\$000s) Cash Flow (\$000s)
237 82
241 86
229 77
284 94
307 93
Table 14.14
14.13: Video Activity
Simple Linear Regression
1.
Based on the scatter plot shown, will the correlation coefficient be a positive value or negative value? Would you estimate that the correlation is significant for the relationship between radio ads and revenue?
2.
For the linear regression model for ads versus revenue, the slope is shown as 78.075. How would this slope be interpreted (that is, provide a verbal description for the meaning of the slope of 78.075)?
How to Calculate Correlation for Stocks, Bonds, and Funds
3.
Based on the presentation shown in the video, is the FTSE 100 index correlated with the value of sterling? Or are the two measures uncorrelated? What data leads to your conclusion?
4.
Based on the presentation in the video, is there a correlation between stock funds and bond funds? Why is this information important to an investor trying to design a portfolio?
Figure 15.1 Investing can often provide great returns, but it can also be a risk. (credit: modification of work “E-ticker” by klip game/Wikimedia Commons, Public Domain)
Having finished her college degree and embarked on her career, Maria is now contemplating her financial future. She is considering how she might invest some of her hard-earned money. As a short-term goal, she wants to build an emergency fund so that she could cover her expenses for six months if she became ill or injured and had to take time off of work. She would also like to save money for a down payment on a home and to purchase new furniture. Although she is not yet 30 years old, Maria also knows that it is prudent to begin saving for retirement.
What should she do with her savings? Maria has some friends who have told her how successful they have been investing in stocks. Bart bragged about doubling his money in just over a year when he purchased Facebook stock, and Tiffany quickly tripled her money when she purchased shares in Netflix. But Maria also knows that her uncle lost a significant amount of money when his Boeing stock dropped from over \$300 per share to under \$150 within a couple of months at the beginning of 2020. Just how risky would it be to invest in stocks? What type of return might Maria expect? Are there strategies she could follow that would allow her to avoid her uncle’s fate?
15.02: Risk and Return to an Individual Asset
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Compute the realized return from an individual investment.
• Compute the average return and volatility of returns from historical data.
• Describe firm-specific risk.
Measuring Historical Returns
Risk and return are often referred to as the two Rs of finance. Investors are interested in both risk and return because understanding one without the other is really meaningless. In terms of investment, the concept of return is fairly straightforward; return is the benefit, or profit, the investor expects from an expenditure. It is the reward for investing—the reason an investment is made in the first place. However, no investment is a sure thing. The return may not be what the investor was expecting. This uncertainty about what the return will be is referred to as risk.
We begin by looking at how to measure both risk and return when considering an individual asset, such as one stock. If your grandparents bought 100 shares of Apple, Inc. stock for you when you were born, you are interested in knowing how well that investment has done. You may even want to compare how that investment has fared to how an investment in a different stock, perhaps Disney, would have done. You are interested in measuring the historical return.
Individual Investment Realized Return
The realized return of an investment is the total return that occurs over a particular time period. Suppose that you purchased a share of Target (TGT) at the beginning of January 2020 for \$128.74. At the end of the year, you sold the stock for \$176.53, which was \$47.79 more than you paid for it. This increase in value is known as a capital gain. As the owner of the stock, you also received \$2.68 in dividends during 2020. The total dollar return from your investment is calculated as
$$\text{Total Dollar Return} = \text{Dividend Income} + \text{Capital Gain} = \$2.68 + \$47.79 = \$50.47$$
15.1
It is common to express investment returns in percentage terms rather than dollar terms. This allows you to answer the question “How much do I receive for each dollar invested?” so that you can compare investments of different sizes. The total percent return from your investment is
$$\text{Total Percent Return} = \text{Dividend Yield} + \text{Capital Gain Yield} = \frac{2.68}{128.74} + \frac{47.79}{128.74} = 0.0208 + 0.3712 = 0.3920 = 39.20\%$$
15.2
The dividend yield is calculated by dividing the dividends you received by the initial stock price. This calculation says that for each dollar invested in TGT in 2020, you received \$0.0208 in dividends. The capital gain yield is the change in the stock price divided by the initial stock price. This calculation says that for each dollar invested in TGT in 2020, you received \$0.3712 in capital gains. Your total percent return of 39.20% means that you made \$0.392 for every dollar invested when your gains from both dividends and stock price appreciation are totaled together.
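The two calculations above can be sketched in a few lines of code. This is purely illustrative (Python), using the TGT prices and dividend from the text:

```python
# Realized return on one share of TGT held through 2020 (figures from the text).
buy_price = 128.74   # purchase price, January 2020
sell_price = 176.53  # sale price, December 2020
dividends = 2.68     # dividends received during the year

capital_gain = sell_price - buy_price                  # 47.79
total_dollar_return = dividends + capital_gain         # 50.47

dividend_yield = dividends / buy_price                 # 0.0208
capital_gain_yield = capital_gain / buy_price          # 0.3712
total_percent_return = dividend_yield + capital_gain_yield

print(f"Total dollar return:  ${total_dollar_return:.2f}")
print(f"Total percent return: {total_percent_return:.2%}")  # 39.20%
```

The same arithmetic works for any buy price, sell price, and dividend, so the variables at the top are the only inputs that need to change.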
Think It Through
Calculating Return
You purchased 10 shares of 3M (MMM) stock in January 2020 for \$175 per share, received dividends of \$5.91 per share, and sold the stock at the end of the year for \$169.72 per share. Calculate your total dollar return, your dividend yield, your capital gain yield, and your total percent yield.
Your total dollar return is $10 \times (\$5.91 + \$169.72 - \$175.00) = \$6.30$. Your dividend yield is $\frac{5.91}{175.00} = 0.0338$, or 3.38%, and your capital gain yield is $\frac{169.72 - 175.00}{175.00} = -0.0302$, or −3.02%. Your total percent return is $3.38\% + (-3.02\%) = 0.36\%$.
Notice that you sold MMM for a price lower than what you paid for it at the beginning of the year. Your capital gain is negative, or what is often referred to as a capital loss. Although the price fell, you still had a positive total dollar return because of the dividend income.
Of course, investors seldom purchase a stock and then sell it exactly one year later. Assume that you purchased shares of Facebook (FB) on June 1, 2020, for \$228.50 per share and sold the shares three months later for \$261.90. You received no dividends. In this case, your holding period percentage return is calculated as
$$\frac{261.90 - 228.50}{228.50} = 0.1462 = 14.62\%$$
15.4
This 14.62% is your return for a three-month holding period. To compare this investment to other opportunities, you need to express returns on a per-year, or annualized, basis. The holding period return is converted to an effective annual rate (EAR) using the formula

$$EAR = \left(1 + \text{Holding Period Percentage Return}\right)^m - 1$$
15.5
where m is the number of holding periods in a year.
There are four three-month periods in a year. So, the EAR for this investment is
$$EAR = (1 + 0.1462)^4 - 1 = 0.7260 = 72.60\%$$
15.6
What happens if you own a stock for more than one year? Your holding period return would have occurred over a period longer than a year, but the process to calculate the EAR is the same. Suppose you purchased shares of FB in May 2015, when it was selling for \$79.30 per share. You held the stock until May 2020, when you sold it for \$224.59. Your holding period percentage return would be $\frac{224.59 - 79.30}{79.30} = 183.22\%$. You more than tripled your money, but it took you five years to do so. Your EAR, which will be smaller than this five-year holding period return rate, is calculated as
$$EAR = (1 + 1.8322)^{1/5} - 1 = 23.15\%$$
15.7
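Both annualizations use the same EAR formula; only m changes. The sketch below (Python, illustrative) reproduces the two FB examples, where m is 4 for a three-month holding period and 1/5 for a five-year one:

```python
# EAR = (1 + holding-period return)**m - 1, where m is the number of
# holding periods per year (figures from the FB examples in the text).

def effective_annual_rate(holding_period_return, m):
    return (1 + holding_period_return) ** m - 1

# Three-month holding period: m = 4 periods per year.
hpr_3mo = (261.90 - 228.50) / 228.50               # 14.62%
print(f"{effective_annual_rate(hpr_3mo, 4):.2%}")  # ~72.6%

# Five-year holding period: m = 1/5 period per year.
hpr_5yr = (224.59 - 79.30) / 79.30                 # 183.22%
print(f"{effective_annual_rate(hpr_5yr, 1/5):.2%}")  # ~23.15%
```

Note that m can be a fraction: a holding period longer than a year simply makes the exponent less than one, which shrinks the multi-year return down to an annual rate.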
Average Annual Returns
Suppose that you purchased shares of Delta Airlines (DAL) at the beginning of 2011 for \$11.19 and held the stock for 10 years before selling it for \$40.21. You made $40.21 - 11.19 = \$29.02$ on your investment over a 10-year period. This is a 259.34% holding period return. The EAR for this investment is
$$EAR = (1 + 2.5934)^{1/10} - 1 = 13.65\%$$
15.8
To calculate the EAR using the above formula, the holding period return must first be calculated. The holding period return represents the percentage return earned over the entire time the investment is held. Then the holding period return is converted to an annual percentage rate using the formula.
You can also use the basic time value of money formula to calculate the EAR on an investment. In time value of money language, the initial price paid for the investment, \$11.19, is the present value. The price the stock is sold for, \$40.21, is the future value. It takes 10 years for the \$11.19 to grow to \$40.21. Using the time value of money will result in a calculation of
$$PV \times (1 + i)^n = FV$$
$$11.19 \times (1 + i)^{10} = 40.21$$
$$1 + i = 3.5934^{0.10}$$
$$i = 13.65\%$$
15.9
The EAR formula and the time value of money both result in a 13.65% annual return. Mathematically, the two formulas are the same; one is simply an algebraic rearrangement of the other.
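Rearranging the time value of money formula for the rate gives $i = (FV/PV)^{1/n} - 1$, which a one-liner confirms (Python, illustrative, using the DAL numbers above):

```python
# Solve PV * (1 + i)**n = FV for i: the DAL example from the text.
pv, fv, n = 11.19, 40.21, 10
i = (fv / pv) ** (1 / n) - 1
print(f"{i:.2%}")  # 13.65%
```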
If you earned 13.65% each year, compounded for 10 years, you would have converted your \$11.19 per share investment to \$40.21 per share. Of course, DAL stock did not increase by exactly 13.65% each year. The returns for DAL for each year are shown in Table 15.1. Some years, the return was much higher than 13.65%. In 2013, the return was almost 133%! Other years, the return was much lower than 13.65%; in fact, the return was negative in four of the years.
Year Return Value of Investment (\$)
Initial investment of 11.19
2011 −0.3579 7.19
2012 0.4672 10.54
2013 1.3261 24.52
2014 0.8053 44.27
2015 0.0405 46.06
2016 −0.0135 45.44
2017 0.1623 52.81
2018 −0.0866 48.24
2019 0.2038 58.07
2020 −0.3077 40.20
Table 15.1 Yearly Returns for DAL, 2011–2020: Value of Initial Investment at Each Year End
Although an investment in DAL of \$11.19 at the beginning of 2011 grew to \$40.20 by the end of 2020, this growth was not consistent each year. The amount that the stock was worth at the end of each year is also shown in Table 15.1. During 2011, the return for DAL was −35.79%, resulting in the value of the investment falling to $11.19 \times (1 + (-0.3579)) = \$7.19$. The following year, 2012, the return for DAL was 46.72%. Therefore, the value of the investment was $7.19 \times (1 + 0.4672) = \$10.54$ at the end of 2012. This process continues each year that the stock is held.
The compounded annual return derived from the EAR and time value of money formulas is also known as a geometric average return. A geometric average return is calculated using the formula
$$\text{Geometric Average Return} = \left[(1 + R_1) \times (1 + R_2) \times \dots \times (1 + R_N)\right]^{1/N} - 1$$
15.10
where $R_1$ through $R_N$ are the returns for each year in the time period for which the average is calculated.
The calculation of the geometric average return for DAL is shown in the right column of Table 15.2. (The slight difference in the geometric average return of 13.64% from the 13.65% derived from the EAR and time value of money calculations is due to rounding errors.)
Year Return 1 + Return
2011 −0.3579 0.6421
2012 0.4672 1.4672
2013 1.3261 2.3261
2014 0.8053 1.8053
2015 0.0405 1.0405
2016 −0.0135 0.9865
2017 0.1623 1.1623
2018 −0.0866 0.9134
2019 0.2038 1.2038
2020 −0.3077 0.6923
Arithmetic Avg: 0.2240
Std Dev: 0.5190
Product of (1 + Return): 3.5928
Product raised to 1/N: 1.1364
Geometric Average: 0.1364
Table 15.2 Yearly Returns for DAL, 2011–2020, with Calculation of the Arithmetic Mean, Standard Deviation, and Geometric Mean
Looking at Table 15.2, you will notice that the geometric average return differs from the mean return. Adding each of the annual returns and dividing the sum by 10 results in a 22.4% average annual return. This 22.4% is called the arithmetic average return.
The geometric average return will be smaller than the arithmetic average return (unless the returns for all years are identical). This is due to the basic arithmetic of compounding. Think of a very simple example in which you invest \$100 for two years. If you have a positive return of 50% the first year and a negative 50% return the second year, you will have an arithmetic average return of $\frac{0.5 + (-0.5)}{2} = 0.0\%$, but you will have a geometric average return of $\left[(1 + 0.5) \times (1 - 0.5)\right]^{0.5} - 1 = -13.4\%$. With a 50% positive return the first year, you ended the year with \$150. The second year, you lost 50% of that balance and were left with only \$75.
Another important fact when studying average returns is that the order in which you earn the returns is not important. Consider what would have occurred if the returns in the two years were reversed, so that you faced a loss of 50% in the first year of your investment and a gain of 50% in the second year. With a −50% return in the first year, you would have ended that year with only \$50. Then, if that \$50 earned a positive 50% return the second year, you would have a \$75 balance at the end of the two-year period. A negative return of 50% followed by a positive return of 50% still results in an arithmetic average return of 0% and a geometric average return of $\left[(1 - 0.5) \times (1 + 0.5)\right]^{0.5} - 1 = -13.4\%$.
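The two averages can be computed side by side. The sketch below (Python, illustrative) uses the ±50% example above and the DAL returns from Table 15.2:

```python
# Arithmetic vs. geometric average return.

def arithmetic_average(returns):
    return sum(returns) / len(returns)

def geometric_average(returns):
    product = 1.0
    for r in returns:
        product *= 1 + r                      # compound each year's return
    return product ** (1 / len(returns)) - 1

# Two-year example: +50% then -50% (order does not matter).
print(f"{arithmetic_average([0.5, -0.5]):.1%}")  # 0.0%
print(f"{geometric_average([0.5, -0.5]):.1%}")   # -13.4%

# DAL yearly returns, 2011-2020 (Table 15.2).
dal = [-0.3579, 0.4672, 1.3261, 0.8053, 0.0405,
       -0.0135, 0.1623, -0.0866, 0.2038, -0.3077]
print(f"{arithmetic_average(dal):.2%}")  # ~22.40%
print(f"{geometric_average(dal):.2%}")   # ~13.64%
```

Because multiplication is commutative, reversing the order of the returns leaves both averages unchanged, which is the point made in the paragraph above.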
Think It Through
Calculating Arithmetic and Geometric Average Return
The annual returns for CVS Health Corp. (CVS) for the 10-year period of 2011–2020 are shown in Table 15.3.
Year Returns
2011 18.94%
2012 20.28%
2013 50.38%
2014 37.12%
2015 2.90%
2016 −17.83%
2017 −5.75%
2018 −7.04%
2019 17.26%
2020 −5.14%
Table 15.3 CVS Annual Returns, 2011–2020 (source: Yahoo! Finance)
What was the arithmetic average return during the decade? What was the geometric average return during the decade?
Both the arithmetic average return and the geometric average return are “correct” calculations. They simply answer different questions. The geometric average tells you what you actually earned per year on average, compounded annually. It is useful for calculating how much a particular investment grows over a period of time. The arithmetic average tells you what you earned in a typical year. When we are looking at the historical description of the distribution of returns and want to predict what to expect in a particular year, the arithmetic average is the relevant calculation.
Measuring Risk
Although the arithmetic average return for Delta Airlines (DAL) for 2011–2020 was 22.4%, there is not a year in which the return was exactly 22.4%. In fact, in some years, the return was much higher than the average, such as in 2013, when it was 132.61%. In other years, the return was negative, such as 2011, when it was −35.79%. Looking at the yearly returns in Table 15.2, the return for DAL varies widely from year to year. In finance, this volatility of returns is considered risk.
Volatility of Returns
The most commonly used measure of volatility of returns in finance is the standard deviation of the returns. The standard deviation of returns for DAL for the sample period 2011–2020 is 51.9%. Remember that if the normal distribution (a bell-shaped curve) describes returns, then 68% (or about two-thirds) of the time, the return in a particular year will be within one standard deviation above and one standard deviation below the arithmetic average return. Given DAL’s average return of 22.4%, the actual yearly return will be somewhere between −29.5% and 74.3% in two out of three years. A very high return of greater than 74.3% would occur 16% of the time; a very large loss of more than 29.5% would also occur 16% of the time.
As you can see, there is a wide range of what can be considered a “typical” year for DAL. Although we can calculate an average return, the return in any particular year is likely to vary from that average. The larger the standard deviation, the greater this range of returns is. Thus, a larger standard deviation indicates a greater volatility of returns and, hence, more risk.
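The standard deviation and the one-standard-deviation band quoted above can be checked with the standard library (Python, illustrative; `statistics.stdev` computes the sample standard deviation used here):

```python
import statistics

# DAL yearly returns, 2011-2020 (Table 15.2).
dal = [-0.3579, 0.4672, 1.3261, 0.8053, 0.0405,
       -0.0135, 0.1623, -0.0866, 0.2038, -0.3077]

mean = statistics.mean(dal)   # ~22.40%
sd = statistics.stdev(dal)    # sample standard deviation, ~51.90%

# If returns are roughly normal, about 68% of yearly returns fall in this band.
print(f"typical-year band: {mean - sd:.1%} to {mean + sd:.1%}")
```

A stock with a tighter band, such as CVS, is considered less risky because its "typical year" is far more predictable.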
Think It Through
Calculating the Standard Deviation of Returns
You calculated the arithmetic average return for CVS to be 11.11% for the 10-year period of 2011–2020. Calculate the standard deviation of returns for CVS for the same period (see Table 15.5). What does this tell you about what an investor in CVS experienced in a typical year during that decade?
Firm-Specific Risk
Investors purchase a share of stock hoping that the stock will increase in value and they will receive a positive return. You can see, however, that even with well-established companies such as ExxonMobil and CVS, returns are highly volatile. Investors can never perfectly predict what the return on a stock will be, or even if it will be positive.
The yearly returns for four companies—Delta Airlines (DAL), Southwest Airlines (LUV), ExxonMobil (XOM), and CVS Health Corp. (CVS)—are shown in Table 15.6. Each of these stocks had years in which the performance was much better or much worse than the arithmetic average. In fact, none of the stocks appear to have a typical return that occurs year after year.
Year DAL LUV XOM CVS
2011 −35.79% −33.93% 18.67% 18.94%
2012 46.72% 20.03% 4.70% 20.28%
2013 132.61% 85.38% 20.12% 50.38%
2014 80.53% 126.47% −6.06% 37.12%
2015 4.05% 2.43% −12.79% 2.90%
2016 −1.35% 16.72% 19.88% −17.83%
2017 16.23% 32.41% −3.81% −5.75%
2018 −8.66% −28.28% −15.09% −7.04%
2019 20.38% 17.69% 7.23% 17.26%
2020 −30.77% −13.04% −36.21% −5.14%
Average 22.40% 22.59% −0.34% 11.11%
Std Dev 51.90% 49.84% 18.18% 21.56%
Table 15.6 Yearly Returns for DAL, LUV, XOM, and CVS (source: Yahoo! Finance)
Figure 15.2 contains a graph of the returns for each of these four stocks by year. In this graph, it is easy to see that DAL and LUV both have more volatility, or returns that vary more from year to year, than do XOM or CVS. This higher volatility leads to DAL and LUV having higher standard deviations of returns than XOM or CVS.
Figure 15.2 Yearly Returns for DAL, LUV, XOM, and CVS (data source: Yahoo! Finance)
Standard deviation is considered a measure of the risk of owning a stock. The larger the standard deviation of a stock’s annual returns, the further from the average that stock’s return is likely to be in any given year. In other words, the return for the stock is highly unpredictable. Although the return for CVS varies from year to year, it is not subject to the wide swings of the returns for DAL or LUV.
Why are stock returns so volatile? The value of the stock of a company changes as the expectations of the future revenues and expenses of the company change. These expectations may change due to a number of events and new information. Good news about a company will tend to result in an increase in the stock price. For example, DAL announcing that it will be opening new routes and flying to cities it has not previously serviced suggests that DAL will have more customers and more revenue in future years. Or if CVS announces that it has negotiated lower rent for many of its locations, investors will expect the expenses of the company to fall, leading to more profits. Those types of announcements will often be associated with a higher stock price. Conversely, if the pilots and flight attendants for DAL negotiate higher salaries, the expenses for DAL will increase, putting downward pressure on profits and the stock price.
Link to Learning
Peloton and Risk
An example of how news can impact the price of a stock occurred on May 5, 2021, when Peloton recalled all of its Tread+ and Tread products after the tragic death of a child and 70 injuries associated with use of its products.1 The previous day, Peloton stock traded for \$96.70 per share. The stock price dropped approximately 15% when the recall was announced. The closing price for a share of Peloton on May 5 was \$82.62.2 You can read the company’s statement about this recall online. | textbooks/biz/Finance/Principles_of_Finance_(OpenStax)/15%3A_How_to_Think_about_Investing/15.01%3A_Why_It_Matters.txt |
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Explain the benefits of diversification.
• Describe the relationship between risk and return for large portfolios.
• Compare firm-specific and systematic risk.
• Discuss how portfolio size impacts risk.
Diversification
So far, we have looked at the return and the volatility of an individual stock. Most investors, however, own shares of stock in multiple companies. This collection of stocks is known as a portfolio. Let’s explore why it is wise for investors to hold a portfolio of stocks rather than to pick just one favorite stock to own.
We saw that investors who owned DAL experienced an average annual return of 22.40% but also a large standard deviation of 51.90%. Investors who used all their funds to purchase DAL stock did exceptionally well during 2012–2014. But in 2020, those investors lost almost one-third of their money as COVID-19 caused a sharp reduction in air travel worldwide. To protect against these extreme outcomes, investors practice what is called diversification, or owning a variety of stocks in their portfolios.
Suppose, for example, you have saved \$50,000 that you want to invest. If you purchased \$50,000 of DAL stock, you would not be diversified. Your return would depend solely on the return on DAL stock. If, instead, you used \$5,000 to purchase DAL stock and used the remaining \$45,000 to purchase nine other stocks, you would be diversifying. Your return would depend not only on DAL’s return but also on the returns of the other nine stocks in your portfolio. Investors practice diversification to manage risk.
It is akin to the saying “Don’t put all of your eggs in one basket.” If you place all of your eggs in one basket and that basket breaks, all of your eggs will fall and crack. If you spread your eggs out across a number of baskets, it is unlikely that all of the baskets will break and all of your eggs will crack. One basket may break, and you will lose the eggs in that basket, but you will still have your other eggs. The same idea holds true for investing. If you own stock in a company that does poorly, perhaps even goes out of business, you will lose the money you placed in that particular investment. However, with a diversified portfolio, you do not lose all your money because your money is spread out across a number of different companies.
Link to Learning
Diversification
Fidelity Investments Inc. is a multinational financial services firm and one of the largest asset managers in the world. In this educational video for investors, Fidelity provides an explanation of what diversification is and how it impacts an investor’s portfolio.
Table 15.7 shows the returns of investors who placed 50% of their money in DAL and the remaining 50% in LUV, XOM, or CVS. Notice that the standard deviation of returns is lower for the two-stock portfolios than for DAL as an individual investment.
Two-Stock Portfolio
Year DAL DAL and LUV DAL and XOM DAL and CVS
2011 −35.79% −34.86% −8.56% −8.43%
2012 46.72% 33.38% 25.71% 33.50%
2013 132.61% 109.00% 76.36% 91.50%
2014 80.53% 103.50% 37.24% 58.83%
2015 4.05% 3.24% −4.37% 3.47%
2016 −1.35% 7.69% 9.27% −9.59%
2017 16.23% 24.32% 6.21% 5.24%
2018 −8.66% −18.47% −11.88% −7.85%
2019 20.38% 19.03% 13.81% 18.82%
2020 −30.77% −21.90% −33.49% −17.95%
Average 22.40% 22.49% 11.03% 16.75%
Std Dev 51.90% 49.11% 30.43% 35.10%
Table 15.7 Yearly Returns for DAL Versus a Two-Stock Portfolio Containing DAL and LUV, XOM, or CVS (data source: Yahoo! Finance)
As investors diversify their portfolios, the volatility of one particular stock becomes less important. XOM has good years with above-average returns and bad years with below-average (and even negative) returns, just like DAL. But the years in which those above-average and below-average returns occur are not always the same for the two companies. In 2014, for example, the return for DAL was greater than 80%, while the return for XOM was negative. On the other hand, in 2011, when DAL had a return of −35.79%, XOM had a positive return. When more than one stock is held, the gains in one stock can offset the losses in another stock, washing away some of the volatility.
When an investor holds only one stock, that one stock’s volatility contributes 100% to the portfolio’s volatility. When two stocks are held, the volatility of each stock contributes to the volatility of the portfolio. However, the volatility of the portfolio is not simply the average of the volatility of each stock held independently. How correlated the two stocks are, or how much they move together, will impact the volatility of the portfolio.
You will recall from our study of correlation in Regression Analysis in Finance that a correlation coefficient describes how two variables move relative to each other. A correlation coefficient of 1 means that there is a perfect, positive correlation between the two variables, while a correlation coefficient of −1 means that the two variables move exactly opposite of each other. Stocks that are in the same industry will tend to be more strongly correlated than stocks that are in much different industries. During the 2011–2020 time period, the correlation coefficient for DAL and LUV was 0.87, the correlation coefficient for DAL and XOM was 0.35, and the correlation coefficient for DAL and CVS was 0.79. Combining stocks that are not perfectly positively correlated in a portfolio decreases risk.
Notice that investors who owned DAL and LUV from 2011 to 2020 would have had a lower portfolio standard deviation, but not much lower, than investors who just owned DAL. Because the correlation coefficient is less than one, the standard deviation fell. However, because the two stocks are in the same industry and exposed to many of the same economic issues, the correlation coefficient is relatively high, and combining those two stocks provides only a small decrease in risk.
This is because, as airlines, DAL and LUV face many of the same market conditions. In years when the economy is strong, the weather is good, fuel prices are low, and people are traveling a lot, both companies will do well. When something such as bad weather conditions reduces the amount of air travel for several weeks, both companies are harmed. By holding LUV in addition to DAL, investors can reduce exposure to risk that is specific to DAL (perhaps a problem that DAL has with its reservation system), but they do not reduce exposure to the risk associated with the airline industry (perhaps rising jet fuel prices). DAL and LUV tend to experience positive returns in the same years and negative returns in the same years.
On the other hand, investors who added XOM to their portfolio saw a significantly lower standard deviation than those who held just DAL. In years when jet fuel prices rise, harming the profits of both DAL and LUV, XOM is likely to see high profits. Diversifying a portfolio across firms that are less correlated will reduce the standard deviation of the portfolio more.
Link to Learning
How to Build a Diversified Portfolio
TV personality, former hedge fund manager, and author Jim Cramer encourages investors to build a diversified portfolio, having no more than 20% of a portfolio in one sector.3 Watch this CNBC video to learn more about how he suggests investors can build a diversified portfolio by purchasing five to 10 stocks.
Portfolio Size and Risk
As you add more stocks to a portfolio, the volatility, or standard deviation, of the portfolio decreases. The volatility of individual assets becomes less and less important. As we discussed earlier, the risk that is associated with events related to a particular company is called firm-specific, or unsystematic, risk. Examples of unsystematic risk would include a company facing a product liability lawsuit, a company inventing a new product, or accounting irregularities being detected. Holding a portfolio of stocks means that if one company you have invested in goes out of business because of poor management, you do not lose all your savings because some of your money is invested in other companies. Portfolio diversification protects you from being significantly impacted by unsystematic risk.
However, there is a level below which the portfolio risk does not drop, no matter how diversified the portfolio becomes. The risk that never goes away is known as systematic risk. Systematic risk is the risk of holding the market portfolio.
We have talked about reasons why a firm’s returns might be volatile; for example, the firm discovering a new technology or having a product liability lawsuit brought against it will impact that firm specifically. There are also events that broadly impact the stock market. Changes in the Federal Reserve Bank’s monetary policy and interest rates impact all companies. Geopolitical events, major storms, and pandemics can also impact the entire market. Investors in stocks cannot avoid this type of risk. This unavoidable risk is the systematic risk that investors in stocks have. This systematic risk cannot be eliminated through diversification.
In addition, according to research conducted by Meir Statman,4 the standard deviation of a portfolio drops quickly as the number of stocks in the portfolio increases from one to two or three (see Figure 2 in Statman’s subsequent article for context). Increasing the size of the portfolio decreases the standard deviation, and thus the risk, of the portfolio. However, as the portfolio grows, the amount of risk removed by adding one more stock diminishes. How many stocks does an investor need for a portfolio to be well-diversified? There is no exact number that all financial managers agree on. A portfolio of 15 highly correlated stocks offers fewer diversification benefits than a portfolio of 10 stocks with lower correlation coefficients. A portfolio that consists of American Airlines, Spirit Airlines, United Airlines, Southwest Airlines, Delta Airlines, and JetBlue, along with a few other stocks, is not very diversified because of the heavy concentration in the airline industry. The term diversified portfolio is a relative concept, but the average investor can create a reasonably diversified portfolio with approximately a dozen stocks.
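Statman’s pattern can be sketched numerically. The function below computes the standard deviation of an equally weighted portfolio of n stocks under illustrative assumptions (each stock has a 40% standard deviation and every pair of stocks has a 0.30 correlation); the function name and parameter values are hypothetical, not figures from the chapter.

```python
import math

def equal_weight_portfolio_std(n, sigma=0.40, rho=0.30):
    # Variance of an equally weighted portfolio of n stocks, each with
    # volatility sigma and pairwise correlation rho:
    #   sigma^2 / n + (1 - 1/n) * rho * sigma^2
    variance = sigma ** 2 / n + (1 - 1 / n) * rho * sigma ** 2
    return math.sqrt(variance)

for n in (1, 2, 5, 10, 30, 100):
    print(n, round(equal_weight_portfolio_std(n), 4))

# The floor the portfolio approaches is sigma * sqrt(rho) -- the
# systematic risk that diversification cannot remove.
print("floor:", round(0.40 * math.sqrt(0.30), 4))
```

The first few stocks remove most of the diversifiable risk; beyond roughly a dozen, each additional stock barely moves the standard deviation, which matches the chapter’s point about diminishing diversification benefits.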
Learning Outcomes
By the end of this section, you will be able to:
• Define risk premium.
• Explain the concept of beta.
• Compute the required return of a security using the CAPM.
Risk-Free Rate
The capital asset pricing model (CAPM) is a financial theory based on the idea that investors who are willing to hold stocks that have higher systematic risk should be rewarded more for taking on this market risk. The CAPM focuses on systematic risk, rather than a stock’s individual risk, because firm-specific risk can be eliminated through diversification.
Suppose that your grandparents have given you a gift of \$20,000. After you graduate from college, you plan to work for a few years and then apply to law school. You want to use the \$20,000 your grandparents gave you to pay for part of your law school tuition. It will be several years before you are ready to spend the money, and you want to keep the money safe. At the same time, you would like to invest the money and have it grow until you are ready to start law school.
Although you would like to earn a return on the money so that you have more than \$20,000 by the time you start law school, your primary objective is to keep the money safe. You are looking for a risk-free investment. Lending money to the US government is considered the lowest-risk investment that you can make. You can purchase a US Treasury security. The chances of the US government not paying its debts are close to zero. Although, in theory, no investment is 100% risk-free, investing in US government securities is generally considered a risk-free investment because the risk is so minuscule.
The rate that you can earn by purchasing US Treasury securities is a proxy for the risk-free rate and is used as an investing benchmark. The average rate of return for the three-month US Treasury security from 1928 to 2020 is 3.36%.5 You can see that you will not become immensely wealthy by investing in US Treasury bills. Another characteristic of US Treasury securities, however, is that their volatility tends to be much lower than that of stocks. In fact, the standard deviation of returns for US Treasury bills is 3.0%. Unlike the returns for stocks, the return on US Treasury bills has never been negative. The lowest annual return was 0.03%, which occurred in 2014.6
Link to Learning
US Treasury Securities
Visit the website of the US Department of the Treasury to learn more about US Treasury securities. You will find current interest rates for both short-term securities (US Treasury bills) and long-term securities (US Treasury bonds).
Risk Premium
You know that if you use your \$20,000 to invest in stock rather than in US Treasury bills, the outcome of the investment will be uncertain. Your investments may do well, but there is also a risk of losing money. You will only be willing to take on this risk if you are rewarded for doing so. In other words, you will only be willing to take the risk of investing in stocks if you think that doing so will make you more than you would make investing in US Treasury securities.
From 1928 to 2020, the average return for the S&P 500 stock index has been 11.64%, which is much higher than the 3.36% average return for US Treasury bills.7 Stock returns, with a standard deviation of 19.49%, however, have also been much more volatile. In fact, there were 25 years in which the return for the S&P 500 index was negative.
You may not be willing to take the risk of losing some of the money your grandparents gave you because you have been setting it aside for law school. If that’s the case, you will want to invest in US Treasury securities. You may have money that you are saving for other long-term goals, such as retirement, with which you are willing to take some risk. The extra return that you will earn for taking on risk is known as the risk premium. The risk premium can be thought of as your reward for being willing to bear risk.
The risk premium is calculated as the difference between the return you receive for taking on risk and the return you would have earned had you not taken on risk. Using the average return of the S&P 500 (to measure what investors who bear risk earn) and the US Treasury bill rate (to measure what investors who do not bear risk earn), the risk premium is calculated as
$\text{Risk Premium} = \text{S\&P 500 Avg Return} - \text{US T-Bill Avg Return} = 11.64\% - 3.36\% = 8.28\%$
15.11
Beta
The risk premium represents how much an investor who takes on the market portfolio is rewarded for risk. Investors who purchase one stock—DAL, for example—experience volatility, which is measured by the standard deviation of that stock’s returns. Remember that some of that volatility, the volatility caused by firm-specific risk, can be diversified away. Because investors can eliminate firm-specific risk through diversification, they will not be rewarded for that risk. Investors are rewarded for the amount of systematic risk they incur.
Interpreting Beta
The relevant risk for investors is the systematic risk they incur. The systematic risk of a particular stock is measured by how much the stock moves with the market. The measure of how much a stock moves with the market is known as its beta. A stock that tends to move in sync with the market will have a beta of 1. For these stocks, if the market goes up 10%, the stock generally also goes up 10%; if the market goes down 5%, stocks with a beta of 1 also tend to go down 5%.
If a company has a beta greater than 1, then the stock tends to have a more pronounced move in the same direction as a market move. For example, if a stock has a beta of 2, the stock will tend to increase by 20% when the market goes up by 10%. If the market falls by 5%, that same stock will tend to fall by twice as much, or 10%. Thus, stocks with a beta greater than 1 experience greater swings than the overall market and are considered to be riskier than the average stock.
On the other hand, stocks with a beta less than 1 experience smaller swings than the overall market. A beta of 0.5, for example, means that a stock tends to experience moves that are only 50% of overall market moves. So, if the market increases by 10%, a stock with a beta of 0.5 would tend to rise by only 5%. A market decline of 5% would tend to be associated with a 2.5% decrease in the stock.
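The arithmetic in the last two paragraphs is simply multiplication by beta. A minimal sketch (the helper name is hypothetical):

```python
def expected_stock_move(beta, market_move):
    # A stock's typical move is its beta times the market's move.
    return beta * market_move

print(expected_stock_move(2.0, 0.10))    # beta-2 stock in a +10% market
print(expected_stock_move(0.5, -0.05))   # beta-0.5 stock in a -5% market
```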
Calculating Betas
The calculation of beta for DAL is demonstrated in Figure 15.3. Monthly returns for DAL and for the S&P 500 are plotted in the diagram. Each dot in the scatter plot corresponds to a month from 2018 to 2020; for example, the dot that lies furthest in the upper right-hand corner represents November 2020. The return for the S&P 500 was 10.88% that month; this return is plotted along the horizontal axis. The return for DAL during November 2020 was 31.36%; this return is plotted along the vertical axis.
You can see that generally, when the overall stock market as measured by the S&P 500 is positive, the return for DAL is also positive. Likewise, in months in which the return for the S&P 500 is negative, the return for DAL is also usually negative. Drawing a line that best fits the data, also known as a regression line, summarizes the relationship between the returns for DAL and the S&P 500. The slope of this line, 1.39, is DAL’s beta. Beta measures the amount of systematic risk that DAL has.
Figure 15.3 Calculation of Beta for DAL (data source: Yahoo! Finance)
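The slope of that regression line can also be computed directly as the covariance of the stock’s returns with the market’s returns divided by the variance of the market’s returns. The sketch below uses short, made-up monthly return series rather than the actual DAL and S&P 500 data.

```python
def beta(stock_returns, market_returns):
    # Slope of the best-fit line: cov(stock, market) / var(market).
    n = len(stock_returns)
    mean_s = sum(stock_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(stock_returns, market_returns)) / (n - 1)
    var = sum((m - mean_m) ** 2 for m in market_returns) / (n - 1)
    return cov / var

# Hypothetical monthly returns (not the DAL/S&P 500 series in Figure 15.3).
market = [0.03, -0.02, 0.05, 0.01, -0.04]
stock = [0.05, -0.03, 0.08, 0.01, -0.06]
print(round(beta(stock, market), 2))
```

Note that the market regressed on itself has a beta of exactly 1, which is why beta is interpreted relative to an average, beta-1 stock.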
CAPM Equation
Because DAL’s beta of 1.39 is greater than 1, DAL is riskier than the average stock in the market. Finance theory suggests that investors who purchase DAL will expect a higher rate of return to compensate them for this risk. DAL has 139% of the average stock’s systematic risk; therefore, investors in the stock should receive 139% of the market risk premium.
The capital asset pricing model (CAPM) equation is
$R_e = R_f + \beta \times \text{Market Risk Premium} = R_f + \beta \times (R_m - R_f)$
15.12
where Re is the expected return of the asset, Rf is the risk-free rate of return, and Rm is the expected return of the market. Given the average S&P 500 return of 11.64% and the average US Treasury bill return of 3.36%, the expected return of DAL would be calculated as
$R_e = R_{\text{US T-bill}} + \beta \times (R_{\text{S\&P}} - R_{\text{US T-bill}}) = 0.0336 + 1.39 \times (0.1164 - 0.0336) = 14.87\%$
15.13
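The CAPM computation is straightforward to script. This sketch restates Equation 15.12 as a function and checks it against the DAL figures from the text.

```python
def capm_expected_return(risk_free, beta, market_return):
    # CAPM: Re = Rf + beta * (Rm - Rf)
    return risk_free + beta * (market_return - risk_free)

# DAL example: Rf = 3.36%, beta = 1.39, average market return = 11.64%.
print(round(capm_expected_return(0.0336, 1.39, 0.1164), 4))  # 0.1487
```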
Link to Learning
Calculating Beta
Many providers of stock data and investment information will list a company’s beta. Two internet sources that can be used to find a company’s beta are Yahoo! Finance and MarketWatch. Various sources may not provide the exact same value for beta for a company. For example, in early February 2021, Yahoo! Finance reported that the beta for DAL was 1.46,8 while MarketWatch reported it as 1.29.9 Both of these numbers are slightly different from the 1.39 calculated in the graph above.
There are several reasons why beta may vary slightly from source to source. One is the time frame used in the beta calculation. Data from three years were used to calculate the beta in Figure 15.3. Time frames ranging from three to five years are commonly used when calculating beta. Another reason different sources might report different betas is the frequency with which the data is collected. Monthly returns are used in Figure 15.3; some analysts will use weekly data. Finally, the S&P 500 is used to measure the market return in Figure 15.3; the S&P 500 is one of the most common measures of overall market returns, but alternatives exist and are used by some analysts.
Link to Learning
CAPM
Watch this video for further information about the CAPM.
Learning Outcomes
By the end of this section, you will be able to:
• Interpret a Sharpe ratio.
• Interpret a Treynor measurement.
• Interpret Jensen’s alpha.
Sharpe Ratio
Investors want a measure of how good a professional money manager is before they entrust their hard-earned funds to that professional for investing. Suppose that you see an advertisement in which McKinley Investment Management claims that the portfolios of its clients have an average return of 20% per year. You know that this average annual return is meaningless without also knowing something about the riskiness of the firm’s strategy. In this section, we consider some ways to evaluate the riskiness of an investment strategy.
A basic measure of investment performance that includes an adjustment for risk is the Sharpe ratio. The Sharpe ratio is computed as a portfolio’s risk premium divided by the standard deviation of the portfolio’s return, using the formula
$\text{Sharpe Ratio} = \frac{R_P - R_f}{\sigma_P}$
15.14
The portfolio risk premium is the portfolio return RP minus the risk-free return Rf; this is the basic reward for bearing risk. If the risk-free return is 3%, McKinley Investment Management’s clients who are earning 20% on their portfolios have an excess return of 17%.
The standard deviation of the portfolio’s return, $\sigma_P$, is a measure of risk. Although you see that McKinley’s clients earn a nice 20% return on average, you find that the returns are highly volatile. In some years, the clients earn much more than 20%, and in other years, the return is much lower, even negative. That volatility leads to a standard deviation of returns of 26%. The Sharpe ratio would be $\frac{17\%}{26\%}$, or 0.65.
Thus, the Sharpe ratio can be thought of as a reward-to-risk ratio. The standard deviation in the denominator can be thought of as the units of risk the investor has. The numerator is the reward the investor is receiving for taking on that risk.
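As a quick sketch, the Sharpe ratio calculation for the McKinley example reads:

```python
def sharpe_ratio(portfolio_return, risk_free, std_dev):
    # Risk premium per unit of total risk (standard deviation).
    return (portfolio_return - risk_free) / std_dev

# McKinley example: 20% return, 3% risk-free rate, 26% std dev of returns.
print(round(sharpe_ratio(0.20, 0.03, 0.26), 2))  # 0.65
```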
Link to Learning
Sharpe Ratio
The Sharpe ratio was developed by Nobel laureate William F. Sharpe. You can visit Sharpe’s Stanford University website to find videos in which he discusses financial topics and links to his research as well as his advice on how to invest.
Treynor Measurement of Performance
Another reward-to-risk ratio measurement of investment performance is the Treynor ratio. The Treynor ratio is calculated as
$\text{Treynor Ratio} = \frac{R_P - R_f}{\beta_P}$
15.15
Just as with the Sharpe ratio, the numerator of the Treynor ratio is a portfolio’s risk premium; the difference is that the Treynor ratio focuses on systematic risk, using the beta of the portfolio in the denominator, while the Sharpe ratio focuses on total risk, using the standard deviation of the portfolio’s returns in the denominator.
If McKinley Investment Management has a portfolio with a 20% return over the past five years, with a beta of 1.2 and a risk-free rate of 3%, the Treynor ratio would be $\frac{0.20 - 0.03}{1.2} = 0.14$.
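The same example as a sketch, with the denominator swapped from standard deviation to beta:

```python
def treynor_ratio(portfolio_return, risk_free, beta):
    # Risk premium per unit of systematic risk (beta).
    return (portfolio_return - risk_free) / beta

# McKinley example: 20% return, 3% risk-free rate, beta of 1.2.
print(round(treynor_ratio(0.20, 0.03, 1.2), 2))  # 0.14
```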
Both the Sharpe and Treynor ratios are relative measures of investment performance, meaning that there is not an absolute number that indicates whether an investment performance is good or bad. An investment manager’s performance must be considered in relation to that of other managers or to a benchmark index.
Jensen’s Alpha
Jensen’s alpha is another common measure of investment performance. It is computed as the raw portfolio return minus the expected portfolio return predicted by the CAPM:
$\text{Jensen's Alpha} = \alpha_P = R_P - R_e = R_P - [R_f + \beta_P(R_m - R_f)]$
15.16
Suppose that the average market return has been 12%. What would Jensen’s alpha be for McKinley Investment Management’s portfolio with a 20% average return and a beta of 1.2?
$\alpha_{\text{McKinley}} = 0.2 - [0.03 + 1.2(0.12 - 0.03)] = 0.062$
15.17
Unlike the Sharpe and Treynor ratios, which are meaningful in a relative sense, Jensen’s alpha is meaningful in an absolute sense. An alpha of 0.062 indicates that the McKinley Investment Management portfolio provided a return that was 6.2% higher than would be expected given the riskiness of the portfolio. A positive alpha indicates that the portfolio had an abnormal return. If Jensen’s alpha equals zero, the portfolio return was exactly what was expected given the riskiness of the portfolio as measured by beta.
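Jensen’s alpha for the McKinley example can be sketched the same way; the function restates Equation 15.16.

```python
def jensens_alpha(portfolio_return, risk_free, beta, market_return):
    # Raw portfolio return minus the CAPM-expected return.
    expected = risk_free + beta * (market_return - risk_free)
    return portfolio_return - expected

# McKinley example: 20% return, 3% risk-free rate, beta 1.2, 12% market.
print(round(jensens_alpha(0.20, 0.03, 1.2, 0.12), 3))  # 0.062
```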
Think It Through
Comparing the Returns and Risks of Portfolios
You are interviewing two investment managers. Mr. Wong shows that the average return on his portfolio for the past 10 years has been 14%, with a standard deviation of 8% and a beta of 1.2. Ms. Petrov shows that the average return on her portfolio for the past 10 years has been 16%, with a standard deviation of 10% and a beta of 1.6. You know that over the past 10 years, the US Treasury security rate has averaged 2% and the return on the S&P 500 has averaged 11%. Which portfolio manager do you think has done the better job?
For each manager, compute all three performance measures. The Sharpe ratios are

$\text{Sharpe}_{\text{Wong}} = \frac{0.14 - 0.02}{0.08} = 1.50 \qquad \text{Sharpe}_{\text{Petrov}} = \frac{0.16 - 0.02}{0.10} = 1.40$

The Treynor ratios are

$\text{Treynor}_{\text{Wong}} = \frac{0.14 - 0.02}{1.2} = 0.10 \qquad \text{Treynor}_{\text{Petrov}} = \frac{0.16 - 0.02}{1.6} = 0.0875$

Jensen’s alpha for Mr. Wong’s portfolio is

$\alpha_{\text{Wong}} = 0.14 - [0.02 + 1.2(0.11 - 0.02)] = 0.012$

Jensen’s alpha for Ms. Petrov’s portfolio is

$\alpha_{\text{Petrov}} = 0.16 - [0.02 + 1.6(0.11 - 0.02)] = -0.004$

15.19
All three measures of portfolio performance suggest that Mr. Wong’s portfolio has performed better than Ms. Petrov’s has. Although Ms. Petrov has had a larger average return, the portfolio she manages is riskier. Ms. Petrov’s portfolio is more volatile than Mr. Wong’s, resulting in a higher standard deviation. Ms. Petrov’s portfolio has a higher beta, which means it has a higher amount of systematic risk. The CAPM suggests that a portfolio with a beta of 1.6 should have an expected return of 16.4%. Because Ms. Petrov’s portfolio has an average return of less than that, investors in Ms. Petrov’s portfolio are not rewarded for the risk that they have taken as much as would be expected. | textbooks/biz/Finance/Principles_of_Finance_(OpenStax)/15%3A_How_to_Think_about_Investing/15.05%3A_Applications_in_Performance_Measurement.txt |
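The Think It Through numbers can be verified with a short sketch; the helper function is hypothetical, but the inputs are exactly the figures given in the scenario.

```python
rf, rm = 0.02, 0.11  # 10-year average T-bill and S&P 500 returns

def evaluate(rp, std_dev, beta):
    # Returns (Sharpe ratio, Treynor ratio, Jensen's alpha).
    sharpe = (rp - rf) / std_dev
    treynor = (rp - rf) / beta
    alpha = rp - (rf + beta * (rm - rf))
    return sharpe, treynor, alpha

wong = evaluate(0.14, 0.08, 1.2)
petrov = evaluate(0.16, 0.10, 1.6)
print("Wong:  ", [round(x, 4) for x in wong])
print("Petrov:", [round(x, 4) for x in petrov])
# Wong scores higher on all three measures despite the lower raw return.
```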
Learning Outcomes
By the end of this section, you will be able to:
• Calculate the average return and standard deviation for a stock.
• Calculate the average return and standard deviation for a portfolio.
• Calculate the beta of a stock.
Average Return and Standard Deviation for a Single Stock
Excel can be used to calculate the average returns and the standard deviation of returns for both a single stock and a portfolio of stocks. It can also be used to calculate the beta for a stock. Historic stock price data for stocks you are interested in analyzing can easily be downloaded from sites such as Yahoo! Finance into Excel. The examples in this section use monthly stock data from December 2017 to December 2020 from Yahoo! Finance.
Monthly price data for AMZN (Amazon) is shown in column B of Figure 15.4. To begin, monthly returns must be calculated from the price data using the formula
$\text{Monthly Return} = \frac{\text{Ending Price} - \text{Beginning Price}}{\text{Beginning Price}}$
15.20
The ending prices shown in Figure 15.4 are the last price the stock traded for each month. Each month, the return is calculated under the assumption that you purchased the stock at the last trading price of the previous month and sold at the last price of the current month. Thus, the return for January 2018 is calculated as
$\text{Monthly Return} = \frac{\text{Jan 2018 Price} - \text{Dec 2017 Price}}{\text{Dec 2017 Price}} = \frac{1,450.89 - 1,169.47}{1,169.47} = 0.2406 = 24.06\%$
15.21
This is accomplished in Excel by placing the formula =(B3-B2)/B2 in cell C3. This formula can then be copied down the spreadsheet through row C38. Now that each monthly return is in column C, you can calculate the average of the monthly returns in cell C39 and the standard deviation of returns in cell C40.
Figure 15.4 Calculating the Average Return and the Standard Deviation of Returns for AMZN (data source: Yahoo! Finance)
Over the three-year period, the average monthly return for AMZN was 3.3%. However, this return was highly volatile, with a standard deviation of 9.33%. Remember that this means that approximately two-thirds of the time, the monthly return from AMZN was between −6.03% and 12.63%.
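The same calculation can be scripted outside Excel. In the sketch below, only the first two prices (the December 2017 and January 2018 figures) come from the text; the remaining month-end prices are made up for illustration.

```python
import statistics

prices = [1169.47, 1450.89, 1512.45, 1447.34, 1566.13]  # month-end prices

# Monthly return = (ending price - beginning price) / beginning price
returns = [(p1 - p0) / p0 for p0, p1 in zip(prices, prices[1:])]

avg = statistics.mean(returns)
std = statistics.stdev(returns)  # sample std dev, like Excel's STDEV.S
print([round(r, 4) for r in returns])
print(round(avg, 4), round(std, 4))
```

The first element of `returns` reproduces the 24.06% January 2018 return computed above.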
Download the spreadsheet file containing key Chapter 15 Excel exhibits.
Average Return and Standard Deviation for a Portfolio
The Excel screenshot in Figure 15.5 shows the return and standard deviation calculation for a portfolio. This sample four-stock portfolio contains AMZN, CVS, AAPL (Apple), and NFLX (Netflix). This portfolio is constructed as an equally weighted portfolio; because there are four stocks in this portfolio, each has a weight of 25%.
Figure 15.5 Calculation of the Average Return and Standard Deviation for a Portfolio (data source: Yahoo! Finance)
The monthly returns for each stock are recorded in their respective columns. The portfolio return for each month is calculated as the weighted average of the four monthly individual stock returns. The formula for the portfolio return is
$\text{Portfolio Return} = 0.25 \times R_{\text{AMZN}} + 0.25 \times R_{\text{CVS}} + 0.25 \times R_{\text{AAPL}} + 0.25 \times R_{\text{NFLX}}$
15.22
The formula =\$B\$1*B3+\$C\$1*C3+\$D\$1*D3+\$E\$1*E3 is placed in cell F3. The formula is then copied down column F to calculate the portfolio return for each month. After the monthly portfolio return is calculated, then the average monthly portfolio return is calculated in cell F39. The average monthly portfolio return is 2.69%.
Because this is an equally weighted portfolio, with each of the four stocks impacting the portfolio return in the same way, the average monthly portfolio return of 2.69% is the same as the sum of the average monthly returns of the four stocks divided by four, or $\frac{0.0330 + 0.0021 + 0.0380 + 0.0347}{4} = 0.0269 = 2.69\%$.
The standard deviation of the monthly portfolio returns is calculated in cell F40. This four-stock portfolio has a standard deviation of 7.10%. Unlike the average return, this standard deviation is not equal to the average of the standard deviations of returns of the four stocks. In fact, the standard deviation for the portfolio is less than the standard deviation for any one of the four stocks. Remember that this occurs because the stock returns are not perfectly positively correlated. The high return of one of the stocks in one month is dampened by a lower return in another stock during the same month. Likewise, a negative return in one stock during a month might be offset by a positive return in one of the other three stocks during the same month. This is the risk reduction benefit of holding a portfolio of stocks.
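The portfolio arithmetic above translates directly to a script. The return series below are made up for illustration (they are not the actual AMZN/CVS/AAPL/NFLX data), but the structure mirrors the spreadsheet.

```python
import statistics

# Hypothetical monthly returns for four stocks.
amzn = [0.24, -0.04, 0.08, 0.04, 0.04]
cvs = [0.02, -0.01, 0.00, 0.01, -0.01]
aapl = [0.05, 0.06, -0.02, 0.09, 0.01]
nflx = [0.10, -0.08, 0.05, 0.07, 0.03]
weights = [0.25, 0.25, 0.25, 0.25]  # equally weighted

# Each month's portfolio return is the weighted average of the four returns.
portfolio = [sum(w * r for w, r in zip(weights, month))
             for month in zip(amzn, cvs, aapl, nflx)]

print(round(statistics.mean(portfolio), 4))
print(round(statistics.stdev(portfolio), 4))
```

With equal weights, the portfolio mean equals the average of the four stocks’ means, while the portfolio standard deviation comes in below the average of the four individual standard deviations because the series are not perfectly correlated.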
Calculating Beta
The standard deviation of a stock’s returns indicates the stock’s volatility. Remember that the volatility is caused by both firm-specific and systematic risk. Investors will not be rewarded for firm-specific risk because they can diversify away from it. Investors are, however, rewarded for systematic risk. To determine how much of a firm’s risk is due to systematic risk, you can use Excel to calculate the stock’s beta.
To calculate a stock’s beta, you need the monthly return for the market in addition to the monthly return for the stock. In column B in Figure 15.6, the monthly return for SPY, the SPDR S&P 500 Trust, is recorded. SPY is an ETF created by State Street Global Advisors to mimic the performance of the S&P 500 index and is often used as a proxy for overall market performance. The monthly returns for AMZN are visible in column C. It is important that these returns be lined up so that the returns for a particular month for both securities appear in the same row number. Also, you want to place the returns for SPY in the column to the left of the returns for AMZN so that when you create your graph, SPY will automatically appear on the horizontal axis.
Link to Learning
State Street Global Advisors
You can learn more about State Street Global Advisors and its creation of the first ETF by visiting the company’s history page at its website (ssga.com).
You will use a scatter plot to create a graph. In Excel, go to the Insert tab, and then from the Chart menu, choose the first scatter plot option.
Figure 15.6 Excel Format for Calculating Beta (data source: Yahoo! Finance)
Selecting the scatter plot option will result in a chart being inserted that looks like the chart in Figure 15.7. Each dot represents one month’s combination of returns, with the return for SPY measured on the horizontal axis and the return for AMZN measured on the vertical axis. Consider, for example, the dot in the furthest upper right-hand section of the figure. This dot is the plot of returns for the month of April 2020, when the return for SPY was 13.36% (measured on the horizontal axis) and the return for AMZN was 26.89% (measured on the vertical axis).
Hover your mouse over one of the dots, and right-click the dot to pull up a chart formatting menu. This menu will allow you to add labels to your axis and polish your chart in additional ways if you would like. Select the option Add Trendline.
Figure 15.7 Creating a Scatter Plot in Excel (data source: Yahoo! Finance)
When the trendline is inserted, a formatting box will appear on the right of your screen (see Figure 15.8). If it is not already selected, choose the Linear trendline option. Scroll down and select the “Display Equation on chart” option. You will see the equation $y = 1.1477x + 0.0186$ appear on the screen. This is the equation for the best-fit line that shows how AMZN moves when the market moves. The slope of this line, 1.1477, is the beta for AMZN. This tells you that for every 10% move the overall market makes, AMZN tends to move 11.477%. Because AMZN tends to move a little more than the broader market, it has a little more systematic risk than the average stock in the market.
Figure 15.8 Inserting a Trendline to Determine Beta (data source: Yahoo! Finance)
15.1 Risk and Return to an Individual Asset
Investors are interested in both the return they can expect to receive when making an investment and the risk associated with that investment. In finance, risk is considered the volatility of the return from time period to time period. Historical returns are measured by the arithmetic average, and the risk is measured by the standard deviation of returns.
15.2 Risk and Return to Multiple Assets
As investors hold multiple assets in a portfolio, they are able to eliminate firm-specific risk. However, systematic or market risk remains, even if an investor holds the market portfolio. The return to a portfolio is measured by the arithmetic average, and the risk is measured by the standard deviation of the returns of the portfolio. The risk of the portfolio will be lower than the weighted average of the risk of the individual securities because the returns of the securities are not perfectly correlated. A low or negative return for one stock in a period can be offset by a high return for another stock in the same period.
15.3 The Capital Asset Pricing Model (CAPM)
The capital asset pricing model (CAPM) relates the expected return of an asset to the systematic risk of that asset. Investors will be rewarded for taking on systematic risk. They will not be rewarded for taking on firm-specific risk, however, because that risk can be diversified away.
15.4 Applications in Performance Measurement
Because investors are not simply interested in returns but are also interested in risk, the success of a portfolio cannot be measured simply by considering the portfolio’s return. In order to compare investment portfolios, risk and return must both be taken into consideration. The Sharpe ratio and the Treynor ratio are two measures that provide a reward-to-risk measure of a portfolio. Jensen’s alpha provides a measure of the abnormal return of a portfolio, considering the portfolio’s risk level.
15.5 Using Excel to Make Investment Decisions
Using Excel to manipulate publicly available stock data makes calculating the average return of a stock and the standard deviation of returns easy. The average return for a portfolio and the standard deviation of the portfolio returns can also be calculated easily. By comparing the returns of a stock with the returns of the overall market using Excel charting tools, the beta for a stock, which measures systematic risk, can be determined.
15.08: Key Terms
arithmetic average return
the sum of an asset’s annual returns over a number of years divided by the number of years
beta
a measure of how a stock moves relative to the market
capital asset pricing model (CAPM)
the expected return of a security, equal to the risk-free rate plus a premium for the amount of risk taken
capital gain yield
the difference between the price a stock is sold for and the price that was originally paid for it divided by the price originally paid
diversification
holding a variety of assets in a portfolio
dividend yield
the total dividends received by the owner of a share of stock divided by the price originally paid for the stock
effective annual rate (EAR)
returns expressed on an annualized or yearly basis; allows for the comparison of various investments
firm-specific risk
the risk that an event may impact the expected revenue or costs of a firm, thereby impacting the returns to investors; also known as diversifiable risk
geometric average return
the compound annual return derived from the effective annual rate and time value of money formulas
holding period percentage return
the gain received from holding a stock, calculated by adding the amount received when the stock is sold to any dividends earned while holding the stock, subtracting the price originally paid for the stock, then dividing the difference by the price originally paid
Jensen’s alpha
a measure of portfolio performance, calculated as the raw portfolio return minus the expected portfolio return predicted by the CAPM
market risk premium
the reward for taking on the average amount of market risk
portfolio
a collection of owned stocks
realized return
the total return of an investment that occurs over a particular time period
risk premium
the extra return earned by taking on risk
risk-free rate
the reward for lending money when there is no risk of not receiving the principal and interest as promised
Sharpe ratio
a reward-to-risk measure of portfolio performance, calculated by subtracting the risk-free rate from the average portfolio return and then dividing by the standard deviation of the portfolio
systematic risk
risk that impacts the entire market and cannot be diversified away; also known as market risk
Treynor ratio
a reward-to-risk measure of portfolio performance, calculated by subtracting the risk-free rate from the average portfolio return and then dividing by the beta of the portfolio
15.09: CFA Institute
This chapter supports some of the Learning Outcome Statements (LOS) in this CFA® Level I Study Session. Reference with permission of CFA Institute.
1.
The total dollar return equals _______________.
1. the EPS of a stock
2. capital gains income plus dividend income
3. the price paid for a share of stock minus the selling price of the stock
4. the price paid for a share of stock divided by the selling price of the stock
2.
The dividend yield is calculated by _______________.
1. dividing the price of the stock by the EPS
2. subtracting any capital loss from the capital gain
3. dividing the annual dividend by the initial stock price
4. dividing the annual dividend by the net income for the year
3.
Which of the following is the best example of firm-specific risk?
1. A global pandemic causes major disruptions in the economy.
2. The Federal Reserve increases the money supply dramatically, leading to massive inflation.
3. AAA Pharmaceuticals withdraws a medication as it studies whether strokes suffered by five people after taking the medication were related to the medication.
4. As an arctic blast descends on North America, most of the United States is blanketed in snow or ice.
4.
Investors diversify their portfolio in order to _______________.
1. reduce risk
2. increase risk
3. increase return
4. increase the standard deviation
5.
Which of the following would be the best example of systematic risk?
1. An error in the company’s computer system miscalculates the amount of inventory that Monique’s Boutique is holding.
2. BlueJay Air has a reduction in new reservations following a crash of one of its jets.
3. The spokesperson for Serena’s Sports Shoes is involved in an ethical scandal.
4. Interest rates rise after the Federal Reserve announces it will slow down the rate of growth of the money supply.
6.
As the number of stocks in a portfolio increases, _______________.
1. firm-specific risk increases
2. systematic risk becomes zero
3. systematic risk decreases and returns increase
4. firm-specific risk is reduced but systematic risk remains
7.
Beta is a measure of _______________.
1. systematic risk
2. firm-specific risk
3. a firm’s profitability
4. a stock’s dividend yield
8.
Which of the following would be the best estimate of the risk-free rate?
1. The rate of inflation
2. The average return on the S&P 500
3. The average return on Amazon’s stock
4. The average return on US Treasury bills
9.
The Sharpe ratio can be considered a measure of _______________.
1. the reward of an investment in relation to the risk
2. the systematic risk of a stock
3. the total return of a stock investment
4. the historical return of an individual security
10.
A positive Jensen’s alpha indicates that a portfolio has _______________.
1. a negative beta
2. an abnormal return
3. more total risk than the average portfolio
4. more systematic risk than the average portfolio
11.
If an equally weighted portfolio contains 10 stocks, then _______________.
1. the stocks in the portfolio will each have a weight of 0.10
2. the return of the portfolio must be multiplied by 10 to get the annualized return
3. the standard deviation of the portfolio will be one-tenth the standard deviation of one of the stocks
4. the standard deviation of the portfolio will be 10 times the standard deviation of one of the stocks
1.
What is the difference between firm-specific risk and systematic risk?
2.
Explain why diversification reduces unsystematic risk but not systematic risk.
3.
Explain what happens to the standard deviation of returns of a portfolio as the number of stocks in the portfolio increases.
4.
Enrique owns five stocks: Alaska Airlines, American Airlines, Delta Airlines, Southwest Airlines, and Ford. Radha also owns five stocks: Apple, McDonald’s, Tesla, Facebook, and Disney. Does Enrique or Radha have a more diversified portfolio?
5.
You are considering purchasing shares in a company that has a beta of 0.8. Explain what this beta means.
6.
Explain how the Sharpe ratio and the Treynor ratio can be considered reward-to-risk measures.
15.12: Problems
1.
You purchase 100 shares of COST (Costco) for \$280 per share. Three months later, you sell the stock for \$290 per share. You receive a dividend of \$0.57 a share. What is your total dollar return?
2.
You purchase 100 shares of COST for \$280 per share. Three months later, you sell the stock for \$290 per share. You receive a dividend of \$0.57 a share. What are your dividend yield, capital gain yield, and total percentage return?
3.
You purchase 100 shares of COST for \$280 per share. Three months later, you sell the stock for \$290 per share. You receive a dividend of \$0.57 a share. What is the EAR of your investment?
4.
You invest in a stock for four years. The returns for the four years are 20%, -10%, 15%, and -5%. Calculate the arithmetic average return and the geometric average return.
5.
You are considering purchasing shares in a company that has a beta of 0.9. The average return for the S&P 500 is 11%, and the average return for US Treasury bills has been 2%. Based on the CAPM, what is your expected return for the stock?
6.
Your portfolio has had a 15% rate of return with a standard deviation of 18% and a beta of 1.1. The average return for the S&P 500 has been 11%, and the average return for US Treasury bills has been 2%. Calculate the Sharpe ratio, Treynor ratio, and Jensen’s alpha for your portfolio.
7.
The monthly returns for Visa (V) and Pfizer (PFE) for 2018–2020 are provided in the chart below. In addition, the monthly return for the SPDR S&P 500 ETF Trust (SPY) is provided; SPY is often used as a proxy for the returns of the S&P 500, or a broad market index. Using Excel, calculate the arithmetic average monthly returns for V, PFE, and SPY. Also, calculate the standard deviation of returns for each of V, PFE, and SPY.
Monthly Returns for SPY, V, and PFE for 2018–2020
Date SPY V PFE
Jan-18 0.0618 0.0895 0.0226
Feb-18 -0.0364 -0.0104 -0.0197
Mar-18 -0.0313 -0.0253 -0.0135
Apr-18 0.0092 0.0607 0.0316
May-18 0.0243 0.0303 -0.0186
Jun-18 0.0013 0.0149 0.0196
Jul-18 0.0417 0.0324 0.1006
Aug-18 0.0319 0.0742 0.0398
Sep-18 0.0014 0.0233 0.0705
Oct-18 -0.0649 -0.0816 -0.0229
Nov-18 0.0185 0.0280 0.0736
Dec-18 -0.0933 -0.0673 -0.0485
Jan-19 0.0864 0.0233 -0.0275
Feb-19 0.0324 0.0971 0.0301
Mar-19 0.0136 0.0563 -0.0203
Apr-19 0.0454 0.0528 -0.0438
May-19 -0.0638 -0.0189 0.0224
Jun-19 0.0644 0.0774 0.0526
Jul-19 0.0201 0.0256 -0.1034
Aug-19 -0.0167 0.0158 -0.0847
Sep-19 0.0148 -0.0473 0.0201
Oct-19 0.0268 0.0398 0.0679
Nov-19 0.0362 0.0316 0.0039
Dec-19 0.0240 0.0201 0.0270
Jan-20 0.0045 0.0589 -0.0495
Feb-20 -0.0792 -0.0865 -0.0934
Mar-20 -0.1300 -0.1123 -0.0233
Apr-20 0.1336 0.1092 0.1752
May-20 0.0476 0.0924 -0.0044
Jun-20 0.0133 -0.0089 -0.1352
Jul-20 0.0636 -0.0143 0.1768
Aug-20 0.0698 0.1134 -0.0083
Sep-20 -0.0413 -0.0553 -0.0288
Oct-20 -0.0210 -0.0913 -0.0332
Nov-20 0.1088 0.1576 0.1381
Dec-20 0.0326 0.0414 -0.0293
Table 15.8
8.
Using the monthly returns provided in the table in problem 7, use Excel to calculate the beta for V and the beta for PFE. Which of these stocks has more systematic risk? What would you expect for the comparative returns of V and PFE?
15.13: Video Activity
How to Double Your Money in Seven Years
In this video, Jim Cramer explains how compounding can help investors build and preserve wealth. He provides suggestions for how young people can use the stock market to build financial independence.
1.
According to Jim Cramer, if you invest \$1,000 in the S&P 500, how much can you expect your investment to be worth in 35 years?
2.
Gather data over the past 10 years for the level of the S&P 500. How many of those years did the S&P 500 have a return of at least 10%? If you had invested \$1,000 in an S&P 500 index fund 10 years ago, would you have doubled your money yet? Is your answer consistent with Jim Cramer’s message?
John Bogle and the Buy-and-Hold Strategy
In this video, the legendary investor Jack Bogle, founder and former CEO of Vanguard, discusses strategies for investors.
3.
When discussing following a buy-and-hold strategy, in which an investor makes a purchase and holds the same investment for the long term, Bogle says that the success of such a strategy depends on what is bought. What distinction does Bogle make between buying and holding an individual stock and buying and holding a broad-based index fund?
4.
How would you describe Bogle’s attitude toward risk in the stock market? Do you agree with this attitude? Why or why not?
Figure 16.1 Companies make decisions about investments every day. (credit: modification of “Tesla Factory, Fremont (CA, USA)” by Maurizio Pesce/flickr, CC BY 2.0)
One of the most important decisions a company faces is choosing which investments it should make. Should an automobile manufacturer purchase a new robot for its assembly line? Should an airline purchase a new plane to add to its fleet? Should a hotel chain build a new hotel in Atlanta? Should a bakery purchase tables and chairs to provide places for customers to eat? Should a pharmaceutical company spend money on research for a new vaccine? All of these questions involve spending money today to make money in the future.
The process of making these decisions is often referred to as capital budgeting. In order to grow and remain competitive, a firm relies on developing new products, improving existing products, and entering new markets. These new ventures require investments in fixed assets. The company must decide whether the project will generate enough cash to cover the costs of these initial expenditures once the project is up and running.
For example, Sam’s Sporting Goods sells sporting equipment and uniforms to players on local recreational and school teams. Customers have been inquiring about customizing items such as baseball caps and equipment bags with logos and other designs. Sam’s is considering purchasing an embroidery machine so that it can provide these customized items in-house. The machine will cost \$16,000. Purchasing the embroidery machine would be an investment in a fixed asset. If it purchases the machine, Sam’s will be able to charge customers for customization.
The managers think that selling customized items will allow the company to increase its cash flow by \$2,000 next year. They predict that as customers become more aware of this service, the ability to customize products in-house will increase the company’s cash flow by \$4,000 the following year. The managers expect the machine will be used for five years, with the embroidery products increasing cash flows by \$5,000 during each of the last three years the machine is used. Should Sam’s Sporting Goods invest in the embroidery machine? In this chapter, we consider the main capital budgeting techniques Sam’s and other companies can use to evaluate these types of decisions.
16.02: Payback Period Method
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Define payback period.
• Calculate payback period.
• List the advantages and disadvantages of using the payback period method.
The payback period method provides a simple calculation that the managers at Sam’s Sporting Goods can use to evaluate whether to invest in the embroidery machine. The payback period calculation focuses on how long it will take for a company to make enough free cash flow from the investment to recover the initial cost of the investment.
Payback Period Calculation
In order to purchase the embroidery machine, Sam’s Sporting Goods must spend \$16,000. During the first year, Sam’s expects to see a \$2,000 benefit from purchasing the machine, but this means that after one year, the company will have spent \$14,000 more than it has made from the project. During the second year that it uses the machine, Sam’s expects that its cash inflow will be \$4,000 greater than it would have been if it had not had the machine. Thus, after two years, the company will have spent \$10,000 more than it has benefited from the machine. This process is continued year after year until the accumulated increase in cash flow is \$16,000, or equal to the original investment. The process is summarized in Table 16.1.
Year 0 1 2 3 4 5
Initial Investment (\$) (16,000)
Cash Inflow (\$) - 2,000 4,000 5,000 5,000 5,000
Accumulated Inflow (\$) - 2,000 6,000 11,000 16,000 21,000
Balance (\$) (16,000) (14,000) (10,000) (5,000) - 5,000
Table 16.1
Sam’s Sporting Goods is expecting its cash inflow to increase by \$16,000 over the first four years of using the embroidery machine. Thus, the payback period for the embroidery machine is four years. In other words, it takes four years to accumulate \$16,000 in cash inflow from the embroidery machine and recover the cost of the machine.
Link to Learning
Calculating the Payback Period
It is possible that a project will not fully recover the initial cost in one year but will have more than recovered its initial cost by the following year. In these cases, the payback period will not be an integer but will contain a fraction of a year. This video demonstrates how to calculate the payback period in such a situation.
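The accumulation logic of Table 16.1, extended to the fractional-year case, can be sketched in a few lines. This is an illustrative Python helper, not a method from the text; it interpolates within the year in which the cumulative inflows first cover the cost.

```python
def payback_period(initial_cost, inflows):
    """Years until cumulative cash inflows recover the initial cost."""
    cumulative = 0.0
    for year, cf in enumerate(inflows, start=1):
        cumulative += cf
        if cumulative >= initial_cost:
            # interpolate within the final year for a fractional payback
            return year - 1 + (initial_cost - (cumulative - cf)) / cf
    return None  # inflows never recover the initial cost

# Sam's embroidery machine: cost recovered exactly at the end of year 4
print(payback_period(16_000, [2_000, 4_000, 5_000, 5_000, 5_000]))  # → 4.0
```

With cash flows that overshoot the initial cost mid-year, the same function returns a fraction, e.g. a \$10,000 project with three \$4,000 inflows pays back in 2.5 years.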
Advantages
The principal advantage of the payback period method is its simplicity. It can be calculated quickly and easily. It is easy for managers who have little finance training to understand. The payback measure provides information about how long funds will be tied up in a project. The shorter the payback period of a project, the greater the project’s liquidity.
Disadvantages
Although it is simple to calculate, the payback period method has several shortcomings. First, the payback period calculation ignores the time value of money. Suppose that in addition to the embroidery machine, Sam’s is considering several other projects. The cash flows from these projects are shown in Table 16.2. Both Project B and Project C have a payback period of five years. For both of these projects, Sam’s estimates that it will take five years for cash inflows to add up to \$16,000. The payback period method does not differentiate between these two projects.
Year 0 1 2 3 4 5 6
Project A (\$) (16,000) 2,000 4,000 5,000 5,000 5,000 5,000
Project B (\$) (16,000) 1,000 2,000 3,000 4,000 6,000 -
Project C (\$) (16,000) 6,000 4,000 3,000 2,000 1,000 -
Project D (\$) (16,000) 1,000 2,000 3,000 4,000 6,000 8,000
Table 16.2
However, we know that money has a time value, and receiving \$6,000 in year 1 (as occurs in Project C) is preferable to receiving \$6,000 in year 5 (as in Projects B and D). From what we learned about the time value of money, Projects B and C are not identical projects. The payback period method breaks the important finance rule of not adding or comparing cash flows that occur in different time periods.
A second disadvantage of using the payback period method is that there is not a clearly defined acceptance or rejection criterion. When the payback period method is used, a company will set a length of time in which a project must recover the initial investment for the project to be accepted. Projects with longer payback periods than the length of time the company has chosen will be rejected. If Sam’s were to set a payback period of four years, Project A would be accepted, but Projects B, C, and D have payback periods of five years and so would be rejected. Sam’s choice of a payback period of four years would be arbitrary; it is not grounded in any financial reasoning or theory. No argument exists for a company to use a payback period of three, four, five, or any other number of years as its criterion for accepting projects.
A third drawback of this method is that cash flows after the payback period are ignored. Projects B, C, and D all have payback periods of five years. However, Projects B and C end after year 5, while Project D has a large cash flow that occurs in year 6, which is excluded from the analysis. The payback method is shortsighted in that it favors projects that generate cash flows quickly while possibly rejecting projects that create much larger cash flows after the arbitrary payback time criterion.
Fourth, no risk adjustment is made for uncertain cash flows. No matter how careful the planning and analysis, a business is seldom sure what future cash flows will be. Some projects are riskier than others, with less certain cash flows, but the payback period method treats high-risk cash flows the same way as low-risk cash flows.
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Define net present value.
• Calculate net present value.
• List the advantages and disadvantages of using the net present value method.
• Graph an NPV profile.
Net Present Value (NPV) Calculation
Sam’s purchasing of the embroidery machine involves spending money today in the hopes of making more money in the future. Because the cash inflows and outflows occur in different time periods, they cannot be directly compared to each other. Instead, they must be translated into a common time period using time value of money techniques. By converting all of the cash flows that will occur from a project into present value, or current dollars, the cash inflows from the project can be compared to the cash outflows. If the cash inflows exceed the cash outflows in present value terms, the project will add value and should be accepted. The difference between the present value of the cash inflows and the present value of cash outflows is known as net present value (NPV).
The equation for NPV can be written as
$NPV = PV(\text{Cash Inflows}) - PV(\text{Cash Outflows})$
16.1
Consider Sam’s Sporting Goods’ decision of whether to purchase the embroidery machine. If we assume that after six years the embroidery machine will be obsolete and the project will end, when placed on a timeline, the project’s expected cash flow is shown in Table 16.3:
Year 0 1 2 3 4 5 6
Cash Flow (\$) (16,000) 2,000 4,000 5,000 5,000 5,000 5,000
Table 16.3
Calculating NPV is simply a time value of money problem in which each cash flow is discounted back to the present value. If we assume that the cost of funds for Sam’s is 9%, then the NPV can be calculated as
$NPV = \frac{2,000}{1.09^1} + \frac{4,000}{1.09^2} + \frac{5,000}{1.09^3} + \frac{5,000}{1.09^4} + \frac{5,000}{1.09^5} + \frac{5,000}{1.09^6} - 16,000$
$= 1,834.86 + 3,366.72 + 3,860.92 + 3,542.13 + 3,249.66 + 2,981.34 - 16,000 = 2,835.63$
16.2
Because the NPV is positive, Sam’s Sporting Goods should purchase the embroidery machine. The value of the firm will increase by \$2,835.63 as a result of accepting the project.
Calculating NPV involves computing the present value of each cash flow and then summing the present values of all cash flows from the project. This project has six future cash flows, so six present values must be computed. Although this is not difficult, it is tedious.
A financial calculator is able to calculate a series of present values in the background for you, automating much of the process. You simply have to provide the calculator with each cash flow, the time period in which each cash flow occurs, and the discount rate that you want to use to discount the future cash flows to the present.
Follow the steps in Table 16.4 for calculating NPV:
Step Description Enter Display
1 Select cash flow worksheet CF CF0 XXXX
2 Clear the cash flow worksheet 2ND [CLR WORK] CF0 0
3 Enter initial cash flow 16000 +/- ENTER CF0 = -16,000.00
4 Enter cash flow for the first year ↓ 2000 ENTER C01 = 2,000.00
F01 = 1.0
5 Enter cash flow for the second year ↓ 4000 ENTER C02 = 4,000.00
F02 = 1.0
6 Enter cash flow for the third year ↓ 5000 ENTER C03 = 5,000.00
F03 = 1.0
7 Enter cash flow for the fourth year ↓ 5000 ENTER C04 = 5,000.00
F04 = 1.0
8 Enter cash flow for the fifth year ↓ 5000 ENTER C05 = 5,000.00
F05 = 1.0
9 Enter cash flow for the sixth year ↓ 5000 ENTER C06 = 5,000.00
F06 = 1.0
10 Select NPV NPV I 0.00
11 Enter discount rate 9 ENTER I = 9.00
12 Compute NPV CPT NPV = 2,835.63
Table 16.4 Calculator Steps for NPV1
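The six-term sum that the calculator automates can also be sketched in a short Python loop (an illustrative helper, not the text's method; the cash flows are those of Table 16.3):

```python
def npv(rate, flows):
    # flows[0] occurs today (year 0); later flows are discounted back
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

flows = [-16_000, 2_000, 4_000, 5_000, 5_000, 5_000, 5_000]
print(round(npv(0.09, flows), 2))  # → 2835.62 (≈ the text's 2,835.63; the penny difference comes from summing rounded terms)
```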
Link to Learning
Net Present Value
This video provides another example of how to use NPV to evaluate whether a project should be accepted or rejected.
Advantages
The NPV method solves several of the listed problems with the payback period approach. First, the NPV method uses the time value of money concept. All of the cash flows are discounted back to their present value to be compared. Second, the NPV method provides a clear decision criterion. Projects with a positive NPV should be accepted, and projects with a negative NPV should be rejected. Third, the discount rate used to discount future cash flows to the present can be increased or decreased to adjust for the riskiness of the project’s cash flows.
Disadvantages
The NPV method can be difficult for someone without a finance background to understand. Also, the NPV method can be problematic when available capital resources are limited. The NPV method provides a criterion for whether or not a project is a good project. It does not always provide a good solution when a company must make a choice between several acceptable projects because funds are not available to pursue them all.
Think It Through
Calculating NPV
Suppose your company is considering a project that will cost \$30,000 this year. The cash inflow from this project is expected to be \$6,000 next year and \$8,000 the following year. The cash inflow is expected to increase by \$2,000 yearly, resulting in a cash inflow of \$18,000 in year 7, the final year of the project. You know that your company’s cost of funds is 9%. Use a financial calculator to calculate NPV to determine whether this is a good project for your company to undertake (see Table 16.5).
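One way to check the calculator result is a quick script (an illustrative Python sketch using the cash flows described above: −\$30,000 today, then \$6,000, \$8,000, …, \$18,000 in years 1 through 7):

```python
def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

# -$30,000 today, then inflows growing by $2,000 per year through year 7
flows = [-30_000] + list(range(6_000, 19_000, 2_000))
print(round(npv(0.09, flows), 2))
```

At a 9% cost of funds, the NPV is large and positive (about \$26,947), so the project should be accepted.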
Link to Learning
Calculating the NPV of an MBA Program
The NPV calculation can be used as a decision tool when you are deciding whether you should spend money today to make money in the future. This website on calculating the NPV of an MBA degree lets you apply this concept in an educational setting. The initial cost of the MBA includes both the dollars spent on tuition and the wages that a full-time student could have earned if they were not in school. Why is it appropriate to include these forgone wages in the calculation? What adjustments would students need to make to this analysis if they wanted to consider attending a part-time MBA program that allowed them to continue working while completing the program?
NPV Profile
The NPV of a project depends on the expected cash flows from the project and the discount rate used to translate those expected cash flows to the present value. When we used a 9% discount rate, the NPV of the embroidery machine project was \$2,836. If a higher discount rate is used, the present value of future cash flows falls, and the NPV of the project falls.
Theoretically, we should use the firm’s cost to attract capital as the discount rate when calculating NPV. In reality, it is difficult to estimate this cost of capital accurately and confidently. Because the discount rate is an approximate value, we want to determine whether a small error in our estimate is important to our overall conclusion. We can do this by creating an NPV profile, which graphs the NPV at a variety of discount rates and allows us to determine how sensitive the NPV is to changes in the discount rate.
To construct an NPV profile for Sam’s, select several discount rates and compute the NPV for the embroidery machine project using each of those discount rates. Table 16.6 below shows the NPV for several discount rates. Notice that if the discount rate is zero, the NPV is simply the sum of the cash flows. As the discount rate becomes larger, the NPV falls and eventually becomes negative.
The information in Table 16.6 is presented in a graph in Figure 16.2. We can see that the graph crosses the horizontal axis at about 14%. To the left, or at lower discount rates, the NPV is positive. If you are confident that the firm’s cost of attracting funds is less than 14%, the company should accept the project. If the cost of capital is more than 14%, however, the NPV is negative, and the company should reject the project.
Discount Rate NPV (\$)
0% 10,000
3% 7,231
9% 2,836
12% 1,081
14% 42
15% (442)
18% (1,773)
21% (2,939)
Table 16.6 NPV for Various Discount Rates
Figure 16.2 NPV Profile Graph
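Table 16.6 can be regenerated by evaluating the same discounted sum at each candidate rate (an illustrative Python sketch; the helper name is our own):

```python
def npv(rate, flows):
    # flows[0] is the initial (year-0) outlay; later entries are yearly inflows
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

flows = [-16_000, 2_000, 4_000, 5_000, 5_000, 5_000, 5_000]
for rate in (0.00, 0.03, 0.09, 0.12, 0.14, 0.15, 0.18, 0.21):
    # print one row of the NPV profile per discount rate
    print(f"{rate:>4.0%}  {npv(rate, flows):>10,.0f}")
```

The printed rows match Table 16.6: positive NPV below roughly 14%, negative above it.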
Footnotes
• 1The specific financial calculator in these examples is the Texas Instruments BA II PlusTM Professional model, but you can use other financial calculators for these types of calculations.
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Define internal rate of return (IRR).
• Calculate internal rate of return.
• List advantages and disadvantages of using the internal rate of return method.
Internal Rate of Return (IRR) Calculation
The internal rate of return (IRR) is the discount rate that sets the present value of the cash inflows equal to the present value of the cash outflows. In considering whether Sam’s Sporting Goods should purchase the embroidery machine, the IRR method approaches the time value of money problem from a slightly different angle. Instead of using the company’s cost of attracting funds for the discount rate and solving for NPV, as we did in the first NPV equation, we set NPV equal to zero and solve for the discount rate to find the IRR:
$NPV = \frac{2,000}{(1+i)^1} + \frac{4,000}{(1+i)^2} + \frac{5,000}{(1+i)^3} + \frac{5,000}{(1+i)^4} + \frac{5,000}{(1+i)^5} + \frac{5,000}{(1+i)^6} - 16,000 = 0$
16.3
The IRR is the discount rate at which the NPV profile graph crosses the horizontal axis. If the IRR is greater than the cost of capital, a project should be accepted. If the IRR is less than the cost of capital, a project should be rejected. The NPV profile graph for the embroidery machine crossed the horizontal axis at 14%. Therefore, if Sam’s Sporting Goods can attract capital for less than 14%, the IRR exceeds the cost of capital and the embroidery machine should be purchased. However, if it costs Sam’s more than 14% to attract capital, the embroidery machine should not be purchased.
In other words, a company wants to accept projects that have an IRR that exceed the company’s cost of attracting funds. The cash flow from these projects will be great enough to cover the cost of attracting money from investors in addition to the other costs of the project. A company should reject any project that has an IRR less than the company’s cost of attracting funds; the cash flows from such a project are not enough to compensate the investors for the use of their funds.
Calculating IRR without a financial calculator is an arduous, time-consuming process that requires trial and error to find the discount rate that makes NPV exactly equal zero. Your calculator uses the same type of trial-and-error iterative process, but because it uses an automated process, it can do so much more quickly than you can. A problem that might require 30 minutes of detailed mathematical calculations by hand can be completed in a matter of seconds with the assistance of a financial calculator.
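The calculator's trial-and-error search can be mimicked with a simple bisection on the NPV profile (an illustrative sketch; it assumes NPV is positive at the low end of the bracket and negative at the high end, as is true for this project):

```python
def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0, tol=1e-9):
    # bisection: narrow the bracket until NPV is effectively zero
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid  # NPV still positive, so the root lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-16_000, 2_000, 4_000, 5_000, 5_000, 5_000, 5_000]
print(f"{irr(flows):.2%}")  # → 14.09%
```

This reproduces the 14.09% the calculator reports in Table 16.7.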
All the information your calculator needs to calculate IRR is the value of each cash flow and the time period in which it occurs. To calculate IRR, begin by entering the cash flows for the project, just as you do for the NPV calculation (see Table 16.7). After these cash flows are entered, simply compute IRR in the final step.
Step Description Enter Display
1 Select cash flow worksheet CF CF0 XXXX
2 Clear the cash flow worksheet 2ND [CLR WORK] CF0 0
3 Enter initial cash flow 16000 +/- ENTER CF0 = -16,000.00
4 Enter cash flow for the first year ↓ 2000 ENTER C01 = 2,000.00
F01 = 1.0
5 Enter cash flow for the second year ↓ 4000 ENTER C02 = 4,000.00
F02 = 1.0
6 Enter cash flow for the third year ↓ 5000 ENTER C03 = 5,000.00
F03 = 1.0
7 Enter cash flow for the fourth year ↓ 5000 ENTER C04 = 5,000.00
F04 = 1.0
8 Enter cash flow for the fifth year ↓ 5000 ENTER C05 = 5,000.00
F05 = 1.0
9 Enter cash flow for the sixth year ↓ 5000 ENTER C06 = 5,000.00
F06 = 1.0
10 Compute IRR IRR CPT IRR = 14.09
Table 16.7 Calculator Steps for IRR
Advantages
The primary advantage of using the IRR method is that it is easy to interpret and explain. Investors like to speak in terms of annual percentage returns when evaluating investment possibilities.
Disadvantages
One disadvantage of using IRR is that it can be tedious to calculate. We knew the IRR was about 14% for the embroidery machine project because we had previously calculated the NPV for several discount rates. The IRR is about, but not exactly, 14%, because NPV is not exactly equal to zero (just very close to zero) when we use 14% as the discount rate. Before the prevalence of financial calculators and spreadsheets, calculating the exact IRR was difficult and time-consuming. With today’s technology, this is no longer a major consideration. Later in this chapter, we will look at how to use a spreadsheet to do these calculations.
No Single Mathematical Solution. Another disadvantage of using the IRR method is that there may not be a single mathematical solution to an IRR problem. This can happen when negative cash flows occur in more than one period in the project. Suppose your company is considering building a facility for an upcoming Olympic competition. The construction cost would be \$350 million. The facility would be used for one year and generate cash inflows of \$950 million. Then, the following year, your company would be required to convert the facility into a public park area for the city, which is expected to cost \$620 million. Placing these cash flows in a timeline results in the following (Table 16.8):
Year 0 1 2
Cash Flow (\$Millions) (350) 950 (620)
Table 16.8
The NPV profile for this project looks like Figure 16.3. The NPV is negative at low interest rates, becomes positive at higher interest rates, and then turns negative again as the interest rate continues to rise. Because the NPV profile line crosses the horizontal axis twice, there are two IRRs. In other words, there are two interest rates at which NPV equals zero.
Figure 16.3 NPV Profile Graph for a Project with Two IRRs
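A coarse scan of the NPV profile confirms that this project's NPV crosses zero twice (an illustrative sketch; rates are stepped by 0.1% up to 100%):

```python
def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

flows = [-350, 950, -620]  # $ millions, from Table 16.8
prev_rate, prev_npv = 0.0, npv(0.0, flows)
for step in range(1, 1001):
    rate = step / 1000
    value = npv(rate, flows)
    if prev_npv * value < 0:  # NPV changed sign: a root lies in this interval
        print(f"IRR between {prev_rate:.1%} and {rate:.1%}")
    prev_rate, prev_npv = rate, value
```

The scan reports sign changes near 9.2% and 62.2%, the project's two IRRs.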
Reinvestment Rate Assumption. The IRR assumes that the cash flows are reinvested at the internal rate of return when they are received. This is a disadvantage of the IRR method. The firm may not be able to find any other projects with returns equal to a high-IRR project, so the company may not be able to reinvest at the IRR.
The reinvestment rate assumption becomes problematic when a company has several acceptable projects and is attempting to rank the projects. We will look more closely at the issues that can arise when considering mutually exclusive projects later in this chapter. If a company is simply deciding whether to accept a single project, the reinvestment assumption limitation is not relevant.
Overlooking Differences in Scale. Another disadvantage of using the IRR method to choose among various acceptable projects is that it ignores differences in scale. The IRR converts the cash flows to percentages and ignores differences in the size or scale of projects. Issues that occur when comparing projects of different scales are covered later in this chapter.
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Calculate profitability index.
• Calculate discounted payback period.
• Calculate modified internal rate of return.
Profitability Index (PI)
The profitability index (PI) uses the same inputs as the NPV calculation, but it converts the results to a ratio. The numerator is the present value of the benefits of doing a project. The denominator is the present value of the cost of doing the project. The formula for calculating PI is
$PI = \frac{PV(\text{Cash Inflows})}{PV(\text{Cash Outflows})}$
16.4
For the embroidery machine project that Sam’s Sporting Goods is considering, the PI would be calculated as
$PI = \frac{18,836}{16,000} = 1.18$
16.5
The numerator of the PI formula is the benefit of the project, and the denominator is the cost of the project. Thus, the PI is the benefit relative to the cost. When NPV is greater than zero, PI will be greater than 1. When NPV is less than zero, PI will be less than 1. Therefore, the decision criterion using the PI method is to accept a project if the PI is greater than 1 and reject a project if the PI is less than 1.
Note that the NPV method and the PI method of project evaluation will always provide the same answer to the accept-or-reject question. The advantage of using the PI method is that it is helpful in ranking projects from best to worst. Issues that arise when ranking projects are discussed later in this chapter.
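Because PI reuses the NPV inputs, it is a one-line extension of the discounted-sum calculation (an illustrative sketch; the helper name is our own):

```python
def profitability_index(rate, initial_cost, inflows):
    # PV of the inflows relative to the (year-0) cost of the project
    pv_inflows = sum(cf / (1 + rate) ** t for t, cf in enumerate(inflows, start=1))
    return pv_inflows / initial_cost

pi = profitability_index(0.09, 16_000, [2_000, 4_000, 5_000, 5_000, 5_000, 5_000])
print(round(pi, 2))  # → 1.18
```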
Discounted Payback Period
The payback period method provides a fast, simple approach to evaluating a project, but it suffers from the fact that it ignores the time value of money. The discounted payback period method addresses this flaw by discounting cash flows using the company’s cost of funds and then using these discounted values to determine the payback period.
Consider Sam’s Sporting Goods’ decision regarding whether to purchase an embroidery machine. The expected cash flows and their values when discounted using the company’s 9% cost of funds are shown in Table 16.9. Earlier, we calculated the project’s payback period as four years; that is how long it would take the company to recover all of the cash that it would spend on the project. Remember, however, that the payback period does not consider the company’s cost of funds, so it underestimates the true breakeven time period.
Year 0 1 2 3 4 5 6
Cash Flow (\$) (16,000.00) 2,000.00 4,000.00 5,000.00 5,000.00 5,000.00 5,000.00
Discounted Cash Flow (\$) (16,000.00) 1,834.86 3,366.72 3,860.92 3,542.13 3,249.66 2,981.34
Cumulative Discounted Cash Flow (\$) (16,000.00) (14,165.14) (10,798.42) (6,937.50) (3,395.37) (145.72) 2,835.62
Table 16.9
When the cash flows are appropriately discounted, the project still has not broken even by the end of year 5. The discounted payback period would be $5+\frac{145.72}{2,981.34}=5.05$ years. This adjusted calculation addresses the payback period method’s flaw of not considering the time value of money, but managers are still confronted with the other disadvantages. No objective criterion for acceptance or rejection exists because of the lack of a theoretical underpinning for what is an acceptable payback period length. The discounted payback period ignores any cash flows after breakeven occurs; this is a serious drawback, especially when comparing mutually exclusive projects.
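The Table 16.9 arithmetic can be sketched in a few lines of Python (an illustrative translation, not part of the chapter's own toolkit): accumulate discounted cash flows until the running total turns positive, then interpolate within the breakeven year.

```python
def discounted_payback(rate, cash_flows):
    """Years until the cumulative discounted cash flow turns positive.
    cash_flows[0] is the (negative) initial outlay at time 0."""
    cumulative = cash_flows[0]
    for t, cf in enumerate(cash_flows[1:], start=1):
        discounted = cf / (1 + rate) ** t
        if cumulative + discounted >= 0:
            # fraction of year t needed to cover the remaining shortfall
            return (t - 1) + (-cumulative) / discounted
        cumulative += discounted
    return None  # project never breaks even on a discounted basis

flows = [-16_000, 2_000, 4_000, 5_000, 5_000, 5_000, 5_000]
dpb = discounted_payback(0.09, flows)
print(round(dpb, 2))  # 5.05
```

Compare this with the undiscounted payback period of 4 years: accounting for the time value of money pushes breakeven out by about a year.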
Modified Internal Rate of Return (MIRR)
Financial analysts have developed an alternative evaluation technique that is similar to the IRR but modified in an attempt to address some of the weaknesses of the IRR method. This modified internal rate of return (MIRR) is calculated using the following steps:
1. Find the present value of all of the cash outflows using the firm’s cost of attracting capital as the discount rate.
2. Find the future value of all cash inflows using the firm’s cost of attracting capital as the discount rate. All cash inflows are compounded to the point in time at which the last cash inflow will be received. The sum of the future value of cash inflows is known as the project terminal value.
3. Compute the yield that sets the future value of the inflows equal to the present value of the outflows. This yield is the modified internal rate of return.
For our embroidery machine project, the MIRR would be calculated as shown in Table 16.10:
Year 0 1 2 3 4 5 6
Cash Flow (\$) (16,000.00) 2,000.00 4,000.00 5,000.00 5,000.00 5,000.00 5,000.00
Future Value in Year 6 (\$) 3,077.25 5,646.33 6,475.15 5,940.50 5,450.00 5,000.00
Terminal Value \$31,595.22
Table 16.10
1. The only cash outflow is the \$16,000 at time period 0.
2. The future value of each of the six expected cash inflows is calculated using the company’s 9% cost of attracting capital. Each of the cash flows is translated to its value in time period 6, the time period of the final cash inflow. The sum of the future values of these six cash flows is \$31,595.22. Thus, the terminal value is \$31,595.22.
3. The interest rate that equates the present value of the outflows, \$16,000, to the terminal value of \$31,595.22 six years later is found using the formula
$16,000(1+i)^6=31,595.22;\quad (1+i)^6=1.97;\quad i=0.12=12\%$
16.6
The MIRR solves the reinvestment rate assumption problem of the IRR method because all cash flows are compounded at the cost of capital. In addition, solving for MIRR will result in only one solution, unlike the IRR, which may have multiple mathematical solutions. However, the MIRR method, like the IRR method, suffers from the limitation that it does not distinguish between large-scale and small-scale projects. Because of this limitation, the MIRR cannot be used to rank projects; it can only be used to make accept-or-reject decisions.
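The three MIRR steps translate directly into code. The Python sketch below (offered as an illustration; the chapter itself works the example by hand) discounts outflows back to time 0, compounds inflows forward to the final period, and solves for the single rate linking the two totals. Small rounding differences from the table's intermediate figures aside, it reproduces the 12% MIRR.

```python
def mirr(rate, cash_flows):
    """Modified IRR: compound inflows forward at the cost of capital,
    discount outflows back, then find the rate equating the two."""
    n = len(cash_flows) - 1
    pv_outflows = sum(-cf / (1 + rate) ** t
                      for t, cf in enumerate(cash_flows) if cf < 0)
    terminal = sum(cf * (1 + rate) ** (n - t)
                   for t, cf in enumerate(cash_flows) if cf > 0)
    return (terminal / pv_outflows) ** (1 / n) - 1

flows = [-16_000, 2_000, 4_000, 5_000, 5_000, 5_000, 5_000]
project_mirr = mirr(0.09, flows)
print(round(project_mirr, 2))  # 0.12, i.e., 12%
```

Note that the formula has exactly one positive solution, which is why MIRR avoids the multiple-roots problem of IRR.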
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Choose between mutually exclusive projects.
• Compare projects with different lives.
• Compare projects of different scales.
• Rank projects when resources are limited.
So far, we have considered methods for deciding to accept or to reject a single stand-alone project. Sometimes, managers must make decisions regarding which of two projects to accept, or a company might be faced with a number of good, acceptable projects and have to decide which of those projects to take on during the current year.
Choosing between Mutually Exclusive Projects
Earlier in this chapter, we saw that the embroidery machine that Sam’s Sporting Goods was considering had a positive NPV, making it a project that Sam’s should accept. However, another, more expensive embroidery machine may be available that is able to make more stitches per minute. Although the initial cost of this heavy-duty machine is higher, it would allow Sam’s to embroider and sell more items each year, generating more revenue. The two embroidery machines are mutually exclusive projects. Mutually exclusive projects compete with one another; purchasing one embroidery machine excludes Sam’s from purchasing the other embroidery machine.
Table 16.11 shows the cash outflow and inflows expected from the original embroidery machine considered as well as the heavy-duty machine. The heavy-duty machine costs \$25,000, but it will generate more cash inflows in years 3 through 6. Both machines have a positive NPV, leading to decisions to accept the projects. Also, both machines have an IRR exceeding the company’s 9% cost of raising capital, also leading to decisions to accept the projects.
When considered by themselves, each of the machines is a good project for Sam’s to pursue. The question the managers face is which is the better of the two projects. When faced with this type of decision, the rule is to take the project with the highest NPV. Remember that the goal is to choose projects that add value to the company. Because the NPV of a project is the estimate of how much value it will create, choosing the project with the higher NPV is choosing the project that will create the greater value.
Year 0 1 2 3 4 5 6 NPV IRR
Regular Machine (\$) (16,000) 2,000 4,000 5,000 5,000 5,000 5,000 2,835.62 14.10%
Heavy-Duty Machine (\$) (25,000) 2,000 4,000 8,000 9,000 9,000 9,000 3,970.67 13.20%
Table 16.11
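The figures in Table 16.11 can be checked with a short NPV function in Python (an illustrative sketch, not part of the chapter's workflow). Both machines clear the 9% hurdle, but the heavy-duty machine has the higher NPV, so it is the one Sam's should choose even though its IRR is lower.

```python
def npv(rate, cash_flows):
    # cash_flows[0] occurs at time 0 and is not discounted
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

npv_regular = npv(0.09, [-16_000, 2_000, 4_000, 5_000, 5_000, 5_000, 5_000])
npv_heavy = npv(0.09, [-25_000, 2_000, 4_000, 8_000, 9_000, 9_000, 9_000])
print(round(npv_regular, 2))  # ≈ 2,835.62
print(round(npv_heavy, 2))    # ≈ 3,970.67 (higher NPV: choose the heavy-duty machine)
```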
Link to Learning
Olympic Project Economics
The investment analysis procedures used by companies are also used by government entities when evaluating projects. Olympic host cities receive direct revenues from broadcast rights, ticket sales, and licensing agreements. The cities also expect indirect benefits from increased tourism, including increased employment and higher tax revenues. These benefits come after the city makes a major investment in infrastructure, spending money on stadiums, housing, and transportation. The investment in infrastructure for the 2014 Winter Olympics in Sochi, Russia, was over \$50 billion.2 Why do you think the infrastructure investment for these games was so much higher than the amount spent by cities hosting previous games? If your city were discussing the possibility of bidding to be an Olympic host city, what would you suggest it consider when evaluating the opportunity? Check out this article for more information.
Choosing between Projects with Different Lives
Suppose you are considering starting an ice-cream truck business. You find that you can purchase a used truck for \$50,000. You estimate that the truck will last for three years, and you will be able to sell enough ice cream treats to generate a cash inflow of \$40,000 during each of those years. Your cost of capital is 10%. The positive NPV of \$49,474 for the project makes this an acceptable project.
Another ice-cream truck is also for sale for \$50,000. This truck is smaller and will not be able to hold as many frozen treats. However, the truck is newer, with lower mileage, and you estimate that you can use it for six years. This newer truck will allow you to generate a cash inflow of \$30,000 each year for the next six years. The NPV of the newer truck is \$80,658.
Because both trucks are acceptable projects but you can only drive one truck at a time, you must choose which truck to purchase. At first, it may be tempting to purchase the newer, lower-mileage truck because of its higher NPV. Unfortunately, when comparing two projects that have different lives, a decision cannot be made simply by comparing the NPVs. Although the ice-cream truck with the six-year life span has a much higher NPV than the larger truck, it consumes your resources for a long time.
There are two methods for comparing projects with different lives. Both assume that when the short-life project concludes, another, similar project will be available.
Replacement Chain Approach
With the replacement chain approach, as many short-life projects as necessary are strung together to equal the life of the long-life project. You can purchase the newer, lower-mileage ice-cream truck and run your business for six years. To make a comparison, you assume that if you purchase the larger truck that will last for three years, you will be able to repeat the same project, purchasing another larger truck that will last for the next three years. In essence, you are comparing a six-year project with two consecutive three-year projects so that both options will generate cash inflows for six years. Your timeline for the projects (comparing an older, larger truck with a newer, lower-mileage truck) will look like Table 16.12:
Year 0 1 2 3 4 5 6
Older Truck (\$) (50,000) 40,000 40,000 40,000 40,000 40,000 40,000
Older Truck (\$) (50,000)
Newer Truck (\$) (50,000) 30,000 30,000 30,000 30,000 30,000 30,000
Table 16.12
The present values of all of the cash inflows and outflows from purchasing two of the older, larger trucks consecutively are added together to find the NPV of that alternative. The NPV of this alternative is \$86,645, which is higher than the NPV of \$80,658 of the newer truck, as shown in Table 16.13:
Year 0 1 2 3 4 5 6
Older Truck (\$) 40,000 40,000 40,000 40,000 40,000 40,000
Older Truck (\$) (50,000)
Net Present Value (50,000) 36,363.64 33,057.85 (7,513.15) 27,320.54 24,836.85 22,578.96
$NPV=86,644.69$
16.7
Table 16.13
When using the replacement chain approach, the short-term project is repeated any number of times to equal the length of the longer-term project. If one project is 5 years and another is 20 years, the short one is repeated four times. This method can become tedious when the length of the longer project is not a multiple of the shorter project. For example, when choosing between a five-year project and a seven-year project, the short one would have to be duplicated seven times and the long project would have to be repeated five times to get to a common length of 35 years for the two projects.
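The replacement chain comparison above can be verified with a short Python sketch (illustrative only; it mirrors Table 16.13). Buying the older truck again at the end of year 3 makes the year-3 net cash flow a \$10,000 outflow, and the chained six-year stream can then be compared directly with the newer truck's.

```python
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Buy the older truck at t = 0 and again at t = 3 (replacement chain);
# the net year-3 cash flow is the 40,000 inflow minus the 50,000 replacement cost
chained = [-50_000, 40_000, 40_000, 40_000 - 50_000, 40_000, 40_000, 40_000]
newer = [-50_000] + [30_000] * 6

npv_chained = npv(0.10, chained)
npv_newer = npv(0.10, newer)
print(round(npv_chained, 2))  # ≈ 86,644.69
print(round(npv_newer, 2))    # ≈ 80,658
```

Over a common six-year horizon, the chained older-truck strategy has the higher NPV and is preferred.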
Equal Annuity Approach
The equal annuity approach assumes that both the short-term and the long-term projects can be repeated forever. This approach involves the following steps:
Step 1: Find the NPV of each of the projects.
• The NPV of the larger, older ice-cream truck is \$49,474.
• The NPV of the smaller, newer ice-cream truck is \$80,658.
Step 2: Find the annuity that has the same present value as the NPV and the same number of periods as the project.
• For the larger, older ice-cream truck, we want to find the three-year annuity that would have a present value of \$49,474 when using a 10% discount rate. This is \$19,894.
• For the smaller, newer ice-cream truck, we want to find the six-year annuity that would have a present value of \$80,658 when using a 10% discount rate. This is \$18,520.
Step 3: Assume that these projects, or similar projects, can be repeated over and over and that these annuities will continue forever. Calculate the present value of these annuities continuing forever using the perpetuity formula.
$PV_{\text{Larger Truck}}=\frac{19,894}{0.10}=198,940;\quad PV_{\text{Smaller Truck}}=\frac{18,520}{0.10}=185,200$
16.8
We again find that the older, larger truck is preferred to the newer, smaller truck.
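The three steps of the equal annuity approach can be expressed compactly in Python (an illustrative sketch that reproduces the chapter's figures to within rounding): convert each NPV into a level annuity over the project's life, then value that annuity as a perpetuity.

```python
def annuity_factor(rate, n):
    # present value of $1 received at the end of each of n periods
    return (1 - (1 + rate) ** -n) / rate

def equal_annuity_value(rate, life, project_npv):
    # Step 2: spread the NPV over the project's life as a level annuity;
    # Step 3: value that annuity as a perpetuity (project repeated forever)
    annuity = project_npv / annuity_factor(rate, life)
    return annuity / rate

larger = equal_annuity_value(0.10, 3, 49_474)   # older, larger truck
smaller = equal_annuity_value(0.10, 6, 80_658)  # newer, smaller truck
print(round(larger))   # ≈ 198,940
print(round(smaller))  # ≈ 185,200
```

As with the replacement chain approach, the larger truck comes out ahead once the difference in project lives is neutralized.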
These methods correct for unequal lives, but managers need to be aware that some unavoidable issues come up when these adjustments are made. Both the replacement chain and equal annuity approaches assume that projects can be replicated with identical projects in the future. It is important to note that this is not always a reasonable assumption; these replacement projects may not exist. Estimating cash flows from potential projects is prone to errors, as we will discuss in Financial Forecasting; these errors are compounded and become more significant as projects are expected to be repeated. Inflation and changing market conditions are likely to cause future cash flows to vary from our predictions, and the further into the future we look, the greater these potential changes become.
Choosing Projects When Resources Are Limited
Choosing positive NPV projects adds value to a company. Although we often assume that the company will choose to pursue all positive NPV projects, in reality, managers often face a budget that restricts the amount of capital that they may invest in a given time period. Thus, managers are forced to choose among several positive NPV projects. The goal is to maximize the total NPV of the firm’s projects while remaining within budget constraints.
Link to Learning
Profitability Index
Managers should reject any project with a negative NPV. When managers find themselves with an array of projects with a positive NPV, the profitability index can be used to choose among those projects. To learn more, watch this video about how a company might use the profitability index.
For example, suppose Southwest Manufacturing is considering the seven projects displayed in Table 16.14. Each of the projects has a positive NPV and would add value to the company. The firm has a budget of \$200 million to put toward new projects in the upcoming year. Doing all seven of the projects would require initial investments totaling \$430 million. Thus, although all of the projects are good projects, Southwest Manufacturing cannot fund them all in the upcoming year and must choose among these projects. Southwest Manufacturing could choose the combination of Projects A and D; the combination of Projects B, C, and E; or several other combinations of projects and exhaust its \$200 million investment budget.
Project NPV (\$Millions) Initial Investment (\$Millions) Profitability Index Cumulative Investment Required (\$Millions)
A 60 150 1.40 150
B 25 100 1.25 250
C 10 70 1.14 320
D 15 50 1.30 370
E 11 30 1.37 400
F 7 20 1.35 420
G 2 10 1.20 430
Table 16.14 Projects Being Considered by Southwest Manufacturing
To decide which combination results in the largest added NPV for the company, rank the projects based on their profitability index, as is done in Table 16.15. Projects A, E, and F should be chosen, as they have the highest profitability indexes. Because those three projects require a cumulative investment of \$200 million, none of the remaining projects can be undertaken at the present time. Doing those three projects will add \$78 million in NPV to the firm. Out of this set of choices, there is no combination of projects that is affordable given Southwest Manufacturing’s budget that would add more than \$78 million in NPV.
Project NPV (\$Millions) Initial Investment (\$Millions) Profitability Index Cumulative Investment Required (\$Millions)
A 60 150 1.40 150
E 11 30 1.37 180
F 7 20 1.35 200
D 15 50 1.30 250
B 25 100 1.25 350
G 2 10 1.20 360
C 10 70 1.14 430
Table 16.15 Projects Ranked by Profitability Index
Notice that when choices must be made among projects, the decision cannot be made by simply ranking the projects from highest to lowest NPV. Project D has an NPV of \$15 million, which is higher than both the \$11 million of Project E and the \$7 million of Project F. However, Project D requires \$50 million for an initial investment. For the same \$50 million of investment funds, Southwest Manufacturing can accept both Projects E and F for a total NPV of \$18 million. Investment capital is a scarce resource for this company. By ranking projects based on their profitability index, the company is able to determine the best way to allocate its scarce capital for the largest potential increase in NPV.
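The ranking procedure from Tables 16.14 and 16.15 can be sketched in Python (figures in \$ millions, taken from Table 16.14; the variable names are illustrative). Projects are sorted by profitability index and accepted greedily while the budget allows.

```python
# (NPV, initial investment) in $ millions, from Table 16.14
projects = {
    "A": (60, 150), "B": (25, 100), "C": (10, 70), "D": (15, 50),
    "E": (11, 30), "F": (7, 20), "G": (2, 10),
}

budget = 200
chosen, total_npv = [], 0
# Rank by profitability index: PI = (NPV + cost) / cost
for name, (proj_npv, cost) in sorted(projects.items(),
                                     key=lambda kv: (kv[1][0] + kv[1][1]) / kv[1][1],
                                     reverse=True):
    if cost <= budget:  # accept the project if it still fits the budget
        chosen.append(name)
        budget -= cost
        total_npv += proj_npv

print(chosen, total_npv)  # ['A', 'E', 'F'] 78
```

One caveat: ranking by PI is a greedy heuristic. With lumpy budgets it is not guaranteed to find the best combination in every case (an exhaustive, knapsack-style search would), but it works for this example and is the approach the chapter describes.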
Concepts In Practice
Capital Budgeting Challenges
Although the basic techniques of project evaluation are straightforward, real-world capital budgeting decisions are complex and multifaceted. The goal of capital budgeting is to choose the projects that will bring the most value to the shareholders of the company. The NPV rule provides a clear, concise criterion for which projects will bring value to the shareholders. It is important to remember, however, that all of the project valuation calculations are based on projected cash flows. These projected cash flows are estimates, based on the best educated guesses that a company makes about its business opportunities over the next few years. Because no company has a crystal ball that can predict the future, its calculation of NPV is an estimate of what it expects.
Think, for example, of an oil company deciding whether to drill for oil. The project will require expenditures on equipment, land, and other items. The cash inflows will depend on the likelihood of oil being found, the quantity of oil produced by the well, and the price at which the oil can be sold. If a company estimates that oil will sell for \$100 per barrel during the next few years, the project will have a much higher NPV than if the company estimates that oil will sell for only \$50 per barrel.
A project that has a positive NPV and is accepted when a company is planning how to allocate its capital toward investments may end up being a bad project that the company wishes it had avoided if the future is much different from what it projected. Managers must stay attuned to economic developments and reevaluate capital budgeting decisions when significant changes occur. In spring 2020, managers around the globe were faced with a dramatically changing economic environment amid a pandemic. Oil companies, for example, saw oil prices drop from over \$50 per barrel at the beginning of March to under \$15 per barrel by the end of April.
Link to Learning
Reducing Capital Spending
In June 2020, McKinsey & Company looked at major companies around the world that were reducing their capital expenditures in the face of the COVID-19 pandemic. These companies were cutting their capital budgets by 10% to 80% from their originally planned levels for 2020. Reductions were especially large in the oil and gas industry, as companies found their revenue projections, and thus the NPV of their planned projects, falling dramatically. In addition, companies found themselves needing to free up cash; with more limited cash resources, fewer positive NPV projects could be accepted and funded.3 Due to the COVID-19 pandemic, many CFOs were challenged to stabilize their corporate cash flows. This article explains how reducing capital spending, a step that can usually be taken quickly, helped them do so.
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Calculate NPV using Excel.
• Calculate IRR using Excel.
• Create an NPV profile using Excel.
A Microsoft Excel spreadsheet provides an alternative to using a financial calculator to automate the arithmetic necessary to calculate NPV and IRR. An advantage of using Excel is that you can quickly change any assumptions or numbers in your problem and recalculate NPV or IRR based on that updated information. Excel is a versatile tool with more than one way to set up most problems. We will consider a couple of straightforward examples of using Excel to calculate NPV and IRR.
Suppose your company is considering a project that will cost \$30,000 this year. The cash inflow from this project is expected to be \$6,000 next year and \$8,000 the following year. The cash inflow is expected to increase by \$2,000 yearly, resulting in a cash inflow of \$18,000 in year 7, the final year of the project. You know that your company’s cost of funds is 9%. Your company would like to evaluate this project.
Calculating NPV Using Excel
To calculate NPV using Excel, you would begin by placing each year’s expected cash flows in a sheet, as in row 5 in Figure 16.4. One approach to calculating NPV is to use the formula for discounting future cash flows, as is shown in row 6.
Figure 16.4 Inserting Present Cash Flows Using Excel (\$ except Cost of Funds)
Figure 16.5 shows the present value of each year’s cash flow resulting from the formula. The NPV is then calculated by summing the present values of the cash flows.
Download the spreadsheet file containing key Chapter 16 Excel exhibits.
Figure 16.5 NPV Calculated by Summing Present Values of Cash Flows (\$ except Cost of Funds)
Alternatively, Excel is programmed with financial functions, including a calculation for NPV. The NPV formula is shown in cell J7 in Figure 16.6 below. However, it is important to pay attention to how Excel defines NPV. The Excel NPV function calculates the sum of the present values of the cash flows occurring from period 1 through the end of the project using the designated discount rate, but it fails to include the initial investment at time period zero at the beginning of the project. The NPV function in cell J6 will return \$56,947 for this project. You must subtract the initial cash outflow of \$30,000 that occurs at time 0 to get the NPV of \$26,947 for the project.
When entering the Excel-programmed NPV function, you must remember to include references only to the cells that contain cash flows from year 1 to the end of the project. Then, subtract the initial investment of year 0 to calculate NPV according to the standard definition of NPV—the present values of the cash inflows minus the present value of the cash outflow. Note: Because of the nonstandard use of the term NPV by Excel, many users prefer to use the method described above rather than this predefined function.
Figure 16.6 NPV Formula (\$ except IRR/Cost of Funds)
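Excel's period-1 convention for NPV is easy to mishandle, so it is worth seeing the two-step calculation in code. The Python sketch below (illustrative only; it mirrors the spreadsheet example with the \$30,000 outlay and 9% cost of funds) reproduces both Excel's raw NPV() result and the true NPV after subtracting the time-0 investment.

```python
def excel_style_npv(rate, future_flows):
    # Mimics Excel's NPV(): discounts flows assumed to occur in periods 1..n
    # and does NOT include any time-0 cash flow
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(future_flows, start=1))

inflows = [6_000, 8_000, 10_000, 12_000, 14_000, 16_000, 18_000]  # years 1-7
pv_inflows = excel_style_npv(0.09, inflows)
true_npv = pv_inflows - 30_000  # subtract the time-0 investment separately
print(round(pv_inflows))  # 56947 (what Excel's NPV() returns)
print(round(true_npv))    # 26947 (the project's actual NPV)
```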
Calculating IRR Using Excel
Excel also provides a function for calculating IRR. This function is shown in Figure 16.7, cell J8. Unlike the NPV function, the IRR function properly uses all of the project’s cash flows, including the initial cash outflow at time 0, in its calculation. This function will correctly return the IRR of 27.7% for the project. Figure 16.8 shows the completed spreadsheet.
Figure 16.7 Function for Calculating IRR (\$ except IRR/Cost of Funds)
Figure 16.8 Final Spreadsheet (\$ except IRR/Cost of Funds)
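Excel's IRR function finds the rate by iterative search. A simple bisection over the NPV function, sketched below in Python (an illustration of the idea, assuming NPV changes sign exactly once over the search interval, which holds for this conventional cash flow pattern), reproduces the 27.7% figure.

```python
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-9):
    """Bisection search for the rate where NPV crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # NPV still positive: the discount rate is too low
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-30_000, 6_000, 8_000, 10_000, 12_000, 14_000, 16_000, 18_000]
project_irr = irr(flows)
print(round(project_irr * 100, 1))  # 27.7
```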
Using Excel to Create an NPV Profile
Firms often do not know exactly what their cost of attracting capital is, so they must use estimates in their decision-making. Also, the cost of attracting capital can change with economic and market conditions. Especially if markets are volatile, a company may use an NPV profile to see how sensitive their decisions are to changes in financing costs. Excel simplifies the creation of an NPV profile.
Middleton Manufacturing is considering installing solar panels to heat water and provide lighting throughout its plant. To do so will cost the company \$800,000 this year. However, this upgrade will save the company an estimated \$150,000 in electrical costs each year for the next 10 years. Constructing an NPV profile of this project will allow Middleton to see how the NPV of the project changes with the cost of attracting funds.
First, the project cash flows must be placed in an Excel spreadsheet, as is shown in cells D2 through N2 in Figure 16.9. The company’s cost of funds is placed in cell B1; begin by putting in 10% for this rate. Next, the formula for NPV is placed in cell B6; cell B6 shows the NPV of the cash flows in cells D2 through N2, using the rate that is in cell B1.
For reference, compute IRR in cell B4. Calculating IRR is not necessary for creating the NPV profile. However, it gives a good reference point. Remember that if the IRR of a project is greater than the firm’s cost of attracting capital, then the NPV will be positive; if the IRR of a project is less than the firm’s cost of attracting capital, then the NPV will be negative.
An NPV profile is created by calculating the NPV of the project for a variety of possible costs of attracting capital. In other words, you want to calculate NPV using the project cash flows in cells D2 through N2, using a variety of discount rates in cell B1. This is accomplished by using the Excel data table function. The data table function shows how the outcome of an Excel formula changes when one of the cells in the spreadsheet changes. In this instance, you want to determine how the value of the NPV formula (cell B6) changes when the discount rate (cell B1) changes.
Figure 16.9 Project Cash Flows Inserted into Excel
To do this, enter the range of interest rates that you want to consider down a column, beginning in cell A7. This example shows rates from 1% to 20% entered in cells A7 through A26. Your Excel file should now look like the screenshot in Figure 16.9.
Next, highlight the cells containing the NPV calculation and the range of discount rates. Thus, you will highlight cells A6 through A26 and B6 through B26 (see Figure 16.10). Click Data at the top of the Excel menu so that you see the What-If Analysis feature. Choose Data Table. Because the various discount rates you want to use are in a column, use the “Column input cell” option. Enter “B1” in this box. You are telling Excel to calculate NPV using each of the numbers in this column as the cost of attracting funds in cell B1. Click OK.
Figure 16.10 Creating a Data Table in Excel
After clicking OK, the cells in column B next to the list of various discount rates will fill with the NPVs corresponding to each of the rates. This is shown in Figure 16.11.
Figure 16.11 NPV Calculated for Various Discount Rates
Now that the various NPVs are calculated, you can create the NPV profile graph. To create the graph, begin by highlighting the discount rates and NPVs that are in cells A7 through A26 and B7 through B26. Next, go to the Insert tab in the menu at the top of Excel. Several different chart options will be available; choose Scatter. You will end up with a chart that looks like the one in Figure 16.12. You can customize the chart by renaming it, labeling the axes, and making other cosmetic changes if you like.
You will notice that the NPV profile crosses the x-axis between 13% and 14%; remember that the NPV will be zero when the discount rate that is used to calculate the NPV is equal to the project’s IRR, which we previously calculated to be 13.43%. If the firm’s cost of raising funds is lower than 13.43%, the NPV profile shows that the project has a positive NPV, and the project should be accepted. Conversely, if the firm’s cost of raising funds is greater than 13.43%, the NPV of this project will be negative, and the project should not be accepted.
Figure 16.12 NPV Profile Created Using Excel
Middleton Manufacturing can use this NPV profile to evaluate its solar panel installation project. If the managers think that the cost of attracting funds for the company is 10%, then the project has a positive NPV of \$121,685 and the company should install the panels. The NPV profile shows that if the managers are underestimating the cost of funds even by 30% and it will really cost Middleton 13% to attract funds, the project is still a good project. The cost of attracting funds would have to be higher than 13.43% for the solar panel project to be rejected.
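The same NPV profile can be generated without a spreadsheet. The Python sketch below (illustrative; it uses the Middleton cash flows and the 1%-20% rate range from the Excel example) recomputes NPV at each candidate discount rate, playing the role of Excel's data table.

```python
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Middleton's solar project: $800,000 out now, $150,000 saved yearly for 10 years
flows = [-800_000] + [150_000] * 10

# NPV profile: NPV at each discount rate from 1% to 20%
profile = {r / 100: round(npv(r / 100, flows)) for r in range(1, 21)}
print(profile[0.10])  # 121685, the NPV at a 10% cost of funds
# The profile crosses zero between 13% and 14%, i.e., at the IRR (about 13.43%)
print(profile[0.13] > 0 > profile[0.14])  # True
```

Plotting the `profile` pairs produces the same downward-sloping curve as Figure 16.12, with the x-intercept at the project's IRR.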
16.1 Payback Period Method
The payback period is the simplest project evaluation method. It is the time it takes the company to recoup its initial investment. Its usefulness is limited, however, because it ignores the time value of money.
16.2 Net Present Value (NPV) Method
Net present value (NPV) is calculated by subtracting the present value of a project’s cash outflows from the present value of the project’s cash inflows. A project should be accepted if its NPV is positive and rejected if its NPV is negative.
16.3 Internal Rate of Return (IRR) Method
The internal rate of return (IRR) of a project is the discount rate that sets the present value of a project’s cash inflows exactly equal to the present value of the project’s cash outflows. A project should be accepted if its IRR is greater than the firm’s cost of attracting capital.
16.4 Alternative Methods
The discounted payback period uses the time value of money to discount future cash flows to see how long it will be before the initial investment of a project is recovered. MIRR provides a variation on IRR in which all cash flows are compounded using the cost of capital, resolving the reinvestment rate assumption problem of the IRR method; unlike IRR, which may have multiple mathematical solutions, MIRR will result in one solution. The profitability index is calculated as the NPV of the project divided by the initial cost of the project.
16.5 Choosing between Projects
Firms may need to choose among a variety of good projects. The projects may have different lives or be differently sized projects that require different amounts of resources. By choosing projects with the highest profitability index, companies can take on the projects that will lead to the greatest increase in value for the company.
16.6 Using Excel to Make Company Investment Decisions
Excel spreadsheets provide a way to easily calculate the NPV and IRR of a project. Using Excel to create an NPV profile allows a company to see how much its estimates of the cost of raising funds can err from the true cost and have the project still be an acceptable project.
16.09: Key Terms
capital budgeting
the process a business follows to evaluate potential major projects or investments
discounted payback period
the length of time it will take for the present value of the future cash inflows of a project to equal the initial cost of the investment
equal annuity approach
a method of comparing projects of different lives by assuming that the projects can be repeated forever
internal rate of return (IRR)
the discount rate that sets the NPV of a project equal to zero
modified internal rate of return (MIRR)
the yield that sets the future value of the cash inflows of a project equal to the present value of the cash outflows of the project
mutually exclusive projects
projects that compete against each other so that when one project is chosen, the other project cannot be done
net present value (NPV)
the present value of the cash inflows of a project minus the present value of the cash outflows of the project
payback period
the length of time it will take for a company to make enough money from an investment to recover the initial cost of the investment
profitability index (PI)
the present value of cash inflows divided by the present value of cash outflows
replacement chain approach
a method of comparing projects of differing lives by repeating shorter projects multiple times until they reach the lifetime of the longest project
16.10: CFA Institute
This chapter supports some of the Learning Outcome Statements (LOS) in this CFA® Level I Study Session. Reference with permission of CFA Institute.
1.
Which of the following is a disadvantage of using the payback method?
1. It only considers cash flows that occur after the project breaks even.
2. It ignores the time value of money.
3. It is difficult to calculate.
4. You must know the company’s cost of raising funds to be able to use it.
2.
A company should accept a project if ________.
1. the NPV of the project is positive
2. the NPV of the project is negative
3. the IRR of the project is positive
4. the IRR of the project is negative
3.
The net present value of a project equals ________.
1. the future value of the cash inflows minus the future value of the cash outflows
2. the present value of the cash inflows minus the future value of the cash outflows
3. the present value of the cash inflows minus the present value of the cash outflows
4. the future value of the cash inflows minus the present value of the cash outflows
4.
The IRR of a project is the discount rate that ________.
1. makes the NPV equal to zero
2. equates the present value of the cash inflows to the future value of the cash outflows
3. makes the NPV positive
4. equates the present value of cash outflows to the future value of the cash inflows
5.
The IRR method assumes that ________.
1. cash flows are reinvested at the firm’s cost of attracting funds when they are received
2. cash flows of a project are never reinvested
3. cash flows are reinvested at the internal rate of return when they are received
4. the NPV of a project is negative
6.
When cash outflows occur during more than one time period, ________.
1. the project’s NPV will definitely be negative
2. the project can have multiple IRRs
3. the project should not be done
4. the time value of money is not important
7.
The discounted payback period method ________.
1. is used to compare two projects that have different lives
2. fails to consider the time value of money
3. provides an objective criterion for an accept-or-reject decision grounded in financial theory
4. discounts cash flows using the company’s cost of funds to overcome a flaw of the payback period method
8.
Which of the following is a method of adjustment for comparing projects of different lives?
1. IRR
2. Modified IRR
3. Payback period
4. Equal annuity
9.
When a company can only fund some of its good projects, it should rank the projects by ________.
1. PI
2. IRR
3. NPV
4. payback period
10.
If a company is considering two mutually exclusive projects, which of the following statements is true?
1. The company must do both projects if it chooses to do one of the projects.
2. The IRR method should be used to compare the projects.
3. Doing one of the projects means the other project cannot be done.
4. The company does not need to compare the projects because it can choose to do both.
16.12: Review Questions
1.
Describe the disadvantages of using the payback period to evaluate a project.
2.
Explain why a company would want to accept a project with a positive NPV and reject a project with a negative NPV.
3.
Westland Manufacturing could spend \$5,000 to update its existing fluorescent lighting fixtures to newer fluorescent fixtures that would be more energy efficient. Explain why updating the light fixtures with newer fluorescent fixtures and replacing the existing fixtures with LED fixtures would be considered mutually exclusive projects.
4.
When faced with a decision between two good but mutually exclusive projects, should a manager base the decision on NPV or IRR? Why?
1.
Westland Manufacturing spends \$20,000 to update the lighting in its factory to more energy-efficient LED fixtures. This will save the company \$4,000 per year in electricity costs. What is the payback period of this project?
2.
Westland Manufacturing spends \$20,000 to update the lighting in its factory to more energy-efficient LED fixtures. This will save the company \$4,000 per year in electricity costs. The company estimates that these fixtures will last for 10 years. If the company’s cost of funds is 8%, what is the NPV of this project?
3.
If Westland Manufacturing finds that its cost of funds is 11%, what will happen to the NPV of the project in problem 2?
4.
Westland Manufacturing spends \$20,000 to update the lighting in its factory to more energy-efficient LED fixtures. This will save the company \$4,000 per year in electricity costs. The company estimates that these fixtures will last for 10 years. What is the IRR of this project?
5.
Westland Manufacturing spends \$20,000 to update the lighting in its factory to more energy-efficient LED fixtures. This will save the company \$4,000 per year in electricity costs. The company estimates that these fixtures will last for 10 years. If the company’s cost of funds is 8%, what is the PI of this project?
6.
Westland Manufacturing spends \$20,000 to update the lighting in its factory to more energy-efficient LED fixtures. This will save the company \$4,000 per year in electricity costs. The company estimates that these fixtures will last for 10 years. If the company’s cost of funds is 8%, what is the modified IRR of this project?
7.
Westland Manufacturing spends \$20,000 to update the lighting in its factory to more energy-efficient LED fixtures. This will save the company \$4,000 per year in electricity costs. The company estimates that these fixtures will last for 10 years. If the company’s cost of funds is 8%, what is the discounted payback period of this project?
8.
Holiday Hotels is considering two different floorings to use in its buildings. The less expensive tile will need to be replaced every five years. The more durable, more expensive tile will need to be replaced every eight years. To use the replacement chain approach to compare these two projects, how many times would you have to assume each type of tile would be replaced?
9.
You will be living in your college town for two more years. You are considering purchasing a townhouse that will cost you \$250,000 today. You estimate that if you do, your expenses for each of the next two years will be \$6,000 less than if you rented an apartment. You think that you would be able to lease the townhouse to another college student afterward for \$12,000 per year and that your taxes, maintenance, and other expenses for the townhouse would be \$5,000 per year. You expect to lease the townhouse for five years before you sell it, and you expect to be able to sell the townhouse for \$275,000. Use Excel to create an NPV profile for this undertaking. If it will cost you 3% to borrow money, should you buy the townhouse? What if it will cost you 8% to borrow money?
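An NPV profile like the one problem 9 asks for can be built in any tool by recomputing NPV across a range of discount rates. Below is a Python sketch of the mechanics; the cash-flow timing is one reasonable reading of the problem and should be treated as an assumption, not the official solution.

```python
def npv(rate, cash_flows):
    """NPV with cash_flows[t] occurring at the end of year t (t = 0 is today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# One reading of problem 9: -$250,000 today; $6,000 saved in years 1-2;
# $12,000 lease income less $5,000 expenses in years 3-7; $275,000 sale in year 7
flows = [-250_000, 6_000, 6_000, 7_000, 7_000, 7_000, 7_000, 7_000 + 275_000]

# The NPV profile: the NPV at each candidate discount rate
for rate in (0.00, 0.03, 0.05, 0.08, 0.10):
    print(f"rate {rate:.0%}: NPV = {npv(rate, flows):>12,.0f}")
```

Plotting these points against the rate gives the NPV profile; the rate at which the curve crosses zero is the IRR of the undertaking.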
16.14: Video Activity
Calculating NPV and IRR
Businesses use NPV and IRR to determine whether or not a project will add value for shareholders. Watch this CFA® Level I Corporate Finance video to learn more. Working along with the video, you will gain practice in using your financial calculator to calculate IRR.
1.
According to the video, how should a company use NPV and IRR to decide whether a project should be undertaken?
2.
In the video, Trim Corp. is considering a project that is expected to have cash inflows of \$350, \$250, and \$150 in years 1, 2, and 3, respectively. What do you think would happen to the NPV of the project if the company expected the same cash flows, but in reverse order? In other words, what do you think would happen to the NPV if the \$150 were the cash inflow of year 1, \$250 were the cash inflow for year 2, and \$350 were the cash inflow for year 3? Using the same discount rate as in the video, 25%, calculate the NPV for the project with this string of cash inflows. Was the outcome what you thought it would be?
The Tokyo Olympics
The capital investment a city must undertake to host the Olympic Games is massive. Learn more about the capital investments and expenses Tokyo faced as host of the 2020 Summer Olympics and how it was impacted by a global pandemic by watching this video, How the Tokyo Olympics Became the Most Expensive Summer Games Ever.
3.
Given the costs discussed in the video, create an Excel spreadsheet to estimate the NPV and IRR of hosting the Olympic Games for a city.
4.
How would the numbers in your Excel spreadsheet change because of the COVID-19 pandemic? Create an NPV profile for Tokyo’s Olympic Games given the changes that were caused by the pandemic.
Figure 17.1 A company can only attract capital if it offers an expected return that is competitive with other options. (credit: modification of “1166357_33949449” by Jenifer Corrêa/flickr, CC BY 2.0)
The most important job that company managers have is to maximize the value of the company. Some obvious things come to mind when you think of how managers would do this. For example, to maximize the value of American Airlines, the managers need to attract customers and sell seats on flights. They also need to keep costs as low as possible, which means keeping the costs of purchasing fuel and making plane repairs as low as possible. While the concept of keeping costs low is simple, the specific decisions a firm makes can be complex. If American Airlines wants to purchase a new airplane, it needs to consider not just the dollar cost of the initial purchase but also the passenger and cargo capacity of the plane as well as ongoing maintenance costs.
In addition to paying salaries to its pilots and flight attendants, American Airlines must pay to use investors’ money. If the company wants to purchase a new airplane, it may borrow money to pay for the plane. Even if American Airlines does not need to incur debt to buy the plane, the money it uses to buy the plane ultimately belongs to the owners or shareholders of the company. The company must consider the opportunity cost of this money and the return that shareholders are expecting on their investments.
Just as different planes have distinctive characteristics and costs, the different types of financing that American Airlines can use will have different characteristics and costs. One of the tasks of the financial manager is to consider the trade-offs of these sources of funding. In this chapter, we look at the basic principles that managers use to minimize the cost of funding and maximize the value of the firm.
17.02: The Concept of Capital Structure
Learning Objectives
By the end of this section, you will be able to:
• Distinguish between the two major sources of capital appearing on a balance sheet.
• Explain why there is a cost of capital.
• Calculate the weights in a company’s capital structure.
The Basic Balance Sheet
In order to produce and sell its products or services, a company needs assets. If a firm will produce shirts, for example, it will need equipment such as sewing machines, cutting boards, irons, and a building in which to store its equipment. The company will also need some raw materials such as fabric, buttons, and thread. These items the company needs to conduct its operations are assets. They appear on the left-hand side of the balance sheet.
The company has to pay for these assets. The sources of the money the company uses to pay for these assets appear on the right-hand side of the balance sheet. The company’s sources of financing represent its capital. There are two broad types of capital: debt (or borrowing) and equity (or ownership).
Figure 17.2 is a representation of a basic balance sheet. Remember that the two sides of the balance sheet must satisfy $Assets = Liabilities + Equity$. Companies typically finance their assets through equity (selling ownership shares to stockholders) and debt (borrowing money from lenders). The debt that a firm uses is often referred to as financial leverage. The relative proportions of debt and equity that a firm uses in financing its assets is referred to as its capital structure.
Figure 17.2 Basic Balance Sheet for Company with Debt, Preferred Stock, and Common Equity in Capital Structure
Attracting Capital
When a company raises money from investors, those investors forgo the opportunity to invest that money elsewhere. In economics terms, there is an opportunity cost to those who buy a company’s bonds or stock.
Suppose, for example, that you have \$5,000, and you purchase Tesla stock. You could have purchased Apple stock or Disney stock instead. There were many other options, but once you chose Tesla stock, you no longer had the money available for the other options. You would only purchase Tesla stock if you thought that you would receive a return as large as you would have for the same level of risk on the other investments.
From Tesla’s perspective, this means that the company can only attract your capital if it offers an expected return high enough for you to choose it as the company that will use your money. Providing a return equal to what potential investors could expect to earn elsewhere for a similar risk is the cost a company bears in exchange for obtaining funds from investors. Just as a firm must consider the costs of electricity, raw materials, and wages when it calculates the costs of doing business, it must also consider the cost of attracting capital so that it can purchase its assets.
Weights in the Capital Structure
Most companies have multiple sources of capital. The firm’s overall cost of capital is a weighted average of its debt and equity costs of capital. The average of a firm’s debt and equity costs of capital, weighted by the fractions of the firm’s value that correspond to debt and equity, is known as the weighted average cost of capital (WACC).
The weights in the WACC are the proportions of debt and equity used in the firm’s capital structure. If, for example, a company is financed 25% by debt and 75% by equity, the weights in the WACC would be 25% on the debt cost of capital and 75% on the equity cost of capital. The balance sheet of the company would look like Figure 17.3.
These weights can be derived from the right-hand side of a market-value-based balance sheet. Recall that accounting-based book values listed on traditional financial statements reflect historical costs. The market-value balance sheet is similar to the accounting balance sheet, but all values are current market values.
Figure 17.3 Balance Sheet of Company with Capital Structure of 25% Debt and 75% Equity
Just as the accounting balance sheet must balance, the market-value balance sheet must balance:
$Market Value of Assets = Market Value of Debt + Market Value of Equity$
17.1
This equation reminds us that the values of a company’s debt and equity flow from the market value of the company’s assets.
Let’s look at an example of how a company would calculate the weights in its capital structure. Bluebonnet Industries has debt with a book (face) value of \$5 million and equity with a book value of \$3 million. Bluebonnet’s debt is trading at 97% of its face value. It has one million shares of stock, which are trading for \$15 per share.
First, the market values of the company’s debt and equity must be determined. Bluebonnet’s debt is trading at a discount; its market value is $0.97 × 5,000,000 = 4,850,000$. The market value of Bluebonnet’s equity equals $Number of Shares × Price per Share = 1,000,000 × 15 = 15,000,000$. Thus, the total market value of the company’s capital is $4,850,000 + 15,000,000 = 19,850,000$. The weight of debt in Bluebonnet’s capital structure is $4,850,000 / 19,850,000 = 24.4%$. The weight of equity in its capital structure is $15,000,000 / 19,850,000 = 75.6%$.
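The weight calculation above can be reproduced in a few lines of Python. This is a sketch using the Bluebonnet figures; the variable names are my own.

```python
# Bluebonnet Industries figures from the example above
debt_face_value = 5_000_000      # book (face) value of debt
debt_price_fraction = 0.97       # debt trades at 97% of face value
shares_outstanding = 1_000_000
price_per_share = 15.00

market_value_debt = debt_price_fraction * debt_face_value      # $4,850,000
market_value_equity = shares_outstanding * price_per_share     # $15,000,000
total_capital = market_value_debt + market_value_equity        # $19,850,000

weight_debt = market_value_debt / total_capital    # fraction of capital from debt
weight_equity = market_value_equity / total_capital
print(f"D% = {weight_debt:.1%}, E% = {weight_equity:.1%}")
```

Note that only market values enter the calculation; the \$3 million book value of equity plays no role in the weights.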
Learning Objectives
By the end of this section, you will be able to:
• Calculate the after-tax cost of debt capital.
• Explain why the return to debt holders is not the same as the cost to the firm.
• Calculate the cost of equity capital.
The costs of debt and equity capital are what company lenders (those who allow the firm to use their capital) expect in return for providing that capital. Just as current market values of debt and equity should be used in determining their weights in the capital structure, current market values of debt and equity should be used in determining the costs of those types of financing.
Cost of Debt Capital
A company’s cost of debt is the interest rate it would have to pay to refinance its existing debt. Because a firm’s existing debt trades in the marketplace, its price changes according to market conditions. The overall credit environment can change due to changing macroeconomic conditions, causing a change in the price of debt securities. In addition, as there are changes in the overall riskiness of the firm and its ability to repay its creditors, the price of the debt securities issued by the firm will change.
The market price of a company’s existing bonds implies a yield to maturity. Recall that the yield to maturity is the return that current purchasers of the debt will earn if they hold the bond to maturity and receive all of the payments promised by the borrowing firm.
Yield to Maturity and the Cost of Debt
Bluebonnet’s debt is selling for 97% of its face value. This means that for every \$100 of face value, investors are currently paying \$97 for an outstanding bond issued by Bluebonnet Industries. This debt has a coupon rate of 6%, paid semiannually, and the bonds mature in 15 years.
Because the bonds are selling at a discount, the yield that investors who purchase these bonds will receive if they hold the bond to maturity exceeds 6%. The purchasers of these bonds will receive a coupon payment of $100 × 0.06 / 2 = 3$ every six months for the next 15 years. They will also receive the \$100 face value when the bonds mature in 15 years. To calculate the yield to maturity of these bonds using your financial calculator, input the information shown in Table 17.1.
Step Description Enter Display
1 Enter number of coupon payments 30 N N = 30.00
2 Enter the price paid for the bond 97 +/- PV PV = -97.00
3 Enter the coupon payment 3 PMT PMT = 3.00
4 Enter the face value of the bond 100 FV FV = 100.00
5 Compute the semiannual rate CPT I/Y I/Y = 3.156
6 Multiply 3.156 by 2 to get YTM × 2 = 6.312
Table 17.1 Calculator Steps for Finding the Yield to Maturity1
The yield to maturity (YTM) of Bluebonnet Industries bonds is 6.312%. This YTM should be used in estimating the firm’s overall cost of capital, not the coupon rate of 6% that is stated on the outstanding bonds. The coupon rate on the existing bonds is a historical rate, set under economic conditions that may have been different from the current market conditions. The YTM of 6.312% represents what investors are currently requiring to purchase the debt issued by the company.
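The calculator is solving for the discount rate that makes the present value of the bond’s remaining cash flows equal its market price. A short Python sketch reproduces the result using simple bisection root-finding; the function names are my own.

```python
def bond_price(rate, coupon, face, periods):
    """Present value of the coupons (an annuity) plus the face value."""
    pv_coupons = coupon * (1 - (1 + rate) ** -periods) / rate
    pv_face = face / (1 + rate) ** periods
    return pv_coupons + pv_face

def periodic_yield(price, coupon, face, periods):
    """Bisection: the model price falls as the rate rises, so squeeze the root."""
    lo, hi = 1e-9, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if bond_price(mid, coupon, face, periods) > price:
            lo = mid   # model price too high -> the rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2

# Bluebonnet's bonds: $3 semiannual coupon, $100 face, 30 periods, priced at $97
semi = periodic_yield(97.0, 3.0, 100.0, 30)
ytm = 2 * semi  # annualized the same way as the calculator example
print(f"semiannual rate = {semi:.3%}, YTM = {ytm:.3%}")
```

The bisection converges to the same 3.156% semiannual rate the calculator reports, for a 6.312% YTM.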
After-Tax Cost of Debt
Although current debt holders demand to earn 6.312% to encourage them to lend to Bluebonnet Industries, the cost to the firm is less than 6.312%. This is because interest paid on debt is a tax-deductible expense. When a firm borrows money, the interest it pays is offset to some extent by the tax savings that occur because of this deductible expense.
The after-tax cost of debt is the net cost of interest on a company’s debt after taxes. This after-tax cost of debt is the firm’s effective cost of debt. The after-tax cost of debt is calculated as $rd(1 - T)$, where $rd$ is the before-tax cost of debt, or the return that the lenders receive, and T is the company’s tax rate. If Bluebonnet Industries has a tax rate of 21%, then the firm’s after-tax cost of debt is $6.312% × (1 - 0.21) = 4.986%$.
This means that for every \$1,000 Bluebonnet borrows, the company will have to pay its lenders $1,000 × 6.312% = 63.12$ in interest every year. The company can deduct \$63.12 from its income, so this interest payment reduces the taxes the company must pay to the government by $63.12 × 0.21 = 13.26$. Thus, Bluebonnet’s effective cost of debt is $63.12 - 13.26 = 49.86$, or $49.86 / 1,000 = 4.986%$.
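The tax-shield arithmetic can be checked with a brief sketch; the variable names are my own.

```python
def after_tax_cost_of_debt(pre_tax_rate, tax_rate):
    """Effective cost of debt once the interest tax deduction is counted."""
    return pre_tax_rate * (1 - tax_rate)

rd = 0.06312   # Bluebonnet's yield to maturity (pre-tax cost of debt)
T = 0.21       # corporate tax rate

interest = 1_000 * rd          # interest paid per $1,000 borrowed
tax_savings = interest * T     # taxes avoided because interest is deductible
net_cost = interest - tax_savings

effective_rate = after_tax_cost_of_debt(rd, T)
print(f"net cost per $1,000 = ${net_cost:.2f}, effective rate = {effective_rate:.3%}")
```

The dollar route and the rate route agree by construction: the net cost per \$1,000 divided by \$1,000 is exactly $rd(1 - T)$.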
Think It Through
Calculating the After-Tax Cost of Debt
Royer Roasters has issued bonds that will mature in 18 years. The bonds have a coupon rate of 8%, and coupon payments are made semiannually. These bonds are currently selling at a price of \$102.20 per \$100 face value. Royer’s tax rate is 28%. What is Royer’s after-tax cost of debt?
Cost of Equity Capital
Companies can raise money by selling stock, or ownership shares, of the company. Stock is known as equity capital. The cost of common stock capital cannot be directly observed in the market; it must be estimated. Two primary methods for estimating the cost of common stock capital are the capital asset pricing model (CAPM) and the constant dividend growth model.
CAPM
The CAPM is based on using the firm’s systematic risk to estimate the expected returns that shareholders require to invest in the stock. According to the CAPM, the cost of equity (re) can be estimated using the formula
$re = Risk-Free Rate + (Equity Beta × Market Risk Premium)$
17.2
For example, suppose that Bluebonnet Industries has an equity beta of 1.3. Because the beta is greater than one, the stock has more systematic risk than the average stock in the market. Assume that the rate on 10-year US Treasury notes is 3% and serves as a proxy for the risk-free rate. If the long-run average return for the stock market is 11%, the market risk premium is $11% - 3% = 8%;11% - 3% = 8%;$ this means that people who invest in the stock market are rewarded for the risk they are taking by being paid 8% more than they would have been paid if they had purchased US Treasury notes. Bluebonnet Industries cost of equity capital can be estimated as
$re = 0.03 + 1.3 × 0.08 = 0.03 + 0.104 = 0.134 = 13.4%$
17.3
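Equation 17.2 is a one-liner in code. Here is a minimal sketch using the Bluebonnet inputs; the function name is my own.

```python
def capm_cost_of_equity(risk_free, beta, market_risk_premium):
    """CAPM: r_e = risk-free rate + beta x market risk premium."""
    return risk_free + beta * market_risk_premium

# Bluebonnet: beta of 1.3, 3% risk-free rate, 11% - 3% = 8% market risk premium
re = capm_cost_of_equity(risk_free=0.03, beta=1.3, market_risk_premium=0.08)
print(f"cost of equity = {re:.1%}")
```

Because beta multiplies the premium, a higher-than-average-risk stock (beta above 1) always lands above the market's expected return under the CAPM.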
Constant Dividend Growth Model
The constant dividend growth model provides an alternative method of calculating a company’s cost of equity. The basic formula for the constant dividend growth model is
$re = Dividend in One Year / Current Stock Price + Dividend Growth Rate = Div1 / P0 + g$
17.4
Thus, three things are needed to complete this calculation: the current stock price, what the dividend will be in one year, and the growth rate of the dividend. The current price of the stock is easy to obtain by looking at the financial news. The other two items, the dividend next year and the growth rate of the dividend, will occur in the future and at the current time are not known with certainty; these two items must be estimated.
Suppose Bluebonnet paid a dividend of \$1.50 per share to its shareholders last year. Also suppose that this dividend has been growing at a rate of 2% each year for the past several years and that growth rate is expected to continue into the future. Then, the dividend in one year can be expected to be $1.50 × (1 + 0.02) = 1.53$. If the current stock price is \$12.50 per share, then that cost of equity is estimated as
$re = 1.53 / 12.50 + 0.02 = 0.1224 + 0.02 = 0.1424 = 14.24%$
17.5
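Equation 17.4 is equally simple in code, with the next dividend computed from last year’s dividend. A sketch with my own function name:

```python
def dividend_growth_cost_of_equity(last_dividend, growth_rate, price):
    """Constant dividend growth model: r_e = D1 / P0 + g, with D1 = D0 x (1 + g)."""
    next_dividend = last_dividend * (1 + growth_rate)  # D1
    return next_dividend / price + growth_rate

# Bluebonnet: $1.50 dividend last year, 2% growth, $12.50 stock price
re = dividend_growth_cost_of_equity(1.50, 0.02, 12.50)
print(f"cost of equity = {re:.2%}")
```

Rerunning the function with a higher price and everything else unchanged shows the inverse relationship the Think It Through below explores: a higher stock price lowers the implied cost of equity.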
Think It Through
Using the Constant Dividend Growth Model
What does an increase in the price of a company’s stock imply about the equity cost of capital for the company? To find out what the constant dividend growth model suggests, assume that the stock price for Bluebonnet Industries increases to \$16.50 per share. If there is no expectation that the growth rate of the dividends will increase, what would the new estimated equity cost of capital be?
Thus, an increase in the price of the stock, holding all of the other variables in the equation constant, implies that the equity cost of capital drops to 11.27%.
Footnotes
• 1The specific financial calculator in these examples is the Texas Instruments BA II PlusTM Professional model, but you can use other financial calculators for these types of calculations. | textbooks/biz/Finance/Principles_of_Finance_(OpenStax)/17%3A_How_Firms_Raise_Capital/17.03%3A_The_Costs_of_Debt_and_Equity_Capital.txt |
Learning Objectives
By the end of this section, you will be able to:
• Calculate the weighted average cost of capital (WACC).
• Describe issues that arise from estimating the cost of equity capital.
• Describe the use of net debt in calculating WACC.
Once you know the weights in a company’s capital structure and have estimated the costs of the different sources of its capital, you can calculate the company’s weighted average cost of capital (WACC).
WACC Equation
WACC is calculated using the equation
$WACC = D% × rd(1 - T) + P% × rpfd + E% × re$
17.7
D%, P%, and E% represent the weight of debt, preferred stock, and common equity, respectively, in the capital structure. Note that $D% + P% + E%$ must equal 100% because the company must account for 100% of its financing. The after-tax cost of debt is $rd(1 - T)$. The cost of preferred stock capital is represented by rpfd, and the cost of common stock capital is represented by re.
For a company that does not issue preferred stock, P% is equal to zero, and the WACC equation is simply
$WACC = D% × rd(1 - T) + E% × re$
17.8
Earlier in this chapter, we calculated the weights in Bluebonnet Industries’ capital structure to be $D% = 24.4%$ and $E% = 75.6%$. We also calculated the after-tax cost of debt for Bluebonnet to be 4.99%. If we use the CAPM to estimate the cost of equity capital for the firm, Bluebonnet’s WACC is computed as
$WACC = 24.4% × 4.99% + 75.6% × 13.4% = 1.22% + 10.13% = 11.35%$
17.9
If we use the constant dividend discount model to estimate the cost of equity for Bluebonnet Industries, the WACC is computed as
$WACC = 24.4% × 4.99% + 75.6% × 14.24% = 1.22% + 10.77% = 11.99%$
17.10
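Equation 17.7 can be packaged as a small helper and evaluated on the Bluebonnet inputs. This is a sketch (names are my own); because it carries unrounded intermediate values, the result can differ from hand-rounded figures by a few hundredths of a percent.

```python
def wacc(w_debt, w_equity, rd, re, tax_rate, w_pfd=0.0, r_pfd=0.0):
    """WACC = D% x rd x (1 - T) + P% x r_pfd + E% x re."""
    return w_debt * rd * (1 - tax_rate) + w_pfd * r_pfd + w_equity * re

# Bluebonnet: 24.4% debt at a 6.312% pre-tax cost, 75.6% equity at the
# CAPM cost-of-equity estimate of 13.4%, and a 21% tax rate
result = wacc(w_debt=0.244, w_equity=0.756, rd=0.06312, re=0.134, tax_rate=0.21)
print(f"WACC = {result:.2%}")
```

Passing the pre-tax cost of debt plus the tax rate, rather than a pre-computed after-tax figure, keeps the tax shield explicit in the formula; preferred stock defaults to a zero weight for firms like Bluebonnet that issue none.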
Calculating WACC in Practice
The equation for calculating WACC is straightforward. However, issues come up when financial managers calculate WACC in practice. Both the weights of the capital components and the costs of those components are needed to calculate the WACC. The WACC that financial managers derive will depend on the assumptions and models they use to determine what weights and capital costs to use.
Issues in Estimating the Cost of Equity Capital
We have explored two ways of estimating the cost of equity capital: the CAPM and the constant dividend growth model. Often, these methods will produce similar estimates of the cost of capital; seldom will the two methods provide the same value.
In our example for Bluebonnet Industries, the CAPM estimated the cost of equity capital as 13.4%. The constant dividend growth model estimated the cost of capital as 14.24%. The exact value of the WACC calculation depends on which of these estimates is used. It is important to remember that the WACC is an estimate that is based on a number of assumptions that financial managers made.
For example, using the CAPM requires assumptions be made regarding the values of the risk-free interest rate, the market risk premium, and a firm’s beta. The risk-free interest rate is generally determined using US Treasury security yields. In theory, the yield on US Treasury securities that have a maturity equivalent to the length of the company’s investors’ investment horizon should be used. It is common for financial analysts to use yields on long-term US Treasury bonds to determine the risk-free rate.
To estimate the market risk premium, analysts turn to historical data. Because this historical data is used to estimate the future market risk premium, the question arises of how many years of historical data should be used. Using more years of historical data can lead to more accurate estimates of what the average past return has been, but very old data may have little relevance if today’s financial market environment is different from what it was in the past. Old data may have little relevance for investors’ expectations today. Typical market risk premiums used by financial managers range from 5% to 8%.
The same issue with how much historical data should be considered arises when calculating a company’s beta. Different financial managers can calculate significantly different betas even for well-established, stable companies. In April 2021, for example, the beta for IBM was reported as 0.97 by MarketWatch and as 1.25 by Yahoo! Finance.
The CAPM estimate of the cost of equity capital for IBM is significantly different depending on what source is used for the company’s beta and what value is used for the market risk premium. Using a market risk premium of 5%, the beta of 0.97 provided by MarketWatch, and a risk-free rate of 3% results in a cost of capital of
$re = 0.03 + 0.97 × 0.05 = 0.03 + 0.0485 = 0.0785 = 7.85%$
17.11
If, instead, a market risk premium of 8% and the beta of 1.25 provided by Yahoo! Finance are used, the cost of capital is estimated to be
$re = 0.03 + 1.25 × 0.08 = 0.03 + 0.10 = 0.13 = 13.0%$
17.12
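The spread between these two estimates is easy to tabulate. The sketch below runs the CAPM over both reported IBM betas and both candidate market risk premiums; the dictionary and names are my own.

```python
def capm(risk_free, beta, market_risk_premium):
    """CAPM: r_e = risk-free rate + beta x market risk premium."""
    return risk_free + beta * market_risk_premium

risk_free = 0.03
ibm_betas = {"MarketWatch": 0.97, "Yahoo! Finance": 1.25}  # April 2021 values cited above

for source, beta in ibm_betas.items():
    for mrp in (0.05, 0.08):
        re = capm(risk_free, beta, mrp)
        print(f"{source}: beta {beta:.2f}, MRP {mrp:.0%} -> r_e = {re:.2%}")
```

Running the loop shows the estimates spanning roughly 7.85% to 13.0%, which is exactly the kind of sensitivity Table 17.3 documents for a broader set of firms.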
Concepts In Practice
Estimating the Equity Cost of Capital
Although the calculation of the cost of capital using the CAPM equation is simple and straightforward, there is not one definitive equity cost of capital for a company that all financial managers will agree on. Consider the eight companies spotlighted in Table 17.3.
Four estimates of the equity cost of capital are calculated for each firm. The first two estimates are based on the beta provided by MarketWatch for each of the companies. A risk-free rate of 3% is assumed. Market risk premiums of both 5% and of 8% are considered. A market risk premium of 5% would suggest that long-run investors who hold a well-diversified portfolio, such as one with all of the stocks in the S&P 500, will average a return 5 percentage points higher than the risk-free rate, or 8%. If you assume instead that the average long-run return on the S&P 500 is 11%, then people who purchase a portfolio of those stocks are rewarded by earning 8 percentage points more than the 3% they would earn investing in the risk-free security.
The last two estimates of the cost of equity capital for each company also use the same risk-free rate of 3% and the possible market risk premiums of 5% and 8%. The only difference is that the beta provided by Yahoo! Finance is used in the calculation.
Company Industry MarketWatch Yahoo! Finance
Beta MRP = 5% MRP = 8% Beta MRP = 5% MRP = 8%
Kroger Food retail 0.31 4.55% 5.48% 0.66 6.30% 8.28%
Coca-Cola Nonalcoholic beverages 0.69 6.45% 8.52% 0.62 6.10% 7.96%
AT&T Telecommunications 0.74 6.70% 8.92% 0.74 6.70% 8.92%
Kraft Heinz Food products 0.82 7.10% 9.56% 1.14 8.70% 12.12%
Microsoft Software 1.19 8.95% 12.52% 0.79 6.95% 9.32%
Goodyear Tire and Rubber Tires 1.24 9.20% 12.92% 2.26 14.30% 21.08%
American Airlines Passenger airlines 1.34 9.70% 13.72% 1.93 12.65% 18.44%
KB Homes Residential construction 1.42 10.10% 14.36% 1.83 12.15% 17.64%
Table 17.3 Estimates of Equity Cost of Capital for Eight Companies (source: Yahoo! Finance; MarketWatch)
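The estimates in Table 17.3 are straightforward to reproduce. The following sketch (the function name is ours, not from the text) applies the CAPM with the 3% risk-free rate and the two candidate market risk premiums:

```python
def capm_cost_of_equity(risk_free, beta, market_risk_premium):
    """CAPM: required return = risk-free rate + beta * market risk premium."""
    return risk_free + beta * market_risk_premium

# Goodyear Tire and Rubber, using the two published beta estimates
low = capm_cost_of_equity(0.03, 1.24, 0.05)   # MarketWatch beta, 5% MRP
high = capm_cost_of_equity(0.03, 2.26, 0.08)  # Yahoo! Finance beta, 8% MRP
print(f"{low:.2%} to {high:.2%}")  # 9.20% to 21.08%
```

Running the same two lines with any other row of the table reproduces that company's range of estimates.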
The range of the equity cost of capital estimates for each of the firms is significant. Consider, for example, Goodyear Tire and Rubber. According to MarketWatch, the beta for the company is 1.24, resulting in an estimated cost of equity capital between 9.20% and 12.92%. The beta provided by Yahoo! Finance is much higher, at 2.26. Using this higher beta results in an estimated equity cost of capital for Goodyear Tire and Rubber between 14.30% and 21.08%. This leaves the financial managers of Goodyear Tire and Rubber with an estimate of the equity cost of capital between 9.20% and 21.08%, using a range of reasonable assumptions.
What is a financial manager to do when one estimate is more than twice as large as another estimate? A financial manager who believes the equity cost of capital is close to 9% is likely to make very different choices from one who believes the cost is closer to 21%. This is why it is important for a financial manager to have a broad understanding of the operations of a particular company. First, the manager must know the historical background from which these numbers were derived. It is not enough for the manager to know that beta is estimated as 1.24 or 2.26; the manager must be able to determine why the estimates are so different. Second, the manager must be familiar enough with the company and the economic environment to draw a conclusion about what set of assumptions will most likely be reasonable going forward. While these numbers are based on historical data, the financial manager’s main concern is what the numbers will be going forward.
It is evident that estimating the equity cost of capital is not a simple task for companies. Although we do see a wide range of estimates in the table, some general principles emerge. First, the average company has a beta of 1. With a risk-free rate of 3% and a market risk premium in the range of 5% to 8%, the cost of equity capital will fall within a range of 8% to 11% for the average company. Companies that have a beta less than 1 will have an equity cost of capital that falls below this range, and companies that have a beta greater than 1 will have an equity cost of capital that rises above this range.
Recall that a company’s beta is heavily influenced by the type of industry. Grocery stores and providers of food products, for example, tend to have betas less than 1. During recessionary times, people still eat, but during expansionary times, people do not significantly increase their spending on these products. Thus, companies such as Kroger, Coca-Cola, and Kraft Heinz will tend to have low betas and a range of equity cost of capital below 8% to 11%.
The sales of companies in other industries tend to be much more volatile. During expansionary periods, people fly to vacation destinations and purchase new homes. During recessionary periods, families cut back on these discretionary expenditures. Thus, companies such as American Airlines and KB Homes will have higher betas and ranges of equity cost of capital that exceed the 8% to 11% average. The higher equity cost of capital is needed to incentivize investors to invest in these companies with riskier cash flows rather than in lower-risk companies.
The CAPM estimate depends on assumptions made, but issues also exist with the constant dividend growth model. First, the constant dividend growth model can be used only for companies that pay dividends. Second, the model assumes that the dividends will grow at a constant rate in the future, an assumption that is not always reasonable. It also assumes that the financial manager accurately forecasts the growth rate of dividends; any error in this forecast results in an error in estimating the cost of equity capital.
Given the differences in assumptions made when using the constant dividend growth model and the CAPM to estimate the equity cost of capital, it is not surprising that the numbers from the two models differ. When estimating the cost of equity capital for a particular firm, financial managers must examine the assumptions made for both approaches and decide which set of assumptions is more realistic for that particular company.
Net Debt
Many practitioners use net debt rather than total debt when calculating the weights for WACC. Net debt is the amount of debt that would remain if a company used all of its liquid assets to pay off as much debt as possible. Net debt is calculated as the firm’s total debt, both short-term and long-term, minus the firm’s cash and cash equivalents. Cash equivalents are current assets that can quickly and easily be converted into cash, such as Treasury bills, commercial paper, and marketable securities.
Consider, for example, Apple, which had \$112.436 billion in total debt in 2020. The company also had \$38.016 billion in cash and cash equivalents. This meant that the net debt for Apple was only \$74.420 billion. If Apple used all of its cash and cash equivalents to pay debt, it would be left with \$74.420 billion in debt.2
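The Apple figures above can be checked with a one-line calculation (figures in billions; the helper name is ours):

```python
def net_debt(total_debt, cash_and_equivalents):
    """Net debt = total debt (short- and long-term) minus cash and cash equivalents."""
    return total_debt - cash_and_equivalents

apple_net_debt = net_debt(112.436, 38.016)  # $ billions, fiscal 2020
print(f"${apple_net_debt:.3f} billion")  # $74.420 billion
```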
Cash and cash equivalents can be viewed as negative debt. For firms with relatively low levels of cash, this adjustment will not have a large impact on the overall WACC estimate. However, the adjustment can be important for firms that hold substantial cash reserves.
Learning Objectives
By the end of this section, you will be able to:
• Distinguish between a levered and an unlevered firm.
• Explain why the choice of capital structure does not impact the value of a firm in perfect financial markets.
• Calculate the interest tax shield.
• Explain how the interest tax shield encourages the use of leverage.
So far, we have taken the company’s capital structure as given. Each firm’s capital structure, however, is a result of intentional decisions made by the financial managers of the company. We now turn our attention to the issues that financial managers consider when making these decisions.
The Unlevered Firm
Let’s begin our discussion of capital structure choices by exploring the financing decisions you would face if you were to start a T-shirt business. Suppose that your hometown will host an international cycling competition. The competition itself will last for a month; cyclists will arrive early to train in the local climate. News coverage will be significant, meaning a lot of media personnel will be visiting your area. In addition to fans attending the event, it is expected that tourism will increase over the next year as recreational cyclists will want to ride the route of the professional race. You decide to operate a business for one year that will sell T-shirts highlighting this event.
You will need to make an up-front investment of \$40,000 to start the business. You estimate that you will generate a cash flow of \$52,000, after you cover all of your operating costs, at the end of next year. You know that these profits are risky; you think a 10% risk premium is appropriate for the level of riskiness of the business. If the risk-free rate is 4%, this means that the appropriate discount rate for you to use is 14%. The value of this business opportunity is
$\text{Value of Business} = \frac{52{,}000}{1.14} - 40{,}000 = 45{,}614 - 40{,}000 = 5{,}614$
17.13
This looks as if it will be a profitable business that should be undertaken. However, you do not have the \$40,000 for the up-front investment and will need to raise it.
First, consider raising money solely by selling ownership shares to your family and friends. How much would those shares be worth? The value of the stock would be equal to the present value of the expected future cash flows. The potential stockholders would expect to receive \$45,614 in one year. If they agree with you that the riskiness of this T-shirt business warrants a discount rate of 14%, then they will value the stock at
$\text{Present Value} = \frac{52{,}000}{1.14} = 45{,}614$
17.14
If you sell all of the equity in the company for \$45,614 and purchase the equipment necessary for the project for \$40,000, you have \$5,614 to keep as the entrepreneur who created the business.
This business would be financed 100% by equity. The lack of any debt in the capital structure means the firm would have no financial leverage. The equity in a firm that has no financial leverage is called unlevered equity.
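The valuation above can be sketched numerically (rounding to whole dollars, as the text does; the function name is ours):

```python
def npv_one_period(cash_flow, discount_rate, investment):
    """Present value of a single end-of-year cash flow, net of the up-front investment."""
    return cash_flow / (1 + discount_rate) - investment

pv_equity = 52_000 / 1.14                          # value of the unlevered equity
business_value = npv_one_period(52_000, 0.14, 40_000)
print(round(pv_equity), round(business_value))     # 45614 5614
```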
The Levered Firm
Next, consider borrowing some of the money that you will need to start this T-shirt business. Although the cash flows from the business are uncertain, suppose you are certain that the business will generate at least \$18,000. (Perhaps you have a guaranteed order from the cycling competition sponsors.) If you borrowed \$17,000 at the risk-free interest rate of 4%, you would owe $17{,}000 \times 1.04 = 17{,}680$ to the lenders at the end of the year. Because you are certain that you will generate at least \$18,000 in cash, which is greater than \$17,680, you can borrow the \$17,000 without any risk of defaulting.
The \$17,000 will not be enough to pay for all the start-up costs. You will also need to raise some capital by selling equity. Because your firm will have some debt, or financial leverage, the equity that you raise will be known as levered equity. The equity holders expect the firm to generate \$52,000 in cash flows. Debt holders must be paid before equity holders, so this will leave $52{,}000 - 17{,}680 = 34{,}320$ for the shareholders.
The expected future cash flows generated by the business are determined by the productivity of its assets, not the manner in which those assets are financed. It is the present value of these expected future cash flows that determines the firm’s value. Thus, the firm’s value in perfect capital markets will not change as a result of the company taking on leverage.
Link to Learning
MM Proposition I
Nobel laureates Franco Modigliani and Merton Miller wrote influential papers exploring capital structure and the cost of a firm’s capital. They began by considering what would occur in a perfectly competitive market. One of the assumptions of this perfect capital market is that there are no taxes. The idea that the market value of the unlevered and levered firm is the same in perfect capital markets is known in the field of finance as MM Proposition I.
Visit Milken Institute’s 5-Minute Finance site to explore more about Modigliani and Miller’s contributions to the understanding of capital structure.
The value of your T-shirt business remains at \$45,614. You can calculate the value of the levered equity as
$\begin{aligned} \text{Value of Firm} &= D + E \\ 45{,}614 &= 17{,}000 + E \\ E &= 45{,}614 - 17{,}000 = 28{,}614 \end{aligned}$
17.15
Now, shareholders are willing to pay \$28,614 for ownership in this company. They expect to get \$34,320 in one year in return for purchasing this equity. What discount rate does this imply?
$\begin{aligned} \frac{34{,}320}{1 + r_E} &= 28{,}614 \\ \frac{34{,}320}{28{,}614} &= 1 + r_E \\ r_E &= 19.94\% \end{aligned}$
17.16
Notice that the expected return to shareholders has risen from 14% for the unlevered firm to 19.94% for the levered firm. Recall that the expected return to shareholders equals the risk-free rate plus a risk premium. The risk-free rate has remained 4%. With leverage, the risk premium rises from 10% to 15.94%.
Why does this risk premium increase? Recall that debt holders are paid before equity holders. Equity holders are residual claimants; they will only receive payment if there is money left over after the debt holders are fully paid. The business is risky. You are certain that the company will have cash flow of at least \$18,000 at the end of the year and that \$17,680 will be paid to the debt holders. Therefore, if the company performs poorly (perhaps bad weather results in the cancellation of much of the cycling competition) and the cash flows fall way below what you are expecting, there may be only several hundred dollars left for the shareholders.
When the firm was unlevered, if the cash flow at the end of the year was only \$18,000, the shareholders would receive \$18,000. When leverage is used, the same cash flow would result in shareholders receiving only \$320. The risk to the shareholders increases as leverage is used; thus, the risk premium that shareholders require also increases as leverage is used.
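The residual-claim logic above can be expressed directly: shareholders receive whatever cash flow is left after the debt is repaid, never less than zero. This sketch (helper name ours) reproduces both the expected and the downside payoff, and the implied return on the levered equity:

```python
def equity_payoff(cash_flow, debt_repayment):
    """Equity holders are residual claimants: they get whatever remains after debt holders."""
    return max(cash_flow - debt_repayment, 0)

debt_repayment = 17_000 * 1.04                      # 17,680 owed to the lenders
expected = equity_payoff(52_000, debt_repayment)    # expected case
downside = equity_payoff(18_000, debt_repayment)    # guaranteed-minimum case
implied_return = 34_320 / 28_614 - 1                # price 28,614, expected payoff 34,320
print(f"{expected:.0f} {downside:.0f} {implied_return:.2%}")  # 34320 320 19.94%
```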
Leverage and the WACC
What happens to the WACC as leverage is used? To figure this out, we must calculate the weights of debt and equity in the capital structure:
$D\% = \frac{17{,}000}{45{,}614} = 37.27\% \qquad E\% = \frac{28{,}614}{45{,}614} = 62.73\%$
17.17
In perfect capital markets, an assumption we are making for now, there are no taxes. Because we are using only debt and common stock, the weight of preferred stock is zero, and our WACC can be calculated as
$\text{WACC} = 37.27\% \times 0.04 \times (1 - 0) + 62.73\% \times 0.1994 = 1.49\% + 12.51\% = 14\%$
17.18
Notice that the use of leverage does not change the WACC. When only equity was used to finance the business, stockholders required a 14% expected return to encourage them to let the firm use their capital. When leverage was used, the debt holders only required a 4% return. However, the existence of debt holders, who stand in front of shareholders in the order of claimants, puts shareholders in a riskier position. There is a greater chance that the shareholders will not receive payment from this uncertain business. Thus, the shareholders require a higher rate of return to let the leveraged firm use their capital.
The cost-savings benefits of using lower-cost debt in your company’s capital structure are exactly offset by the higher return that shareholders require when leverage is used. Mathematically, the increase in the cost of equity when leverage is used will be proportional to the debt–equity ratio. Financial managers refer to this outcome as MM Proposition II. The relationship is expressed by the formula
$r_E = r_U + \frac{D}{E}\left(r_U - r_D\right)$
17.19
where $r_U$ is the required return to equity holders of the unlevered firm.
Table 17.4 shows how the cost of equity increases as the weight of debt in the capital structure increases. As the company uses more debt, the risk to equity holders increases. Because equity holders risk that there will be no residual money after bondholders are paid, the equity holders require a higher rate of return to invest in the company as its use of leverage increases. Although debt holders face less risk than equity holders, the risk that they face increases as the amount of debt the company takes on increases. Once the company’s debt exceeds its guaranteed cash flow, which is \$18,000 in our example, debt holders face some risk that the company will not be able to pay them. At that point, the cost of debt rises above the risk-free rate. As the weight of debt approaches 100%, the cost of debt capital approaches the cost of equity of the unlevered firm. In other words, if you financed the T-shirt business solely through the use of debt, the debt holders would require a 14% return because they would be bearing the entire risk of the business and would demand to be rewarded for doing so.
As the leverage of the firm increases, both the cost of debt capital and the cost of equity capital increase. However, as the firm's leverage increases, it is using proportionately more of the relatively cheaper source of capital (debt) and proportionately less of the relatively more expensive source of capital (equity). Thus, the WACC remains constant as leverage increases, despite the rising cost of each component.
| Amount of Debt | Amount of Equity | Weight of Debt | Weight of Equity | Cost of Debt | Cost of Equity | WACC |
|---|---|---|---|---|---|---|
| \$0 | \$45,614 | 0% | 100% | 0.0400 | 0.1400 | 14% |
| \$5,000 | \$40,614 | 11% | 89% | 0.0400 | 0.1523 | 14% |
| \$10,000 | \$35,614 | 22% | 78% | 0.0400 | 0.1681 | 14% |
| \$15,000 | \$30,614 | 33% | 67% | 0.0400 | 0.1890 | 14% |
| \$17,000 | \$28,614 | 37% | 63% | 0.0400 | 0.1994 | 14% |
| \$20,000 | \$25,614 | 44% | 56% | 0.0600 | 0.2025 | 14% |
| \$30,000 | \$15,614 | 66% | 34% | 0.0800 | 0.2553 | 14% |
| \$40,000 | \$5,614 | 88% | 12% | 0.1000 | 0.4250 | 14% |
Table 17.4 Alternative Capital Structures for Your T-Shirt Business
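The pattern in Table 17.4 (a rising cost of equity but a constant WACC) follows directly from MM Proposition II. This sketch (function name ours; perfect-markets, no-tax assumptions as in the text) recomputes several rows of the table:

```python
def mm2_cost_of_equity(r_u, r_d, debt, equity):
    """MM Proposition II: r_E = r_U + (D/E) * (r_U - r_D)."""
    return r_u + (debt / equity) * (r_u - r_d)

FIRM_VALUE, R_U = 45_614, 0.14  # total firm value and unlevered cost of equity

# (debt, cost of debt) pairs taken from Table 17.4
for debt, r_d in [(0, 0.04), (5_000, 0.04), (17_000, 0.04), (20_000, 0.06), (40_000, 0.10)]:
    equity = FIRM_VALUE - debt
    r_e = mm2_cost_of_equity(R_U, r_d, debt, equity)
    wacc = (debt / FIRM_VALUE) * r_d + (equity / FIRM_VALUE) * r_e
    print(f"D={debt:>6}  r_E={r_e:.4f}  WACC={wacc:.4f}")  # WACC is 0.1400 in every row
```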
The Impact of Taxes
In perfect capital markets, the choice of capital structure will not impact the value of the firm or the cost of the firm’s financing. In the real world, however, capital markets are not perfect. One of the important market imperfections is the presence of corporate taxes. Because the choice of capital structure can impact the taxes that a company pays, in the real world, capital structure can impact the cost of capital and the firm’s value.
Assume that your T-shirt business venture will result in earnings before interest and taxes (EBIT) of \$52,000 next year and that the corporate tax rate is 28%. If your company is unlevered, it has no interest expense, and its net income will be \$37,440, as shown in Table 17.5.
| | Without Leverage | With Leverage |
|---|---|---|
| EBIT | \$52,000.00 | \$52,000.00 |
| Interest Expense | 0.00 | 280.00 |
| Income before Taxes | 52,000.00 | 51,720.00 |
| Taxes (28%) | 14,560.00 | 14,481.60 |
| Net Income | \$37,440.00 | \$37,238.40 |
Table 17.5 Net Income and Leverage
If your company uses leverage, raising \$7,000 of financing by issuing debt with a 4% interest rate, it will have an interest expense of \$280. This lowers its taxable income to \$51,720 and its taxes to \$14,481.60. Because interest is a tax-deductible expense, using leverage lowers the company’s taxes.
Table 17.6 shows that the company’s net income is lower with leverage than it would be without leverage. In other words, debt obligations will reduce the value of the equity. However, less equity is needed because some of the firm is financed through debt. The important consideration is how the use of leverage changes the total amount of dollars available to all investors. Table 17.6 shows this impact.
| | Without Leverage | With Leverage |
|---|---|---|
| Interest Paid to Debt Holders | \$0.00 | \$280.00 |
| Amount Available to Stockholders | 37,440.00 | 37,238.40 |
| Total Available for All Investors | \$37,440.00 | \$37,518.40 |
Table 17.6 Total Dollars Available to Investors
Using leverage allows the firm to generate \$37,518.40 to pay its investors, compared to only \$37,440 that is available if the firm is unlevered. Where does the extra \$78.40 to pay investors come from? It comes from the reduction in taxes that the firm pays due to leverage. If the company uses no debt, it pays \$14,560 in taxes. The levered firm pays only \$14,481.60 in taxes, a savings of \$78.40.
The \$280 that the levered company pays in interest is shielded from the corporate tax, resulting in tax savings of $0.28 \times 280 = 78.40$. The additional amount available to investors because of the tax deductibility of interest payments is known as the interest tax shield. The interest tax shield is calculated as
$\text{Interest Tax Shield} = \text{Corporate Tax Rate} \times \text{Interest Payments}$
17.20
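The figures from Tables 17.5 and 17.6 can be reproduced with a short sketch (function name ours). Net income falls with leverage, but the total paid to all investors rises by exactly the interest tax shield:

```python
def after_tax_payout(ebit, interest, tax_rate):
    """Return (net income, total paid to all investors) for a given interest expense."""
    taxes = (ebit - interest) * tax_rate
    net_income = ebit - interest - taxes
    return net_income, net_income + interest

ni_unlevered, total_unlevered = after_tax_payout(52_000, 0, 0.28)
ni_levered, total_levered = after_tax_payout(52_000, 280, 0.28)
tax_shield = 0.28 * 280  # corporate tax rate times interest payments
print(round(total_levered - total_unlevered, 2), round(tax_shield, 2))  # 78.4 78.4
```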
When interest is a tax-deductible expense, the total value of the levered firm will exceed the value of the unlevered firm by the amount of this interest tax shield. The tax-advantage status of debt financing impacts the WACC. The WACC with taxes is calculated as
$\text{WACC} = D\% \times r_d(1 - T) + E\% \times r_e = \frac{D}{E + D} \times r_d(1 - T) + \frac{E}{E + D} \times r_e$
17.21
This formula can be written as
$\text{WACC} = \frac{D}{E + D} \times r_d - \frac{D}{E + D} \times r_d \times T + \frac{E}{E + D} \times r_e = \frac{D}{E + D} \times r_d + \frac{E}{E + D} \times r_e - \frac{D}{E + D} \times r_d \times T$
17.22
Thus, the WACC with taxes is lower than the pretax WACC because of the interest tax shield. The more debt the firm has, the greater the dollar amount of this interest tax shield. The presence of the interest tax shield encourages firms to use debt financing in their capital structures.
Learning Objectives
By the end of this section, you will be able to:
• Explain how increased use of leverage increases the possibility of financial distress.
• Explain how the possibility of financial distress impacts the cost of capital.
• Discuss the trade-offs a firm faces as it increases its leverage.
• Explain the concept of an optimal capital structure.
Debt and Financial Distress
The more debt a company uses in its capital structure, the larger the dollar value of the interest tax shield. Why, then, do we not see firms using a capital structure composed 100% of debt to maximize this interest tax shield?
The answer to this question lies in the fact that as a company increases its debt, there is a greater chance that the firm will be unable to make its required interest payments on the debt. If the firm has difficulty meeting its debt obligations, it is said to be in financial distress.
A firm in financial distress incurs both direct and indirect costs. The direct costs of financial distress include fees paid to lawyers, consultants, appraisers, and auctioneers. The indirect costs include loss of customers and suppliers.
Trade-Off Theory
Trade-off theory weighs the advantages and disadvantages of using debt in the capital structure. The advantage of using debt is the interest tax shield. The disadvantage of using debt is that it increases the risk of financial distress and the costs associated with financial distress.
A company has an incentive to increase leverage to exploit the interest tax shield. However, too much debt will make it more likely that the company will default and incur financial distress costs. Calculating the precise balance between these two is difficult if not impossible.
For companies with a low level of debt, the risk of default is low, and the main impact of an increase in leverage will be an increase in the interest tax shield. At some point, however, the tax savings that result from increasing the amount of debt in the capital structure will be just offset by the increased probability of incurring the costs of financial distress. For firms that have higher costs of financing distress, this point will be reached sooner. Thus, firms that face higher costs of financial distress have a lower optimal level of leverage than firms that face lower costs of financial distress.
Link to Learning
Netflix
Netflix has experienced phenomenal growth in the past 25 years. Starting as a DVD rental company, Netflix quickly shifted its model to content streaming. In recent years, the company has become a major producer of content, and it is currently the largest media/entertainment company by market capitalization. It is expensive for Netflix to fund the production of this content. Netflix has funded much of its content through selling debt.
You can view the company’s explanation of this capital structure choice by looking at the answers the company provides to common investor questions. You can also find the company’s financial statements on its website and see how the level of debt on Netflix’s balance sheet has increased over the past few years.
Figure 17.4 demonstrates how the value of a levered firm varies with the level of debt financing used. Vu is the value of the unlevered firm, or the firm with no debt. As the firm begins to add debt to its capital structure, the value of the firm increases due to the interest tax shield. The more debt the company takes on, the greater the tax benefit it receives, up until the point at which the company’s interest expense exceeds its earnings before interest and taxes (EBIT). Once the interest expense equals EBIT, the firm will have no taxable income. There is no tax benefit from paying more interest after that point.
Figure 17.4 Maximum Value of a Levered Firm
As the firm increases debt and increases the value of the tax benefit of debt, it also increases the probability of facing financial distress. The magnitude of the costs of financial distress increases as the debt level of the company rises. To some degree, these costs offset the benefit of the interest tax shield.
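The cap on the tax benefit described above (no benefit once interest expense exceeds EBIT) can be sketched in a simplified one-period view. The function and figures are illustrative, not from the text:

```python
def annual_tax_benefit(ebit, interest, tax_rate):
    """One-period sketch: interest shields income only until taxable income reaches zero."""
    deductible = min(interest, max(ebit, 0))
    return tax_rate * deductible

print(annual_tax_benefit(52_000, 280, 0.28))     # interest fully deductible
print(annual_tax_benefit(52_000, 60_000, 0.28))  # benefit capped once interest exceeds EBIT
```

A fuller treatment would also subtract the expected costs of financial distress, which grow with the debt level; the optimum in Figure 17.4 is where the marginal tax benefit equals the marginal distress cost.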
The optimal debt level occurs at the point at which the value of the firm is maximized. A company will use this optimal debt level to determine what the weight of debt should be in its target capital structure. The optimal capital structure is the target. Recall that the market values of a company’s debt and equity are used to determine the costs of capital and the weights in the capital structure. Because market values change daily due to economic conditions, slight variations will occur in the calculations from one day to the next. It is neither practical nor desirable for a firm to recalculate its optimal capital structure each day.
Also, a company will not want to make adjustments for minor differences between its actual capital structure and its optimal capital structure. For example, if a company has determined that its optimal capital structure is 22.5% debt and 77.5% equity but finds that its current capital structure is 23.1% debt and 76.9% equity, it is close to its target. Reducing debt and increasing equity would require transaction costs that might be quite significant.
Table 17.7 shows the average WACC for some common industries. The calculations are based on corporate information at the end of December 2020. A risk-free rate of 3% and a market-risk premium of 5% are assumed in the calculations. You can see that the capital structure used by firms varies widely by industry. Companies in the online retail industry are financed almost entirely through equity capital; on average, less than 7% of the capital comes from debt for those companies. On the other hand, companies in the rubber and tires industry tend to use a heavy amount of debt in their capital structure. With a debt weight of 63.62%, almost two-thirds of the capital for these companies comes from debt.
| Industry Name | Equity Weight | Debt Weight | Beta | Cost of Equity | Tax Rate | After-Tax Cost of Debt | WACC |
|---|---|---|---|---|---|---|---|
| Retail (online) | 93.33% | 6.67% | 1.16 | 8.82% | 2.93% | 2.19% | 8.38% |
| Computers/peripherals | 91.45% | 8.55% | 1.18 | 8.92% | 3.71% | 1.88% | 8.32% |
| Household products | 87.07% | 12.93% | 0.73 | 6.65% | 5.06% | 2.19% | 6.07% |
| Drugs (pharmaceutical) | 84.62% | 15.38% | 0.91 | 7.54% | 1.88% | 2.19% | 6.72% |
| Retail (general) | 82.41% | 17.59% | 0.90 | 7.49% | 12.48% | 1.88% | 6.51% |
| Beverages (soft) | 82.24% | 17.76% | 0.79 | 6.96% | 3.32% | 2.19% | 6.11% |
| Tobacco | 76.74% | 23.26% | 0.72 | 6.61% | 8.69% | 1.88% | 5.51% |
| Homebuilding | 75.34% | 24.66% | 1.46 | 10.29% | 15.91% | 2.19% | 8.30% |
| Food processing | 75.18% | 24.82% | 0.64 | 6.18% | 8.56% | 2.19% | 5.19% |
| Restaurants/dining | 74.79% | 25.21% | 1.34 | 9.72% | 3.19% | 2.19% | 7.82% |
| Apparel | 71.74% | 28.26% | 1.10 | 8.49% | 4.75% | 2.19% | 6.71% |
| Farming/agriculture | 68.94% | 31.06% | 0.87 | 7.37% | 6.45% | 1.88% | 5.67% |
| Packaging & containers | 64.47% | 35.53% | 0.92 | 7.61% | 15.67% | 1.88% | 5.57% |
| Food wholesalers | 64.10% | 35.90% | 1.03 | 8.17% | 0.52% | 2.19% | 6.02% |
| Hotels/gaming | 63.60% | 36.40% | 1.56 | 10.82% | 2.02% | 2.19% | 7.68% |
| Telecom. services | 54.60% | 45.40% | 0.66 | 6.30% | 3.93% | 1.40% | 4.07% |
| Retail (grocery and food) | 51.46% | 48.54% | 0.24 | 4.21% | 13.52% | 2.19% | 3.23% |
| Air transport | 38.26% | 61.74% | 1.61 | 11.04% | 6.00% | 2.19% | 5.58% |
| Rubber & tires | 36.38% | 63.62% | 1.09 | 8.47% | 5.30% | 1.88% | 4.28% |
Table 17.7 Capital Structure, Cost of Debt, Cost of Equity, and WACC for Selected Industries (data source: Aswath Damodaran Online)
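Each WACC in Table 17.7 is just the weighted average of the two component costs, where the debt cost shown is already after-tax. A sketch (function name ours) verifying the first and last rows:

```python
def industry_wacc(weight_equity, cost_equity, weight_debt, after_tax_cost_debt):
    """Weighted average of equity and debt costs; the debt cost is already after-tax."""
    return weight_equity * cost_equity + weight_debt * after_tax_cost_debt

retail_online = industry_wacc(0.9333, 0.0882, 0.0667, 0.0219)
rubber_tires = industry_wacc(0.3638, 0.0847, 0.6362, 0.0188)
print(f"{retail_online:.2%}  {rubber_tires:.2%}")  # 8.38%  4.28%
```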
Industries that have high betas, such as hotels/gaming and air transport, have high equity costs of capital. More recession-proof industries, such as food processing and household products, have low betas and low equity costs of capital. The WACC for each industry ends up being influenced by the weights of equity and debt the company chooses, the riskiness of the industry, and the tax rates faced by companies in the industry. | textbooks/biz/Finance/Principles_of_Finance_(OpenStax)/17%3A_How_Firms_Raise_Capital/17.06%3A_Optimal_Capital_Structure.txt |
Learning Objectives
By the end of this section, you will be able to:
• Calculate the required return to preferred shareholders.
• Calculate the WACC of a firm that issues preferred shares.
• Discuss how issuing new equity impacts the cost of equity capital.
• Explain the functionality of convertible debt.
A company can finance its assets in two ways: through debt financing and through equity financing. Thus far, we have treated these sources as two broad categories, each with a single cost of capital. In reality, a company may have different types of debt or equity, each with its own cost of capital. The same principle would apply: the WACC of the firm would be calculated using the weights of each of these types multiplied by the cost of that particular type of debt or equity capital.
Preferred Shares
Although our calculations of WACC thus far have assumed that companies finance their assets only through debt and common equity, we saw at the beginning of the chapter that the basic WACC formula is
$\text{WACC} = D\% \times r_d(1 - T) + P\% \times r_{pfd} + E\% \times r_e$
17.23
In addition to common stock, a company can raise equity capital by issuing preferred stock. Owners of preferred stock are promised a fixed dividend, which must be paid before any dividends can be paid to common stockholders.
In the order of claimants, preferred shareholders stand in line between bondholders and common shareholders. Bondholders are paid interest before preferred shareholders are paid annual dividends. Preferred shareholders are paid annual dividends before common shareholders are paid dividends. Should the company face bankruptcy, the same priority of claimants is followed in settling claims—first bondholders, then preferred stockholders, with common stockholders standing at the end of the line.
Preferred stock shares some characteristics with debt financing. It has a promised cash flow to its holders. Unlike common equity, the dividend on preferred stock is fixed and known. Also, there are consequences if those preferred dividends are not paid. Common shareholders cannot receive any dividends until preferred dividends are paid, and in some cases, preferred shareholders receive voting rights until they are paid the dividends that are due. However, preferred shareholders cannot force the company into bankruptcy as debt holders can. For tax and legal purposes, preferred stock is treated as equity.
The cost of the preferred equity capital is calculated using the formula
$r_{pfd} = \frac{Div_{pfd}}{P_{pfd}}$
17.24
Suppose that Greene Building Company has issued preferred stock that pays a dividend of \$2.00 each year. If this preferred stock is selling for \$21.80 per share, then the company’s cost of preferred stock is
$r_{pfd} = \frac{2.00}{21.80} = 9.17\%$
17.25
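Because the preferred dividend is a fixed perpetuity, the cost of preferred equity is simply the dividend divided by the current price. A sketch (function name ours) reproducing the Greene figure:

```python
def cost_of_preferred(annual_dividend, price):
    """Preferred stock as a perpetuity: r_pfd = annual dividend / current share price."""
    return annual_dividend / price

print(f"{cost_of_preferred(2.00, 21.80):.2%}")  # 9.17%
```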
Think It Through
Calculating Common Equity Financing
Greene Building Company uses 40% debt, 15% preferred stock, and 45% common stock in its capital structure. The yield to maturity on the company’s bonds is 7.02%. The cost of preferred equity is 9.17%. In the most recent year, Greene paid a dividend of \$3.15 to its common shareholders. This dividend has been growing at a rate of 3.0% per year, which is expected to continue in the future. The company’s common stock is trading for \$32.25 per share. Greene pays a corporate tax rate of 21%. Estimate the WACC for Greene.
First, estimate the cost of common equity using the constant dividend growth model:

$r_e = \frac{3.15 \times 1.03}{32.25} + 0.03 = 0.1006 + 0.03 = 0.1306 = 13.06\%$

17.26

Then place the weights and the costs for each component of capital in the WACC formula:

$\begin{aligned} \text{WACC} &= D\% \times r_d(1 - T) + P\% \times r_{pfd} + E\% \times r_e \\ &= 40\% \times 0.0702 \times (1 - 0.21) + 15\% \times 0.0917 + 45\% \times 0.1306 \\ &= 2.218\% + 1.3755\% + 5.8770\% = 9.4705\% \end{aligned}$

17.27
The WACC for Greene Building Company is estimated to be 9.47%. Note that debt financing is the cheapest cost of capital for Greene. The reason for this is twofold. First, because debt holders face the least amount of risk because they are paid first in the order of claimants, they require a lower return. Second, because interest payments are tax-deductible, the interest tax shield lowers the effective cost of debt to the company. Preferred shareholders will require a higher rate of return than debt holders, 9.17%, because they are later in the order of claimants. Common shareholders are the residual claimants, standing at the end of the line to receive payment. After all other claimants are paid, any remaining money belongs to the shareholders. If this residual amount is small, the common shareholders receive a small payment. If there is nothing left after all other claimants have been paid, common shareholders receive nothing. Thus, common shareholders have the greatest amount of risk and require the highest rate of return.
Also, note that the weights for debt, preferred stock, and common stock in the capital structure sum to 100%. The company must finance 100% of its assets.
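The Greene calculation above can be checked with a short Python sketch. The `wacc` function is illustrative, not from any library; the weights and component costs are the ones given in the problem:

```python
# Three-component WACC: after-tax debt, preferred stock, and common equity.

def wacc(w_debt, r_debt, tax_rate, w_pfd, r_pfd, w_equity, r_equity):
    """Weighted average cost of capital with debt, preferred, and common equity."""
    assert abs(w_debt + w_pfd + w_equity - 1.0) < 1e-9, "weights must sum to 100%"
    after_tax_debt = r_debt * (1 - tax_rate)  # interest tax shield lowers the debt cost
    return w_debt * after_tax_debt + w_pfd * r_pfd + w_equity * r_equity

# Greene Building Company's numbers from the example above
greene = wacc(
    w_debt=0.40, r_debt=0.0702, tax_rate=0.21,
    w_pfd=0.15, r_pfd=0.0917,
    w_equity=0.45, r_equity=0.1306,
)
print(f"{greene:.4%}")  # roughly 9.47%
```

Because the weights must describe 100% of the firm's financing, the helper asserts that they sum to one before computing the average.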
Issuing New Common Stock
An existing firm can acquire equity capital to expand its assets in two ways: the retention of earnings or the sale of new shares of stock. Thus far in the chapter, the cost of equity capital calculations have assumed that the earnings were being retained for equity capital financing.
The net income that is left after all expenses are paid is the residual income that belongs to the shareholders. Instead of receiving a fixed payment for letting the firm use their capital (like bondholders who receive fixed interest payments), the reward to shareholders for letting the company use their capital varies from year to year. In a good year, net income and the reward to shareholders is high. In a poor year, net income is low or perhaps even negative.
The net income can either be paid immediately and directly to shareholders in the form of dividends or be retained within the company to fund growth. Shareholders are willing to allow the company to retain these earnings because they expect that the money will be used to fund profitable projects, leading to an even larger reward for shareholders in future years.
Although managers do not need to actively solicit the funds that are retained to fund the business, managers cannot view these funds as costless. The shareholders will require a return on those funds to entice them to allow the company to delay paying the dollars to them immediately in terms of a dividend.
Suppose a company has \$1 million in net income one year. If it pays \$250,000 in dividends and retains \$750,000, then it can finance \$750,000 more in assets. If the company has a capital structure of 25% debt and 75% equity and wants to maintain that capital structure, it must increase its debt by \$250,000 to balance the increase in equity. Thus, the company would be increasing its total financing by \$1 million. Of that financing, 25% would be debt financing, and 75% would be equity financing.
To increase its assets by more than \$1 million, the company would need to decide to either change its capital structure or issue new stock. Consider the firm represented by the market-value balance sheet in Figure 17.5. The firm has \$900 million in assets. These assets are financed by \$225 million in debt capital and \$675 million in equity capital, resulting in a capital structure of 25% debt and 75% equity.
Figure 17.5 Market-Value Balance Sheet for a Company with \$900 Million in Assets and a Capital Structure of 25% Debt and 75% Equity
The retained earnings of \$750,000 cause the equity on the balance sheet to increase to \$675.75 million. The company could sell \$250,000 in bonds, increasing its debt to \$225.25 million. Figure 17.6 shows the impact on the balance sheet. The company has increased its financing by \$1,000,000 and can expand assets by \$1,000,000. The capital structure remains 25% debt and 75% equity.
Figure 17.6 Balance Sheet with \$1 Million Growth Financed through Retained Earnings and New Debt
What if the economy is in an expansionary period and this company thinks it has the opportunity to grow at a rate of 5%? The company knows that it will need more assets to be able to grow. If it needs 5% more assets, its assets will need to increase to \$945 million. To increase the left-hand side of its balance sheet, the company will also need to increase the right-hand side of the balance sheet.
Where does the company get the \$45 million in capital? With \$750,000 in retained earnings, the company can increase its equity to \$675.75 million, but if the remainder of \$44.25 million were financed through debt, the company’s capital structure would change. Its weight of debt would increase to $\frac{\$225{,}000{,}000 + \$44{,}250{,}000}{\$945{,}000{,}000} = 0.2849 = 28.49\%$.
If the company has determined that its optimal capital structure is 25% debt and 75% equity, financing the majority of the growth through debt would cause it to stray from these levels. Funding the growth while keeping the capital structure the same would require the firm to issue new shares. Figure 17.7 shows how the firm would need to finance \$45 million in growth while maintaining its desirable capital structure. The firm would need to increase equity capital to \$708.75 million; retained earnings could provide \$750,000, but \$33 million of new equity would need to be sold.
Figure 17.7 Balance Sheet with \$45,000,000 in Financing Coming from Debt, Retained Earnings, and New Stock
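The financing split in Figure 17.7 follows directly from the target weights. A minimal Python sketch, using an illustrative `financing_split` helper with the example's numbers:

```python
# Divide a financing need between new debt, retained earnings, and newly
# issued stock while preserving the target capital structure.

def financing_split(new_financing, debt_weight, retained_earnings):
    """Return (new_debt, new_stock) that preserves the target debt/equity mix."""
    new_debt = new_financing * debt_weight
    equity_needed = new_financing * (1 - debt_weight)
    new_stock = equity_needed - retained_earnings  # remainder must be sold as shares
    return new_debt, new_stock

# $45 million of growth, 25% debt target, $750,000 of retained earnings
debt, stock = financing_split(45_000_000, 0.25, 750_000)
print(debt, stock)  # 11250000.0 33000000.0
```

This reproduces the figure: \$11.25 million of new debt, \$750,000 of retained earnings, and \$33 million of newly issued stock.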
Investors who are providing common equity financing require a return to entice them to let the company use their money. If this company has paid \$0.50 per share in dividends to shareholders and this dividend is expected to increase by 3% each year, we can use the constant dividend growth model to estimate how much common shareholders require. If the stock is trading for \$8.00 per share, the cost of common equity financing is estimated as
$r_e = \frac{Div_1}{P_0} + g = \frac{0.515}{8} + 0.03 = 0.0644 + 0.03 = 0.0944 = 9.44\%$
17.28
If, however, the firm must issue more equity, its cost of equity for those additional shares will be higher than 9.44%. Even if shareholders are willing to pay \$8.00 per share for the stock, the firm will incur flotation costs; this means the firm will not receive the entire \$8.00 to use to finance new assets and generate a profit for shareholders. Flotation costs include the costs of filing with the Securities and Exchange Commission (SEC) as well as the fees paid to investment bankers to place the new shares.
When new equity must be issued to finance the company, the flotation costs must be subtracted from the price of the stock to determine the net proceeds the firm will receive. The cost of this new equity capital is calculated as
$r_{e\text{-}new} = \frac{Div_1}{P_0 - F} + g$
17.29
where F represents the flotation costs of the new stock issue. If, in this example, the flotation cost is \$0.25 per share, then the cost of raising new equity capital is
$r_{e\text{-}new} = \frac{Div_1}{P_0 - F} + g = \frac{0.515}{8 - 0.25} + 0.03 = 0.0665 + 0.03 = 0.0965 = 9.65\%$
17.30
Issuing new common equity is the most expensive form of raising capital. Equity capital is already expensive because the common shareholders are the residual claimants who will only be paid if all other claimants are paid. Because of this risk, they require a higher rate of return than providers of capital who have precedence in the order of claimants. Flotation costs must be added to this equity cost when new shares are issued to grow the company.
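Both estimates, the 9.44% cost of retained-earnings equity and the 9.65% cost of newly issued equity, can be reproduced with a short Python sketch (the `cost_of_equity` helper is illustrative):

```python
# Constant dividend growth estimates of the cost of equity, with and without
# flotation costs, using the figures from the example: $0.50 last dividend,
# 3% growth, $8.00 stock price, and $0.25 flotation cost per share.

def cost_of_equity(last_div, growth, price, flotation=0.0):
    d1 = last_div * (1 + growth)        # next year's expected dividend, Div1
    return d1 / (price - flotation) + growth

retained  = cost_of_equity(0.50, 0.03, 8.00)        # ~9.44%
new_issue = cost_of_equity(0.50, 0.03, 8.00, 0.25)  # ~9.65%
print(f"{retained:.2%} vs {new_issue:.2%}")
```

Setting `flotation=0.0` as the default means the same function covers both the retained-earnings case and the new-issue case.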
Think It Through
Cost of Issuing New Equity
You are a financial manager for American Motor Works (AMW). The target capital structure for the company is 30% debt and 70% equity. You know that your company’s after-tax cost of debt is 4.6%. Your company paid a dividend of \$3 per share last year, and it has a policy of increasing its dividend at a rate of 1.5% each year. AMW stock is currently trading for \$27.50 per share. You estimate that the company’s retained earnings will be \$10 million this year. If the company needs to issue new shares of stock, flotation costs are expected to be \$0.75 per share.
Given this information, you are tasked with calculating the company’s WACC. You need to provide an estimate of WACC if retained earnings are used and an estimate if new equity must be issued.
First, use the constant dividend growth model to estimate the cost of equity capital: $r_e = \frac{Div_1}{P_0} + g = \frac{3.00(1.015)}{27.50} + 0.015 = 0.1107 + 0.015 = 0.1257 = 12.57\%$. Using 12.57% as the equity cost of capital and 4.6% as the after-tax cost of debt, the WACC is calculated as
$WACC = D\% \times r_d \times (1 - T) + E\% \times r_e = 0.30(0.0460) + 0.70(0.1257) = 0.0138 + 0.0880 = 0.1018 = 10.18\%$
17.32
The WACC for AMW when it is using retained earnings for equity financing is 10.18%. If the company has \$10 million in retained earnings this year, its equity will increase by \$10 million. Given its target capital structure of 30% debt and 70% equity, AMW will be able to increase its overall financing by $\frac{\$10{,}000{,}000}{0.70} = \$14{,}285{,}714$ by using its retained earnings and issuing new debt of \$4,285,714.
If AMW wants to expand its assets by more than \$14,285,714 during the next year, it will need to issue new stock or increase the weight of debt in its capital structure. The company will incur flotation costs of \$0.75 per share to issue new stock. The cost of new equity capital will be
$r_{e\text{-}new} = \frac{Div_1}{P_0 - F} + g = \frac{3.00(1.015)}{27.50 - 0.75} + 0.015 = 0.1138 + 0.015 = 0.1288 = 12.88\%$
17.33
With this more expensive newly issued equity capital, AMW’s WACC will become
$WACC = D\% \times r_d \times (1 - T) + E\% \times r_e = 0.30(0.0460) + 0.70(0.1288) = 0.0138 + 0.0902 = 0.1040 = 10.40\%$
17.34
If AMW wants to take on a large project that requires investment in more than \$14,285,714 worth of assets, such as building a new production facility, the company will need to issue more equity and will face a higher WACC than when using retained earnings as its equity financing.
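The two WACC estimates for AMW can be verified with a brief Python sketch. The helper functions are illustrative; the after-tax cost of debt (4.6%) is taken as given, as in the problem:

```python
# Compare AMW's WACC when equity comes from retained earnings vs. new shares.

def div_growth_cost(last_div, growth, price, flotation=0.0):
    """Constant dividend growth model, net of per-share flotation costs."""
    return last_div * (1 + growth) / (price - flotation) + growth

def wacc_two_part(w_debt, after_tax_rd, r_equity):
    """WACC for a capital structure with only debt and common equity."""
    return w_debt * after_tax_rd + (1 - w_debt) * r_equity

re_retained = div_growth_cost(3.00, 0.015, 27.50)        # ~12.57%
re_new      = div_growth_cost(3.00, 0.015, 27.50, 0.75)  # ~12.88%

print(f"{wacc_two_part(0.30, 0.046, re_retained):.2%}")  # ~10.18%
print(f"{wacc_two_part(0.30, 0.046, re_new):.2%}")       # ~10.40%
```

The 22-basis-point gap between the two results is driven entirely by the \$0.75 flotation cost on newly issued shares.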
Convertible Debt
Some companies issue convertible bonds. These corporate bonds have a provision that gives the bondholder the option of converting each bond held into a fixed number of shares of common stock. The number of common shares the bondholder would receive for each bond is known as the conversion ratio.
Suppose that you own a convertible bond issued by Sheridan Sodas with a face value of \$1,000 and a conversion ratio of 20 shares that matures today. If you convert the bond today, you will receive 20 shares of Sheridan common stock. If you do not convert, you will receive \$1,000. If you convert, you are basically paying \$1,000 for 20 shares of Sheridan stock. The conversion price is $\frac{\$1{,}000}{20} = \$50$. If Sheridan is trading for more than \$50 per share, you would want to convert. If Sheridan is trading for less than \$50 per share, you would not want to convert; you would prefer the \$1,000. In other words, you will choose to convert whenever the stock price exceeds the conversion price at maturity.
A convertible bond gives the holder an option; the bondholder is able to choose between the face value cash or receiving shares of stock. Options always have a positive value to holders. It is always preferable to be able to choose \$1,000 or shares of stock than to simply be given \$1,000. There is a possibility that the shares of stock will be more valuable, and there is no way the choice can put you in a worse position.
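The conversion decision can be written as a payoff at maturity: the holder takes the larger of the face value and the market value of the conversion shares. A minimal sketch with the Sheridan numbers (the \$45 and \$60 test prices are hypothetical):

```python
# Payoff at maturity to a convertible bondholder: the option to convert means
# the holder receives max(face value, conversion shares x stock price).

def convertible_payoff(face_value, conversion_ratio, stock_price):
    conversion_value = conversion_ratio * stock_price
    return max(face_value, conversion_value)

conversion_price = 1_000 / 20             # $50 per share, as in the example
print(convertible_payoff(1_000, 20, 45))  # 1000 -> below $50, keep the face value
print(convertible_payoff(1_000, 20, 60))  # 1200 -> above $50, convert to stock
```

Because the payoff is a `max`, it can never fall below the face value, which is why the conversion option always has nonnegative value to the holder.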
Because holders of convertible bonds have the valuable option of conversion that holders of nonconvertible bonds do not have, convertible debt can be offered with a lower interest rate. It might seem as if the firm could lower its weighted average cost of capital by issuing convertible debt rather than nonconvertible debt. However, this is not the case. Remember that holders of convertible bonds choose whether they would prefer to convert the bond and become a stockholder or receive the face value of the bond at maturity.
If a bond has a face value of \$1,000, the convertible bond holders will consider whether the stock they can convert to is worth more than \$1,000. Only when the price of the stock has increased enough that the value of the stock received is more than \$1,000 will the bondholders convert. However, this means that instead of paying \$1,000, the firm is paying the bondholder in stock worth more than \$1,000. In essence, the firm (and the current shareholders) would be selling an equity position in the company for less than the market price of that equity position. The lower interest rate compensates for the possibility that conversion will occur.
17.1 The Concept of Capital Structure
Capital structure refers to how a company finances its assets. The two main sources of capital are debt financing and equity financing. A cost of capital exists because investors want a return equivalent to what they would receive on an investment with an equivalent risk to persuade them to let the company use their funds. The market values of debt and equity are used to calculate the weights of the components of the capital structure.
17.2 The Costs of Debt and Equity Capital
The yield to maturity (YTM) on a company’s outstanding bonds represents the return that debt holders are requiring to lend money to the company. Because interest expenses are tax-deductible, the cost of debt to the company is less than the YTM. The cost of equity capital is not directly observed, so financial managers must estimate this cost. Two common methods for estimating the cost of equity capital are the constant dividend growth model and the capital asset pricing model (CAPM).
17.3 Calculating the Weighted Average Cost of Capital
Calculate the weighted average cost of capital (WACC) using the formula
$WACC = D\% \times r_d \times (1 - T) + P\% \times r_{pfd} + E\% \times r_e$
17.35
Remember that the WACC is an estimate; different methods of estimating the cost of equity capital can lead to different estimations of WACC.
17.4 Capital Structure Choices
An unlevered firm uses no debt in its capital structure. A levered firm uses both debt and equity in its capital structure. In perfect financial markets, the value of the firm will be the same regardless of the firm’s decision to use leverage. With the tax deductibility of interest expenses, however, the value of the firm can increase through the use of debt. As the level of debt increases, the value of the interest tax shield increases.
17.5 Optimal Capital Structure
A company wants to choose a capital structure that maximizes its value. Although increasing the level of financial leverage, or debt, in the capital structure increases the value of the interest tax shield, it also increases the probability of financial distress. As the weight of debt in the capital structure increases, the return that providers of both debt and equity capital require to entice them to provide money to the firm increases because their risk increases. Trade-off theory suggests that the value of a company that uses debt equals the value of the unlevered firm plus the value of the interest tax shield minus financial distress costs.
17.6 Alternative Sources of Funds
Preferred stock is a type of equity capital; the owners of preferred stock receive preferential treatment over common stockholders in the order of claimants. A fixed dividend is paid to preferred shareholders and must be paid before common shareholders receive dividends. Equity capital can be raised through either retaining earnings or selling new shares of stock. Significant flotation costs are associated with issuing new shares of stock, making it the most expensive source of financing. Convertible debt allows the debt holders to convert their debt into a fixed number of common shares instead of receiving the face value of the bond at maturity.
17.09: Key Terms
after-tax cost of debt
the net cost of interest on a company’s debt after taxes; the firm’s effective cost of debt
capital
a company’s sources of financing
capital structure
the percentages of a company’s assets that are financed by debt capital, preferred stock capital, and common stock capital
conversion price
the face value of a convertible bond divided by its conversion ratio
conversion ratio
the number of shares of common stock receivable for each convertible bond that is converted
convertible bonds
bonds that can be converted into a fixed number of shares of common stock upon maturity
financial distress
when a firm has trouble meeting debt obligations
financial leverage
the debt used in a company’s capital structure
flotation costs
costs involved in the issuing and placing of new securities
interest tax shield
the reduction in taxes paid because interest payments on debt are a tax-deductible expense; calculated as the corporate tax rate multiplied by interest payments
levered equity
equity in a firm that has debt outstanding
net debt
a company’s total debt minus any cash or risk-free assets the company holds
preferred stock
equity capital that has a fixed dividend; preferred shareholders fall in between debt holders and common stockholders in the order of claimants
trade-off theory
a theory stating that the total value of a levered company is the value of the firm without leverage plus the value of the interest tax shield less financial distress costs
unlevered equity
equity in a firm that has no debt outstanding
weighted average cost of capital (WACC)
the average of a firm’s debt and equity costs of capital, weighted by the fractions of the firm’s value that correspond to debt and equity
17.10: CFA Institute
This chapter supports some of the Learning Outcome Statements (LOS) in this CFA® Level I Study Session. Reference with permission of CFA Institute.
1.
Sandage Auto Parts has debt outstanding with a market value of \$2 million. The company’s common stock has a book value of \$3 million and a market value of \$8 million. What weight is equity in Sandage’s capital structure?
1. 11%
2. 20%
3. 60%
4. 80%
2.
The capital structure of a company refers to ________.
1. whether the company purchases assets or liabilities with its equity
2. the proportion of debt and equity the company uses in financing its assets
3. the ability of the company to use its assets to generate equity for the owners
4. whether the company uses short-term assets or long-term assets to create its product
3.
Which of the following should be used when calculating the weights for a company’s capital structure?
1. Book values
2. Current market values
3. Historic accounting values
4. Par and face values
4.
Two methods for estimating a company’s cost of common stock capital are ________.
1. the historic method and the current method
2. the weighted valuation model and the beta model
3. the constant dividend growth model and the CAPM
4. the balance sheet method and the face value method
5.
Which of the following would be the most reasonable approach to calculating the cost of debt for a company?
1. Using the coupon rate on the company’s existing bonds
2. Using the interest amount reported on the income statement
3. Using the yield to maturity on the company’s existing bonds
4. Multiplying the amount of debt on the company’s balance sheet by the risk-free rate
6.
Net debt equals ________.
1. Debt/Equity
2. Debt × (1 – Tax Rate)
3. total debt minus the cash and risk-free assets the company owns
4. the yield to maturity of a company’s bonds divided by the tax rate
7.
Unlevered equity refers to ________.
1. the equity in a firm with no debt
2. a firm’s equity minus the firm’s debt
3. the equity in a firm in the absence of taxation and transaction costs
4. the portion of a firm’s capital structure that is financed by its owners
8.
In perfect capital markets, ________.
1. a company’s WACC does not change as it changes its capital structure
2. a company can lower its WACC by using more debt in its capital structure
3. a company can lower its WACC by using more equity in its capital structure
4. a company’s cost of debt capital is exactly equal to its cost of equity capital when the company uses 50% debt and 50% equity in its capital structure
9.
The interest tax shield occurs because ________.
1. interest payments are a tax-deductible expense
2. interest payments are made from after-tax income
3. investors require a lower rate of return the higher the company’s tax rate
4. investors require a lower rate of return the more debt the company incurs
10.
As a company increases the weight of debt in its capital structure, ________.
1. its cost of debt capital falls
2. the weight of equity capital also increases
3. the value of the interest tax shield decreases
4. its possibility of financial distress increases
11.
A company is said to be in financial distress if ________.
1. it is not fully exploiting the interest tax shield
2. it needs to raise capital to finance a new project
3. it has difficulty meeting its debt obligations
4. its cost of equity capital exceeds its cost of debt capital
12.
Issuing new stock ________.
1. costs the same as retaining earnings
2. will not impact a company’s WACC
3. is the most expensive source of capital because of flotation costs
4. is the cheapest source of capital because dividends do not have to be paid each year
13.
If a bond with a face value of \$1,000 has a conversion ratio of 10 shares, the conversion price is ________.
1. \$0.01
2. \$10
3. \$100
4. \$1,000
1.
Why does a company’s capital have a cost?
2.
Why is the rate that debt holders require to entice them to lend money to a company different from the company’s effective cost of debt capital?
3.
Assume that the corporate tax rate is 21%. Congress is discussing increasing the corporate tax rate to 32%. How might this change the capital structures that companies choose?
4.
Describe the order of claimants and how it impacts the returns that various providers of capital require to entice them to provide funding to a company.
5.
Explain what is meant by trade-off theory.
17.13: Problems
1.
SodaFizz has debt outstanding that has a market value of \$3 million. The company’s stock has a book value of \$2 million and a market value of \$6 million. What are the weights in SodaFizz’s capital structure?
2.
The yield to maturity on SodaFizz’s debt is 7.2%. If the company’s marginal tax rate is 21%, what is SodaFizz’s effective cost of debt?
3.
SodaFizz paid a dividend of \$2 per share last year; its dividend has been growing at a rate of 2% per year, and that growth rate is expected to continue into the future. The stock of SodaFizz is currently trading at \$19.50 per share. According to the constant dividend growth model, what is the cost of equity capital for SodaFizz?
4.
SodaFizz has a beta of 1.1. If the risk-free rate is 3% and the market risk premium is 11%, what is the cost of equity capital for SodaFizz according to the capital asset pricing model?
5.
Given the answers to Problems 1, 2, and 3, what is SodaFizz’s WACC when the constant dividend growth model is used to calculate its equity cost of capital?
6.
Given the answers to Problems 1, 2, and 4, what is SodaFizz’s WACC when the CAPM is used to calculate SodaFizz’s equity cost of capital?
7.
Shirley Manufacturing paid \$1 million in interest payments last year. The company is in the 21% tax bracket and has \$15 million in debt outstanding. How much was the company’s interest tax shield last year?
8.
King Medical Supplies has issued preferred stock that pays a yearly dividend of \$4 per share. This preferred stock is trading at a price of \$47 per share. What is King’s cost of preferred stock capital?
9.
McPherson Pharmaceutical has common stock that is trading for \$75 per share. The company paid a dividend of \$5.25 last year. This dividend is expected to increase at a rate of 3% per year. What is the cost of equity capital for McPherson? If McPherson issues new shares with a flotation cost of \$2 per share, what is the company’s cost of new equity?
17.14: Video Activity
Calculating the Weighted Average Cost of Capital
1.
What is the formula for calculating WACC? What do each of the components of this formula represent?
2.
In the video, the tax rate for Brick and Mortar Co. was 30%. What would your calculation of the company’s WACC be if there was a change in the tax code and the tax rate for Brick and Mortar Co. fell to 15%? Why does the tax rate impact a firm’s WACC? Do you think the managers of Brick and Mortar Co. should consider making any changes to its capital structure if the tax rate falls to 15%? Why or why not?
Capital Structure for Real Estate Companies
Click to view video content
3.
Why doesn’t one optimal capital structure exist for commercial real estate businesses?
4.
4. How do you think a family that runs a multigenerational commercial real estate business will think about risk compared to a young entrepreneur who is beginning to build a commercial real estate business? How do you think the capital structures of these two entities are likely to compare? How would those capital structures likely be linked to the risk profiles of the two companies?
Figure 18.1 Forecasts are an important financial tool. (credit: modification of “Red Post-It Label, Calculator and Ballpen” by photosteve101/flickr, CC BY 2.0)
Though no one in business has a crystal ball, managers must often do all they can to predict the future as accurately as possible. This is called forecasting. Accounting and finance professionals use past performance along with what they know about the business, its competitors, the economy, and the company’s plans for the future to assemble detailed financial forecasts. Forecasts are useful to many individuals for different reasons. A budget, a type of static forecast, helps accountants and managers see how their plans for the coming year can be achieved. It outlines sales targets and how much can be spent on cost of goods sold and expenses to achieve the company’s bottom-line (net income) targets. Investors use financial forecasts to help guide their decisions to buy, sell or hold stocks or to estimate future potential income through dividends. Perhaps most importantly, for our purposes in finance, forecasts are used to help predict and manage cash flows.
A business can have all the profit in the world at the end of the year, but if it doesn’t raise enough cash (liquidity) to pay the bills and pay its employees halfway through the year, it could still go bankrupt despite being profitable. Forecasting sales and expenses helps assemble a cash forecast—when sales will be collected and when expenses will be paid—so that financial managers can look forward far enough to have enough time to react accordingly and secure short- or long-term financing to meet gaps in cash flow.
18.02: The Importance of Forecasting
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Discuss how to use financial statements in forecasting firm financials.
• Explain why balance sheet items are important in forecasting a firm’s financial result.
• Explain why income statement items are important in forecasting a firm’s financial result.
In this section, we will briefly review some of the basic elements of financial statements and how we can analyze historical statements to help assemble financial forecasts. Financial forecasting is important to short- and long-term firm success. It helps a firm plan for the resources it will need, ensuring it will have enough cash on hand at the right time to cover daily operations and capital expenditures. It helps the firm communicate its future potential and manage its shareholders’ expectations. It also helps management assess future risk and set plans in place to mitigate that risk.
Financial forecasting involves using historical data, analysis tools, and other information we can gather to make an educated guess about the future financial performance of the firm. Historical figures provide a reasonable starting point. We use tools such as ratios, common size, and trend analysis to fine-tune our forecast. And finally, we assess what we know about the firm, its competitors, the economy, and anything else that might impact performance and further fine-tune our forecast from there.
It’s important to take a moment to consider the role of ethics in forecasting. Ethics is a huge issue in the world of accounting and finance in general, and forecasting is no different. There can be tremendous pressure on management to perform, to deliver certain levels of profit, and to meet shareholder expectations.
Forecasting, as you will learn throughout this chapter, is not an exact science. There is a great deal of subjectivity that can come into play when forecasting sales and expenses. Ethical behavior is crucial in this area. Those who create forecasts must have a firm understanding of where their data comes from, how reliable it is, and whether or not their assumptions and projections are reasonably justified.
Financial Statement Foundations
In Financial Statements, you were introduced to a firm called Clear Lake Sporting Goods. You learned about the four key financial statements: the income statement, balance sheet, statement of stockholders’ equity, and statement of cash flows. Each one provides a different view of the firm’s financial health and performance.
Clear Lake Sporting Goods is a small merchandising company (a company that buys finished goods and sells them to consumers) that sells hunting and fishing gear. It uses financial statements to understand its profitability and current financial position, to manage cash flow, and to communicate its finances to outside parties such as investors, governing bodies, and lenders. We will use Clear Lake’s company information and historical financial statements in this chapter as we explore its forecasting process. It’s important to note that in this chapter, we are focusing on just one firm and the one method its managers have chosen to forecast financial performance. There are a variety of types of firms in actual application, and they may choose to forecast their financial performance differently. We are demonstrating just one approach here.
The balance sheet shows all the firm’s assets, liabilities, and equity at one point in time. It also supports the accounting equation in a very clear and transparent way: one section of the balance sheet lists all current and noncurrent assets, and their total must equal the total of the other section, total liabilities and equity. In Figure 18.2, we see that Clear Lake Sporting Goods has total assets of \$250,000 in the current year, which balances with its total liabilities and equity of \$250,000.
Figure 18.2 Balance Sheet
The income statement reflects the performance of the firm over a period of time. It includes net sales, cost of goods sold, operating expenses, and net income. In Figure 18.3, we see that Clear Lake had \$120,000 in net sales, \$60,000 in cost of goods sold, and \$35,000 in net income in the current year.
Figure 18.3 Full Income Statement
Finally, the statement of cash flows is used to reconcile net income to cash balances. The statement begins with net income, then reflects adjustments to balance sheet accounts and noncash expenses. The statement of cash flows is broken down into three key categories: operating, investing, and financing. This allows users to clearly see what elements of the business are generating or using cash. In Figure 18.4, we see that Clear Lake had cash flow from operating activities of \$53,600, cash used for investing activities of (\$18,600), and cash used for financing activities of (\$15,000).
Figure 18.4 Statement of Cash Flows
Another key concept to remember about the financial statements is that the statement of cash flows is necessary to truly understand how the firm is using and generating cash. A common misconception is that if a firm reports net income on its income statement, then it must have plenty of cash, and if it reports a loss, it must be short on cash. Although this can be true, it’s not necessarily the case. Historically speaking, we need the statement of cash flows to get the full picture of how cash was used or generated in the past. Looking to the future, we need a cash flow forecast to plan for possible gaps in cash flow and, potentially, how to make the best use of any cash surplus. Throughout this chapter, we will see how to use historical financial statements to help develop the future cash forecast.
It’s also important to remember that the four financial statements are tied together. Net income from the income statement feeds into retained earnings, which live on the balance sheet. Equity balances on the balance sheet feed information to the statement of stockholders’ equity. And information from both the income statement (net income and noncash expenses) and the balance sheet (changes in working capital accounts) all feed into the statement of cash flows. These relationships will be helpful to understand when using historical statements and preparing forecasts.
Balance Sheet Analysis
Fully understanding the items that are on the balance sheet and how they relate to one another and to other financial statements will help you create a financial forecast. In Financial Statements, you learned that on the classified balance sheet, both assets and liabilities are broken down into current and noncurrent categories. You also know that the balance sheet must live up to its name—it must balance. This means that total assets (what the company owns) must equal total liabilities and equity (what the company owes).
You continued your financial statement development in Measures of Financial Health, where you saw how to use elements of the balance sheet to assess financial health. Ratios based on balance sheet accounts can be useful for understanding relationships between balance sheet items—how they related in the past and then, in forecasting, how those relationships might change or remain the same in the future. Examples of balance sheet ratios include the current ratio, quick ratio, cash ratio, debt-to-assets ratio, and debt-to-equity ratio.
In Financial Statements, you also explored common-size analysis. To prepare a common-size analysis of the balance sheet, every item on the statement must be expressed as a percentage of total assets. Seeing each item as a percentage—that is, seeing its relationship to total assets—is also helpful for assessing historical statements and how those percentages or relationships can be used to predict future balances in the forecast. For example, in Figure 18.5, you can see that Clear Lake’s current assets represented 80% of its total assets in both the current and prior years.
Figure 18.5 Common-Size Balance Sheet
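The common-size calculation can be sketched in a few lines of Python. Total assets of \$250,000 and the 80% current-asset share come from the Clear Lake example; the individual line-item dollar amounts below are illustrative placeholders, not figures from the text.

```python
# Common-size balance sheet: express each line item as a percentage of
# total assets. Total assets of $250,000 match the Clear Lake example;
# the line-item split is illustrative.
balance_sheet = {
    "Cash": 42_581,
    "Accounts receivable": 60_000,
    "Inventory": 97_419,
    "Equipment, net": 50_000,
}
total_assets = 250_000

common_size = {item: round(amount / total_assets * 100, 1)
               for item, amount in balance_sheet.items()}

# Current assets (cash, receivables, inventory) as a share of total assets
current_assets_pct = (common_size["Cash"]
                      + common_size["Accounts receivable"]
                      + common_size["Inventory"])
print(common_size)
print(f"Current assets: {current_assets_pct:.1f}% of total assets")
```

With these illustrative balances, current assets work out to 80% of total assets, matching the relationship described for Clear Lake.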
Income Statement Analysis
Like balance sheet analysis, income statement analysis is also quite helpful in preparing for the forecasting process. In Financial Statements, you learned that the income statement is commonly broken down into a few sections. Cost of goods sold is deducted from net sales to arrive at gross margin. Gross margin refers to the profits earned solely on the sale of the product itself, without consideration for the expenses incurred to run the business. Next, operating expenses are deducted to reflect operating income. Operating income reflects the profits of the core business function. Finally, other items, such as interest expense, tax expense, and other gains and losses, are deducted to arrive at net income, a.k.a. the bottom line. Each segment of the income statement is helpful for assessing past performance and estimating future expenses for a forecast.
You continued your financial statement development in Measures of Financial Health, where you saw how to use elements of the income statement to assess historical financial performance. Ratios based on the income statement can be useful for understanding relationships between net sales and expenses—how they related in the past and then, in forecasting, how those relationships might change or remain the same. Examples of income statement ratios include gross margin, operating margin, and profit margin. Common ratios that incorporate items from both the balance sheet and the income statement include return on assets (ROA), return on equity (ROE), inventory turnover, accounts receivable turnover, and accounts payable turnover.
Link to Learning
Performance Trends
Review the most recent annual report for Big 5 Sporting Goods. Review the company’s sales and gross margins for the current and past two years. How is their performance? Are their sales trending up or down? Why might the gross margin have increased or decreased?
In Financial Statements, you also explored common-size analysis. To prepare a common-size analysis of the income statement, every item on the statement must be expressed as a percentage of net sales. Seeing each item as a percentage, in terms of its relationship to total sales, is also helpful for assessing historical statements and how those percentages or relationships can be used to predict future balances in the forecast. For example, in Figure 18.6, you can see that Clear Lake’s cost of goods sold represented 50% of its net sales in both the current and prior years.
Figure 18.6 Common-Size Income Statement
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Explain how sales are the main driver for a financial forecast.
• Determine a past time period to formulate the basis for a financial forecast.
• Explain the advantages and disadvantages of using past data to forecast future financial performance.
• Calculate past sales growth averages.
• Justify adjusting relationships when forecasting future financial performance.
In this section of the chapter, you will begin to explore the first step of creating a forecast: forecasting sales. We will discuss common time frames for sales forecasts and why we use historical data in our forecasts (but only with caution), and we will work through the process of forecasting future sales. We will be using the percent-of-sales method to forecast some expenses for Clear Lake Sporting Goods, the example used throughout the chapter. This method relies on sales data, further highlighting why accuracy in forecasting sales is crucial.
Sales as the Driver
A significant portion of a business’s costs are driven by how much it sells. Thus, the sales forecast is the necessary first step in preparing a financial forecast. Common costs driven by sales include direct product costs, direct labor costs, and other key variable costs (i.e., costs that vary proportionately to sales), such as sales commissions.
Looking to the Past
Forecasting sales is not always an easy task, as no one knows the future. We can, however, use the information we do have to forecast future sales with the greatest accuracy possible. Most firms start by looking at the past. A firm may look at past sales from a variety of prior periods. It’s common to look at the past 12 months to estimate the coming 12 months. Looking at 12 consecutive months helps identify seasonality of sales trends, what time of year sales tend to drop off and when they increase, possible sales spikes that might reoccur, and any other trends that tend to appear over a 12-month period. In Figure 18.7, we see Clear Lake’s sales by month for the past 12 months.
Past data is often used in conjunction with probabilities and weighted average calculations derived from probabilities. Though used in several areas of forecasting, this approach is particularly common in drafting the sales forecast. Using multiple scenarios and the probability of each scenario occurring is a common approach to estimating future sales.
Figure 18.7 Historical Sales Data
We can see at first glance that sales remain fairly steady from January to March. Sales then go up significantly in April and May, seem to peak in June, taper off a bit in July, then decline steeply from August to the end of the year, with the lowest sales being in November and December. Though not exact, it’s easy to quickly see that sales follow a seasonal pattern. We will focus on just one year of data here to keep things simple. However, it’s important to note that when a firm has a seasonal sales pattern, it normally uses more than one year of data to detect and evaluate the pattern. It’s not uncommon for firms to have a seasonal sales pattern that fluctuates based on an external factor such as weather patterns, patterns in business or demand, or other factors such as holidays. Common examples might include farm-based businesses that function on a weather pattern for harvesting and selling crops or a toy company whose sales fluctuate around gift-giving holidays.
This knowledge is helpful when assembling a first pass at the next year’s sales forecast. Using common-size and horizontal (trend) analyses on sales is also helpful, as shown in Figure 18.8. We can see the exact percentages that sales went up or down each month:
• In January, the company had sales of \$9,000, which was 7.1% (\$9,000/\$126,000) of the total annual sales.
• In June, the company had \$19,000 in sales, which was 15.1% (\$19,000/\$126,000) of the total annual sales and 211% (\$19,000/\$9,000) of January sales.
Figure 18.8 Historical Sales Data as Percentages
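The percentage calculations above can be scripted directly. The dollar figures are the ones quoted in the text; the variable names are our own.

```python
# Express each month's sales as a percentage of annual sales, and
# compare a month against a baseline month (here, June vs. January).
annual_sales = 126_000
january_sales = 9_000
june_sales = 19_000

jan_pct_of_year = january_sales / annual_sales * 100   # about 7.1%
june_pct_of_year = june_sales / annual_sales * 100     # about 15.1%
june_vs_january = june_sales / january_sales * 100     # about 211%

print(f"January: {jan_pct_of_year:.1f}% of annual sales")
print(f"June:    {june_pct_of_year:.1f}% of annual sales, "
      f"{june_vs_january:.0f}% of January sales")
```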
Once a baseline in the 12-month period is assessed, it can also be helpful to look for trends in other ways. For example, the past several years might be assessed to see if there is a trend in total growth or decline for those years on a summary basis or by period. Clear Lake Sporting Goods had sales in the current year of \$126,000, in the prior year of \$105,000, and two years ago of \$89,000. This reflects a 20% increase and an 18% increase, respectively. It might be reasonable to expect a roughly 18 to 20% increase in total sales in the future with only this information in mind. Keep in mind that we will learn about many other factors to consider in the forecast, so the 18 to 20% increase is a good general guideline to consider along with other factors.
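The year-over-year growth rates above can be computed as follows, using the three annual totals from the text.

```python
# Year-over-year sales growth from Clear Lake's last three years.
sales_by_year = [89_000, 105_000, 126_000]  # two years ago, prior year, current

growth_rates = [
    (later / earlier - 1) * 100
    for earlier, later in zip(sales_by_year, sales_by_year[1:])
]
print([f"{g:.0f}%" for g in growth_rates])  # roughly 18%, then 20%
```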
Think It Through
Sales Forecast for Big 5 Sporting Goods
Review the 2020 annual report for Big 5 Sporting Goods. Locate the consolidated statements of operations on page F-7. Using the company’s net sales figures for the current and prior years, what percentage might you recommend for their sales forecast for the next year?
Looking at Figure 18.9, assume that Clear Lake Sporting Goods decides to take its first pass at a forecast using the more conservative estimate of 18% total sales growth. The company could consider last year’s sales of \$126,000 and increase them by 18% to arrive at total forecasted sales for next year of \$148,680 (\$126,000 × 118%). Next, to get the monthly sales, the company could use the same percent of the total for each month that it did for the previous year. For example, sales in January of last year were 7.1% of the full year’s sales. To find the forecast for the next year, the company would take the forecasted sales of \$148,680 for the year and multiply that by 7.1% to get \$10,620 for January. The process is repeated for each month to get the full year.
Figure 18.9 Forecasted Sales Data
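This first-pass allocation can be sketched in Python. Only January and June are shown because those are the months quoted in the text; note that using each month’s exact share of last year’s sales (rather than the rounded 7.1%) reproduces the book’s \$10,620 January figure.

```python
# First-pass forecast: grow total sales 18%, then allocate to months
# using each month's exact share of last year's sales.
prior_monthly_sales = {"Jan": 9_000, "Jun": 19_000}  # months quoted in the text
prior_annual_sales = 126_000
growth = 0.18

forecast_annual = prior_annual_sales * (1 + growth)  # $148,680
forecast_monthly = {
    month: forecast_annual * amount / prior_annual_sales
    for month, amount in prior_monthly_sales.items()
}
print(f"Forecasted annual sales: ${forecast_annual:,.0f}")
print({m: f"${v:,.0f}" for m, v in forecast_monthly.items()})
```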
Keep in mind that this is only a starting point. These estimates will be reviewed, assessed, and updated as more information and other factors are taken into consideration.
It can also be helpful to look at a shorter period, perhaps just the last few months, on a more detailed basis (by department, by customer, etc.) to see if there are any possible new trends beginning to develop that might be an indicator of performance in the coming year. For example, Clear Lake Sporting Goods might look at detailed sales records for October, November, and December and see that it had an old product line that was discontinued in early October, which contributed to a 2% reduction in monthly sales. This reduction in monthly sales will likely continue into the new year until the new line the company has signed on begins arriving in stores. Thus, the management team feels they should reduce their first quarter monthly estimates by 2%, as reflected in Figure 18.10. January is now \$10,408 (\$10,620 × 98%), for example.
Figure 18.10 Adjusted Forecasted Sales Data
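The first-quarter adjustment is a simple scaling step. January’s \$10,620 first-pass estimate is from the text; the February and March values below are placeholders for illustration.

```python
# Adjust first-quarter monthly estimates down 2% for the discontinued
# product line. January's base of $10,620 is from the text; the other
# months are illustrative.
first_pass = {"Jan": 10_620, "Feb": 10_620, "Mar": 10_620}
q1_adjustment = 0.98  # 2% reduction

adjusted = {m: round(v * q1_adjustment) for m, v in first_pass.items()}
print(adjusted)  # January becomes roughly $10,408
```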
Changes for the Future
It’s important to note that the past is not always a reliable predictor of the future. Circumstances can often change to make the future quite different from the past. The business itself may change, the economy can change, the customer base may undergo a shift in demographics or a change in buying habits, new competition may emerge, and so on. So while past performance is helpful, it is only one step in the process of forecasting sales.
Link to Learning
Big 5 Sporting Goods MD&A Report
Review the most recent annual report for Big 5 Sporting Goods. Review the management’s discussion and analysis (MD&A) report (Item 7). What information does the report share about the firm, the economy, and other factors that might be useful for forecasting sales growth for next year?
Most firms first look to the past to target some form of baseline estimate for the coming year; then, managers begin making adjustments based on what they know about the future. Assume that Clear Lake Sporting Goods will be adding a new brand to its collection of fishing supplies in March. The manufacturer plans to begin running its commercials in late February, which managers anticipate will increase Clear Lake’s monthly sales by about \$500 in March, \$1,000 in April, \$1,400 in May, and \$2,000 per month in June, July, and August. We see the monthly adjustments to Clear Lake’s latest sales forecast in Figure 18.11. March, for example, is now \$10,908 (\$10,408 prior estimate plus \$500 increase from new brand).
Figure 18.11 Forecasted Sales Data with New Brand
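Layering the new-brand uplift onto the forecast can be expressed as a per-month addition. March’s adjusted base of \$10,408 and the uplift amounts are from the text; the other monthly bases below are illustrative.

```python
# Add the new-brand uplift to the adjusted forecast, month by month.
# March's base of $10,408 and the uplift schedule come from the text;
# the remaining monthly bases are placeholders.
adjusted_forecast = {"Mar": 10_408, "Apr": 12_000, "May": 14_000,
                     "Jun": 20_000, "Jul": 18_000, "Aug": 15_000}
brand_uplift = {"Mar": 500, "Apr": 1_000, "May": 1_400,
                "Jun": 2_000, "Jul": 2_000, "Aug": 2_000}

with_brand = {m: adjusted_forecast[m] + brand_uplift.get(m, 0)
              for m in adjusted_forecast}
print(with_brand)  # March becomes $10,908
```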
What we have discussed here are only some brief examples of the myriad factors that might impact a sales budget for the coming year. It’s critical that all members of the team take the time and effort to research their customers and the factors that impact their business in order to effectively assess the impact of these factors on future sales. Though only two adjustments were made here, it’s likely that a large firm would have to consider many, many factors that would ultimately impact monthly sales figures before arriving at a conclusion.
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Define pro forma in the context of a financial forecast.
• Describe the factors that impact the length of a financial forecast.
• Explain the risks associated with a financial forecast.
In this section of the chapter, we will move beyond the sales forecast and look at the general nature, length, and timeline of forecasts and the risks associated with using them. We’ll look at why we use them, how long they generally are, what the key variables in a forecast are, and how we pair those variables with common-size analysis to develop the forecast.
Purpose of a Forecast
As mentioned earlier in the chapter, forecasts serve different purposes depending on who is using them. Our focus here, however, is the world of finance. In this realm, the key purpose of pro forma (future-looking) financial statements is to manage a firm’s cash flow and assess the overall value that the firm is generating through future sales growth. Growing just for the sake of growing doesn’t always yield favorable income for the firm. A larger top-line sales figure that results in lower net income doesn’t make sense in the grand scheme of things. The same is true of profitable sales that don’t generate enough cash flows at the right time. The firm may make a profit, but if it doesn’t manage the timing of its cash flows, it could be forced to shut down if it can’t cover the costs of payroll or keep the lights on. Forecasting helps assess both cash flow and the profitability of future growth. Managers can forecast cash flow using data from forecasted financial statements; this allows them to identify potential gaps in cash and plan ahead in order to either alter collection and payment policies or obtain funding to cover the gap in the timing of cash flows.
Link to Learning
Pro Forma Financial Statements
Review the video Business Plan and Pro-Forma Financial Statements to learn about the basics of pro forma financial statements and why they are helpful.
Length of a Forecast
Forecasts can generally be for any length of time. The length generally depends on the user’s needs. A one-year forecast, broken down by month, is quite typical. A firm will often go through a formal budgeting process near the end of its calendar or fiscal year to project financial plans and goals for the coming year. Once that is done, a rolling financial forecast is then done monthly to adjust as time moves on, more information becomes available, and circumstances change.
To be useful, the future forecast for financial planning purposes is almost always calculated as monthly increments rather than one total figure for the next 12 months. Breaking the data down by month allows finance managers to more clearly see fluctuations in cash flows in and out, identify potential gaps in cash flow, and plan ahead for their cash needs.
Forecasts can also be done for several years into the future. In fact, they commonly are. However, once the firm is looking out beyond 12 months, it gets difficult to forecast items with a great degree of accuracy. Often, forecasts beyond a year will be completed only to quarterly or even annual figures rather than monthly. Forecasts that far into the future are often strategic in nature, made more to communicate future plans for the firm than for more detailed decision-making and cash flow planning.
Common-Size Financials
As we saw earlier in the chapter, common-size analysis involves using historical financial statements as a basis for future forecasts. Financial statements provide a great starting point for analysis, as we can see the relationships between sales and costs on the income statement and the relationships between total assets and line items on the balance sheet.
For example, in Figure 18.6, we saw that for the past two years, cost of goods sold has been 50% of sales. Thus, in the first draft of a forecast for Clear Lake, it’s likely that managers would estimate cost of goods sold at 50% of their forecasted sales. We can begin to see why forecasting sales first is crucial and why doing so as accurately as possible is also important.
Select Variables to Use
A simple way to begin a full financial statement forecast might be to simply use the common-size statements and forecast every item using historical percentages. It’s a logical way to begin a very rough draft of the forecast. However, several variables should be taken into consideration. First, managers must address the cost of an account and determine if it’s a variable or fixed item. Variable costs tend to vary directly and proportionally with production or sales volume. Common examples include direct labor and direct materials. Fixed costs, on the other hand, do not change when production or sales volume increases or decreases within the relevant range. Granted, if production were to increase or decrease by a large amount, fixed costs would indeed change. However, in normal month-to-month changes, fixed costs often remain the same. Common examples of fixed costs include rent and managerial salaries.
So, if we were to approach our common-size income statement, for example, we would likely use the percentage of sales as a starting point to forecast variable items such as cost of goods sold. However, fixed costs may not be accurately forecast as a percentage of sales because they won’t actually change with sales. Thus, we would likely look at the history of the dollar values of fixed costs in order to forecast them.
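The variable-versus-fixed distinction can be made concrete with a short sketch. The 50% cost-of-goods-sold ratio and \$458 monthly rent come from the Clear Lake example; the sales figures are illustrative.

```python
# Forecast variable items as a percentage of sales and fixed items as
# flat dollar amounts. The 50% COGS ratio and $458 rent come from the
# text; the monthly sales figures are illustrative.
forecasted_sales = [10_408, 10_408, 10_908]  # three example months
cogs_pct = 0.50          # variable: scales with sales
monthly_rent = 458       # fixed: same each month

forecast = []
for sales in forecasted_sales:
    forecast.append({
        "sales": sales,
        "cogs": sales * cogs_pct,   # variable cost
        "rent": monthly_rent,       # fixed cost
    })
print(forecast[0])
```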
Concepts In Practice
COVID-19 Makes Forecasting Difficult for Big 5 Sporting Goods
Big 5 Sporting Goods announced record earnings in the third quarter of 2020, attributing its huge success that quarter to the impact of people’s reactions to the COVID-19 pandemic. With so many people in quarantine still wanting to make healthy lifestyle choices, sporting goods stores were making record sales. Record-breaking sales, however, are not certain in the future. The impacts of the pandemic are extremely difficult to predict, making it a challenge for Big 5 Sporting Goods and other companies to assemble pro forma financial statements.
Determine Potential Changes in Variables
So far, we have focused on using historical common-size statements to create a draft (not a final version) of the forecast. This is because the past isn’t always a perfect indicator of the future, and our finances don’t always follow a linear pattern. We use the past as a good starting point; then, we must assess what else we know to fine-tune and make adjustments to the forecast.
Many items impact the forecast, and they will vary from one organization to another. The key is to do research, gather data, and look around at the market, the economy, the competition, and any other factors that have the potential to impact the future sales, costs, and financial health of the company. Though certainly not an exhaustive list, here are a few examples of items that may impact Clear Lake Sporting Goods.
• It has an old product line that was discontinued in early October, contributing to a 2% reduction in monthly sales that will likely continue into the new year until a new line begins arriving in stores.
• It will be adding a new brand to its collection of fishing supplies in March. The manufacturer plans to begin running commercials in late February. Managers anticipate that this will increase Clear Lake’s monthly sales by about \$500 in March, \$1,000 in April, \$1,400 in May, and \$2,000 per month in June, July, and August.
• The company has just finished updating its employee compensation package. It goes into effect in January of the new year and will result in an overall 4% increase in the cost of labor.
• The landlord indicated that rent will increase by \$50 per month starting July 1.
• Some fixed assets will be fully depreciated by the end of March. Thus, depreciation expense will go down by \$25 per month beginning in April.
• There are rumors of new regulations that will impact the costs of importing some of the more difficult-to-obtain hunting supplies. Managers aren’t entirely sure of the full impact of the new legislation at this time, but they anticipate that it could increase cost of goods sold for the affected product line when the new legislation goes into effect in the last quarter. Their best estimate is that it could increase the overall cost of goods sold by up to 2%.
We will use all of this data later in the chapter when we are ready to compile a complete forecast for Clear Lake.
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Generate a forecasted income statement that incorporates pertinent sales, functional, and policy variables.
• Generate a forecasted balance sheet.
• Connect the balance sheet and income statement forecasts with appropriate feedback linkages.
In this section of the chapter, we will tie together what we have learned so far about forecasting sales, common-size analysis, and using what we know about the company and its environment to create a full set of pro forma (forward-looking or forecasted) financial statements.
Forecast the Income Statement
To arrive at a fully forecasted income statement, we use historical income statements, common-size income statements, and any additional information we have about future sales and costs, such as the effects of the economy and competition. As we saw earlier in the chapter, we begin with forecasted sales because they are the basis for many of the forecasted costs.
Let’s begin with the sales forecast for Clear Lake Sporting Goods that we saw earlier in the chapter, in Figure 18.9, and use it along with the prior year income statement by month shown in Figure 18.12. We will consider other data we have about the business to begin creating a full income statement (see Figure 18.13).
Figure 18.12 Prior Year Monthly Income Statement by Month
The first two key points regarding product lines have already been built into the sales forecast. Notice that the cost of goods sold was 50% in the prior year. However, based on possible future legislation, to be conservative, we should increase the cost of goods sold by 2% in the last quarter of the year. Thus, we will forecast cost of goods sold at 50% of sales in the first nine months and increase it to 52% in the last three months of the year.
Rent is a fixed cost that historically amounts to \$458 per month. However, we know that the landlord is increasing rent by \$50 starting on July 1. Thus, we will forecast rent at the same fixed cost of \$458 per month for the first six months and increase it to \$508 per month for the second half of the year.
Depreciation, also a fixed cost, was historically \$300 per month. However, we know that depreciation expense will go down by \$25 beginning in April. Thus, we forecast depreciation at \$300 for the first three months and at \$275 for the last nine months.
Salaries expense has historically been \$450 per month. However, we know that the company is implementing a new compensation program on January 1 that will increase salaries expense by 4% (\$18). Thus, we will forecast salaries for the whole year at \$468.
Utilities expense seems to vary somewhat by sales from month to month, as shops are open longer hours during their busy season. However, the total utilities expense is not expected to change for the coming year. Thus, the forecast for utilities expense remains at \$2,500, broken down by month as a percentage of sales.
Interest expense is a fixed cost and isn’t anticipated to change. Thus, the same \$167 interest expense per month is forecast for the coming year.
Finally, income tax expense is forecasted as a percentage of operating income because tax liability is incurred as a direct result of operating income. Figure 18.13 shows the next 12 months’ forecast for Clear Lake Sporting Goods using all of this data.
Figure 18.13 Forecasted Income Statement
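The step changes described above (COGS rising to 52% in the fourth quarter, rent up \$50 in July, depreciation down \$25 in April, salaries at the new \$468) can be encoded month by month. The monthly sales values below are placeholders, so the dollar results are illustrative rather than Clear Lake’s actual forecast.

```python
# Assemble the monthly expense forecast with the step changes from the
# text. Sales figures are illustrative placeholders.
sales = [10_408] * 12  # replace with the real monthly sales forecast

forecast = []
for month in range(1, 13):
    s = sales[month - 1]
    cogs = s * (0.52 if month >= 10 else 0.50)   # 52% of sales in Q4
    rent = 508 if month >= 7 else 458            # +$50 starting July
    depreciation = 275 if month >= 4 else 300    # -$25 starting April
    salaries = 468                               # new compensation package
    interest = 167                               # fixed monthly interest
    forecast.append({"month": month, "sales": s, "cogs": cogs,
                     "rent": rent, "depreciation": depreciation,
                     "salaries": salaries, "interest": interest})

print(forecast[0])   # January: 50% COGS, $458 rent, $300 depreciation
print(forecast[11])  # December: 52% COGS, $508 rent, $275 depreciation
```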
Forecast the Balance Sheet
Now that we have a reasonable income statement forecast, we can move on to the balance sheet. The balance sheet, however, is entirely different from the income statement. It requires a bit more research and additional assumptions. Just like the income statement, it’s often a work in progress. A first draft is a good starting point, but adjustments must be made once it is created, and all the interrelationships between the statements, cash flow in particular, are taken into consideration.
The balance sheet is a bit more difficult to forecast because the statement reflects balances at just a given point in time. Account balances change daily, so forecasting just one snapshot in time for each month can be a challenge. A good starting point is to assess general company financial policies or rules of thumb. For example, assume that Clear Lake pays most of its vendors on net 30-day terms. A good way to forecast accounts payable on the balance sheet might be to add up the cost of goods sold from the forecasted income statement for the prior month. For example, in Figure 18.14, we see that Clear Lake has forecasted its accounts payable for March as the cost of goods sold in March from its forecasted income statement.
For accounts receivable, Clear Lake generally receives payment from customers within net 90-day terms. Thus, it uses the sum of the current and prior two months’ forecasted sales to estimate its accounts receivable balance.
Inventory will vary throughout the year. For the first six months, the company tries to build inventory for four months of sales. Once the busy season hits, inventory goes down to three months’ worth of future sales, then finally drops to only two months of sales in December. Thus, managers use their sales forecast by month to estimate their inventory ending balance each month.
The equipment balance is forecasted by reducing the prior month’s balance by the forecasted depreciation expense on the forecasted income statement.
Unearned revenue is historically around 50% of the current month’s sales. Thus, Clear Lake estimates its unearned revenue balance each month by taking the current month’s net sales from the forecasted income statement and multiplying it by 50%.
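These payment-term rules translate directly into balance estimates. The timing rules (net 30 payables, net 90 receivables, unearned revenue at 50% of the current month’s sales) are from the text; the monthly sales and COGS figures below are illustrative.

```python
# Estimate balance sheet accounts from the forecasted income statement:
# payables = prior month's COGS (net 30), receivables = the last three
# months' sales (net 90), unearned revenue = 50% of the current month's
# sales. Monthly figures are illustrative.
monthly_sales = {"Jan": 10_408, "Feb": 10_408, "Mar": 10_908}
monthly_cogs = {"Jan": 5_204, "Feb": 5_204, "Mar": 5_454}

# March balances:
accounts_payable = monthly_cogs["Feb"]             # prior month's COGS
accounts_receivable = sum(monthly_sales.values())  # Jan + Feb + Mar sales
unearned_revenue = monthly_sales["Mar"] * 0.50

print(f"A/P: ${accounts_payable:,}  A/R: ${accounts_receivable:,}  "
      f"Unearned: ${unearned_revenue:,.0f}")
```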
Short-term investments, notes payable, and common stock are not anticipated to change, so the current balance is forecasted to remain the same for the next 12 months.
To forecast the ending balance for retained earnings for each month, managers add the monthly net income from the forecasted income statement to the prior balance and subtract a quarterly \$10,000 dividend.
Once all of these accounts are completed, the balance sheet is out of balance. Given that all of these events are somewhat related but are not tied together dollar for dollar, it’s not surprising when the forecasted balance sheet is finished and does not balance. To complete the first draft (see Figure 18.14), the cash account is used as a variable and plugged in to make the balance sheet balance. Notice that by the end of the year, the company has \$59,905 in cash. However, look at what happens midyear—the cash account falls to only \$8,782. In the next section, we will generate a cash flow forecast, which will allow Clear Lake to update its balance sheet forecast once it estimates what it will do to cover the cash flow gaps.
Figure 18.14 Forecasted Balance Sheet Draft
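The retained earnings roll-forward and the cash “plug” can be sketched as two small functions. The quarterly \$10,000 dividend is from the text; the prior balance, net income, and noncash-asset figures are illustrative.

```python
# Roll retained earnings forward (prior balance + net income, less a
# quarterly $10,000 dividend), then use cash as the balancing figure.
# All dollar inputs besides the dividend policy are illustrative.
def retained_earnings(prior_balance, net_income, month):
    dividend = 10_000 if month in (3, 6, 9, 12) else 0
    return prior_balance + net_income - dividend

def cash_plug(total_liabilities_and_equity, noncash_assets):
    # Cash is whatever makes total assets equal liabilities and equity.
    return total_liabilities_and_equity - noncash_assets

re_march = retained_earnings(prior_balance=50_000, net_income=3_000, month=3)
print(re_march)                      # 50,000 + 3,000 - 10,000 = 43,000
print(cash_plug(250_000, 207_419))   # 42,581
```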
Linkages between the Forecasted Balance Sheet and the Income Statement
Notice that in the discussion in the prior section on the balance sheet forecast, a lot of the information in the forecasted income statement was used to generate the forecasted balance sheet. The balance sheet accounts generally depend on activity reported in the income statement. For example, for many firms, the balance in their accounts receivable account is tied to their sales. Looking at historical balances in the accounts receivable account and how those relate to historical sales will help determine how to use the forecasted future sales to estimate the future balance of accounts receivable.
The same is true of accounts payable. Looking at past balances, past expenses (normally cost of goods sold), and the firm’s payment terms for its vendors allows managers to use forecasted cost of goods sold or other expenses to estimate the balance in the accounts payable account.
We learned in Financial Statements that net income flows into retained earnings. Thus, the net income from the forecasted income statement can be used to help estimate the ending balance in retained earnings. If the firm intends to issue any dividends in the coming year, managers should also estimate that reduction in their forecast.
It’s also common to find other general policies or procedures that help drive performance and aid in forecasting balances. For example, if the company has a goal of maintaining a certain level of inventory or a minimum balance in its cash account, that information can be used to guide the estimate for those accounts.
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Generate a cash flow forecast.
• Assess a cash flow forecast to determine future cash funding needs.
• Use pro forma financial statements and cash flow forecasts to assess the value of growth to the firm.
In this section of the chapter, we will use the forecasted income statement, forecasted balance sheet, and other information we know about the firm’s policies and goals for the coming year to generate and assess a cash flow forecast.
Create a Cash Flow Forecast
A cash flow forecast isn’t overly complex, yet it is not easy to assemble because it requires making many assumptions about the future. A cash forecast begins with the beginning cash balance, adds anticipated cash inflows, and deducts anticipated cash outflows. This identifies cash surpluses and shortages.
For Clear Lake Sporting Goods, for example, we see in Figure 18.15 that the company begins with cash of \$42,581 in January of the new year. Next, it lists the cash inflows, or cash received from customers. Given the assumption that customers pay on 90-day terms, each month’s cash inflow is filled in by plugging in the sales forecast from three months prior. For example, the cash flow from customers of \$10,508 for June is the same as the net sales forecast for March (see Figure 18.13).
Figure 18.15 Forecasted Cash Inflows
Next, Clear Lake identifies cash outflows, which include accounts payable, salaries, rent, utilities, dividends, and interest payments. Accounts payable are normally paid within 30 days, so the forecast for cost of goods sold for the prior month is used as an estimate of the amount paid on payables. For example, in Figure 18.16, we see that the accounts payable settled in June of \$8,610 is the cost of goods sold for May from the forecasted income statement.
Salaries are paid monthly and thus represent the same recurring monthly cash outflow, as does rent. Utilities, like accounts payable, are assumed to be paid within 30 days. Thus, the cash outflow for utilities is the utilities expense for the prior month from the forecasted income statement.
Management intends to pay a quarterly dividend of \$10,000. Thus, in Figure 18.16, we see \$10,000 cash outflows forecasted for March, June, September, and December. Interest on the long-term liability is paid quarterly. Thus, the \$500 cash outflows in March, June, September, and December are simply the monthly interest expense of \$167 from the income statement, summed for each quarter.
Figure 18.16 Forecasted Cash Inflows and Outflows
Using a Cash Forecast to Determine Additional Funds Needed
Finally, at the end of the cash flow forecast, cash outflows are subtracted from the cash inflows. This identifies whether a cash surplus (extra) or cash deficit (not enough) exists for each month. For example, in Figure 18.17, we see that in March, Clear Lake is forecasting \$4,800 of cash inflows and \$17,800 of total cash outflows, which results in a cash deficit of \$13,000.
Clear Lake has a general policy to not let its cash balance fall below \$35,000. Thus, managers need to assess their monthly balances and potential deficits and identify months when financing is necessary. For example, the deficit of \$13,000 in March is enough to push the cash balance lower than \$35,000. Thus, it’s estimated that the company will need \$5,000 in short-term financing in March. It has an estimated surplus in April, so \$3,000 of the borrowing is returned.
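The borrowing-and-repayment logic described here can be sketched in Python. The \$35,000 minimum balance and the March inflow and outflow figures come from the text; the beginning cash balance and the April figures are hypothetical, but they are chosen so the mechanics reproduce the \$5,000 borrowing and \$3,000 repayment described above.

```python
# Sketch of the cash-deficit financing rule: borrow just enough to keep the
# balance at the $35,000 minimum, and repay outstanding borrowings from any
# later surplus. March figures are from the text; the rest are hypothetical.
MIN_CASH = 35_000

def plan_financing(beginning_cash, inflows, outflows):
    """Return month-by-month ending balances and borrow(+)/repay(-) amounts."""
    cash, loan = beginning_cash, 0
    balances, financing = [], []
    for cash_in, cash_out in zip(inflows, outflows):
        cash += cash_in - cash_out
        if cash < MIN_CASH:                    # borrow just enough
            borrow = MIN_CASH - cash
            cash += borrow
            loan += borrow
            financing.append(borrow)
        elif loan > 0 and cash > MIN_CASH:     # repay from any surplus
            repay = min(loan, cash - MIN_CASH)
            cash -= repay
            loan -= repay
            financing.append(-repay)
        else:
            financing.append(0)
        balances.append(cash)
    return balances, financing

# March: $4,800 in, $17,800 out (a $13,000 deficit); April is hypothetical.
balances, borrow_repay = plan_financing(43_000, [4_800, 10_000], [17_800, 7_000])
print(balances)       # [35000, 35000]
print(borrow_repay)   # [5000, -3000]
```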
Figure 18.17 Forecasted Cash Surplus or Deficit
Assessing the Value of Growth
It’s a fairly common assumption that most, if not all, businesses want to grow. While it can certainly be good for a firm to grow in size, growth just for the sake of growth isn’t necessarily a good goal. A firm can grow in size based on customers, employees, locations, or simply sales. However, that doesn’t mean that the growth will increase profits. Growth may increase profits, but this is not a safe assumption. Scaling up operations takes careful planning, which includes monitoring the profitability of the sales and, of course, the cash flow it would require. Growing a business can require more inventory, more locations, more equipment, and more manpower, all of which cost money. Even if the forecasted growth is profitable, it may pose problems from a cash flow perspective. It’s important that the firm review not only its forecasted income statement and balance sheet but also its cash forecast, as this can reveal some serious gaps in funding depending on the extent, timing, and nature of the planned growth.
For example, assume that Clear Lake Sporting Goods intends to run a large-scale ad campaign to boost sales in its busy season. Historically, the store relied primarily on its prime location for high volumes of retail foot traffic. Managers felt, however, that given the increase in competition, they could boost sales significantly by running the ad campaign in the first quarter. The campaign would cost \$30,000. Forecasts already reflect a cash deficit at the end of the first quarter of \$13,000, so the additional \$30,000 ad campaign, which would require payment up front, would create a much larger need for funding. It’s also important that managers look at the increased cost of doing business along with the increased cost in advertising to ensure that the move would be profitable. Fortunately, Excel or other forecasting software can be used to create a forecast with formulas that tie together, making scenario analysis such as this a much easier process.
Scenarios in Forecasting
Forecasting is almost never a linear process. In other words, we don’t do one forecast and call it good. The first draft is completed using historical data, and then changes are made a bit at a time as all potential variables are assessed for their impact on the forecast. It’s quite common to then use the work-in-progress forecast to complete scenario analysis. This is particularly true when the forecast is completed in Excel or other budgeting or forecasting software. Elements of the forecast can be changed to see what the overall impact would be to the firm. Assuming the forecast is set up using formulas in Excel or other software, a change to one figure or one variable would then “ripple” through the forecast to reflect the overall impact.
Often, a firm may complete an initial forecast (scenario) under the assumption that the economy is in a “normal state.” The firm can then alter the initial forecast for different scenarios, such as the economy in a recession or the economy in a state of expansion. This helps the firm understand different possible future states and highlights how changes in the economy such as inflation may cause revenue and expenses to increase.
Assume that Clear Lake’s initial forecast is created under the assumption that the economy will remain average. Management also wants to know the worst-case scenario. What will their financial results look like if the economy were in a recession, for example? If management assumes their sales would drop to only 60% of the prior year sales in a recessionary economy, they could alter the formula in Excel driving their sales and variable costs, resulting in a new pro forma income statement. In Figure 18.18, we can see that net income would drop to \$16,391 under this assumption, compared to the net income of \$47,653 forecasted under average economy assumptions in Figure 18.13.
Figure 18.18 Forecasted Cash Surplus or Deficit
Though creating a full forecast in Excel can be a bit complex, it is a powerful tool that is useful for analysis. Elements can be used to vary just about anything, from something small such as a 1% increase in the cost of a product to a company-wide increase in salaries, the introduction of an entire new product line, or the purchase of a new production machine, among other possibilities.
For example, assume that Clear Lake has completed a first pass at its forecast and is reviewing the forecasted profit for the next 12 months. Managers feel the profit is currently low, as they always want to target a certain percentage. They might tinker with variables in the forecast file to see the impact on profits of potential changes they are considering. They may reduce the new salaries package by a percentage point to see if it gets them closer to their goal. They may adjust cost of goods sold by a certain percentage if they feel they can negotiate with vendors to work down their costs. They may adjust rent and see if they can find a better retail location to either reduce costs or increase sales due to increased foot traffic in a new location. They may save an entirely new version of the forecast and change it drastically to see what investing in opening a second retail location would do.
As you can see, the list of possibilities is endless. Though the main goal of financial managers may be cash planning, the power of a well-developed forecast is tremendous. It can help assess potential growth, new opportunities, and even small changes in the business as well.
Sensitivity Analysis in Forecasting
Sensitivity analysis will often look at the change in just one variable rather than the entire scenario. It examines how sensitive a particular output (commonly net income) will be to a change in a particular underlying input (sales or costs, for example). What if sales are 10% more or less than forecasted? What if the prices the firm can charge its customers are 10% more or less? What if the cost of goods sold increases by 10%? The purpose is to see which variables are crucial to “get right.” It isn’t worth spending a lot of research dollars to make sure you are accurately predicting a variable if that variable won’t notably change the outcome. However, a slight change in other variables may have significant impact.
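A single-variable sensitivity check like the one described can be sketched as follows. The cost structure and figures are hypothetical; the point is that only sales moves while every other input is held constant, so the change in net income is attributable to that one variable.

```python
# Sensitivity analysis sketch: flex sales +/-10% and observe net income.
# The 50% cost of goods sold, fixed costs, and 25% tax rate are hypothetical.
def net_income(sales, cogs_pct=0.50, fixed_costs=3_000, tax_rate=0.25):
    operating_income = sales * (1 - cogs_pct) - fixed_costs
    return operating_income * (1 - tax_rate)

base_sales = 20_000
for shock in (-0.10, 0.0, 0.10):
    ni = net_income(base_sales * (1 + shock))
    print(f"sales {shock:+.0%}: net income {ni:,.0f}")
```

Running the loop shows that a 10% move in sales changes net income by roughly 14% here, a sign that the sales estimate is a variable worth getting right.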
Using pro forma financial statements created in Excel allows management to quickly generate new pro forma financials and see the impact that each possible variable might have on the overall financial results.
Learning Objectives
By the end of this section, you will be able to:
• Generate a financial statement forecast using spreadsheet tools.
• Connect the balance sheet and income statement using appropriate formula referencing.
• Use spreadsheet functions to generate appropriate iterations that balance financial forecasts.
Throughout this chapter, we have seen forecasted financial statements for Clear Lake Sporting Goods along with its forecasted cash flow. These statements could all have been generated by hand, of course, but that wouldn’t be an effective use of time. As mentioned in prior sections, several different types of software can be quite effective in making the forecasting process faster and more flexible. In this section, we will review just one common option, Microsoft Excel.
Download the spreadsheet file containing key Chapter 18 Excel exhibits.
Using the “Sheet”
Creating a budget in Excel can be very simple or extremely complex, depending on the size and complexity of the business and the number of formulas and dependencies that are written into the Excel workbook.
Creating the forecast in Excel follows the same steps and flow we just explored in this chapter but with the power of a software program to do the math for you. We begin with the sales forecast, which uses several key formulas in Excel.
1. First, sales are projected to be 18% higher than the prior year. Thus, a total projection for the year is calculated using a simple link and multiplication function tied to last year’s total sales. In Figure 18.19, you can see the formula in cell O4 is “='Figure 18.12'!N4*1.18”. This formula simply does the math to increase the prior year’s sales by 18%.
2. Next, the sales are distributed by month. In Figure 18.19, we see in cell B5 that the forecasted income statement sheet is linked to the percent of annual sales from the Prior Year Income Statement (Figure 18.12) sheet. Then, in cell B4, January sales are estimated with a formula that multiplies the total forecasted sales in O4 by the percent of annual sales for January of the prior year. Notice that the formula then multiplies that product by 0.98. This is because Clear Lake discontinued a product line in the last quarter of the prior year, and management feels that this will reduce sales in the first quarter of the new year by roughly 2%.
Figure 18.19 Forecasted Sales Formulas in Excel
3. As Clear Lake continues to fill out its forecasted income statement, the next formula we see is a simple sum formula to calculate net sales in B8 (see Figure 18.20). It’s a simple formula that subtracts sales returns and allowances in B7 from gross sales in B4. Similar formulas are also found in B10 for gross margin and B18 for net income.
4. In cell B9, we see a multiplication formula that multiplies sales from B4 by 0.5, or 50%. This is because management feels that cost of goods sold will remain the same as last year, in most quarters at least, and last year’s percentage was 50%.
5. Rent, depreciation, and salaries are all simply typed in, as they are fixed expenses that remain the same as last year.
6. The utilities calculation, found in cell B14, is somewhat similar to the sales calculation. The total utilities expense from O14 is multiplied by the current month’s sales in B4 divided by the total annual sales in O4. This spreads out the utility cost by month based on the percentage of annual sales.
Figure 18.20 Forecasted Income Statement Formulas
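The spreadsheet logic in steps 1–6 can be restated outside of Excel. The 18% growth rate, the 2% first-quarter reduction, the 50% cost of goods sold, and the sales-based utilities allocation come from the text; the prior-year monthly sales and the annual utilities total below are hypothetical.

```python
# Prior-year monthly sales (hypothetical); they sum to 1,200.
prior_monthly_sales = [70, 80, 90, 100, 110, 120, 130, 120, 110, 100, 90, 80]
annual_prior = sum(prior_monthly_sales)

# Step 1: total forecast = prior-year total grown by 18%.
forecast_total = annual_prior * 1.18

# Step 2: distribute by each month's share of prior-year sales, trimming
# the first quarter by 2% for the discontinued product line.
forecast_sales = []
for month, prior_sales in enumerate(prior_monthly_sales):
    sales = forecast_total * prior_sales / annual_prior
    if month < 3:                          # Jan-Mar
        sales *= 0.98
    forecast_sales.append(sales)

# Step 4: cost of goods sold held at 50% of sales.
forecast_cogs = [0.50 * s for s in forecast_sales]

# Step 6: allocate the annual utilities total by share of annual sales.
annual_utilities = 5_000                   # hypothetical
utilities = [annual_utilities * s / sum(forecast_sales) for s in forecast_sales]

print(round(forecast_sales[0], 2))         # January sales -> 80.95
```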
Clear Lake’s forecasted balance sheet ties very closely to both the forecasted income statement and the prior year’s income statement. In Figure 18.21, we see in C7 an addition formula using the sum of the current month and three months of prior sales as an estimate of the ending accounts receivable balance. The formula for inventory is similar but forward looking. In C8, inventory is estimated by adding the cost of goods sold for the current month and next three months from the forecasted income statement.
Total current assets in C10 is calculated with a SUM formula that adds together the values in all the selected cells. Amounts such as short-term investments and common stock that are not anticipated to change are simply typed as a number in the cell. Much like in the income statement, subtotals are found in C13 for total assets, C17 for current liabilities, C24 for total equity, and C25 for total liabilities and equity. Retained earnings in C23 pulls the ending retained earnings balance from the end of last year (hidden in column B) and adds the net income for January in the forecasted income statement to get the current month’s ending balance.
Figure 18.21 Forecasted Balance Sheet Formulas
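The receivables and inventory estimates just described amount to simple window sums: trailing sales for accounts receivable (reflecting the 90-day collection terms) and forward-looking cost of goods sold for inventory. A sketch, with hypothetical monthly figures:

```python
# Balance sheet estimates as window sums over the monthly forecast.
# Months are 0-indexed; all figures are hypothetical.
def accounts_receivable(sales_by_month, month):
    """Current month's sales plus the three prior months' sales."""
    return sum(sales_by_month[max(0, month - 3): month + 1])

def inventory(cogs_by_month, month):
    """Current month's COGS plus the next three months' COGS."""
    return sum(cogs_by_month[month: month + 4])

sales = [100, 110, 120, 130, 140, 150]
cogs = [0.5 * s for s in sales]

print(accounts_receivable(sales, 3))   # 100 + 110 + 120 + 130 = 460
print(inventory(cogs, 0))              # 50 + 55 + 60 + 65 = 230
```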
Much like the balance sheet, the cash forecast also relies heavily on data from the forecasted income statement as well as the forecasted balance sheet. To begin the year, in Figure 18.22, we see that the formula in B4 pulls the cash balance from the forecasted balance sheet. In B6, the formula pulls the sales for the three months prior from the previous year’s income statement. This is because it’s assumed that cash is collected from customers 90 days after the sale. The same approach is used for accounts payable, rent, salaries, and utilities. The formulas pull the expenses from a prior month depending on the assumed timing for payment. Utilities, for example, are assumed to be paid within 30 days, so the cash outflow in February is assumed to be the utilities expense for January from the forecasted income statement. Note that interest payments are assumed to be zero in January and February, but in March, the formula in D14 sums the interest expenses on the forecasted income statement for January, February, and March. This is because interest is paid quarterly.
Finally, note the formula in C4. The beginning cash balance for a given month is the same as the ending cash balance from the prior month; thus, the figure in B18 is linked to C4 to start the new month.
Figure 18.22 Cash Forecast Formulas
Using Excel Functions to Balance
Once we get a draft of the forecasts outlined, then the tinkering starts. Additional information can be used to adjust the formulas, as we saw with the 2% reduction in January sales for the forecasted income statement. Because we have linked most (though not all) of our expenses, subtotals, and statements together using formulas, management can also use the forecast workbook to perform scenario and sensitivity analyses, essentially asking “what if?” and looking at the results. When completed, however, before finalizing the forecast, it’s important that the financial statements are in balance (particularly the balance sheet, just as the name implies).
Notice that throughout, we used formulas to calculate subtotals to ensure they are correct and change as needed. We also linked figures, such as the ending and beginning cash balances, to ensure they are in balance. Perhaps the easiest but most important thing to do is to ensure that the balance sheet balances. We can do this with a simple formula that compares total assets to total liabilities and equity. We can see in Figure 18.23 that subtracting one from the other in cell C27 should result in \$0. If there is a difference, the formula will highlight it, forcing us to investigate and correct the sheet so that it balances.
Figure 18.23 Forecasted Balancing Formula
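The balancing check in cell C27 amounts to asserting the accounting identity; restated as a sketch, the difference between total assets and total liabilities and equity should be exactly zero, and any other result flags an error to investigate.

```python
# The spreadsheet's balancing check restated: a nonzero difference means
# the forecasted balance sheet does not balance. Figures are hypothetical.
def out_of_balance(total_assets, total_liabilities_and_equity):
    """Return the difference; zero means the sheet balances."""
    return total_assets - total_liabilities_and_equity

print(out_of_balance(1_250_000, 1_250_000))  # 0 -> in balance
print(out_of_balance(1_250_000, 1_245_000))  # 5000 -> investigate
```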
18.1 The Importance of Forecasting
Forecasting financial statements is important to different users for different reasons. In finance, it’s most important for assessing the value of future growth plans and planning for future cash flow needs.
18.2 Forecasting Sales
The sales forecast is the foundation on which much of the rest of the forecast is built. Thus, the sales forecast is completed first. Historical sales data and any other information on the firm, its products, the economy, its customers, and its competitors are all used to create the most accurate sales forecast possible.
18.3 Pro Forma Financials
Pro forma financial statements are forward looking in nature. They use the sales forecast, historical data, financial statement analyses, relationships between accounts and statements, and any other information known about the firm, the environment, and the future to create the most accurate financial statement forecast possible.
18.4 Generating the Complete Forecast
Interrelationships among historical data, the forecasted income statement, and the forecasted balance sheet are all used to estimate each line item in the financial statements.
18.5 Forecasting Cash Flow and Assessing the Value of Growth
Once the income statement and balance sheet forecasts are complete, data from those statements, information on company policies, and account relationships are used to generate a cash forecast. The cash forecast is important for identifying any gaps in cash flow so that financial managers can plan for cash needs. It’s also important to review not only the cash forecast but all forecasted financial statements to assess the overall impact and value of proposed firm growth.
18.6 Using Excel to Create the Long-Term Forecast
Excel can be a powerful tool for creating financial forecasts. Formulas that complete mathematical functions and tie accounts and financial statements together are used to create the statements, ensure that they balance, and facilitate scenario and sensitivity analyses.
18.09: Key Terms
balance sheet
a financial statement that reflects a firm’s asset, liability, and equity account balances at a given point in time
cash deficit
an excess of cash outflows over cash inflows for a given period
cash forecast
a financial statement that estimates a firm’s future cash inflows and outflows
cash surplus
an excess of cash inflows over cash outflows for a given period
common-size
describes a financial statement in which each element is expressed as a percentage of a base amount
financing activities
cash business transactions reported on the statement of cash flows that reflect the use of financed funds
forecast
an estimate of future performance based on historical performance and other contextual information
income statement
a financial statement that measures a firm’s financial performance over a given period of time
investing activities
cash business transactions reported on the statement of cash flows that reflect the acquisition or disposal of long-term assets
operating activities
cash business transactions reported on the statement of cash flows that relate to ongoing day-to-day operations
pro forma
in the context of financial statements, forward-looking
scenario analysis
analysis of how various situations and circumstances would impact the financial forecast
sensitivity analysis
analysis of the sensitivity of an output variable to a change in an input variable
statement of cash flows
a financial statement that lists a firm’s cash inflows and outflows over a given period of time
statement of stockholders’ equity
a financial statement that reports the difference between the beginning and ending balances of each of the stockholders’ equity accounts during a given period
1.
Which type of financial statement analysis is most commonly used to create a baseline estimate for a financial forecast?
1. Trend analysis
2. Common-size analysis
3. Ratio analysis
4. Liquidity analysis
2.
What key element of the income statement is used to estimate several other key income statement lines?
1. Cost of goods sold
2. Gross margin
3. Sales
4. Fixed costs
3.
Jamal wants to forecast sales for the first quarter of next year. His first assumption is that sales will likely grow by 3% in the coming year. If Jamal’s monthly sales were \$10,000, \$9,000, and \$11,000 in the first quarter of this year, what should his sales forecast be for the first quarter of next year?
1. \$30,000
2. \$30,900
3. \$33,000
4. \$33,500
4.
In the context of a firm’s financial statements, what does pro forma mean?
1. Forward looking
2. Historical
3. Board approved
4. Audited
5.
What is the most common length of a forecast if the goal is to forecast cash and assess possible short-term growth?
1. 3 months
2. 12 months
3. 3 years
4. 5 years
6.
When completing a first pass at a forecasted income statement, which type of costs are assumed to be tied directly to sales?
1. Fixed costs
2. Period costs
3. Variable costs
4. Sunk costs
7.
In the cash forecast, if cash inflows exceed cash outflows, what does this create?
1. A cash surplus
2. A cash deficit
3. A long-term liability
4. An undeclared dividend
8.
Amelia wants to use a formula in Excel to estimate her utilities expense for each month. She normally pays her utilities within 30 days. What formula or link might she use in Excel to estimate her cash outflow for utilities?
1. Sum the past three months’ cost of goods sold from the forecasted income statement
2. Link to the prior month’s accounts payable from the forecasted balance sheet
3. Link to the prior month’s utilities expense from the forecasted balance sheet
4. Link to the prior month’s ending cash balance from the cash flow forecast
9.
Amelia wants to use a formula in Excel to estimate her sales for each month. She believes her sales for the next year will be about 7% higher than this year’s. She also has a big new ad campaign running late this year that she thinks will add another \$5,000 to January sales. Which of the following is an appropriate Excel formula for Amelia’s January sales?
1. =(lastyearsalesA2*1.07)+5000
2. =(lastyearsales+5000*1.07)
3. =lastyearsalesA2+5000*.07
4. =lastyearsalesA2*5000*1.07
18.11: Review Questions
1.
Javier’s firm has created a forecasted income statement that shows the firm with a net profit of \$25,000 for the coming year. What can we assume about Javier’s cash flows?
2.
Lulu’s firm’s sales grew by 9%, 11%, and 10% over the past three years, respectively. Lulu wants to take her first pass at forecasting sales for next year. What percent sales growth would you recommend she use, and why?
3.
Aria wants to create a set of pro forma financial statements. Her goal is to plan for future cash flows and operations as well as help envision her long-term strategy. What time frames should Aria consider for her operations and cash flows versus her long-term strategy?
4.
What information might you use to calculate the ending balance for retained earnings on a forecasted balance sheet?
5.
Damon estimates his beginning cash balance for June to be \$10,000, with cash inflows of \$4,000 and cash outflows of \$6,000 for the month. What is Damon’s forecasted ending balance for June?
6.
Tanneh wants to use an Excel formula to help her estimate sales for January in her forecasted income statement. She already has her sales estimate for the full year. Assuming she wants to use the past year’s income statement percentages to forecast next year’s sales, how would she calculate estimated sales for January?
18.12: Problems
1.
ABC Company has the following data for its monthly sales. Complete the % of Annual Sales row.
2.
Using the same data as in Problem 1, assume that ABC Company expects a 10% increase in sales in the coming year (10% more than the \$575,000 it had in the past year). Prepare its sales forecast, assuming the company breaks its sales down by month using the same percentages as the actual sales from the past year, which you calculated in the first problem.
3.
ABC Company anticipates its sales being a bit lower than normal in January and February of the coming year due to major road construction on the street where it is located, which will draw away foot traffic from the store. The company anticipates that this will reduce its sales in these two months by 5%. Use the information from Problems 1–2 to update the sales forecast.
4.
ABC Company’s cost of goods sold last year was 60%. It anticipates that this will be the same in the coming year. Its sales returns and allowances are small, normally 1% of sales. Use the information from Problems 1–3 to estimate the company’s sales returns and allowances, net sales, and cost of goods sold and calculate its gross margin.
5.
Use the partial income statement generated in Problem 4 along with the following additional information to complete ABC Company’s forecasted income statement in Excel.
1. Rent expense is \$1,000 per month. However, the landlord has indicated that rent will go up to \$1,250 in the fourth quarter.
2. Depreciation expense is \$2,250 per month and does not change throughout the year.
3. Salaries expense is \$1,500 per month and is expected to go up by 10% in the second half of the year, when a new compensation plan will be implemented.
4. Utilities expense is \$5,000 for the entire year and should be allocated to each month based on that month’s percentage of annual sales.
5. Interest expense is \$500 per month.
6. Income tax is 25% of operating income less interest expense.
18.13: Video Activity
What Is a Pro Forma?
1.
What is a pro forma financial statement? What are some scenarios in which you might find a pro forma financial statement helpful?
2.
Why might someone compile a pro forma financial statement that is intentionally inaccurate? What factors contribute to the accuracy of a pro forma?
Cash Flow Forecasting Explained: How to Complete a Cash Flow Forecast Example
3.
Assume you are the financial manager for a large electronics retailer. What benefits could you gain from preparing a cash forecast?
4.
Assume you are the financial manager for a large electronics retailer. You are going to prepare a cash forecast. What key cash inflows and outflows do you anticipate will be in your forecast?
Figure 19.1 Working capital describes the resources that are needed to meet the daily, weekly, and monthly operating cash flow needs. (credit: modification of "Sealey Power Products Warehouse" by Mark Hunter/flickr, CC BY 2.0)
During the COVID-19 pandemic, many families and small businesses realized the importance of financial resiliency. In personal finance, financial resiliency is the ability to spring back quickly from financial difficulties such as sudden job loss or significant unexpected expenses.
To help promote resiliency, personal financial planners advise clients to maintain liquid assets equal to three to six months of living expenses, keep debt levels low, manage the household budget, keep insurance in force (health, property, and life), establish a solid credit history, and make wise use of credit cards and home equity lines of credit.
In business finance, financial resiliency is important not only during pandemics but also through the ups and downs of seasonal cycles and economic downturns. Managing cash, accounts receivable, and inventory while making optimal use of trade credit (accounts payable) makes for a business that meets its operating needs and pays its debts when due.
Working capital management is also critical during good times. Even though profits might be rising, a business with growing demand for its products and services still needs working capital management tools to pay its bills. Growth in sales and profits does not immediately mean sufficient cash flow, so planning ahead with tools such as a cash budget is key.
19.02: What Is Working Capital
Learning Objectives
By the end of this section, you will be able to:
• Define working capital.
• Calculate a firm’s operating cycle and cash cycle.
• Compute inventory days, accounts receivable days, and accounts payable days.
The concept of business capital is often associated with the cash and assets (such as land and equipment) that the owners contributed to the business. Early political economists like Adam Smith and Karl Marx identified this concept of capital, along with labor and entrepreneurship, to be the factors of production.
That general idea of capital is important and critical to a company’s productive capacity. This chapter is about a specific type of capital— working capital—that is just as important as long-term capital. Working capital describes the resources that are needed to meet the daily, weekly, and monthly operating cash flow needs. Employees are paid out of working capital as well as cash from operations, the fulfillment of merchandise orders is possible because of working capital, and the liquidity of a company hinges upon how well management plans and controls working capital.
Understanding working capital begins with the concept of current assets—those resources of a business that are cash, near cash, or expected to be turned into cash within a year through the normal operations of the business. Current assets are necessary for the everyday operation of the firm, and they are synonymous with the term gross working capital.
Cash is needed to pay the bills and meet the payroll. Excess cash is invested in cash alternatives such as marketable securities, creating liquidity that can be tapped when operating cash flow needs exceed the amount of cash on hand (checking account balances). Investment in inventory is necessary to meet the demand for products (sales), and if the firm extends credit to its customers so that a sale can be made, the balance sheet will also show accounts receivable—a very common current asset that derives its value from the probability that customers will pay their bills.
Working capital is often spoken about in two versions: gross working capital and net working capital. As was previously stated, gross working capital is equivalent to current assets, particularly those that are cash, cash-like, or will be converted to cash within a short period of time (i.e., in less than one year).
Net working capital (NWC) is a more refined concept of working capital. It is best understood by examining its formula:
$\text{Current Assets} - \text{Current Liabilities} = \text{Net Working Capital}$
19.1
Goal of Working Capital Management
The goal of working capital management is to maintain adequate working capital to
• meet the operational needs of the company;
• satisfy obligations (current liabilities) as they come due; and
• maintain an optimal level of current assets such as cash (provides no return), accounts receivable, and inventory.
Working capital management encompasses all decisions involving a company’s current assets and current liabilities. One very important aspect of working capital management is to provide enough cash to satisfy both maturing short-term obligations and operational expenditures—keeping the company sufficiently liquid.
In summary, working capital management helps a company run smoothly and mitigates the risk of illiquidity. Well-run companies make effective use of current liabilities to finance an optimal level of current assets and maintain sufficient cash balances to meet short-term operating goals and to satisfy short-term obligations. Working capital management is accomplished through
• cash management;
• credit and receivables management;
• inventory management; and
• accounts payable management.
Components of Working Capital Management
In contrast to net working capital, gross working capital is synonymous with current assets, particularly those current assets that are either cash or cash equivalents or that will be converted to cash within a short period of time (i.e., in less than one year).
Below is a list of the components of gross working capital.
• Cash and cash equivalents
• Marketable securities
• Accounts receivable
• Inventory
Here is an example. On December 31, a company has the following balances and gross working capital:
Think of the \$1,105,000 of gross working capital as a source of funds for the most pressing obligations (i.e., current liabilities) of the company. Gross working capital is available to pay the bills. However, some of the current assets would need to be converted to cash first. Accounts receivable need to be collected, and inventory would need to be sold before it too can become cash. What if the company had \$600,000 of current liabilities? That amount of current obligations could not be paid out of cash until the marketable securities were sold and a significant portion of accounts receivable were collected.
The second, more refined and useful concept of working capital is net working capital:
$\text{NWC} = \text{Current Assets} - \text{Current Liabilities}$
19.2
For example, if a company has \$1,000,000 of current assets and \$750,000 of current liabilities, its net working capital would be \$250,000 (\$1,000,000 less \$750,000).
NWC provides a better picture because it takes into account the liability “coverage” provided by the current assets. As the above example shows, the current assets would “cover” the current liabilities with an excess of \$250,000. Think of it this way: if the current assets could be converted to cash, they could be used to meet the current obligations with another \$250,000 of cash left over.
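The net working capital computation above can be sketched in a few lines of Python, using the figures from the example (the function name is illustrative, not from the text):

```python
def net_working_capital(current_assets, current_liabilities):
    """Net working capital (Equation 19.2): the cushion left over
    after current assets cover current obligations."""
    return current_assets - current_liabilities

# Figures from the example above: $1,000,000 of current assets
# and $750,000 of current liabilities
nwc = net_working_capital(1_000_000, 750_000)
print(nwc)  # 250000 -- a positive safety cushion
```

A positive result indicates the coverage described above; a negative result would flag the liquidity concern discussed in Table 19.1.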
Current liabilities include
• accounts payable;
• dividends payable;
• notes payable (due within a year);
• current portion of deferred revenue;
• current maturities of long-term debt;
• interest payable;
• income taxes payable; and
• accrued expenses such as compensation owed to employees.
Net working capital possibilities can be thought of as a spectrum from negative working capital to positive, as explained in Table 19.1.
• Negative net working capital: Current liabilities are greater than current assets. Could indicate a liquidity problem; the company has difficulty satisfying current obligations.
• Zero net working capital: Current assets equal current liabilities. Indicates that current assets could just cover current obligations. However, there is no positive margin (safety cushion) or “liquid reserve” to satisfy unexpected cash needs.
• Positive net working capital: Current assets are greater than current liabilities. Indicates that the company can meet its current obligations. However, excessively high net working capital could mean too little cash and therefore an opportunity cost (forgoing rates of return on alternative investments).
Table 19.1 Spectrum of Net Working Capital
Measures of Financial Health provides information on a variety of financial ratios to help users of financial statements understand the strengths and weakness of companies’ financial statements. Three of the financial ratios covered in that chapter are brought back into this chapter’s discussion to demonstrate how financial managers examine working capital and liquidity. Liquidity is the ease with which an asset can be converted into cash. Those ratios are the current ratio, the quick ratio, and the cash ratio. A higher ratio indicates a greater level of liquidity.
The formulas for the three liquidity ratios are:
$\text{Current Ratio} = \dfrac{\text{Current Assets}}{\text{Current Liabilities}}$

$\text{Quick Ratio (Acid Test)} = \dfrac{\text{Current Assets} - \text{Inventory}}{\text{Current Liabilities}}$

$\text{Cash Ratio} = \dfrac{\text{Cash} + \text{Marketable Securities}}{\text{Current Liabilities}}$
19.3
Notice how the current ratio includes the two elements of net working capital—current assets and current liabilities. It makes for a quick comparison of relative size or proportion.
Think It Through
Current Ratio
A company has \$2,000,000 of current assets, while its current liabilities are \$1,000,000. What is the current ratio, and what does it mean?
There are two drawbacks to the current ratio: (1) it is a working capital analytic as of a point in time but is not indicative of future liquidity or future cash flows and (2) as an indicator of liquidity, it can be deceptive if a significant proportion of the current assets are inventory, supplies, or prepaid expenses. Inventory is not very liquid as it can take an extended time period to convert to cash, and assets such as supplies and prepaid expenses never become cash and therefore are not a source of funds to pay bills.
The quick ratio is considered a more conservative indication of liquidity since it does not include a firm’s inventory: $(\text{Current Assets} - \text{Inventory}) / \text{Current Liabilities}$.
Think It Through
Quick Ratio
A company’s current assets total \$2,000,000, but \$500,000 of that is inventory and the current liabilities total \$1,000,000. What is the quick ratio, and what does it mean?
Think It Through
Cash Ratio
The cash ratio is even more conservative in that it presents a picture of liquidity by excluding all current assets except cash and marketable securities.
A company’s total current assets are \$2,000,000, but only \$1,100,000 of the current assets consist of cash and marketable securities. Assuming \$1,000,000 of current liabilities, what would be the cash ratio and what does it mean?
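The three Think It Through examples above share the same balance sheet figures, so all three liquidity ratios can be computed together in a short Python sketch (function names are illustrative):

```python
def current_ratio(current_assets, current_liabilities):
    """Equation 19.3: current assets relative to current liabilities."""
    return current_assets / current_liabilities

def quick_ratio(current_assets, inventory, current_liabilities):
    """Acid test: excludes inventory, the least liquid current asset."""
    return (current_assets - inventory) / current_liabilities

def cash_ratio(cash_and_securities, current_liabilities):
    """Most conservative: only cash and marketable securities."""
    return cash_and_securities / current_liabilities

# Figures from the Think It Through examples above
ca, cl = 2_000_000, 1_000_000
inventory = 500_000
cash_and_securities = 1_100_000

print(current_ratio(ca, cl))                # 2.0
print(quick_ratio(ca, inventory, cl))       # 1.5
print(cash_ratio(cash_and_securities, cl))  # 1.1
```

Note how each successive ratio removes less liquid assets from the numerator, so the three values step down from 2.0 to 1.1 for the same company.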
Working capital ratios, like any financial ratio, are most valuable when examined in light of trends and in comparison to industry/peer averages. For example, a deteriorating current ratio over several quarters (a decline in the company’s current ratio) could indicate a reduced ability to pay bills.
Working capital ratios are also compared to industry averages, which are available in databases produced by such financial publishers as Dun & Bradstreet, Dow Jones Company, and the Risk Management Association (RMA). These information services are available via subscriptions and through many libraries. For example, if a company’s current ratio is 0.9 while the industry average is 2.0, then the company is less liquid than the average company in its industry, and strategies and techniques need to be considered to change things and to better compete with peer groups. Industry averages can be aspirational, motivating management to set liquidity goals and best practices for working capital management.
It is common to think about working capital with a simple assumption: current assets are being “financed” by current liabilities. However, such an assumption may be an oversimplification. Some level of current assets is needed on an ongoing basis to support operations, and in that way, you could think of some amount of current assets as a permanent base of working capital that may need to be financed with longer-term sources of capital.
Think of a company with seasonal business. During busy times, more working capital will be needed than during slower portions of the year, but there will always be some level—a permanent base—of working capital needed. Think of it this way: the total working capital of many companies will ebb and flow depending on many variables such as the operating cycle, production needs, and the growth of revenue. Therefore, working capital can be thought of as having a permanent base that is always needed and a total working capital amount that increases when activity levels (i.e., production and sales volume) are higher (see Figure 19.2).
The Cash Cycle
The cash cycle, also called the cash conversion cycle, is the time period between when a business begins production and acquires resources from its suppliers (for example, acquisition of materials and other forms of inventory) and when it receives cash from its customers. This is offset by the time it takes to pay suppliers (called the payables deferral period).
Figure 19.2 Working Capital Needs Can Vary: Temporary and Permanent Working Capital
The cash cycle is measured in days, and it is best understood by examining its formula:
$\text{Cash Cycle} = \text{Inventory Conversion Period} + \text{Receivables Collection Period} - \text{Payables Deferral Period}$
19.4
The inventory conversion period is also called the days of inventory. It is the time (days) it takes to convert inventory to sales and is calculated by following these steps:
1. First, calculate the Inventory Turnover Ratio using this formula:
$\text{Inventory Turnover Ratio} = \dfrac{\text{Cost of Goods Sold}}{\text{Average Inventory}}$
19.5
The Average Inventory is arrived at as follows:
$\text{Average Inventory} = \dfrac{\text{Beginning Inventory} + \text{Ending Inventory}}{2}$
19.6
2. Then, use the Inventory Turnover Ratio to calculate the Inventory Conversion Period:
$\text{Inventory Conversion Period} = \dfrac{365}{\text{Inventory Turnover}}$
19.7
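The two steps above can be combined into one small Python function. The figures below are illustrative (not from the text): \$730,000 of cost of goods sold against inventory that grows from \$90,000 to \$110,000 over the year.

```python
def inventory_conversion_period(cogs, beginning_inventory, ending_inventory):
    """Days of inventory, via Equations 19.5-19.7."""
    average_inventory = (beginning_inventory + ending_inventory) / 2
    inventory_turnover = cogs / average_inventory        # Equation 19.5
    return 365 / inventory_turnover                      # Equation 19.7

# Illustrative figures: average inventory of $100,000 turns over
# 7.3 times per year, so inventory sits for about 50 days
days = inventory_conversion_period(730_000, 90_000, 110_000)
print(round(days, 1))  # 50.0
```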
The receivables collection period, also called the days sales outstanding (DSO) or the average collection period, is the number of days it typically takes to collect cash from a credit sale. It is calculated by following these steps:
1. First, calculate the Accounts Receivable Turnover using this formula:
$\text{Accounts Receivable Turnover} = \dfrac{\text{Credit Sales}}{\text{Average Accounts Receivable}}$
19.8
The Average Accounts Receivable is arrived at as follows:
$\text{Average Accounts Receivable} = \dfrac{\text{Beginning Accounts Receivable} + \text{Ending Accounts Receivable}}{2}$
19.9
2. Then, use the Accounts Receivable Turnover to calculate the Receivables Collection Period:
$\text{Receivables Collection Period} = \dfrac{365}{\text{Accounts Receivable Turnover}}$
19.10
The payables deferral period, also known as days in payables, is the average number of days it takes for a company to pay its suppliers. It is calculated by following these steps:
1. First, calculate the Accounts Payable Turnover using this formula:
$\text{Accounts Payable Turnover} = \dfrac{\text{Cost of Goods Sold}}{\text{Average Accounts Payable}}$
19.11
The Average Accounts Payable is arrived at as follows:
$\text{Average Accounts Payable} = \dfrac{\text{Beginning Accounts Payable} + \text{Ending Accounts Payable}}{2}$
19.12
2. Then, use the Accounts Payable Turnover to calculate the Payables Deferral Period:
$\text{Payables Deferral Period} = \dfrac{365}{\text{Accounts Payable Turnover}}$
19.13
Think It Through
Periods of the Cash Cycle
Scenario 1: King Sized Products (KSP) Inc. has annual credit sales of \$40,000,000. The average inventory is \$3,000,000, and the company has average accounts receivable of \$6,000,000 and average accounts payable of \$2,800,000. The cost of goods sold for KSP Inc. is \$30,000,000. The cash cycle for the company is 57.2 days. Calculate the inventory conversion, receivables collection, and payable deferral periods.
The solution (the entire cash conversion cycle) is also illustrated in a chart, Figure 19.3.
Figure 19.3 Cash Conversion Cycle
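The KSP Inc. figures from Scenario 1 can be run through the three turnover-based periods and Equation 19.4 in a short Python sketch (the function name and keyword parameters are illustrative):

```python
def cash_conversion_cycle(credit_sales, cogs, avg_inventory,
                          avg_receivables, avg_payables):
    """Equation 19.4: ICP + RCP - PDP, each period built
    from its turnover ratio as shown above."""
    icp = 365 / (cogs / avg_inventory)            # inventory conversion period
    rcp = 365 / (credit_sales / avg_receivables)  # receivables collection period
    pdp = 365 / (cogs / avg_payables)             # payables deferral period
    return icp + rcp - pdp

# KSP Inc., Scenario 1
ccc = cash_conversion_cycle(
    credit_sales=40_000_000,
    cogs=30_000_000,
    avg_inventory=3_000_000,
    avg_receivables=6_000_000,
    avg_payables=2_800_000,
)
print(round(ccc, 1))  # 57.2 days
```

The same function applied to the Scenario 2 balances later in this section shows how lower inventory and receivables and higher payables shorten the cycle.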
Shortening the inventory conversion period and the receivables collection period or lengthening the payables deferral period shortens the cash conversion cycle. Financial managers monitor and analyze each component of the cash conversion cycle. Ideally, a company’s management should minimize the number of days it takes to convert inventory to cash while maximizing the amount of time it takes to pay suppliers.
Quickly converting inventory to sales speeds up cash inflows and shortens the cash cycle, but it also could help reduce inventory losses as a result of obsolescence. Inventory becomes obsolete because of a variety of factors including time—inventory that has not been sold for a long period of time and is not expected to be sold in the future has to be written down or written off according to accounting rules. Write-offs of inventory can result in significant losses for a company. In the food business, inventory conversion periods take on great importance because of spoilage of perishable goods; in retailing, seasonal items lose value the longer they stay on the shelves.
Various inventory management techniques are used to shorten production time in manufacturing, and in retailing, strategies are used to reduce the amount of time a product sits on the shelf or is stored in the warehouse. Production techniques such as just-in-time inventory systems and marketing and pricing strategies can have an impact on the number of days in the inventory conversion cycle.
A relatively long receivables collection period means that the company is having trouble collecting cash from its customers, so financial managers should pursue whatever can be done to speed up collections while still offering competitive credit terms. For example, companies that convert paper invoicing to e-invoicing most likely reduce the average collection period by some number of days: if a bill is transmitted electronically, lag time is cut (no delays because of “snail mail”), and collections (payments back to the company from customers) may happen sooner. Other credit management techniques, some of which are explained in subsequent sections, can help minimize and control the receivables collection period.
The payables deferral period is the one element that probably cannot be optimized without violating credit terms. Certainly, cash balances can be conserved by delaying payments to vendors for as long as possible; however, payments on trade credit need to be made on time or the company’s relationship with the supplier can suffer. In a worst-case scenario, the company’s credit rating could also deteriorate.
A credit rating, also called a credit score, is a measure produced by an independent agency indicating the likelihood that a company will meet its financial obligations as they come due; it is an indication of the company’s ability to pay its creditors. Three business credit rating services are Equifax Small Business, Experian Business, and Dun & Bradstreet.
Think It Through
The Cash Conversion Cycle
Considering the previous Think It Through (Scenario 1), what if you could reduce inventory levels, hold lower accounts receivable balances, and rely more heavily on accounts payable while maintaining the same sales level?
Here’s Scenario 2. Because of better inventory management, credit and collections management, and negotiation of longer payment periods with vendors, King Sized Products (KSP) Inc. needs less investment in inventory and accounts receivable and is able to utilize a greater amount of trade credit financing.
Annual credit sales are \$40,000,000, average inventory is \$2,800,000, average accounts receivable are \$5,500,000, average accounts payable are \$3,300,000, and cost of goods sold is \$30,000,000. What is the cash conversion cycle?
Notice that the investment in inventory and accounts receivable is less and the average accounts payable is more with no change in credit sales and cost of goods sold—you would certainly anticipate a reduction in the cash conversion cycle. The improvement would be about 13 days (from 57.2 in Scenario 1 to 44.1 days in Scenario 2). Figure 19.4 shows a bar chart comparison of the two scenarios.
                                  Scenario 1    Scenario 2
Inventory Conversion Period          36.50         34.07
Receivables Collection Period        54.75         50.21
Payables Deferral Period             34.07         40.15
Cash Conversion Cycle                57.18         44.10
Table 19.2
Figure 19.4 Scenario 1 and 2 Comparison: Shortening the Cash Conversion Cycle
Link to Learning
A Harvard Business School blog post, How Amazon Survived the Dot-Com Bubble, discusses how Amazon managed its cash conversion cycle to the point where it was receiving payment for the things it sold before Amazon had to pay for them. In that way, Amazon had a negative cash conversion cycle (which is really a huge positive for a company trying to manage positive cash flow!).
Working Capital Needs by Industry
When comparing working capital needs by industry, you can see some variation. For example, some companies in the grocery business can have very low cash conversion cycles, while construction companies can have very high cash conversion cycles. And some companies, like those in the restaurant business, can have very low numbers and even have negative cash conversion cycles.
Working capital can also differ from one industry to another. An often cited general rule is that a current ratio of 2 is considered optimal. However, general rules of thumb must be treated with caution. A better benchmarking approach is to compare a firm’s ratios—current ratio and quick ratios—to the average of the industry in which the subject company operates.
Take, for example, a home construction company. Such a firm has a long operating cycle because of the production process (building homes), and the “storage of finished goods” can result in very high current ratios—such as 11 or 12 times current liabilities—whereas a retailer like Walmart or Target would have much lower current ratios.
In recent years, Walmart Stores Inc. (NYSE: WMT) has had a current ratio of around 0.9 and has been able to manage its working capital needs by efficient management of its supply chain, quick turnover of inventory, and a very small investment in accounts receivables.1 Big retailers like Walmart are effective at negotiating favorable payment terms with their vendors. The ability to generate consistent positive cash flow from operations allows a retailer like Walmart to operate with relatively low amounts of working capital.
The credit policies of a company also affect working capital. A company with a liberal credit policy will require a greater amount of working capital, as collection periods of accounts receivable are longer and therefore tie up more dollars in receivables.
Almost all businesses will have times when additional working capital is needed to pay bills, meet the payroll (salaries and wages), and plan for accrued expenses. The wait for the cash to flow into the company’s treasury from the collection of receivables and cash sales can be longer during tough times.
During the COVID-19 pandemic, the US government made paycheck protection program (PPP) loans available to help alleviate working capital problems for small and large businesses when the economy slowed because of shutdowns and social distancing. And although 60 percent of the PPP loan proceeds were to go to cover payroll-related costs, 40 percent could be used to bolster working capital to meet rent, utilities costs, and some interest expense while companies were “treading water”—waiting for positive cash flow to pick up under a recovery.2
It isn’t just during downturns that working capital is strained. Growing companies, even if they are extremely profitable, need additional working capital as they ramp up operations by acquiring raw materials, component parts, supplies, or other forms of inventory; hiring temporary or additional employees; and taking on new projects. Whenever additional resources are needed, working capital is also needed.
Some of the current assets and expenditures needed in a growing company may need to be financed from sources other than spontaneous financing—trade credit (accounts payable). Forms of external financing such as lines of credit, short-term bank loans, inventory-based loans (also called floor planning), and the factoring of accounts receivable might have to be relied upon.
Learning Objectives
By the end of this section, you will be able to:
• Compute the cost of trade credit.
• Define cash discount.
• Define discount period.
• Define credit period.
Trade credit, also known as accounts payable, is a critical part of a business’s working capital management strategy. Trade credit is granted by vendors to creditworthy companies when those companies purchase materials, inventory, and services.
A company’s purchasing system is usually integrated with other functions such as production planning and sales forecasting. Purchasing managers search for and evaluate vendors, negotiate order quantities, and prepare purchase orders. In carrying out the purchasing process, credit terms are granted by the company’s vendors, and purchases of inventory and services can be made on trade credit accounts—allowing the purchaser time to pay. The purchaser carries an accounts payable balance until the account is paid.
Trade credit is referred to as spontaneous financing, as it occurs spontaneously with the gearing up of operations and the additional investment in current assets. Think of it this way: If sales are increasing, so too is production. Increased sales mean more current assets (accounts receivable and inventory), and increased sales mean increases in accounts payable (financing happening spontaneously with increased sales and inventory purchases). Compared to other financing arrangements, such as lines of credit and bank loans, trade credit is convenient, simple, and easy to use.
Once a company is approved for trade credit, there is no paperwork or contracts to sign, as is the case with various forms of bank financing. Invoices specify the credit terms, and there is usually no interest expense associated with trade credit. Accounts payable is a type of obligation that is interest-free and is distinguished from debt obligations, such as notes payable, that require the creditor to pay back principal and interest.
How Trade Credit Works
Trade credit is common in B2B (business to business) transactions and is analogous to consumer spending using a credit card. With a credit card, a consumer opens an account with a credit limit. Most trade credit is offered to a company with an open account that has a credit limit up to which the company can purchase goods or services without having to pay the cash up front. As long as the payments are made in accordance with the terms of the agreement (also called credit terms), no interest or additional fees are charged on the credit balance except possibly for a fee for late payment.
Initially, the vendor’s credit department approves both a trade credit limit and credit payment terms (i.e., number of days after the invoice date that payment is due). Timely payments on accounts payable (trade credit) helps create a credit history for the purchasing firm.
Trade Credit Terms
Trade credit arrangements often carry credit terms that offer an incentive, called a discount, for a company (the buyer) to pay its bill within a relatively short period of time. Net terms, also referred to as the full credit period, are the number of days that a business (purchaser) has before they must pay their invoice. A common net term is Net 30, with payment due in full within 30 days of the invoice.
Many vendors also offer cash discounts to customers that pay their bill early. A company’s invoice that specifies payment terms of “2/10 n/30” (stated as: “two ten net 30”) would allow a 2 percent discount if the buyer’s account balance is paid within 10 days of the invoice date; otherwise, the net amount owed would be due in 30 days. The “10 days” in the example is the discount period—the number of days the buyer has to take advantage of the cash discount for an early payment, also known as quick payment.
For example, Jackson’s Premium Jams Inc. received a \$10,500 invoice for the purchase of jelly jars. The invoice has payment terms of 2/10 n/30. Jackson’s pays the bill within 10 days of the invoice date. Jackson’s payment would be $\$10{,}500 \times (100\% - 2\%) = \$10{,}290$. The effect of taking a discount because of a quick payment is a lowering of the cost of inventory in the case of purchases of materials (for a manufacturer), merchandise (for a retailer or wholesaler), and operating expenses (for any company that “buys” services using trade credit). In Cost of Trade Credit, there is an example that shows the high annualized opportunity cost (36.73 percent) of not taking advantage of cash discounts.
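The Jackson’s Premium Jams payment can be checked with a one-line Python helper (the function name is illustrative):

```python
def discounted_payment(invoice_amount, discount_pct):
    """Amount due if the invoice is paid within the discount period."""
    return invoice_amount * (1 - discount_pct / 100)

# Jackson's Premium Jams: $10,500 invoice on 2/10 n/30 terms, paid early
payment = discounted_payment(10_500, 2)
print(round(payment, 2))  # 10290.0
```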
Concepts In Practice
Trade Credit of International Trade
When international trade occurs, two important documents are commonly required: a letter of credit and a bill of lading. A letter of credit is issued by a financial institution on behalf of the foreign buyer (importer). The bill of lading is a legal document that gives proof of a contract between a transportation company and the buyer and is one important piece of documentation that allows the buyer to draw on the letter of credit. A bill of lading serves as a document of title and proof of receipt of goods by the shipper.
The letter of credit secures a promise of payment to the seller (exporter) provided that the terms of the sale are met. For an international trade transaction, the letter of credit is the main mechanism that establishes a liability for the buyer. Instead of a trade payable, the buyer uses a line of credit from a bank.
Cost of Trade Credit
Trade credit is often referred to as a no-cost type of financing. Unlike with other credit arrangements (e.g., bank loans, lines of credit, and commercial paper), there is usually no interest expense associated with trade credit, and as long as your account does not become delinquent, there are no special fees. Some accounts payable arrangements specify an interest penalty or a late fee when the account goes delinquent, but as long as payments are made on time, trade credit is thought of as a low-cost source of working capital.
However, there is one possible cost associated with trade credit for companies that don’t take advantage of cash discounts when offered by sellers. Using accounts payable to purchase goods and services can involve an opportunity cost—a cost of the forgone opportunity of making a quick payment and benefiting from a cash discount. A business that does not take advantage of a cash discount for early payment of trade credit will pay more for goods and services than a business that routinely takes advantage of discounts.
The annual percentage rate of forgoing quick payment discounts can be estimated with the following formula:
$\text{APR of Forgoing Quick Payment Discounts} = \dfrac{360}{\text{Full Credit Period} - \text{Discount Period}} \times \dfrac{\text{Discount \%}}{100\% - \text{Discount \%}}$
19.16
Example: Novelty Accessories Inc. (NAI) purchases products from a vendor that offers credit payment terms of 2/10, net 30. The annual cost to NAI of not taking advantage of the discount for quick payment is 36.73 percent.
$\text{APR of Forgoing Quick Payment Discounts} = \dfrac{360}{30 - 10} \times \dfrac{2\%}{100\% - 2\%} = \dfrac{360}{20} \times \dfrac{2\%}{98\%} = 36.73\%$
19.17
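The NAI calculation above can be expressed as a short Python sketch of Equation 19.16, using the same 360-day-year convention (the function name is illustrative):

```python
def apr_of_forgoing_discount(full_credit_period, discount_period, discount_pct):
    """Annualized opportunity cost of skipping a quick-payment
    discount (Equation 19.16, 360-day year)."""
    days_gained = full_credit_period - discount_period
    return (360 / days_gained) * (discount_pct / (100 - discount_pct))

# NAI's vendor terms: 2/10, net 30
apr = apr_of_forgoing_discount(30, 10, 2)
print(f"{apr:.2%}")  # 36.73%
```

An annualized cost near 37 percent is far above typical short-term borrowing rates, which is why financial managers generally take the discount even if it means borrowing to do so.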
Learning Objectives
By the end of this section, you will be able to:
• Explain why firms hold cash.
• List instruments available to a financial manager for investing cash balances.
Cash management means efficiently collecting cash from customers and managing cash outflows. To manage cash, the cash budget—a forward-looking document—is an important planning tool. To understand cash management, you must first understand what is meant by cash holdings and the motivations (reasons) for holding cash. A cash budget example is covered in Using Excel to Create the Short-Term Plan.
Cash Holdings
The cash holdings of a company are more than the currency and coins in the cash registers or the treasury vault. Cash includes currency and coins, but usually those amounts are insignificant compared to the cash holdings of checks to be deposited in the company’s bank account and the balances in the company’s checking accounts.
Motivations for Holding Cash
The initial answer to the question of why companies hold cash is pretty obvious: because cash is how we pay the bills—it is the medium of exchange. The transactional motive of holding cash means that checks and electronic funds transfers are necessary to meet the payroll (pay the employees), pay the vendors, satisfy creditors (principal and interest payments on loans), and reward stockholders with dividend payments. Cash for transactions is one reason to hold cash, but there is another reason—one that stems from uncertainty and the precautions you might take to be ready for the unexpected.
Just as you keep cash balances in your checking and savings accounts and even a few dollars in your wallet or purse for unexpected expenditures, cash balances are also necessary for a business to provide for unexpected events. Emergencies might require a company to write a check for repairs, for an unexpected breakdown of equipment, or for hiring temporary workers. This motive of holding cash is called the precautionary motive.
Some companies maintain a certain amount of cash instead of investing it in marketable securities or in upgrades or expansion of operations. This is called the speculative motive. Companies that want to quickly take advantage of unexpected opportunities want to be quick to purchase assets or to acquire a business, and a certain amount of cash or quick access to cash is necessary to jump on an opportunity.
Sometimes cash balances may be required by a bank with which a company conducts significant business. These balances are called compensating balances and are typically a minimum amount to be maintained in the company’s checking account.
For example, Jack’s Outback Restaurant Group borrowed \$500,000 from First National Bank and Trust. As part of the loan agreement, First National Bank required Jack’s to keep at least \$50,000 in its company checking account as a way of compensating the bank for other corporate services it provides to Jack’s Outback Restaurant Group.
Cash Alternatives
Cash that a company has that is in excess of projected financial needs is often invested in short-term investments, also known as cash equivalents (cash alternatives). The reason for this is that cash does not earn a rate of return; therefore, too much idle cash can affect the profitability of a business.
Table 19.3 shows a list of typical investment vehicles used by corporations to earn interest on excess cash. Financial managers search for opportunities that are safe and highly liquid and that will provide a positive rate of return. Cash alternatives, because of their short-term maturities, have low interest rate risk (the risk that an investment’s value will decrease because of changes in market interest rates). In that way, prudent investment of excess cash follows the risk/return trade-off; in order to achieve safe returns, the returns will be lower than the possible returns achieved with risky investments. Cash alternative investments are not committed to the stock market.
• US Treasury bills: Obligations of the US government with maturities of 3 and 6 months
• Federal agency securities: Obligations of federal government agencies such as the Federal Home Loan Bank and the Federal National Mortgage Association
• Certificates of deposit: A type of savings deposit, issued by banks, that pays interest
• Commercial paper: Short-term promissory notes issued by large corporations with maturities ranging from a few days to a maximum of 270 days
Table 19.3 Typical Cash Equivalents
Figure 19.5 shows a note within the 2021 Annual Report (Form 10-K) of Target Corporation. The note discloses the amount of Target’s cash and cash equivalent balances of \$8,511,000,000 for January 30, 2021, and \$2,577,000,000 for February 1, 2020.
Figure 19.5 Note from Target Corporation 2021 10-K Filing (source: US Securities and Exchange Commission/EDGAR)
In that note, which is a supplement to the company’s balance sheet, receivables from third-party financial institutions are also considered a cash equivalent. That is because purchases by Target’s customers who use their credit cards (e.g., VISA or MasterCard) create very short-term receivables—amounts that Target is waiting to collect but that are very close to a cash sale. So instead of being reported as accounts receivable—a line item on the Target balance sheet that is separate from cash and cash equivalents—these amounts receivable from third-party financial institutions are considered part of cash and cash equivalents and are a very liquid asset. For example, the amount of \$560,000,000 for January 30, 2021, is considered a cash equivalent since the settlement of these accounts will happen in a day or two, with cash deposited in Target’s bank accounts. When a retailer sells product and accepts a credit card such as VISA, MasterCard, or American Express, the cash collection happens very soon after the credit card sale—typically within 24 to 72 hours.3
Companies also invest excess funds in marketable securities. These are debt and equity investments such as corporate and government bonds, preferred stock, and common stock of other entities that can be readily sold on a stock or bond exchange. Ford Motor Company has this definition of marketable securities in its 2019 Annual Report (Form 10-K):
“Investments in securities with a maturity date greater than three months at the date of purchase and other securities for which there is more than an insignificant risk of change in value due to interest rate, quoted price, or penalty on withdrawal are classified as Marketable securities.”4
Learning Objectives
By the end of this section, you will be able to:
• Discuss how decisions on extending credit are made.
• Explain how to monitor accounts receivables.
For any business that sells goods or services on credit, effective accounts receivable management is critical for cash flow and profitability planning and for the long-term viability of the company. Receivables management begins before the sale is made when a number of factors must be considered.
• Can the customer be approved for a credit sale?
• If the credit is approved, what will be the credit terms (i.e., how long do we give customers to pay their bills)?
• Will there be a cash discount for quick payment?
• How much credit should be extended to each customer (credit limit)?
Accounts receivable is not about accepting credit cards. Credit card sales are not technically accounts receivable. When a credit card is accepted, it means that the credit card company (e.g., VISA, MasterCard, or American Express) will guarantee the payment. The cash will be deposited in the merchant’s bank account in a very short period of time.
When a business makes a sale on account, management (e.g., a credit manager or analyst) does its best to distinguish between customers who have a high likelihood of paying and customers who have a low likelihood. Customers with low credit risk are approved; the decision is based on an effective analysis of creditworthiness.
Creditworthiness is judged by looking at a number of factors including an evaluation of the customer’s financial statements, financial ratios, and credit reports (credit scores) based on a customer’s payment history on credits owed to other firms. If a company has a prior relationship with a customer seeking trade credit, the customer’s payment history with the firm is also carefully evaluated before additional credit is granted.
Link to Learning
Corporate Finance Institute
Credit managers use various tools and techniques to evaluate creditworthiness of customers. The Corporate Finance Institute’s website states that “the ‘5 Cs of Credit’ is a common phrase used to describe the five major factors [character, capacity, collateral, capital, and conditions] used to determine a potential borrower’s creditworthiness.” It goes on to say that “a credit report provides a comprehensive account of the borrower’s total debt, current balances, credit limits, and history of defaults and bankruptcies, if any.”5 More on the 5 Cs of credit can be found on the Corporate Finance Institute’s website.
Determining the Credit Policy
A company’s credit policy encompasses rules of credit granting and procedures for the collections of accounts. It’s how a company will process credit applications, utilize credit scoring and credit bureaus, analyze financial statements, make credit limit decisions, and conduct collection efforts when accounts become delinquent (still outstanding after their due date).
Establishing Credit Terms
Trade credit terms were discussed earlier. Recall that part of the terms and conditions of a sale are the credit terms—elements of a sales agreement (contract) that indicate when payment is due, possible discounts (for quick payments), and any late fee charges.
When open credit is extended for a sales transaction, an agreement is made as to the length of time for which credit is to be granted (the payment period) and any discount for early payment. Although companies are free to establish credit terms as they see fit, most companies look to the practice of the particular industry in which they operate. The credit terms offered by the competition are a factor. Net terms usually range between 30 days and 90 days, depending on the industry. Discounts for early payments also differ and are typically from 1 to 3 percent.
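The cost of passing up an early-payment discount can be annualized with a standard approximation. The Python sketch below uses hypothetical terms of 2/10, net 30 (a 2 percent discount if paid within 10 days, full amount due in 30 days) to show why forgoing even a small discount is expensive:

```python
def annualized_cost_of_forgoing_discount(discount_pct, discount_days, net_days):
    """Approximate annual cost (as a decimal) of paying on the net due date
    instead of taking the early-payment discount."""
    d = discount_pct / 100
    # Number of "borrowing periods" of this length in a year
    periods_per_year = 365 / (net_days - discount_days)
    return (d / (1 - d)) * periods_per_year

# Hypothetical terms of 2/10, net 30
print(round(annualized_cost_of_forgoing_discount(2, 10, 30), 4))  # 0.3724
```

A 2 percent discount for paying 20 days early works out to roughly 37 percent per year, which is why buyers often borrow at lower rates in order to take the discount.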
Establishing the credit terms offered can be thought of as a decision process similar to setting a price for products and services. Just as a price is the result of market forces, so too are credit terms. If credit terms are not competitive within the industry, sales can suffer. Typically, companies follow standard industry credit terms. If most companies in an industry offer a discount for early payments, then most companies will follow suit and also offer an equal discount.
Once credit terms are established, they can be changed based on both marketing strategies and financial management goals. For example, discounts for early payments can be more generous, or the full credit period can be extended to stimulate additional sales. Both discount periods and full credit periods can be tightened to try to speed up collections. The establishment of and changes to credit terms are usually made in consultation with the sales and financial management departments.
Monitoring Accounts Receivables
Financial managers monitor accounts receivable using some basic tools. One of those tools is the accounts receivable aging schedule (report). To prepare the aging schedule, customer account balances are sorted by age.
An account receivable begins its life as a credit sale. The age of a receivable is the number of days that have transpired since the credit sale was made (the date of the invoice). For example, if a credit sale was made on June 1 and is still unpaid on July 15, that receivable is 45 days old. Aging of accounts is thought to be a useful tool because of the idea that the longer the time owed, the greater the possibility that individual accounts receivable will prove to be uncollectible.
An aging schedule is a report that organizes the outstanding (unpaid) receivable balances into age categories. The receivables are grouped by the length of time they have been outstanding, and an uncollectible percentage is assigned to each category; the longer a category has been outstanding, the higher the percentage assigned to it. For example, a category might consist of accounts receivable that are 0–30 days past due and is assigned an uncollectible percentage of 6 percent. Another category might be 31–60 days past due and is assigned an uncollectible percentage of 15 percent. All categories of estimated uncollectible amounts are summed to get a total estimated uncollectible balance.
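The estimate described above reduces to a weighted sum. In the sketch below, the category balances and uncollectible percentages are hypothetical (the 6 and 15 percent figures echo the examples in the text):

```python
# Hypothetical aging categories: (label, outstanding balance, uncollectible %)
aging = [
    ("Not yet due",            100_000, 0.02),
    ("0-30 days past due",      50_000, 0.06),
    ("31-60 days past due",     25_000, 0.15),
    ("Over 60 days past due",   10_000, 0.40),
]

# The older the category, the higher the percentage assumed uncollectible
estimated_uncollectible = sum(balance * pct for _, balance, pct in aging)
print(round(estimated_uncollectible, 2))  # 12750.0
```

The total (here, \$12,750) is the amount management would use as its estimate for the allowance for doubtful accounts.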
The aging of accounts is useful to the credit and collection managers, both from a global view—estimating how much of the accounts receivable asset might be bad debts—and on a micro basis—being able to drill down to see which specific customers are slow paying or delinquent so as to implement collection tactics.
Accountants and auditors also use the aging of accounts to determine a reasonable amount to be reported as bad debt expense and to establish a sufficient balance in the allowance for doubtful accounts. Bad debt expense is a cost of doing business because some customers will not pay the amounts they owe (accounts receivable), while the allowance for doubtful accounts is a contra-asset (it will be deducted from accounts receivable on the balance sheet) that contains management’s best estimate of how much of its accounts receivable will never be collected.
In Figure 19.6, Foodinia Inc.’s accounts receivable aging report shows that the total receivables balance is \$189,000. The company splits its accounts into four age categories: not due, 30 to 60 days past due, 61 to 90 days past due, and more than 90 days past due. Of the \$189,000 owed to Foodinia by its customers, \$75,500 (\$189,000 less \$113,500) of invoices have been outstanding (not paid yet) beyond their due dates.
Figure 19.6 Foodinia Inc. Aging of Accounts Receivable Schedule
In addition to preparing aging schedules, financial managers also use financial ratios to monitor receivables. The accounts receivable turnover ratio determines how many times (i.e., how often) accounts receivable are collected during an operating period and converted to cash. A higher number of times indicates that receivables are collected quickly. In contrast, a lower accounts receivable turnover indicates that receivables are collected at a slower rate, taking more days to collect from a customer.
Another receivables ratio is the number of days’ sales in receivables ratio, also called the receivables collection period—the expected days it will take to convert accounts receivable into cash. A comparison of a company’s receivables collection period to the credit terms granted to customers can alert management to collection problems. Both the accounts receivable turnover ratio and receivables collection period are covered, including the formulas for calculating the ratios, in the previous section of this chapter.
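Both ratios follow directly from their definitions: turnover is net credit sales divided by average accounts receivable, and the collection period is 365 days divided by the turnover. A minimal sketch with hypothetical figures:

```python
def receivables_turnover(net_credit_sales, avg_accounts_receivable):
    """How many times per period receivables are collected and replaced."""
    return net_credit_sales / avg_accounts_receivable

def collection_period_days(net_credit_sales, avg_accounts_receivable, days=365):
    """Expected number of days to convert accounts receivable into cash."""
    return days / receivables_turnover(net_credit_sales, avg_accounts_receivable)

# Hypothetical figures: $1.2M net credit sales, $150,000 average receivables
print(receivables_turnover(1_200_000, 150_000))    # 8.0 times per year
print(collection_period_days(1_200_000, 150_000))  # 45.625 days
```

If this firm sold on terms of net 30, a 45.6-day collection period would alert management to a collection problem.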
Accounts Receivables and Notes Receivable
An account receivable is an informal arrangement between a seller (a company) and a customer. Accounts receivable are usually paid within a month or two. They don’t require any complex paperwork, are evidenced by an invoice, and do not involve interest payments. In contrast, a note receivable is a more formal arrangement that is evidenced by a legal contract called a promissory note specifying the payment amount, the payment date, and interest.
The length of a note receivable can be for any time period including a term longer than the typical account receivable. Some notes receivable have a term greater than a year. The assets of a bank include many notes receivable (a loan made by a bank is an asset for the bank).
A note receivable can be used in exchange for products and services or in exchange for cash (usually in the case of a financial lender). Sometimes a company might request that a slow-paying customer sign a promissory note to further secure the receivable, charge interest, or add some type of collateral to the arrangement, in which case the receivable would be called a secured promissory note. Several characteristics of notes receivable further define the contract elements and scope of use (see Table 19.4).
Accounts Receivable
• An informal agreement between customer and company
• Receivable in less than one year or within a company’s operating cycle
• Does not include interest

Notes Receivable
• A legal contract with established payment terms
• Receivable beyond one year and outside of a company’s operating cycle
• Includes interest
• Could stipulate collateral
Table 19.4 Key Feature Comparison of Accounts Receivable and Notes Receivable
Learning Objectives
By the end of this section, you will be able to:
• Outline the costs of holding inventory.
• Outline the benefits of holding inventory.
Financial managers must consider the impact of inventory management on working capital. Earlier in the chapter, the concept of the inventory conversion cycle was covered. The number of days that goods are held by a business is one of the focal points of inventory management.
Managers look to minimize inventory balances and raise inventory turnover ratios while trying to balance the needs of operations and sales. Purchasing personnel need to order enough inventory to “feed” production or to stock the shelves. The sales force wants to meet or surpass their sales budgets, and the operations people need inventory for the factories, warehouses, and e-commerce sites.
The days in inventory ratio measures the average number of days between acquiring inventory (i.e., purchasing merchandise) and its sale. This ratio is a metric to be watched and monitored by inventory managers and, if possible, minimized. A high days in inventory ratio could mean “aging” inventory. Old inventory could mean obsolescence or, in the case of perishable goods, spoilage. In either case, old inventory means losses.
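As a rough sketch, days in inventory is 365 divided by inventory turnover (cost of goods sold divided by average inventory). The figures below are hypothetical:

```python
def days_in_inventory(cost_of_goods_sold, average_inventory, days=365):
    """Average number of days a unit sits in inventory before it is sold."""
    turnover = cost_of_goods_sold / average_inventory  # inventory turnover ratio
    return days / turnover

# Hypothetical figures: $730,000 cost of goods sold, $90,000 average inventory
print(round(days_in_inventory(730_000, 90_000), 1))  # 45.0
```

Tracking this figure over time is what lets managers spot inventory that is beginning to "age."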
Imagine a company selling high-tech products such as consumer electronics. A high days in inventory ratio could mean that technologically obsolete products will be sold at a discount. There are similar issues with older inventory in the fashion industry. Last year’s styles are not as appealing to the fashion-conscious consumer and are usually sold at significant discounts. In the accounting world, lower of cost or market value is a test of inventory value to determine if inventory needs to be “written down,” meaning that the company takes an expense for inventory that has lost significant value. Lower of cost or market is required by Generally Accepted Accounting Principles (GAAP) to state inventory valuations at realistic and conservative values.
Inventory is a very significant working capital component for many companies, such as manufacturers, wholesalers, and retailers. For those companies, inventory management involves management of the entire supply chain: sourcing, storing, and selling inventory. At its very basic level, inventory management means having the right amount of stock at the right place and at the right time while also minimizing the cost of inventory. This concept is explained in the next section.
Inventory Cost
Controlling inventory costs minimizes working capital needs and, ultimately, the cost of goods sold. Inventory management impacts profitability; minimizing cost of goods sold means maximizing gross profit (Gross Profit = Net Sales Less Cost of Goods Sold).
There are four components to inventory cost:
• Purchasing costs: the invoice amount (after discounts) for inventory; the initial investment in inventory
• Carrying costs: all costs of having inventory in stock, which includes storage costs (i.e., the cost of the space to store the inventory, such as a warehouse), insurance, inventory obsolescence and spoilage, and even the opportunity cost of the investment in inventory
• Ordering costs: the costs of placing an order with a vendor; the cost of a purchase and managing the payment process
• Stockout costs: an opportunity cost incurred when a customer order cannot be filled and the customer goes elsewhere for the product; lost revenue
Minimizing total inventory costs requires a combination of many strategies, the details of which are beyond the scope of this text. Concepts such as just-in-time (JIT) inventory practices and economic order quantity (EOQ) are tools used by inventory managers, both of which help keep a company lean (minimizing inventory) while making sure the inventory resources are in place in time to complete the sale.
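The classic EOQ model balances ordering costs against carrying costs: EOQ = sqrt(2DS/H), where D is annual demand in units, S is the cost of placing one order, and H is the annual carrying cost per unit. A sketch with hypothetical inputs:

```python
import math

def economic_order_quantity(annual_demand_units, cost_per_order, carrying_cost_per_unit):
    """EOQ: the order size that minimizes combined ordering and carrying costs."""
    return math.sqrt(2 * annual_demand_units * cost_per_order / carrying_cost_per_unit)

# Hypothetical inputs: 10,000 units of annual demand, $50 per order,
# $4 to carry one unit in inventory for a year
print(economic_order_quantity(10_000, 50, 4))  # 500.0 units per order
```

Ordering 500 units at a time (20 orders per year in this example) keeps the sum of ordering and carrying costs at its minimum.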
Benefit of Holding Inventory
Brick-and-mortar stores need goods in stock so that the customer can see and touch the product and be able to acquire it when they need it. Customers are disappointed if they cannot see and touch the item or if they find out upon arrival at the store that it is out of stock.
Customers of all kinds don’t want to wait for the delivery of a purchase. We have become accustomed to Amazon orders being delivered to the door the next day. Product fulfillment and availability is important. Inventory must be in stock, or sales will be lost.
In manufacturing, the inventory of materials and component parts must be in place at the start of the value chain (the conversion process), and finished goods need to be ready to meet scheduled shipments. Holding sufficient inventory meets customer demand, whether it is products on the shelves or in the warehouse that are ready to move through the supply chain and into the hands of the customer.
Learning Objectives
By the end of this section, you will be able to:
• Create a one-year budget.
• Create a cash budget.
A cash budget is a tool of cash management and therefore assists financial managers in the planning and control of a critical asset. The cash budget, like any other budget, looks to the future. It projects the cash flows into and out of the company. The budgeting process of a company is really an integrated process—it links a series of budgets together so that company objectives can be achieved. For example, in a manufacturing company, a series of budgets such as those for sales, production, purchases, materials, overhead, selling and administrative costs, and planned capital expenditures would need to be prepared before cash needs (cash budget) can be predicted.
Just as you might budget your earnings (salary, business income, investment income, etc.) to see if you will be able to cover your expected living expenses and planned savings amounts, financial managers prepare cash budgets to increase the odds that sufficient cash will be available in the months ahead—specifically, to
• meet payrolls;
• allocate dollars for contingencies and emergencies;
• analyze if planned collections and disbursements policies and procedures result in adequate cash balances; and
• plan for borrowings on lines of credit and short-term loans that might be needed to balance the cash budget.
A cash budget is a model that often goes through several iterations before managers can approve it as the plan going forward. Changes in any of the “upstream” budgets—budgets that are prepared before the cash budget, such as the sales, purchases, and production budgets—may need to be revised because of changing assumptions. New economic forecasts and even cost-cutting measures will require a revision of the cash budget.
Although a budget might be prepared for each month of a future 12-month period, such as the upcoming fiscal year, a rolling budget is often used. A rolling budget changes often as the planning period (e.g., a fiscal year) plays out. When one month ends, another month is added to the end (the next column) of the budget. For example, if in your budget January is the first month of the planning period, once January is over, next January’s cash budget column would be added—right after December’s column (at the far right of the budget).
Sample One-Year (Annual) Operating Budget
Preparing an annual operating budget can be a complex task. In essence, a company budget is a series of budgets, many of which are interrelated.
The sales budget is prepared first and has an impact on many other budgets. Take the example of a production budget of a manufacturer. The sales budget impacts what needs to be produced (production budget), and the production budget influences planned purchases of material (purchases budget), overhead resources (overhead budget), and the amount of labor costs for the year ahead (direct labor budget).
For a merchant (such as a wholesaler or retailer), the annual budget would be less complex than that of a manufacturing firm but would still require an inventory purchases budget and an operating expense budget (such as selling and administrative expenses). For a service firm, a purchase budget for inventory would not be necessary, but an operating budget would be. All businesses need a cash budget, which is the topic of the next section of this chapter.
The example operating budget presented here is of a merchandising company. Budgets are prepared following a process that begins with a sales (or revenue) forecast. The sales forecast is normally based on information obtained from both internal and external sources and predicts the number of units to be sold in the planning period—usually one year into the future.
A company’s management, in consultation with its marketing and sales executives, would prepare a sales budget by making assumptions about the number of units that are expected to be sold and the prices that will be charged. From the sales budget, projections are made as to cash receipts each month, and therefore assumptions have to be made as to how much of each month’s sales will be cash sales and how much cash will flow into the company from the collection of credit sales (including cash flow in from the prior month’s sales). Figure 19.7 provides an example of a sales budget and projected accounts receivable collections and cash sales for the months of January through December. Keep in mind that projected monthly sales amounts are not equal to cash collected from sales. Because of sales on credit, some cash from sales lags credit sales, as collections can extend beyond the month of sale. Credit terms such as net 30 (net amount owed to be paid in 30 days) have to be considered when developing a forecasted cash collection pattern.
Figure 19.7 Example of a Sales and Collections Budget
Download the spreadsheet file containing key Chapter 19 Excel exhibits.
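The lag between credit sales and collections can be modeled with a simple collection pattern. The sketch below assumes, hypothetically, that 30 percent of each month's sales are cash sales collected immediately and the remaining 70 percent are credit sales collected the following month (net 30):

```python
# Hypothetical pattern: 30% of a month's sales are cash sales collected in the
# month of sale; the remaining 70% are collected the following month (net 30).
sales = [100_000, 120_000, 110_000]  # projected sales for Jan, Feb, Mar
CASH_SHARE, CREDIT_SHARE = 0.30, 0.70

collections = []
for month, amount in enumerate(sales):
    cash_now = amount * CASH_SHARE
    # January's collections omit December's credit sales (outside this sketch)
    prior_credit = sales[month - 1] * CREDIT_SHARE if month > 0 else 0
    collections.append(cash_now + prior_credit)

print([round(c) for c in collections])  # [30000, 106000, 117000]
```

Note how February's cash collections (\$106,000) differ from February's projected sales (\$120,000): most of the cash arriving in February was actually earned in January.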
Sales budgets “drive” the preparation of other budgets. If sales are expected to increase, purchases of inventory and some operating expenses would also increase. To meet the demand for goods and services (as defined in the sales budget), a purchases (inventory) budget would be prepared. In this example (Figure 19.8), the purchases budget shows projected purchases of inventory (merchandise) and the projected payments (also called disbursements) for each month.
Cash outflows as a result of purchases often do not equal the projected purchase amount. That is because payments for purchases are usually on credit (accounts payable), and so purchases for one month typically get spread out over a period of time that encompasses the current month and the month (or months) thereafter. To keep this example simple, the assumption is that the purchases are paid for in the following month (an average days payable outstanding of 30 days). However, in other cases, payment patterns may be based on other payment periods such as 45, 60, or even 90 days, depending on the trade credit terms.
Figure 19.8 Purchases Budget
An operating expense budget is prepared next and is basically a prediction of the selling and administrative expenditures of the company. Notice in Figure 19.9 that in the operating expense budget, cost of goods sold (an expense) is not included, nor are noncash expenses such as depreciation. The cash outlays related to goods sold, at least in a merchandising operation, are accounted for in the purchases budget (payments for purchases of inventory).
With the sales, purchases, and operating expense budgets prepared, the cash budget can be prepared. Some of the “inputs” to the cash budget are from the sales (collections of cash), purchases (payments), and the operating expense budget (cash expenditures for selling and administrative expenses). A sample cash budget and a discussion of its preparation follows in the next section of this chapter.
Sample Cash Budget
A cash budget is the last budget to be prepared and is often part of the financial budget (cash budget, budgeted income statement, and budgeted balance sheet). The purpose of the cash budget is to estimate cash flows, to help ensure sufficient cash balances are maintained during the planning period, and to plan for external financing during periods of cash deficits.
When a budget is prepared in Excel, cash budget analysts can play “what if” with different scenarios to see when cash surpluses and deficits are expected. A cash surplus means that funds can be invested in marketable securities to earn a rate of return, while a cash deficit means that financing, such as a line of credit, will be necessary (assuming forecasts are accurate).
Although the example shown in Figure 19.10 is a monthly cash budget, a cash budget could be prepared using any useful time elements: weekly, monthly, or quarterly.
One common practice is to use a rolling cash budget. A rolling cash budget is continually updated to add a new budget period, such as a month’s worth of cash flow activity, as the most recent budgeted month expires. For example, assume that a 12-month cash budget is prepared for a period covering January 20X1 to December 20X1. Once the month of January 20X1 has concluded, the 12-month planning period continues by adding January 20X2 to the last column of the budget. The rolling monthly cash budget is an extension of the initial cash budget model, adding one month and thereby always extending cash flow projections one year into the future.
Figure 19.9 Operating Expenses Budget
Figure 19.10 Sample Cash Budget
Using Figure 19.10 as an example, Table 19.5 shows the formulas that form the skeleton of a monthly cash budget.
Beginning Cash Balance: This is the amount of cash the company expects to have on the first day of the month. For example, in Figure 19.10, cell B2 is the amount of cash on Jan. 1 to start the year (the planning period). The remaining beginning cash balances for the months February through December are the ending cash balances of the previous month. For example, February’s beginning cash balance (C2) is referenced from cell B9 (ending cash balance for January).

Cash Collections: These are the projected cash inflows from collections from customers (accounts receivable), cash sales, and any other significant cash inflows, such as dividends and interest on investments or sale of fixed assets. For example, the Cash Collections shown in the Sample Cash Budget (Figure 19.10) are referenced from the Sales and Collections Budget (Figure 19.7). January’s Cash Collections (cell B3) in the Sample Cash Budget are from cell B12 of the Sales and Collections Budget.

Cash Disbursements: Cash disbursements are the projected cash outflows, such as those for operating expenses and payment of payables. For example, Cash Disbursements in the Sample Cash Budget for January (Figure 19.10) are the sum of January’s payments for purchases in the Purchases Budget (Figure 19.8, cell B3) and the January operating expenses (Operating Expenses Budget, Figure 19.9, cell B12).

Net Cash Flow: The formula for net cash flow is Net Cash Flow = Cash Collections - Cash Disbursements. For example, in Figure 19.10, the January Net Cash Flow is calculated in cell B5.

Preliminary Ending Cash Balance: Beginning Cash Balance + or - Net Cash Flow. This is the projected cash balance before taking into account the target cash balance to be maintained (minimum cash balance). In the Sample Cash Budget (Figure 19.10), the preliminary ending cash balance formula for January is =B2+B5 (B2 is the Beginning Cash Balance and B5 is the Net Cash Flow for the month).

Less: Minimum Cash Balance: This is a target cash balance that management sets; it is the minimum amount of cash that should be maintained by the company (in Figure 19.10, cells B2:G7 and B17:G17).

Cash Surplus (Deficiency): A cash surplus means that cash can be invested in marketable securities. A cash deficiency means that some type of financing, such as a line of credit or bank loan, will be needed to provide enough cash for operations and to maintain a minimum cash balance. This number is found by subtracting the minimum cash balance from the preliminary ending cash balance. For example, the cash surplus for January in Figure 19.10 is calculated with this formula: =B6-B7. Notice that all months in the Sample Cash Budget show a surplus except for August’s forecast of a deficit, which may require drawing on a line of credit to provide enough cash to meet obligations in August.
Table 19.5 Excel Formulas for Monthly Cash Budget
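The formulas in Table 19.5 translate directly into code. The sketch below uses hypothetical inputs over a three-month horizon and, for simplicity, omits the borrowing that would cover a deficiency:

```python
# Hypothetical inputs for a three-month horizon
beginning_cash = 25_000
minimum_cash   = 15_000
collections    = [120_000,  80_000, 130_000]
disbursements  = [100_000, 120_000, 105_000]

rows = []
cash = beginning_cash
for inflow, outflow in zip(collections, disbursements):
    net_cash_flow = inflow - outflow            # Cash Collections - Cash Disbursements
    preliminary_ending = cash + net_cash_flow   # before the minimum-balance test
    surplus_or_deficit = preliminary_ending - minimum_cash
    rows.append((net_cash_flow, preliminary_ending, surplus_or_deficit))
    cash = preliminary_ending                   # next month's beginning balance

for month, row in enumerate(rows, start=1):
    print(month, row)
# A negative surplus (month 2 in this example) signals a need for short-term financing.
```

Month 2 shows a \$10,000 deficiency against the \$15,000 target balance, the kind of result that would prompt a draw on a line of credit, just as the August deficit does in the Sample Cash Budget.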
19.1 What Is Working Capital?
Working capital is not only necessary to run a business; it is a resource that will expand and contract with business cycles and must be carefully managed and monitored. The daily, weekly, and monthly needs of business operations are met by cash. Financial managers understand the significance of net working capital (current assets – current liabilities) and various liquidity ratios as they attempt to ensure that bills can be paid. The cash conversion cycle and the cash budget provide additional working capital management tools.
19.2 What Is Trade Credit?
Trade credit is very prevalent in the business world, especially in B2B (business-to-business) transactions. Many business exchanges (sales) could not take place without trade credit and the credit terms that are offered. Like any component of working capital, trade credit must be planned and managed. The creditor (the company granting the credit) does so based on an analysis of creditworthiness and must monitor payments and manage slow-paying accounts. The debtor (accounts payable) needs to make payments on time to keep a clean credit history and to take advantage of discounts.
19.3 Cash Management
Cash management is simply making sure you have enough cash to meet expected obligations and for contingencies (unexpected or emergency cash needs). Excess cash should be invested in low-risk and highly liquid marketable securities. The cash budget is a critical tool of cash management.
19.4 Receivables Management
Accounts receivable are monitored by management with tools such as the accounts receivable turnover ratio, the average collection period, and the aging of receivables. The credit managers’ mantra rings true: “The older the receivable, the greater the likelihood that the account will not be collected.”
19.5 Inventory Management
Inventory, usually the least liquid of the current assets, presents its own set of management challenges. Finding the optimal level of inventory is probably more of an art than a science. Just-in-time (JIT) inventory management helps to reduce the investment in inventory and lower the costs of storage, but stockout costs can be very damaging to profitability.
19.6 Using Excel to Create the Short-Term Plan
Short-term plans of a business are funded with cash, with cash budgets being a critical tool of planning. The cash budget takes into account a target amount of cash, factoring in all the motives for holding cash. A cash budget looks ahead—predicting cash inflows and outflows, allocating for minimum cash balances to be maintained, and helping management determine short-term financing needed. Although it is the last budget prepared, the preparation of the cash budget is an important financial planning exercise of companies small and large.
19.09: Key Terms
accounts receivable aging schedule
a report that shows amounts owed by customers by the age of the account, as measured by the number of days since the sale
allowance for doubtful accounts
an account that contains the estimated amount of accounts receivable that will not be collected
bad debt expense
an expense that a business incurs as a result of uncollectible accounts receivables
bankruptcies
federal court procedures that protect distressed businesses from creditor collection efforts while allowing the debtor firm to liquidate its assets or devise a reorganization plan
benchmarking
the process of performance analysis that involves comparing financial condition and operating results against a standard, called a benchmark
bill of lading
a document that is a detailed list of goods that have been shipped; a receipt given by the carrier (shipping company) to the seller as evidence that the goods have been shipped to the buyer
carrying costs
all costs associated with having inventory in stock including storage costs, insurance, inventory obsolescence, and spoilage
cash budget
a report that shows an estimation of cash inflows, outflows, and cash balances over a specific period of time, such as monthly, quarterly, or annually
cash cycle or cash conversion cycle
the time period (measured in days) between when a business begins production and acquires resources from its suppliers (for example, acquisition of materials and other forms of inventory) and when it receives cash from its customers; offset by the time it takes to pay suppliers (called the payables deferral period)
cash discount
discount granted to a customer who has purchased goods or services on account (credit) and pays the invoice within a certain number of days as specified by credit terms
compensating balance
minimum balance of cash that a business must deposit and maintain in a bank account to obtain a loan
contra-asset
an account with a balance that is used to offset (reduce) its related asset on the balance sheet (for example, allowance for doubtful accounts reduces the value of accounts receivable reported on the balance sheet)
credit period
the number of days that a business purchaser has before they must pay their invoice
credit rating
a type of score that indicates a business’s creditworthiness
credit terms
the terms that are part of a sales credit agreement that indicate when payment is due, possible discounts, and any fees that will be charged for a late payment
current assets
assets that are cash or cash equivalents or are expected to be converted to cash in a short period of time and will be consumed, used, or expire through business operations within one year or the business’s operating cycle, whichever is shorter
discount period
the number of days the buyer has to take advantage of the cash discount for an early payment
factoring
the process of selling accounts receivables to a financial institution or, in some cases, using the accounts receivables as security for a loan from a financial institution
floor planning
a type of inventory financing whereby a financial institution provides a loan so that the company can acquire inventory with proceeds from the sale of inventory used to pay down the loan; a common method of financing inventory for automobile dealers and sellers of other big-ticket (high-priced) items
gross working capital
synonymous with the current assets of a company, those assets that include cash and other assets that can be converted into cash within a period of 12 months
just-in-time inventory
inventory management method in which a company maintains as little inventory on hand as possible while still being able to satisfy the demands of its customers
letter of credit
a letter issued by a bank that is evidence of a guarantee for payments made to a specified entity (such as a supplier) under specified conditions; common in international trade transactions
liquidity
ability to convert assets into cash in order to meet primarily short-term cash needs or emergencies
marketable securities
investments that can be converted to cash quickly; short-term liquid securities that can be bought or sold on a public exchange (market) and tend to mature in a year or less
net terms
also referred to as the full credit period; the number of days that a business purchaser has before they must pay their invoice
net working capital
the difference between current assets and current liabilities (Current Assets – Current Liabilities = Net Working Capital)
operating cycle
the time it takes a company to acquire inventory, sell inventory, and collect the cash from the sale of said goods; equal to the cash conversion cycle plus the payables deferral period
opportunity cost
the cost of a forgone opportunity
ordering costs
costs associated with placing an order with a vendor or supplier
precautionary motive
a reason to hold cash balances for unexpected expenditures such as repairs, costs associated with unexpected breakdown of equipment, and hiring temporary workers to meet unexpected production demands
quick payment
a payment made on an account payable during a period of time that falls within the discount period
ratios
numerical values taken from financial statements that are used in formulas to examine financial relationships and create metrics of performance, strengths, weaknesses; help analysts gain insight and meaning
speculative motive
a reason for holding an amount of cash—to be able to take advantage of investment opportunities
stockout costs
an opportunity cost (lost revenue) incurred when a customer order cannot be filled because the item is out of stock and the customer goes elsewhere for the product
supply chain
the network of participants and activities between a company and its suppliers and the company and its customers; exists to distribute a product or to provide a service to the final buyer
trade credit
credit granted to a business, also called accounts payable; allows a business to buy goods and services on account and pay the cash at some point in the future
transactional motive
holding an amount of cash to meet operational expenditures such as payroll, payments to vendors, and loan payments
working capital
the resources that are needed to meet the daily, weekly, and monthly operating cash flow needs
1.
The term working capital is synonymous with ________.
1. accounts payable
2. current assets
3. equity
4. current liabilities
2.
The formula for net working capital is ________.
1. Current Assets – Current Liabilities
2. Fixed Assets – Current Assets
3. Assets – Liabilities
4. Current Assets – Liabilities
3.
When sales are made on credit, which current assets typically increase at the time of the sale?
1. cash
2. notes receivable
3. accounts receivable
4. marketable securities
4.
Which of the following is NOT a goal of working capital management?
1. meet the operational needs of the company
2. satisfy obligations (current liabilities) as they come due
3. maintain an optimal level of current assets
4. maximize the investment in current assets
5.
Accelerated Growth Inc. has the following account balances at year-end.
What is the cash ratio?
1. 1.29
2. 1.43
3. 1.71
4. .088
6.
Which of the following is true of these credit terms: 3/15, n/30?
1. 15 percent discount if the payable is paid within 3 days of the invoice date
2. 3 percent discount if the payable is paid in the period between 15 days and 30 days after the invoice date
3. 3 percent discount if the payable is paid within 15 days of the invoice date
4. 30 percent discount if cash is paid on the sale date and 15% discount if paid 3 days after the invoice date
7.
When reviewing its budgets, including the cash budget, management of Transcend Inc. has considered best-case and worst-case scenarios. As they completed their analysis, they decided, because of the possibility of unexpected repairs and unanticipated higher labor costs, to add another \$30,000 to the target cash balance to be maintained throughout the year. The reason for this action would be which of these motives for holding cash?
1. transaction motive
2. opportunity cost mitigation motive
3. precautionary motive
4. speculative motive
8.
A large retailer has more than \$100 million of cash and cash equivalents on its balance sheet. Which of the following would not be part of the cash equivalents?
1. cash in banks (checking account balances)
2. US Treasury bond maturing in two years
3. receivables from a bank that processes credit card payments
4. commercial paper
9.
An account receivable is created when ________.
1. a customer pays its bill
2. a company accepts a credit card, such as VISA or MasterCard
3. a company sells to a customer on an open account
4. a company sells to a customer only on a cash basis
10.
Jackson’s Moonshine LLC has a receivables collection period of 47 days. Which of the following would be reasonable conclusions?
1. Jackson’s Moonshine LLC is most likely experiencing serious liquidity issues.
2. Jackson’s Moonshine LLC is most likely overinvested in marketable securities.
3. If the industry average is 31 days, Jackson’s management should attempt strategies that will lower their receivables collection period.
4. If the industry average is 53 days, Jackson’s management should attempt strategies that will raise their receivables collection period.
11.
Two Way Power Ltd. (2WP) stocks an inventory item, BB3, that is projected to be in great demand over the next 12 months. In discussing its sales forecasts with its suppliers, a reasonable estimate shows that 2WP could lose about \$30,000 of sales in month 3 due to inventory financing difficulties. Which, if any, of the following inventory costs would be affected by this development?
1. purchase cost
2. carrying costs
3. ordering costs
4. stockout costs
5. none of these costs because loss of sales is not an inventory cost
12.
If a company has significant inventory in each element of the value chain, it most likely is descriptive of ________.
1. the cost of goods sold of a retailer
2. the inventory balances held by wholesalers and service firms
3. the materials, work in process, and finished goods of a manufacturer
4. the inventory on the shelves of an e-commerce retailer
1.
Intelligent Cookies Inc. (ICI) sold \$30,000,000 of product in a year that had a cost of goods sold of \$10,000,000. The average inventory carried by ICI was \$500,000. On average, it takes 35 days for ICI’s customers, such as grocery stores and restaurants, to pay on their accounts. ICI buys ingredients, including flour, spices, and eggs, from its vendors on credit, and ICI takes about 40 days to pay its suppliers. How many days is ICI’s cash conversion cycle?
2.
Shown below are account balances for Electra Engines Inc., a manufacturer. The accounts are shown in a random order. What is the amount of net working capital?
3.
Shown below are account balances for Electra Engines Inc., a manufacturer. The accounts are shown in a random order. What is the current ratio and the quick ratio?
4.
Imagine that these are the cash collection cycles for some well-run companies:
What types of conclusions can you reach when you see this kind of variability?
Imagine that those cash conversion cycles are based on this information:
What would be your analysis of the cash conversion cycles based on the above information (inventory turnover, accounts receivable turnover, and accounts payable turnover)? Use the worksheet below to summarize your conclusions.
Worksheet
5.
What is the estimated annual percentage rate (APR) of not taking advantage of the early payment discount based on these terms: 4/15, n/45?
6.
If you were a credit manager reviewing a potential customer’s request for a \$20,000 line of credit, what would you analyze? Generally, how would the 5Cs of Credit guide your analysis and help lead you to a prudent decision to accept or reject the request?
7.
Aspire Excellent Inc. is a book publisher. On March 1, Aspire sells \$25,000 of books to Get Your Books Inc. (YBI), a large bookstore chain. The sale is made on account with terms net 60. Aspire’s customers usually take the full 60 days to pay their invoices. The books cost Aspire \$10,000 to manufacture. Below, summarize the effect on the accounts on March 1 from the standpoint of the seller, Aspire Excellent Inc., and the buyer, YBI.
8.
The financial manager of New England Blissful Dairies, a distributor of milk, cream, and ice cream products, has finished the 12-month operating budget. For the month of June, the following projections were made:
June 1 Cash Balance \$90,000
Cash Receipts \$300,000
Cash Disbursement \$350,000
Taking into account an amount of cash that the firm likes to maintain as a target (minimum cash balance) of \$75,000, prepare the cash budget for June using the format below. Assume that, if necessary, the company will draw upon a preestablished line of credit with their bank to be able to maintain the target cash balance.
Will the company need short-term financing?
9.
The sales for Re-Works Inc., a company that fabricates iron fencing from recycled metals, are all on account. For the first three months of the year, Re-Works management expects the following sales:
Based on past collection patterns, management expects the following:
Also, based on past experience, management forecasts that 5 percent of accounts receivable will be uncollectible and will eventually be written off.
What are the expected cash receipts for March?
10.
With the same sales forecasts as in question 9, Re-Works Inc. management would like to implement some changes to credit policy and credit terms that they believe would change the collection pattern going forward and would lower the uncollectible accounts prediction to 3 percent.
What would be the expected cash receipts for March?
19.12: Video Activity
How Companies Report Cash Flow
1.
Why isn’t the net income reported on a corporate balance sheet a good estimate of the increase in cash that occurred during the year?
2.
What is the difference between a corporate cash budget and a projected statement of cash flows?
Trade Credit and Interest Rates on Short-Term Borrowing
3.
Explain this statement: Accounts payable and accounts receivables are essentially financial opposites.
4.
Accounts payable is often called “interest-free financing.” As such, explain why a company would choose to pay the amount owed on its purchases of inventory 50 days early. Base your answer on these facts:
• The annualized cost of forgoing an early payment discount is approximately 16 percent.
• The company’s cost of borrowing short-term on a bank line of credit is 9 percent.
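The comparison in the facts above rests on the standard annualization of a forgone cash discount. A minimal sketch, using hypothetical terms of 2/10, net 55 chosen only so the result lands near the 16 percent figure cited:

```python
def forgone_discount_apr(discount_pct: float, discount_days: int, net_days: int) -> float:
    """Approximate annualized cost of skipping an early-payment cash discount."""
    cost_per_period = discount_pct / (100 - discount_pct)  # extra paid per credit period
    periods_per_year = 365 / (net_days - discount_days)    # how often that period recurs
    return cost_per_period * periods_per_year

# Hypothetical terms 2/10, net 55: give up the 2% discount to gain 45 extra days of credit.
apr = forgone_discount_apr(2, 10, 55)
print(f"{apr:.1%}")  # roughly 16.6% -- well above a 9% bank line of credit
```

Because the cost of forgoing the discount exceeds the borrowing rate, paying within the discount period (borrowing on the line of credit if necessary) is the cheaper choice.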
Figure 20.1 Financial managers must consider prudent ways to manage economic volatility and the risk it poses to a company. (credit: modification of “Risk text on Dollar banknotes” by Marco Verch/flickr CC BY 2.0)
Each year, American Airlines consumes approximately four billion gallons of jet fuel.1 In the spring of 2018, jet fuel prices rose from an average of \$2.07 per gallon to a price of \$2.19 per gallon.2 A \$0.12-per-gallon increase in the price of jet fuel may not seem significant, but on an annualized basis, a price increase of this magnitude would increase the company’s jet fuel bill by approximately \$500 million.
That added cost cuts into the profits of the company, leaving less money available to provide a return to the company’s investors. Rising costs could even cause the business to become unprofitable and close, causing many employees to lose their jobs. The financial managers of American Airlines are not able to control the price of jet fuel. However, they must be aware of the risk that price volatility poses to the company and consider prudent ways to manage this risk.
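The opening arithmetic can be checked directly; the figures below simply restate the numbers given above:

```python
gallons_per_year = 4_000_000_000  # approximate annual jet fuel consumption
price_increase = 2.19 - 2.07      # $/gallon rise in spring 2018

added_cost = gallons_per_year * price_increase
print(f"Added annual fuel cost: ${added_cost:,.0f}")  # about $480 million, i.e., roughly the $500 million cited
```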
20.02: The Importance of Risk Management
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Describe risk in the context of financial management.
• Explain how risk can impact firm value.
• Distinguish between hedging and speculating.
What Is Risk?
The job of the financial manager is to maximize the value of the firm for the owners, or shareholders, of the company. The three major areas of focus for the financial manager are the size, the timing, and the riskiness of the cash flows of the company. Broadly, the financial manager should work to
• increase cash coming into the company and decrease cash going out of the company;
• speed up cash coming into the company and slow down cash going out of the company; and
• decrease the riskiness of both money coming in and money going out of the company.
The first item in this list is obvious. The more revenue a company has, the more profitable it will be. Businesspeople talk about “top line” growth when discussing this objective because revenue appears at the top of the company’s income statement. Also, the lower the company’s expenses, the more profitable the company will be. When businesspeople talk about the “bottom line,” they are focused on what will happen to a company’s net income. The net income appears at the bottom of the income statement and reflects the amount of revenue left over after all of the company’s expenses have been paid.
The second item in the list—the speed at which money enters and exits the company—has been addressed throughout this book. One of the basic principles of finance is the time value of money—the idea that a dollar received today is more valuable than a dollar received tomorrow. Many of the topics explored in this book revolve around the issue of the time value of money.
The focus of this chapter is on the third item in the list: risk. In finance, risk is defined as uncertainty. Risk occurs because you cannot predict the future. Compared to other business decisions, financial decisions are generally associated with contracts in which the parties of the contract fulfill their obligations at different points in time. If you choose to purchase a loaf of bread, you pay the baker for the bread as you receive the bread; no future obligation arises for either you or the baker because of this purchase. If you choose to buy a bond, you pay the issuer of the bond money today, and in return, the issuer promises to pay you money in the future. The value of this bond depends on the likelihood that the promise will be fulfilled.
Because financial agreements often represent promises of future payment, they entail risk. Even if the party that is promising to make a payment in the future is ethical and has every intention of honoring the promise, things can happen that can make it impossible for them to do so. Thus, much of financial management hinges on managing this risk.
Risk and Firm Value
You would expect the managers of Starbucks Corporation to know a lot about coffee. They must also know a lot about risk. It is not surprising that the term coffee appears in the text of the company’s 2020 annual report 179 times, given that the company’s core business is coffee. It might be surprising, however, that the term risk appears in the report 99 times.3 Given that the text of the annual report is less than 100 pages long, the word risk appears, on average, more than once per page.
Starbucks faces a number of different types of risk. In 2020, corporations experienced an unprecedented risk because of COVID-19. Coffee shops were forced to remain closed as communities experienced government-mandated lockdowns. Locations that were able to service customers through drive-up windows were not immune to declining revenue due to the pandemic. As fewer people gathered in the workplace, Starbucks experienced a declining number of to-go orders from meeting attendees. In addition, Starbucks locations faced the risk of illness spreading as baristas gathered in their buildings to fill to-go orders.
While COVID-19 brought discussions of risk to the forefront of everyday conversations, risk was an important focus of companies such as Starbucks before the pandemic began. (The term risk appeared in the company’s 2019 annual report 82 times.4) Starbucks’s business model revolves around turning coffee beans into a pleasurable drink. Anything that impacts the company’s ability to procure coffee beans, produce a drink, and sell that drink to the customer will impact the company’s profitability.
The investors in the company have allowed Starbucks to use its capital to lease storefronts, purchase espresso machines, and obtain all of the assets necessary for the company to operate. Debt holders expect interest to be paid and their principal to be returned. Stockholders expect a return on their investment. Because investors are risk averse, the riskier they perceive the cash flows they will receive from the business to be, the higher the expected return they will require to let the company use their money. This required return is a cost of doing business. Thus, the riskier the cash flows of a company, the higher the cost of obtaining capital. As any cost of operating a business increases, the value of the firm declines.
Link to Learning
Starbucks
The most recent annual report for Starbucks Corp., along with the reports from recent years, is available on the company’s investor relations website under the Financial Data section. Go to the most recent annual report for the company. Search for the word risk in the annual report, and read the discussions surrounding this topic. Note the major types of risk the company discusses. Pay attention to the types of risk that Starbucks categorizes as uncontrollable and which types of risk the company attempts to mitigate.
In the following sections, you will learn about some of the types of risk that firms commonly face. You will also learn about ways in which firms can reduce their exposure to these risks. When firms take actions to reduce their exposures to risk, they are said to be hedging. Firms hedge to try to protect themselves from losses. Thus, in finance, hedging is a risk management tool.
Certain strategies are commonly used by firms to hedge risk, which is part of corporate financial management. Many of these same strategies can be used by economic players who wish to speculate. Speculating occurs when someone bets on a future outcome. It involves trying to predict the future and profit off of that prediction, knowing that there is some risk that an incorrect prediction will lead to a loss. Speculators bet on the future direction of an asset price. Thus, speculation involves directional bets.
If you are concerned that the price of hand sanitizer is going to rise because people are concerned about a new virus and you purchase a few extra bottles to keep on your shelf “just in case,” you are hedging. If you see this situation as a business opportunity and purchase bottles of hand sanitizer, hoping that you can sell them on eBay in a few weeks at twice what you paid for them, you are speculating.
In the popular press, you will often hear of some of the strategies in this chapter discussed in terms of people using them to speculate. In upper-level finance courses, these strategies are discussed in more depth, including how they might be used to speculate. In this chapter, however, the focus is on the perspective of a financial manager using these strategies to manage risk.
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Describe commodity price risk.
• Explain the use of long-term contracts as a hedge.
• Explain the use of vertical integration as a hedge.
• Explain the use of futures contracts as a hedge.
One of the most significant risks that many companies face arises from normal business operations. Companies purchase raw materials to produce the products and provide the services they sell. A change in the market price of these raw materials can significantly impact the profitability of a company.
For example, Starbucks must purchase coffee beans in order to make its coffee drinks. The price of coffee beans is highly volatile. Sample prices of a pound of Arabica coffee beans over the past couple of decades are shown in Table 20.1. Over this period, the price of coffee beans ranged from a low of \$0.52 per pound in the summer of 2002 to a high of over \$3.00 per pound in the spring of 2011. The costs, and thus the profits, of Starbucks will vary greatly depending on if the company is paying less than \$1.00 per pound for coffee or if it is paying three times that much.
Date Price per Pound (\$)
January 1, 2000 1.09
January 1, 2004 0.74
January 1, 2008 1.39
January 1, 2012 2.41
January 1, 2016 1.46
January 1, 2020 1.50
Table 20.1 Price of Coffee in Select Years, 2000–2020
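To see why this price range matters, consider the swing in annual bean cost between the historical extremes noted above. The purchase volume below is hypothetical, chosen only for illustration:

```python
pounds_per_year = 600_000_000  # hypothetical annual purchase volume, in pounds

low_price, high_price = 0.52, 3.00  # $/lb: the 2002 low and 2011 high from the text
cost_low = pounds_per_year * low_price
cost_high = pounds_per_year * high_price
print(f"Cost at the low:  ${cost_low:,.0f}")
print(f"Cost at the high: ${cost_high:,.0f}")
print(f"Swing:            ${cost_high - cost_low:,.0f}")
```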
Long-Term Contracts
One method of hedging the risk of volatile input prices is for a firm to enter into long-term contracts with its suppliers. Starbucks, for example, could enter into an agreement with a coffee farmer to purchase a particular quantity of coffee beans at a predetermined price over the next several years.
These long-term contracts can benefit both the buyer and the seller. The buyer is concerned that rising commodity prices will increase its cost of goods sold. The seller, however, is concerned that falling commodity prices will mean lower revenue. By entering into a long-term contract, the buyer is able to lock in a price for its raw materials and the seller is able to lock in its sales price. Thus, both parties are able to reduce uncertainty.
While long-term contracts reduce uncertainty about the commodity price, and thus reduce risk, there are several possible disadvantages to these types of contracts. First, both parties are exposed to the risk that the other party may default and fail to live up to the terms of the contract. Second, these contracts cannot be entered into anonymously; the parties to the contract know each other’s identity. This lack of anonymity may have strategic disadvantages for some firms. Third, the value of this contract cannot be easily determined, making it difficult to track gains and losses. Fourth, canceling the contract may be difficult or even impossible.
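The offsetting positions of buyer and seller under a long-term contract can be sketched with a simple payoff table. The contract price and spot prices below are hypothetical:

```python
contract_price = 1.50  # $/lb locked in by the long-term contract (hypothetical)
quantity = 1_000_000   # pounds covered by the contract

for spot in (0.80, 1.50, 2.40):
    # Relative to buying at the spot price, the buyer gains when spot > contract.
    buyer_gain = (spot - contract_price) * quantity
    seller_gain = -buyer_gain  # the seller's position is the exact mirror image
    print(f"spot ${spot:.2f}: buyer {buyer_gain:+,.0f}, seller {seller_gain:+,.0f}")
```

Whatever the spot price turns out to be, the two gains sum to zero: the contract transfers price risk between the parties rather than eliminating it.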
Vertical Integration
A common method of handling the risk associated with volatile input prices is vertical integration, which involves the merger of a company and its supplier. For Starbucks, a vertical integration would involve Starbucks owning a coffee bean farm. If the price of coffee beans rises, the firm’s costs increase and the supplier’s revenues rise. The two companies can offset these risks by merging.
Although vertical integration can reduce commodity price risk, it is not a perfect hedge. Starbucks may decrease its commodity price risk by purchasing a coffee farm, but that action may expose it to other risks, such as land ownership and employment risk.
Futures Contracts
Another method of hedging commodity price risk is the use of a futures contract. A commodity futures contract is designed to avoid some of the disadvantages of entering into a long-term contract with a supplier. A futures contract is an agreement to trade an asset on some future date at a price locked in today. Futures exist for a range of commodities, including natural resources such as oil, natural gas, coal, silver, and gold and agricultural products such as soybeans, corn, wheat, rice, sugar, and cocoa.
Futures contracts are traded anonymously on an exchange; the market price is publicly observable, and the market is highly liquid. The company can get out of the contract at any time by selling it to a third party at the current market price.
A futures contract does not have the credit risk that a long-term contract has. Futures exchanges require traders to post margin when buying or selling commodities futures contracts. The margin, or collateral, serves as a guarantee that traders will honor their obligations. Additionally, through a procedure known as marking to market, cash flows are exchanged daily rather than only at the end of the contract. Because gains and losses are computed each day based on the change in the price of the futures contract, there is not the same risk as with a long-term contract that the counterparty to the contract will not be able to fulfill their obligation.
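Marking to market can be illustrated with a short simulation. The daily settlement prices below are invented, and the 37,500-pound contract size is assumed here for illustration:

```python
contract_size = 37_500  # pounds per contract (assumed for this illustration)
settlements = [1.50, 1.54, 1.49, 1.55]  # hypothetical daily settlement prices, $/lb

# Each day, the long position receives (or pays) that day's price change in cash.
daily_cash_flows = [
    (today - prev) * contract_size
    for prev, today in zip(settlements, settlements[1:])
]
total = sum(daily_cash_flows)
print(daily_cash_flows)
print(f"Total: ${total:,.2f}")  # equals (final - initial) price times contract size
```

Because gains and losses settle in cash every day, the unpaid exposure between the two parties never grows beyond a single day's price move, which is what removes the counterparty risk of a long-term contract.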
Think It Through
The CME Group
In 2007, the Chicago Mercantile Exchange merged with the Chicago Board of Trade to form CME Group Inc. CME Group provides trading in futures as well as other types of contracts that companies can use to hedge risk.
You can watch the video Getting Started with Your Broker to learn how futures contracts for agricultural products such as coffee beans, corn, wheat, and soybeans are traded. You will also see other types of futures contracts traded, including futures for silver, crude oil, natural gas, Japanese yen, and Russian rubles.
Learning Outcomes
Learning Objectives
By the end of this section, you will be able to:
• Describe exchange rate risk.
• Identify transaction, translation, and economic risks.
• Describe a natural hedge.
• Explain the use of forward contracts as a hedge.
• List the characteristics of an option contract.
• Describe the payoff to the holder and writer of a call option.
• Describe the payoff to the holder and writer of a put option.
The managers of companies that operate in the global marketplace face additional complications when managing the riskiness of their cash flows compared to domestic companies. Managers must be aware of differing business climates and customs and operate under multiple legal systems. Often, business must be conducted in multiple languages. Geopolitical events can impact business relationships. In addition, the company may receive cash flows and make payments in multiple currencies.
Exchange Rates
The costs to companies are impacted when the prices of the raw materials they use change. Very little coffee is grown in the United States. This means that all of those coffee beans that Starbucks uses in its espresso machines in Seattle, New York, Miami, and Houston were bought from suppliers outside of the United States. Brazil is the largest coffee-producing country, exporting about one-third of the world’s coffee.6 When a company purchases raw materials from a supplier in another country, the company needs not just money but the money that is used in that country to make the purchase. Thus, the company is concerned about the exchange rate, or the price of the foreign currency.
Figure 20.2 Brazilian Reals to One US Dollar7
The currency used in Brazil is called the Brazilian real. Figure 20.2 shows how many Brazilian reals could be purchased for \$1.00 from 2010 through the first quarter of 2021. In March 2021, 5.4377 Brazilian reals could be purchased for \$1.00. This will often be written in the form of
$\text{USD } 1 = \text{BRL } 5.4377$ (20.1)
BRL is an abbreviation for Brazilian real, and USD is an abbreviation for the US dollar. This price is known as a currency exchange rate, or the rate at which you can exchange one currency for another currency.
If you know the price of \$1.00 is 5.4377 Brazilian reals, you can easily find the price of Brazilian reals in US dollars. Simply divide both sides of the equation by 5.4377, or the price of the US dollar:
$\text{USD } 1 = \text{BRL } 5.4377$
$\frac{\text{USD } 1}{5.4377} = \frac{\text{BRL } 5.4377}{5.4377}$
$\text{USD } 0.1839 = \text{BRL } 1$ (20.2)
If you have US dollars and want to purchase Brazilian reals, it will cost you \$0.1839 for each Brazilian real you want to buy.
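Inverting a currency quote is just taking a reciprocal. A minimal Python sketch (the function name `invert_rate` is ours, not from the text):

```python
# Invert a currency quote: given the price of one unit of currency A in
# currency B, return the price of one unit of B in A.
def invert_rate(rate: float) -> float:
    return 1 / rate

# BRL per USD, the March 2021 figure from the text
brl_per_usd = 5.4377
usd_per_brl = invert_rate(brl_per_usd)
print(round(usd_per_brl, 4))  # 0.1839
```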
The foreign exchange rate changes in response to demand for and supply of the currency. In early 2020, the exchange rate was $\text{USD } 1 = \text{BRL } 4$. In other words, \$1 purchased fewer reals in early 2020 than it did a year later. Because you receive more reals for each dollar in 2021 than you would have a year earlier, the dollar is said to have appreciated relative to the Brazilian real. Likewise, because it takes more Brazilian reals to purchase \$1.00, the real is said to have depreciated relative to the US dollar.
Exchange Rate Risks
Starbucks, like other firms that are engaged in international business, faces currency exchange rate risk. Changes in exchange rates can impact a business in several ways. These risks are often classified as transaction, translation, or economic risk.
Transaction Risk
Transaction risk is the risk that the value of a business’s expected receipts or expenses will change as a result of a change in currency exchange rates. If Starbucks agrees to pay a Brazilian coffee grower seven million Brazilian reals for an order of one million pounds of coffee beans, Starbucks will need to purchase Brazilian reals to pay the bill. How much it will cost Starbucks to purchase these Brazilian reals depends on the exchange rate at the time Starbucks makes the purchase.
In March 2021, with an exchange rate of $\text{USD } 0.1839 = \text{BRL } 1$, it would have cost Starbucks $0.1839 × 7{,}000{,}000 = \text{USD } 1{,}287{,}300$ to purchase the reals needed to receive the one million pounds of coffee beans. If, however, Starbucks agreed in March to purchase the coffee beans several months later, in July, Starbucks would not have known then what the exchange rate would be when it came time to complete the transaction. Although Starbucks would have locked in a price of BRL 7,000,000 for one million pounds of coffee beans, it would not have known what the coffee beans would cost the company in terms of US dollars.
If the US dollar appreciated so that it cost less to purchase each Brazilian real in July, Starbucks would find that it was paying less than \$1,287,300 for the coffee beans. For example, suppose the dollar appreciated so that the exchange rate was $\text{USD } 0.1800 = \text{BRL } 1$ in July 2021. Then the coffee beans would only cost Starbucks $0.1800 × 7{,}000{,}000 = \text{USD } 1{,}260{,}000$.
On the other hand, if the US dollar depreciated and it cost more to purchase each Brazilian real, then Starbucks would find that its dollar cost for the coffee beans was higher than it expected. If the US dollar depreciated (and the Brazilian real appreciated) so that the exchange rate was $\text{USD } 0.2000 = \text{BRL } 1$ in July 2021, then the coffee beans would cost Starbucks $0.2000 × 7{,}000{,}000 = \text{USD } 1{,}400{,}000$. This uncertainty regarding the dollar cost of the coffee beans Starbucks would purchase to make its lattes is an example of transaction risk.
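The transaction-risk arithmetic above is a single multiplication at each possible spot rate. A sketch (the function name is ours):

```python
# Dollar cost of a fixed BRL invoice at different spot rates (USD per BRL).
def usd_cost(brl_amount: float, usd_per_brl: float) -> float:
    return brl_amount * usd_per_brl

invoice_brl = 7_000_000
for spot in (0.1839, 0.1800, 0.2000):
    print(f"{spot:.4f} -> USD {usd_cost(invoice_brl, spot):,.2f}")
# 0.1839 -> USD 1,287,300.00
# 0.1800 -> USD 1,260,000.00
# 0.2000 -> USD 1,400,000.00
```

Until the transaction settles, the company does not know which of these (or any other) dollar costs it will face; that uncertainty is the transaction risk.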
A global company such as Starbucks has transaction risk not only because it is purchasing raw materials in foreign countries but also because it is selling its product—and thus collecting revenue—in foreign countries. Customers in Japan, for example, spend Japanese yen when they purchase a Starbucks cappuccino, coffee mug, or bag of coffee beans. Starbucks must then convert these Japanese yen to US dollars to pay the expenses that it incurs in the United States to produce and distribute these products.
The Japanese yen–US dollar foreign exchange rates from 2011 through the first quarter of 2021 are shown in Figure 20.3. In 2012, \$1.00 could be purchased with fewer than 80 Japanese yen. In 2015, it took over 120 yen to purchase \$1.00.
Figure 20.3 Japanese Yen to One US Dollar8
If a company is receiving yen from customers and paying expenses in dollars, the company is harmed when the yen depreciates relative to the dollar, meaning that the yen the company receives from its customers can be exchanged for fewer dollars. Conversely, when the yen appreciates, it takes fewer yen to purchase each dollar; this appreciation of the yen benefits companies with revenues in yen and expenses in dollars.
Think It Through
Projecting Sales in US Dollars
The managers of a firm think that the exchange rate of Japanese yen to US dollars will be $\text{JPY } 100 = \text{USD } 1$ next year. If the company thinks that it will have sales of 50 million yen next year, how much will it project these sales will be worth in dollars? What happens if the actual exchange rate over the next year is $\text{JPY } 120 = \text{USD } 1$?
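One way to work the numbers in the question above (a sketch, not the book's printed solution; the function name is ours):

```python
# Convert projected yen sales to dollars at a given JPY-per-USD rate.
def sales_in_usd(sales_jpy: float, jpy_per_usd: float) -> float:
    return sales_jpy / jpy_per_usd

projected = sales_in_usd(50_000_000, 100)  # projection at JPY 100 = USD 1
actual = sales_in_usd(50_000_000, 120)     # outcome if the yen depreciates to 120
print(round(projected, 2))  # 500000.0
print(round(actual, 2))     # 416666.67
```

If the yen depreciates to JPY 120 per dollar, the same 50 million yen of sales converts to roughly \$83,333 less than projected.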
Translation Risk
In addition to the transaction risk, if Starbucks holds assets in a foreign country, it faces translation risk. Translation risk is an accounting risk. Starbucks might purchase a coffee plantation in Costa Rica for 120 million Costa Rican colones. This land is an asset for Starbucks, and as such, the value of it should appear on the company’s balance sheet.
The balance sheet for Starbucks is created using US dollar values. Thus, the value of the coffee plantation has to be translated to dollars. Because exchange rates are volatile, the dollar value of the asset will vary depending on the day on which the translation takes place. If the exchange rate is 500 colones to the dollar, then this coffee plantation is an asset with a value of \$240,000. If the Costa Rican colón depreciates to 600 colones to the dollar, then the asset has a value of only \$200,000 when translated using this exchange rate.
Although it is the same piece of land with the same productive capacity, the value of the asset, as reported on the balance sheet, falls as the Costa Rican colón depreciates. This decrease in the value of the company’s assets must be offset by a decrease in the stockholders’ equity for the balance sheet to balance. The loss is due simply to changes in exchange rates and not the underlying profitability of the company.
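The translation arithmetic above is a single division at whatever exchange rate prevails on the balance sheet date. A sketch (names ours):

```python
# Translate an asset valued in Costa Rican colones into US dollars.
def translate_to_usd(value_crc: float, crc_per_usd: float) -> float:
    return value_crc / crc_per_usd

plantation_crc = 120_000_000
print(translate_to_usd(plantation_crc, 500))  # 240000.0
print(translate_to_usd(plantation_crc, 600))  # 200000.0
```

The same plantation shows up as a \$40,000 smaller asset purely because the colón depreciated between the two translation dates.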
Economic Risk
Economic risk is the risk that a change in exchange rates will impact a business’s number of customers or its sales. Even a company that is not involved in international transactions can face this type of risk. Consider a company located in Mississippi that makes shirts using 100% US-grown cotton. All of the shirts are made in the United States and sold to retail outlets in the United States. Thus, all of the company’s expenses and revenues are in US dollars, and the company holds no assets outside of the United States.
Although this firm has no financial transactions involving international currency, it can be impacted by changes in exchange rates. Suppose the US dollar strengthens relative to the Vietnamese dong. This will allow US retail outlets to purchase more Vietnamese dong, and thus more shirts from Vietnamese suppliers, for the same amount of US dollars. Because of this, the retail outlets experience a drop in the cost of procuring the Vietnamese shirts relative to the shirts produced by the firm in Mississippi. The Mississippi company will lose some of its customers to these Vietnamese producers simply because of a change in the exchange rate.
Hedging
Just as companies may practice hedging techniques to reduce their commodity risk exposure, they may choose to hedge to reduce their currency risk exposure. The types of futures contracts that we discussed earlier in this chapter exist for currencies as well as for commodities. A company that knows that it will need Korean won later this year to purchase raw materials from a South Korean supplier, for example, can purchase a futures contract for Korean won.
While futures contracts allow companies to lock in prices today for a future commitment, these contracts are not flexible enough to meet the risk management needs of all companies. Futures contracts are standardized contracts. This means that the contracts have set sizes and maturity dates. Futures contracts for Korean won, for example, have a contract size of 125 million won. A company that needs 200 million won later this year would need to either purchase one futures contract, hedging only a portion of its needs, or purchase two futures contracts, hedging more than it needs. Either way, the company has remaining currency risk.
In this next section, we will explore some additional hedging techniques.
Forward Contracts
Suppose a company needs access to 200 million Korean won on March 1. In addition to a specified contract size, currency futures contracts have specified days on which the contracts are settled. For most currency futures contracts, this occurs on the third Wednesday of the month. If the company needed 125 million Korean won (the basic contract size) on the third Wednesday of March (the standard settlement date), the futures contract could be useful. Because the company needs a different number of Korean won on a different date from those specified in the standard contract, the futures contract is not going to meet the specific risk management needs of the company.
Another type of contract, the forward contract, can be used by this company to meet its specific needs. A forward contract is simply a contractual agreement between two parties to exchange a specified amount of currencies at a future date. A company can approach its bank, for example, saying that it will need to purchase 200 million Korean won on March 1. The bank will quote a forward rate, which is a rate specified today for the sale of currency on a future date, and the company and the bank can enter into a forward contract to exchange dollars for 200 million Korean won at the quoted rate on March 1.
Because a forward contract is a contract between two parties, those two parties can specify the amount that will be traded and the date the trade will occur. This contract is similar to your agreeing with a hotel that you will arrive on March 1 and rent a room for three nights at \$200 per night. You are agreeing today to show up at the hotel on a future (specified) date and pay the quoted price when you arrive. The hotel agrees to provide you the room on March 1 and cannot change the price of the room when you arrive. With a forward contract, you are also agreeing that you will indeed make the purchase and you cannot change your mind; so, using the hotel room analogy, this would mean that the hotel will definitely charge your credit card for the agreed-upon \$200 per night on March 1.
The forward contract is an individualized contract between the buyer and the seller; they are both under a contractual obligation to honor the contract. Because this contract is not standardized like the futures contract (so that it can be traded on an exchange), it can be tailored to the needs of the two parties. While the forward contract has the advantage of being fine-tuned to meet the company’s needs, it has a risk, known as counterparty risk, that the futures contract does not have. The forward contract is only as good as the promise of the counterparty. If the company enters into a forward contract to purchase 200 million Korean won on March 1 from its bank and the bank goes out of business before March 1, the company will not be able to make the exchange with a nonexistent bank. The exchanges on which futures contracts are traded guard the purchaser of a futures contract from this type of risk by guaranteeing the contract.
Natural Hedges
A hedge simply refers to a reduction in the risk or exposure that a company has to volatility and uncertainty. We have been focusing on how a company might use financial market instruments to hedge, but sometimes a company can use a natural hedge to mitigate risk. A natural hedge occurs when a business can offset its risk simply through its own operations. With a natural hedge, when a risk occurs that would decrease the value of a company, an offsetting event occurs within the firm that increases the value of the company.
As an example, consider a British-based travel agency. One of the major tours the company offers is a tour of Italy. The company arranges for transportation, lodging, meals, and sightseeing for Brits to visit the highlights of Rome, Florence, and Venice. Because the company charges customers in British pounds but must pay the bus companies, hotels, and other service providers in Italy in euros, the travel agency faces significant transaction exposure. If the value of the British pound depreciates after the company sets the price it will charge for the tour but before it pays the Italian suppliers, the company will be harmed. In fact, if the British pound depreciates by a great deal, the company could end up in a situation in which the British pounds it collects are not enough to purchase the euros it needs to pay its suppliers.
The company could create a natural hedge by offering tours of London to individuals living in the European Union. The travel agency could charge people who live in Germany, Italy, Spain, or any other country that has the euro as its currency for a travel package to London. Then the agency would pay British restaurants, tour guides, hotels, and bus companies in British pounds. This segment of the business also has currency risk. If the British pound depreciates, the company gains because the euros it collects from its EU customers will purchase more British pounds than before.
Thus, the company has created a situation in which if the British pound depreciates, the decrease in value of its tours of Italy is exactly offset by the increase in value of its tours of London. If the British pound appreciates, the opposite occurs: the company experiences a gain in its division that charges British pounds for tourists traveling to Italy and an offsetting loss in its division that charges euros for tourists traveling to London.
Options
A financial option gives the owner the right, but not the obligation, to purchase or sell an asset for a specified price at some future date. Options are considered derivative securities because the value of a derivative is derived from, or comes from, the value of another asset.
Options Terminology
Specific terminology is used in the finance industry to describe the details of an options contract. If the owner of an option decides to purchase or sell the asset according to the terms of the options contract, the owner is said to be exercising the option. The price the option holder pays if purchasing the asset or receives if selling the asset is known as the strike price or exercise price. The price the owner of the option paid for the option is known as the premium.
An option contract will have an expiration date. The most common kinds of options are American options, which allow the holder to exercise the option at any time up to and including the expiration date. Holders of European options may exercise their options only on the expiration date. The labels American option and European option can be confusing as they have nothing to do with the location where the options are traded. Both American and European options are traded worldwide.
Option contracts are written for a variety of assets. The most common option contracts are options on shares of stock. Options are traded for US Treasury securities, currencies, gold, and oil. There are also options on agricultural products such as wheat, soybeans, cotton, and orange juice. Thus, options can be used by financial managers to hedge many types of risk, including currency risk, interest rate risk, and the risk that arises from fluctuations in the prices of raw materials.
Options are divided into two main categories, call options and put options. A call option gives the owner of the option the right, but not the obligation, to buy the underlying asset. A put option gives the owner the right, but not the obligation, to sell the underlying asset.
Call Options
If a Korean company knows that it will need to pay a \$100,000 bill to a US supplier in six months, it knows how many US dollars it will need to pay the bill. As a Korean company, however, its bank account is denominated in Korean won. In six months, it will need to use its Korean won to purchase 100,000 US dollars.
The company can determine how many Korean won it would take to purchase \$100,000 today. If the current exchange rate is $\text{KWN } 1{,}100 = \text{USD } 1$, then it will need KWN 110,000,000 to pay the bill. The current exchange rate is known as the spot rate.
The company, however, does not need the US dollars for another six months. The company can purchase a call option, which is a contract that will allow it to purchase the needed US dollars in six months at a price stated in the contract. This allows the company to guarantee a price for dollars in six months, but it does not obligate the company to purchase the dollars at that price if it can find a better price when it needs the dollars in six months.
The price that is in the contract is called the strike price (exercise price). Suppose the company purchases a call contract for US dollars with a strike price of KWN 1,200/USD. While this contract would be for a set size, or a certain number of US dollars, we will talk about this transaction as if it were per one US dollar to highlight how options contracts work.
The company must pay a price, known as the premium, to purchase this call option contract. For our example, let’s assume the premium for the call option contract is KWN 50. In other words, the company has paid KWN 50 for the right to buy US dollars in six months for a price of KWN 1,200/USD.
In six months, the company makes a choice to either (1) pay the strike price of KWN 1,200/USD or (2) let the option expire. If the company chooses to pay the strike price and purchase the US dollars, it is exercising the option. How does the company choose which to do? It simply compares the strike price of KWN 1,200/USD to the market, or spot, exchange rate at the time the option is expiring.
If, six months from now, the spot exchange rate is $\text{KWN } 1{,}150 = \text{USD } 1$, it will be cheaper for the company to buy the US dollars it needs at the spot price than it would be to buy the dollars with the option. In fact, if the spot rate is anything below $\text{KWN } 1{,}200 = \text{USD } 1$, the company will not choose to exercise the option. If, however, the spot exchange rate in six months is $\text{KWN } 1{,}300 = \text{USD } 1$, the company will exercise the option and purchase each US dollar for only KWN 1,200.
The profitability, or the payoff, to the owner of a call option is represented by the chart in Figure 20.4 below. Possible spot prices are measured from left to right, and the financial gain or loss to the company of the option contract is measured vertically. If the spot price is anything less than KWN 1,200/USD, the option expires without being exercised. The company paid KWN 50 for something that ended up being worthless.
Figure 20.4 The Payoff to the Holder of a Call Option
If, in six months, the spot exchange rate is $\text{KWN } 1{,}225 = \text{USD } 1$, then the company will choose to exercise the option. The company will be saving KWN 25 for each dollar purchased, but the company originally paid KWN 50 for the contract. So, the company will be KWN 25 worse off than if it had never purchased the call option.
If the spot exchange rate is $\text{KWN } 1{,}250 = \text{USD } 1$, the company will be in exactly the same position having purchased and exercised the call option as it would have been if it had not purchased the option. At any spot price higher than KWN 1,250/USD, the firm will be in a better financial position, or will have a positive payoff, because it purchased the call option. The more the Korean won depreciates over the next six months, the higher the payoff to the firm of owning the call contract. Purchasing the call contract is a way that the company can protect itself from the currency exposure it faces.
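The holder's payoff rule can be written directly from the description above. A sketch, with values in KWN per USD (the function name is ours):

```python
# Payoff (in KWN per USD) to the holder of the call: exercise only when
# the spot rate exceeds the strike, then subtract the premium paid up front.
def call_holder_payoff(spot: float, strike: float = 1200,
                       premium: float = 50) -> float:
    return max(spot - strike, 0) - premium

for spot in (1150, 1200, 1225, 1250, 1300):
    print(spot, call_holder_payoff(spot))
# 1150 -> -50 (option expires), 1225 -> -25, 1250 -> 0 (break-even), 1300 -> 50
```

The kink at the strike (KWN 1,200) and the break-even point at strike plus premium (KWN 1,250) are exactly the features of the payoff diagram in Figure 20.4.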
For any transaction, there must be two parties—a buyer and a seller. For the company to have purchased the call option, another party must have sold the call option. The seller of a call option is called the option writer. Let’s consider the potential benefits and risks to the writer of the call option.
When the company purchases the call option, it pays the premium to the writer. The writer of the option does not have a choice regarding whether the option will be exercised. The purchaser of the option has the right to make the choice; in essence, the writer of the option sold the right to make that decision to the purchasers of the call option.
Figure 20.5 shows the payoff to the writer of the call option. Recall that the buyer of the call option will let the option expire if the spot rate is less than $\text{KWN } 1{,}200 = \text{USD } 1$ when the call option matures in six months. If this occurs, the writer of the option collected the KWN 50 option premium when the contract was sold and then never hears from the purchaser again. This is what the writer of the option is hoping for; the writer of the call option profits when the options contract is not exercised.
Figure 20.5 The Payoff to the Writer of a Call Option
If the spot rate is above $\text{KWN } 1{,}200 = \text{USD } 1$, then the holder of the option will choose to exercise the right to purchase US dollars at the option strike price. The writer of the option will then be obligated to sell US dollars at a price of KWN 1,200/USD. If the spot rate is $\text{KWN } 1{,}250 = \text{USD } 1$, the option writer will be obligated to sell the dollars for KWN 50 less than what they are worth; because the option writer was initially paid a KWN 50 premium for taking on that obligation, the option writer will just break even. For any exchange rate higher than $\text{KWN } 1{,}250 = \text{USD } 1$, the writer of the call option will have a loss.
The option contract is a zero-sum game. Any payoff the owner of the option receives is exactly equal to the loss the writer of the option has. Any loss the owner of the option has is exactly equal to the payoff the writer of the option receives.
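The zero-sum property is easy to verify numerically. A sketch using the KWN 1,200 strike and KWN 50 premium from the example (function names ours):

```python
# Holder and writer payoffs for the call option in the example
# (strike KWN 1,200/USD, premium KWN 50). At every spot rate the two
# payoffs sum to zero: the option contract is a zero-sum game.
def call_holder_payoff(spot, strike=1200, premium=50):
    return max(spot - strike, 0) - premium

def call_writer_payoff(spot, strike=1200, premium=50):
    return premium - max(spot - strike, 0)

assert all(call_holder_payoff(s) + call_writer_payoff(s) == 0
           for s in range(1100, 1401, 25))
```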
Put Options
While the call option you just considered gives the owner the right to buy an underlying asset, the put option gives the owner the right to sell an underlying asset. Take, for example, an Indian company that has a contract to provide graphic artwork for a US company. The US company will pay the Indian company 200,000 US dollars in three months.
While the Indian company receives US dollars, it must pay its workers in Indian rupees. Because the company does not know what the spot exchange rate will be in three months, it faces transaction risk and may be interested in hedging this exposure using a put option.
The company knows that the current spot rate is $\text{INR } 75 = \text{USD } 1$, meaning that the company would be able to use \$200,000 to purchase $\text{USD } 200{,}000 × \text{INR } 75/\text{USD} = \text{INR } 15{,}000{,}000$ if it possessed the \$200,000 today. If the Indian rupee appreciates relative to the US dollar over the next three months, however, the company will receive fewer rupees when it makes the exchange; perhaps the company will not be able to purchase enough rupees to cover the wages of its employees.
Assume the company can purchase a put option that gives it the right to sell US dollars in three months at a strike price of INR 75/USD; the premium for this put option is INR 5. By purchasing this put option, the company is spending INR 5 to guarantee that it can sell its US dollars for rupees in three months at a price of INR 75/USD.
If, in three months, when the company receives payment in US dollars, the spot exchange rate is higher than $\text{INR } 75 = \text{USD } 1$, the company will simply exchange the US dollars for rupees at that exchange rate, allowing the put option to expire without exercising it. The payoff to the company for the option is INR -5, the premium that was paid for the option that was never used (see Figure 20.6).
Figure 20.6 The Payoff to the Holder of a Put Option
If, however, in three months, the spot exchange rate is anything less than $\text{INR } 75 = \text{USD } 1$, then the company will choose to exercise the option. If the spot rate is between $\text{INR } 70 = \text{USD } 1$ and $\text{INR } 75 = \text{USD } 1$, the payoff for the option is negative. For example, if the spot exchange rate is $\text{INR } 72 = \text{USD } 1$, the company will exercise the option and receive three more Indian rupees per dollar than it would in the spot market. However, the company had to spend INR 5 for the option, so the payoff is INR -2. At a spot exchange rate of $\text{INR } 70 = \text{USD } 1$, the company has a zero payoff; the benefit of exercising the option, INR 5, is exactly equal to the price of purchasing the option, the premium of INR 5.
If, in three months, the spot exchange rate is anything below $\text{INR } 70 = \text{USD } 1$, the payoff of the put option is positive. At the theoretical extreme, if the USD became worthless and would purchase no rupees in the spot market when the company received the dollars, the company could exercise its option and receive INR 75/USD, and its payoff would be INR 70.
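The holder's put payoff mirrors the call payoff with the inequality reversed. A sketch, with values in INR per USD (function name ours):

```python
# Payoff (in INR per USD) to the holder of the put: exercise only when the
# spot rate is below the strike, then subtract the INR 5 premium.
def put_holder_payoff(spot: float, strike: float = 75,
                      premium: float = 5) -> float:
    return max(strike - spot, 0) - premium

for spot in (80, 75, 72, 70, 0):
    print(spot, put_holder_payoff(spot))
# 80 -> -5 (option expires), 72 -> -2, 70 -> 0 (break-even), 0 -> 70
```

The break-even point at strike minus premium (INR 70) and the maximum payoff of INR 70 at a spot rate of zero match the payoff diagram in Figure 20.6.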
Now that we have considered the payoff to a purchaser of a put contract, let’s consider the opposite side of the contract: the seller, or writer, of the put option. The writer of a put option is selling the right to sell dollars to the purchaser of the put option. The writer of the put option collects a premium for this. The writer of the put has no choice as to whether the put option will be exercised; the writer only has an obligation to honor the contract if the owner of the put option chooses to exercise it.
The owner of the option will choose to let the option expire if the spot exchange rate is anything above $\text{INR } 75 = \text{USD } 1$. If that is the case, the writer of the put option collects the INR 5 premium for writing the put, as shown by the horizontal line in Figure 20.7. This is what the writer of the put is hoping will occur.
Figure 20.7 The Payoff to the Writer of a Put Option
The owner of the option will choose to exercise the option if the exchange rate is less than $\text{INR } 75 = \text{USD } 1$. If the spot exchange rate is between $\text{INR } 70 = \text{USD } 1$ and $\text{INR } 75 = \text{USD } 1$, the writer of the put option has a positive payoff. Although the writer must now purchase US dollars for a price higher than what the dollars are worth, the INR 5 premium that the writer received when entering into the position is more than enough to offset that loss.
If the spot exchange rate drops below $\text{INR } 70 = \text{USD } 1$, however, the writer of the put option is losing more than INR 5 when the option is exercised, leaving the writer with a negative payoff. In the extreme, the writer of the put will have to purchase worthless US dollars for INR 75/USD, resulting in a loss of INR 70.
Notice that the payoff to the writer of the put is the negative of the payoff to the holder of the put at every spot price. The highest payoff occurs to the writer of the put when the option is never exercised. In that instance, the payoff to the writer is the premium that the holder of the put paid when purchasing the option (see Figure 20.7).
Table 20.2 provides a summary of the positions that the parties who enter into options contract are in. Remember that the buyer of an option is always the one purchasing the right to do something. The seller or writer of an option is selling the right to make a decision; the seller has the obligation to fulfill the contract should the buyer of the option choose to exercise the option. The most the seller of an option can ever profit is by the premium that was paid for the option; this occurs when the option is not exercised.
| Party to an Option Contract | Right or Obligation of the Party | Benefits When | Maximum Profit | Harmed When | Maximum Loss |
|---|---|---|---|---|---|
| Buyer of a call | Right to buy | Price of underlying rises | Unlimited | Price of underlying falls | Premium paid |
| Seller of a call | Obligation to sell | Price of underlying falls | Premium received | Price of underlying rises | Unlimited |
| Buyer of a put | Right to sell | Price of underlying falls | Strike price minus premium | Price of underlying rises | Premium paid |
| Seller of a put | Obligation to buy | Price of underlying rises | Premium received | Price of underlying falls | Strike price minus premium |

Table 20.2 Summary of Option Contracts
Learning Objectives
By the end of this section, you will be able to:
• Describe interest rate risk.
• Explain how a change in interest rates changes the value of cash flows.
• Describe the use of an interest rate swap.
An interest rate is simply the price of borrowing money. Just as other prices are volatile, interest rates are also volatile. Just as volatility in other prices leads to uncertain cash flows for a company, volatility in interest rates can also lead to uncertain cash flows.
Measuring Interest Rate Risk
Suppose that a company is supposed to pay a bill of \$1,000 in 10 years. The present value of this bill depends on the level of interest rates. If the interest rate is 5%, the present value of the bill is $\frac{1{,}000}{(1 + 0.05)^{10}} = 613.91$. If the interest rate rises to 6%, the present value of the bill is $\frac{1{,}000}{(1 + 0.06)^{10}} = 558.39$. The increase in the interest rate by 1% causes the present value of the expected cash flow to fall by $\frac{613.91 - 558.39}{613.91} = 0.0904 = 9.04\%$.
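The present-value arithmetic can be replicated in a few lines (a sketch; the function name is ours):

```python
# Present value of a single future cash flow, and its sensitivity to a
# one-percentage-point rise in the interest rate.
def present_value(fv: float, rate: float, years: int) -> float:
    return fv / (1 + rate) ** years

pv_5 = present_value(1000, 0.05, 10)  # value of the $1,000 bill at 5%
pv_6 = present_value(1000, 0.06, 10)  # value of the same bill at 6%
drop = (pv_5 - pv_6) / pv_5           # proportional fall in present value
print(round(pv_5, 2), round(pv_6, 2), round(drop, 4))
# 613.91 558.39 0.0904
```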
Interest rate risk can be highlighted by looking at bonds. Consider two \$1,000 face value bonds with a 5% coupon rate, paid semiannually. One of the bonds matures in five years, and the other bond matures in 30 years. If the market interest rate is 5%, each of these bonds will sell for face value, or \$1,000. If, instead, the market interest rate is 6%, the five-year bond will sell for \$957.35 and the 30-year bond will sell for \$861.62.
Notice that as the interest rate rises, the price of both of these bonds will fall. However, the price of the longer-term bond will fall by more than the price of the shorter-term bond. The longer-term bond price will fall by 13.84%; the shorter-term bond price will fall by only 4.27%.
Consider two additional \$1,000 face value bonds. The difference is that these bonds have a 6% coupon rate, paid semiannually. If a bond has a 6% coupon rate and matures in five years, it will sell for \$1,043.76 when the market interest rate is 5%. A 30-year bond with a 6% coupon rate will sell for \$1,154.54 when the market interest rate is 5%. However, if the interest rate in the economy is 6%, both of these bonds will sell for a price of \$1,000. The price of the five-year bond will drop by 4.19%; the price of the 30-year bond will drop by 13.39%.
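All of the bond prices quoted above follow from the standard semiannual pricing formula: the present value of the coupon annuity plus the present value of the face amount. A sketch (function name ours):

```python
# Price a coupon bond: present value of the periodic coupons (an annuity)
# plus the present value of the face amount repaid at maturity.
def bond_price(face, coupon_rate, market_rate, years, freq=2):
    c = face * coupon_rate / freq   # coupon paid each period
    y = market_rate / freq          # market yield per period
    n = years * freq                # number of periods
    annuity = (1 - (1 + y) ** -n) / y
    return c * annuity + face * (1 + y) ** -n

print(round(bond_price(1000, 0.05, 0.06, 5), 2))   # 957.35
print(round(bond_price(1000, 0.05, 0.06, 30), 2))  # 861.62
print(round(bond_price(1000, 0.06, 0.05, 5), 2))   # 1043.76
print(round(bond_price(1000, 0.06, 0.05, 30), 2))  # 1154.54
```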
Think It Through
Calculating Bond Prices as the Interest Rates Changes
You are considering purchasing a \$10,000 face value bond with a 4% coupon rate, paid semiannually, that matures in 20 years. If you require a 5% return to purchase this bond, what is the maximum price you would be willing to pay for the bond? If, instead, you require an 8% return to purchase this bond, what is the maximum price you would be willing to pay for the bond?
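One way to work the question above, assuming the standard semiannual bond-pricing formula (a sketch, not the book's printed solution; the function name is ours):

```python
# Maximum price = present value of the bond's cash flows at the required return.
def bond_price(face, coupon_rate, required_return, years, freq=2):
    c = face * coupon_rate / freq
    y = required_return / freq
    n = years * freq
    return c * (1 - (1 + y) ** -n) / y + face * (1 + y) ** -n

print(round(bond_price(10_000, 0.04, 0.05, 20), 2))  # 8744.86 at a 5% required return
print(round(bond_price(10_000, 0.04, 0.08, 20), 2))  # 6041.45 at an 8% required return
```

The higher your required return, the less you would be willing to pay today for the same fixed stream of coupons and face value.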
The sensitivity of bond prices to changes in the interest rate is known as interest rate risk. Duration is an important measure of interest rate risk that incorporates the maturity and coupon rate of a bond as well as the level of current market interest rates. Calculating duration is a complex topic that is beyond the scope of this introductory textbook, but it is useful to note that
• the higher the duration of a bond, the more sensitive the price of the bond will be to interest rate changes;
• the duration of a bond will be higher when market yields are lower, all else being equal;
• the duration of a bond will be higher the longer the maturity of the bond, all else being equal; and
• the duration of a bond will be higher the lower the coupon rate on the bond, all else being equal.
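Although we do not derive duration here, the bulleted relationships can be illustrated numerically by repricing a bond after a tiny change in yield. This finite-difference sketch is our own (not a formal Macaulay duration calculation); it approximates modified duration, the proportional price drop per unit increase in the annual yield:

```python
def bond_price(face, coupon_rate, market_rate, years, freq=2):
    """Price = PV of the coupon annuity + PV of the face value."""
    c = face * coupon_rate / freq
    r = market_rate / freq
    n = years * freq
    return c * (1 - (1 + r) ** -n) / r + face * (1 + r) ** -n

def effective_duration(face, coupon_rate, market_rate, years, dy=1e-4):
    """Approximate modified duration, -(1/P) dP/dy, by central difference."""
    up = bond_price(face, coupon_rate, market_rate + dy, years)
    down = bond_price(face, coupon_rate, market_rate - dy, years)
    base = bond_price(face, coupon_rate, market_rate, years)
    return (down - up) / (2 * base * dy)

# Longer maturity -> higher duration, all else equal:
effective_duration(1_000, 0.05, 0.05, 5)    # ≈ 4.4
effective_duration(1_000, 0.05, 0.05, 30)   # ≈ 15.5
# Lower coupon -> higher duration, all else equal:
effective_duration(1_000, 0.02, 0.05, 30)   # ≈ 19.0
```

The pattern matches the bullets: the 30-year bond is far more rate-sensitive than the 5-year bond, and cutting the coupon from 5% to 2% raises the sensitivity further.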
Swap-Based Hedging
As the name suggests, a swap involves two parties agreeing to swap, or exchange, something. Generally, the two parties, known as counterparties, are swapping obligations to make specified payment streams.
To illustrate the basics of how an interest rate swap works, let’s consider two hypothetical companies, Alpha and Beta. Alpha is a strong, well-established company with a AAA (triple-A) bond rating. This means that Alpha has the highest rating a company can have. With this high rating, Alpha can borrow at relatively low interest rates. Often, companies in this situation will borrow at a floating rate. This means that their interest rate goes up and down as interest rates in the overall economy vary. The floating rate will be tied to a benchmark rate that is widely quoted in the financial press. Historically, companies have often used the London Interbank Offered Rate (LIBOR) as the benchmark rate. Because published quotes for LIBOR will be phased out by 2023, firms are beginning to use alternative rates. As of yet, no single alternative has emerged as the most commonly used rate; therefore, LIBOR will be used in our example. Suppose that Alpha finds that it can borrow money at a rate equal to LIBOR + 0.25%; thus, if LIBOR is 2.75%, the company will pay 3.0% to borrow. If the company wants to borrow at a long-term fixed rate, its cost of borrowing will be 5.0%.
Link to Learning
LIBOR Transition
Although the basic principles of financial transactions remain the same over time, the particular financial instruments used change from time to time. Innovation, regulation, and technological advances lead to these changes in financial instruments. The use of LIBOR as a benchmark rate is winding down in the early 2020s. To find out more about this transition and how it impacts companies, visit the About LIBOR Transition website.
Beta has a BBB bond rating. Although this is considered a good, investment-grade rating, it is lower than the rating of Alpha. Because Beta is less creditworthy and a bit riskier than Alpha, it will have to pay a higher interest rate to borrow money. If Beta wants to borrow money at a floating rate, it will need to pay LIBOR + 0.75%. If LIBOR is 2.75%, Beta must pay 3.5% on its floating-rate debt. In order for Beta to borrow at a long-term fixed rate, its cost of borrowing will be 6.75%.
Let’s consider how these two companies can enter into a swap in which both parties benefit. Table 20.5 summarizes the situation and the rates at which Alpha and Beta can borrow. It also illustrates a way in which an interest rate swap can benefit both Alpha and Beta.
                                  Alpha                      Beta
Bond rating                       AAA                        BBB
Floating rate                     LIBOR + 0.25%              LIBOR + 0.75%
Fixed rate                        5.00%                      6.75%
Rate company chooses              Fixed at 5.0%              Floating at LIBOR + 0.75%
Swap:
  Beta pays Alpha fixed rate      +5.5                       -5.5
  Alpha pays Beta floating rate   -LIBOR                     +LIBOR
Payments and receipts             -5.0 + 5.5 - LIBOR         -(LIBOR + 0.75) - 5.5 + LIBOR
Net amount                        0.5 - LIBOR                -6.25
Benefit                           0.75                       0.5

Table 20.5 Example of a Swap Agreement
Alpha borrows in the capital markets at a fixed rate of 5%. Beta chooses to borrow at a floating rate that equals LIBOR + 0.75%. Beta also agrees to pay Alpha a fixed rate of 5.5%. In essence, Beta is paying 5.5% to Alpha and LIBOR + 0.75% to its lender.
In return, Alpha promises to pay Beta LIBOR. The exact amount that Alpha will pay to Beta fluctuates as LIBOR fluctuates. However, from Beta’s perspective, the payment of LIBOR it receives from Alpha exactly offsets the payment of LIBOR it makes to its lender. When LIBOR increases, the rate of LIBOR + 0.75% that Beta is paying to its lender increases, but the LIBOR rate it receives from Alpha also increases. When LIBOR decreases, Beta receives less from Alpha, but it also pays less to its lender. Because the LIBOR it receives from Alpha is exactly equal to the LIBOR it pays to its lender, Beta’s net amount of interest paid is 6.25%—the 5.5% it pays to Alpha plus the 0.75% it pays to its lender.
Alpha is in the position of paying 5.0% to its lender and LIBOR to Beta while receiving 5.5% from Beta. This means that Alpha’s net interest paid is LIBOR - 0.5%. Alpha is said to have swapped its fixed interest rate for a floating rate. Because it is paying LIBOR - 0.5%, it will experience fluctuating interest rates; however, as a company with a AAA bond rating, it is a strong, creditworthy company that can withstand that interest rate exposure. It would have cost Alpha LIBOR + 0.25% to borrow the money from its lenders at a variable rate. By participating in this swap arrangement, Alpha has been able to lower its interest rate by 0.75%.
Through this swap arrangement, Beta has been able to fix its interest rate at 6.25% rather than having a variable rate. This predictability is a benefit for a company, especially one that is in a bit more precarious position as far as its creditworthiness and stability. The 6.25% Beta pays as a result of this arrangement is 0.5% below the 6.75% it would have paid if it simply borrowed from its lenders at a fixed rate.
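The net positions described above can be checked for any level of LIBOR with a couple of one-line functions (a sketch of our own; rates are expressed in percent):

```python
def alpha_net_rate(libor):
    """Alpha pays 5.0 to its lender and LIBOR to Beta, and receives 5.5 from Beta."""
    return 5.0 + libor - 5.5             # = LIBOR - 0.5: a floating net rate

def beta_net_rate(libor):
    """Beta pays LIBOR + 0.75 to its lender and 5.5 to Alpha, and receives LIBOR from Alpha."""
    return (libor + 0.75) + 5.5 - libor  # = 6.25: fixed, whatever LIBOR does

alpha_net_rate(2.75)   # 2.25, vs. 3.00 borrowing floating directly: saves 0.75
beta_net_rate(2.75)    # 6.25, vs. 6.75 borrowing fixed directly: saves 0.50
beta_net_rate(5.00)    # still 6.25: the swap has fixed Beta's rate
```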
Footnotes
• 9. The specific financial calculator in these examples is the Texas Instruments BA II Plus™ Professional model, but you can use other financial calculators for these types of calculations.
20.1 The Importance of Risk Management
Risk arises due to uncertainty. The future is unpredictable. One job of the financial manager is to manage the risks of both cash inflows and cash outflows. Investors are risk-averse. The riskier a firm’s cash flows are, the higher the rate of return investors require to provide capital to the company.
20.2 Commodity Price Risk
Companies do not know how much they will have to pay for raw materials in future months. The price of raw materials will change as economic conditions change, impacting a company’s cost of goods sold and profits. Some ways that a company can hedge this risk are through vertical integration, long-term contracts, and futures contracts.
20.3 Exchange Rates and Risk
Exchange rates are unpredictable. This leads to transaction risk, translation risk, and economic risk as currency values change. A forward contract is an agreement between two parties to make an exchange at a particular rate on a given date in the future. Companies can use options to mitigate these risks. A call option gives the holder the right, but not the obligation, to purchase an underlying asset. A put option gives the holder the right, but not the obligation, to sell an underlying asset.
20.4 Interest Rate Risk
When interest rates increase, the present value of future cash flows decreases. Duration is a measure of interest rate risk. A swap involves two parties agreeing to exchange something, often specified payment streams.
20.07: Key Terms
American option
an option that the holder can exercise at any time up to and including the exercise date
appreciate
when one unit of a currency will purchase more of a foreign currency than it did previously
call option
an option that gives the owner the right, but not the obligation, to buy the underlying asset at a specified price on some future date
depreciate
when one unit of a currency will purchase less of a foreign currency than it did previously
derivative
a security that derives its value from another asset
duration
a measure of interest rate risk
economic risk
the risk that a change in exchange rates will impact the number of customers a business has or its sales
European option
an option that the holder can exercise only on the expiration date
exchange rate
the price of one currency in terms of another currency
exercise price (strike price)
the price the option holder pays for the underlying asset when exercising an option
exercising
choosing to purchase or sell the asset underlying a held option according to the terms of the option contract
expiration date
the date an option contract expires
forward contract
a contractual agreement between two parties to exchange a specified amount of assets on a specified future date
futures contract
a standardized contract to trade an asset on some future date at a price locked in today
hedging
taking an action to reduce exposure to a risk
margin
the collateral that must be posted to guarantee that a trader will honor a futures contract
marking to market
a procedure by which cash flows are exchanged daily for a futures contract, rather than at the end of the contract
natural hedge
when a company offsets the risk that something will decrease in value by having a company activity that would increase in value at the same time
option
an agreement that gives the owner the right, but not the obligation, to purchase or sell an asset at a specified price on some future date
option writer
seller of a call or put option
premium
the price a buyer of an option pays for the option contract
put option
an option that gives the owner the right, but not the obligation, to sell the underlying asset at a specified price on some future date
speculating
attempting to profit by betting on the uncertain future, knowing that a risk of loss is involved
spot rate
the current market exchange rate
strike price (exercise price)
the price an option holder pays for the underlying asset when exercising the option
swap
an agreement between two parties to exchange something, such as their obligations to make specified payment streams
transaction risk
the risk that a change in exchange rates will impact the value of a business’s expected receipts or expenses
translation risk
the risk that a change in exchange rates will impact the value of items on a company’s financial statements
vertical integration
the merger of a company with its supplier
20.08: CFA Institute
This chapter supports some of the Learning Outcome Statements (LOS) in this CFA® Level I Study Session. Reference with permission of CFA Institute.
1.
Which of the following does a financial manager want to do to maximize the value of the firm?
1. Decrease the speed of money coming into the firm
2. Speed up cash going out and slow down cash coming in
3. Decrease the riskiness of cash inflows and cash outflows
4. Increase the volatility and speed of cash going out of the firm
2.
In finance, risk is ________.
1. the same thing as profit
2. ignored because it is inevitable
3. thought of as uncertainty or unpredictability
4. something that financial managers should strive to increase and maximize
3.
American Jeans Corp. purchases a cotton farm. The cotton grown on the farm will be used to make denim cloth for the company’s jeans. This is an example of ________.
1. striking a price
2. vertical integration
3. a forward contract
4. an American option
4.
The price that a holder of an option pays to buy the underlying asset when exercising a call option is known as ________.
1. the strike price
2. the maturity price
3. the exchange price
4. the underlying premium
5.
Which of the following gives the holder the right, but not the obligation, to purchase an underlying asset?
1. A call option
2. A forward contract
3. A European put option
4. An American put option
6.
An American option allows the holder to ________.
1. exercise the option only on the expiration date
2. exercise the option at any time up to and including the expiration date
3. sell stocks, and a European option allows the holder to purchase stocks
4. purchase stocks, and a European option allows the holder to purchase bonds
7.
The holder of a(n) ________ has the right to buy and the holder of a(n) ________ has the right to sell an underlying asset.
1. call option; put option
2. put option; call option
3. American option; European option
4. European option; American option
8.
The three main categories of foreign exchange risk a company faces are ________.
1. economic risk, business risk, and exposure risk
2. exposure risk, fluctuation risk, and forward risk
3. transaction risk, translation risk, and economic risk
4. appreciation risk, depreciation risk, and duplication risk
9.
In January, the exchange rate between the South Korean won and the US dollar was KRW 1,100 = USD 1. Three months later, the exchange rate was KRW 1,200 = USD 1. This means that ________.
1. the Korean won appreciated relative to the US dollar
2. the Korean won depreciated relative to the US dollar
3. the US dollar depreciated relative to the Korean won
4. both the Korean won and the US dollar appreciated
10.
In January, the exchange rate between the South Korean won and the US dollar was KRW 1,100 = USD 1. Three months later, the exchange rate was KRW 1,200 = USD 1. This means that ________.
1. it will cost US companies more to purchase raw materials from South Korea
2. it will cost Korean companies more to purchase raw materials from the United States
3. US companies that sell their products in South Korea will find their revenue has increased
4. Korean companies that sell their products in the United States will find that their revenue has decreased
11.
A foreign exchange forward contract ________.
1. is a standardized contract that is inflexible
2. occurs when a company swaps its translation exposure for transaction exposure
3. is a contractual agreement between two parties to exchange a specified amount of currencies on a future date
4. states the date on which a trade will take place, but the price for the trade will be determined at the time the trade occurs
12.
Which of the following is a measure of interest rate risk?
1. LIBOR
2. Duration
3. Translation exposure
4. Contract inflexibility
13.
A swap occurs when ________.
1. a company exchanges obligations with another company to make specified payment streams
2. a company purchases commodities from a company in another country, exposing it to both commodity and currency risk
3. a company chooses a local supplier over an international supplier to avoid currency exposure
4. a company chooses a foreign supplier so that its commodity risk will be offset by its currency risk
1.
What is the difference between someone using a derivative security to hedge risk and someone using a derivative security to speculate?
2.
Explain how vertical integration may be used as a method of hedging against commodity price risk.
3.
What is the difference between a forward contract and a futures contract?
4.
You are considering purchasing a call option to purchase Mexican pesos in three months with a strike price of MXN 20/USD. The premium for this call option is MXN 2. Show the payoff you will receive at various prices in a diagram.
5.
You are considering writing a call option to purchase Mexican pesos in three months with a strike price of MXN 20/USD. The premium for this call option is MXN 2. Show the payoff you will receive at various prices in a diagram.
6.
Why are options considered to be a “zero-sum game”?
20.11: Problems
1.
The Olive Orchard is a US retail outlet for high-quality olive oils. One of the major suppliers of olive oil for the company is a farm in Greece. The Olive Orchard must pay the Greek farm 5.00 euros per liter of olive oil it purchases. The Olive Orchard would like to purchase 7,000 liters of the Greek farm’s olive oil next year. Currently, it costs 0.900 euros to purchase 1 US dollar. If the exchange rate remains constant, how much will it cost the Olive Orchard (in US dollars) to purchase the 7,000 liters? If the exchange rate changes so that it costs 0.8599 euros to purchase 1 US dollar, how much will it cost to purchase the 7,000 liters of olive oil?
2.
International Automobile Parts (IAP) holds a call option to purchase US dollars. The strike price on the call option is JPY 115/USD. IAP paid JPY 10 for the option. The spot price is JPY 120/USD, and the option expires today. Should IAP exercise the option? What is IAP’s payoff?
3.
Global Producers (GP) holds a put option to sell US dollars. The strike price on the put option is JPY 114/USD. GP paid JPY 10 for the option. The spot price is JPY 120/USD, and the option expires today. Should GP exercise the option? What is GP’s payoff?
20.12: Video Activity
Hedging at Southwest Airlines
Jet fuel costs represent a major expense for airlines. Southwest Airlines has been known as the most aggressive airline when it comes to hedging the risk of jet fuel cost volatility. In this interview, the CEO of Southwest Airlines, Gary Kelly, discusses crude oil prices in the spring of 2012 and the impact on Southwest Airlines.
1.
How volatile are oil prices, and how large of an impact does that volatility have on the cost structure of an airline?
2.
Gary Kelly states that he sees fuel prices as the largest single business risk Southwest Airlines faces and that hedging that risk has become more expensive. Why do you think it became more expensive for Southwest Airlines to hedge this risk in 2012?
BMW in the United States
While the name BMW may sound German, a significant amount of BMW’s production occurs outside of Germany. Watch this video to learn about this international activity of BMW.
3.
In the video, the potential car buyer is concerned about the impact of the value of the euro on the price of the BMW. Why, if he is paying for the car in US dollars, do you think that he is impacted by the currency exchange rate?
4.
4. How do you think opening plants in the United States, and in other parts of the world, provides a currency hedge for BMW?
In his novel A Tale of Two Cities, set during the French Revolution of the late eighteenth century, Charles Dickens wrote, “It was the best of times; it was the worst of times.” Dickens may have been premature, since the same might well be said now, at the beginning of the twenty-first century.
When we think of large risks, we often think in terms of natural hazards such as hurricanes, earthquakes, or tornados. Perhaps man-made disasters come to mind—such as the terrorist attacks that occurred in the United States on September 11, 2001. We have typically overlooked financial crises, such as the credit crisis of 2008. However, these types of man-made disasters have the potential to devastate the global marketplace. Losses of multiple trillions of dollars, and of much human suffering and insecurity, are already being totaled as the U.S. Congress fights over a \$700 billion bailout. The financial markets are collapsing as never before seen.
Many observers consider this credit crunch, brought on by subprime mortgage lending and deregulation of the credit industry, to be the worst global financial calamity ever. Its unprecedented worldwide consequences have hit country after country—in many cases even harder than they hit the United States (David J. Lynch, “Global Financial Crisis May Hit Hardest Outside U.S.,” USA Today, October 30, 2008). The initial thinking was that the trouble was an isolated U.S. problem, a nation “laid low by a Wall Street culture of heedless risk-taking,” and that “the U.S. will lose its status as the superpower of the global financial system…. Now everyone realizes they are in this global mess together. Reflecting that shared fate, Asian and European leaders gathered Saturday in Beijing to brainstorm ahead of a Nov. 15 international financial summit in Washington, D.C.” The world is now a global village; we’re so fundamentally connected that past regional disasters can no longer be contained locally.
We can attribute the 2008 collapse to financially risky behavior of a magnitude never before experienced. Its implications dwarf any other disastrous events. The 2008 U.S. credit markets were a financial house of cards with a faulty foundation built by unethical behavior in the financial markets:
1. Lenders gave home mortgages without prudent risk management to underqualified home buyers, starting the so-called subprime mortgage crisis.
2. Many mortgages, including subprime mortgages, were bundled into new instruments called mortgage-backed securities, which were guaranteed by U.S. government agencies such as Fannie Mae and Freddie Mac.
3. These new bundled instruments were sold to financial institutions around the world. Bundling the investments gave these institutions the impression that the diversification effect would in some way protect them from risk.
4. Guarantees that were supposed to safeguard these instruments, called credit default swaps, were designed to take care of an assumed few defaults on loans, but they needed to safeguard against a systemic failure of many loans.
5. Home prices started to decline simultaneously as many of the unqualified subprime mortgage holders had to begin paying larger monthly payments. They could not refinance at lower interest rates as rates rose after the 9/11 attacks.
6. These subprime mortgage holders started to default on their loans. This dramatically increased the number of foreclosures, causing nonperformance on some mortgage-backed securities.
7. Financial institutions guaranteeing the mortgage loans did not have the appropriate backing to sustain the large number of defaults. These firms thus lost ground, including one of the largest global insurers, AIG (American International Group).
8. Many large global financial institutions became insolvent, bringing the whole financial world to the brink of collapse and halting the credit markets.
9. Individuals and institutions such as banks lost confidence in the ability of other parties to repay loans, causing credit to freeze up.
10. Governments had to get into the action and bail many of these institutions out as a last resort. This unfroze the credit mechanism that propels economic activity by enabling lenders to lend again.
As we can see, a basic lack of risk management (and regulators’ inattention or inability to control these overt failures) lay at the heart of the global credit crisis. This crisis started with improperly underwritten mortgages and excessive debt. Companies depend on loans and lines of credit to conduct their routine business. If such credit lines dry up, production slows down and brings the global economy to the brink of deep recession—or even depression. The snowballing effect of this failure to manage the risk associated with providing mortgage loans to unqualified home buyers has been profound, indeed. The world is in a global crisis due to the prevailing (in)action by companies and regulators who ignored and thereby increased some of the major risks associated with mortgage defaults. When the stock markets were going up and homeowners were paying their mortgages, everything looked fine and profit opportunities abounded. But what goes up must come down, as Flannery O’Connor once wrote. When interest rates rose and home prices declined, mortgage defaults became more common. This caused the bundled mortgage-backed securities to fail. When the mortgages failed because of greater risk taking on Wall Street, the entire house of cards collapsed.
Additional financial instruments, called credit derivatives, gave the illusion of insuring the financial risk of the bundled collateralized mortgages without actually having a true foundation in claims, the foundation that underlies all of risk management. (In essence, a credit derivative is a financial instrument issued by one firm that guarantees payment on the contracts of another party. The guarantees are provided under a second contract; should the issuer of the second contract not perform—for example, by defaulting or going bankrupt—the second contract goes into effect. When the mortgages defaulted, the supposed guarantors did not have enough money to pay their contract obligations. This caused others, who were counting on the payments, to default on obligations of their own, and so on: a chain reaction that generated a global financial market collapse.) This lack of risk management cannot be blamed on a lack of warning alone. Regulators and firms were warned to adhere to risk management procedures, but these warnings were ignored in pursuit of profit and “free markets” (see “The Crash: Risk and Regulation, What Went Wrong” by Anthony Faiola, Ellen Nakashima, and Jill Drew, Washington Post, October 15, 2008, A01). Lehman Brothers represented the largest bankruptcy in history, which meant that the U.S. government (in essence) nationalized banks and the insurance giant AIG. This, in turn, killed Wall Street as we previously knew it and brought about the restructuring of government’s role in society. We can lay all of this at the feet of the investment banking industry and its inadequate risk recognition and management. Probably no other risk-related event has had, and will continue to have, as profound an impact worldwide as this risk management failure (and this includes the terrorist attacks of 9/11). Ramifications of this risk management failure will echo for decades.
It will affect all voters and taxpayers throughout the world and potentially change the very structure of American government.
How was risk in this situation so badly managed? What could firms and individuals have done to protect themselves? How can government measure such risks (beforehand) to regulate and control them? These and other questions come immediately to mind when we contemplate the fateful consequences of this risk management fiasco.
With his widely acclaimed book Against the Gods: The Remarkable Story of Risk (New York: John Wiley & Sons, 1996), Peter L. Bernstein teaches us how human beings have progressed so magnificently with their mathematics and statistics to overcome the unknown and the uncertainty associated with risk. However, no one fully practiced his plans of how to utilize the insights gained from this remarkable intellectual progression. The terrorist events of September 11, 2001; Hurricanes Katrina, Wilma, and Rita in 2005 and Hurricane Ike in 2008; and the financial meltdown of September 2008 have shown that knowledge of risk management has never, in our long history, been more important. Standard risk management practice would have identified subprime mortgages and their bundling into mortgage-backed securities as high risk. As such, people would have avoided these investments or would have put enough money into reserve to be able to withstand defaults. This did not happen. Accordingly, this book may represent one of the most critical topics of study that the student of the twenty-first century could ever undertake.
Risk management will be a major focal point of business and societal decision making in the twenty-first century. A separate focused field of study, it draws on core knowledge bases from law, engineering, finance, economics, medicine, psychology, accounting, mathematics, statistics, and other fields to create a holistic decision-making framework that is sustainable and value-enhancing. This is the subject of this book.
In this chapter we discuss the following:
1. Links
2. The notion and definition of risks
3. Attitudes toward risks
4. Types of risk exposures
5. Perils and hazards
1.02: Links
Our “links” section in each chapter ties each concept and objective in the chapter into the realm of globally or holistically managing risk. The solutions to risk problems require a compilation of techniques and perspectives, shown as the pieces completing a puzzle of the myriad of personal and business risks we face. These are shown in the “connection” puzzle in Figure \(1\). As we progress through the text, each chapter will begin with a connection section to show how links between personal and enterprise holistic risk picture arise.
Even in chapters that you may not think apply to the individual, such as commercial risk, the connection will highlight the underlying relationships among different risks. Today, management of personal and commercial risks requires coordination of all facets of the risk spectrum. On a national level, we experienced the move toward holistic risk management with the creation of the Department of Homeland Security (see http://www.dhs.gov/dhspublic/) after the terrorist attacks of September 11, 2001. After Hurricane Katrina struck in 2005, the impasse among local, state, and federal officials elevated the need for coordination to achieve efficient holistic risk management in the event of a megacatastrophe. (The student is invited to read archival articles from all media sources about the calamity of the poor response to the floods in New Orleans. The insurance studies program of Virginia Commonwealth University held a town hall meeting the week after Katrina to discuss the natural and man-made disasters and their impact both financially and socially; the PowerPoint basis for the discussion is available to readers.) The global financial crisis of 2008 created unprecedented coordination of regulatory actions across countries and, further, governmental involvement in managing risk at the enterprise level—essentially a global holistic approach to managing systemic financial risk. Systemic risk is a risk that affects everything, as opposed to individuals being involved in risky enterprises. In the next section, we define all types of risks more formally.
Learning Objectives
• In this section, you will learn the concept of risk and differentiate between risk and uncertainty.
• You will build the definition of risk as a consequence of uncertainty and within a continuum of decision-making roles.
The notion of “risk” and its ramifications permeate the decision-making processes of each individual’s life, of business outcomes, and of society itself. Indeed, risk, and how it is managed, are critical aspects of decision making at all levels. We must evaluate profit opportunities, in business and in personal life, in terms of the countervailing risks they engender. We must evaluate solutions to problems (global, political, financial, and individual) on a risk-cost, cost-benefit basis rather than on an absolute basis. Because of risk’s all-pervasive presence in our daily lives, you might be surprised that the word “risk” is hard to pin down. For example, what does a businessperson mean when he or she says, “This project should be rejected since it is too risky”? Does it mean that the amount of loss is too high or that the expected value of the loss is high? Is the expected profit on the project too small to justify the consequent risk exposure and the potential losses that might ensue? The reality is that the term “risk” (as used in the English language) is ambiguous in this regard. One might use any of the previous interpretations. Thus, professionals try to use different words to delineate each of these different interpretations. We will discuss possible interpretations in what follows.
Risk as a Consequence of Uncertainty
We all have a personal intuition about what we mean by the term “risk.” We all use and interpret the word daily. We have all felt the excitement, anticipation, or anxiety of facing a new and uncertain event (the “tingling” aspect of risk taking). Thus, actually giving a single unambiguous definition of what we mean by the notion of “risk” proves to be somewhat difficult. The word “risk” is used in many different contexts. Further, the word takes many different interpretations in these varied contexts. In all cases, however, the notion of risk is inextricably linked to the notion of uncertainty. We provide here a simple definition of uncertainty: Uncertainty is having two or more potential outcomes for an event or situation.
Certainty refers to knowing that something will or will not happen; in certain situations we experience no doubt. Uncertain situations, by contrast, are not perfectly predictable. Uncertainty causes the emotional (or even physical) anxiety or excitement felt in volatile situations, such as gambling or participation in extreme sports. Uncertainty also causes us to take precautions. We may simply avoid business activities or involvements that we consider too risky, and uncertainty is why mortgage lenders demand that the person or corporation occupying a mortgage-funded property purchase insurance on the real estate before the money is lent. If we knew, without a doubt, that something bad was about to occur, we would call it apprehension or dread; it would not be risk, because it would be predictable. Risk will be forever, inextricably linked to uncertainty.
As we all know, certainty is elusive, while uncertainty and risk are pervasive. Although we typically associate “risk” with unpleasant or negative events, in reality some risky situations can result in positive outcomes. Take, for example, venture capital investing or entrepreneurial endeavors. Uncertainty about which of several possible outcomes will occur circumscribes the meaning of risk; uncertainty lies behind the definition of risk.
While we link the concept of risk with the notion of uncertainty, risk isn’t synonymous with uncertainty, just as having the flu is not the same as the virus that causes it. Risk isn’t the same as its underlying prerequisite of uncertainty. Risk (intuitively and formally) has to do with consequences, both positive and negative, and it requires that more than one outcome be possible (uncertainty). The consequences can be behavioral, psychological, or financial, to name a few. Uncertainty creates both opportunities for gain and the potential for loss. Nevertheless, if no possibility of a negative outcome arises at all, even remotely, then we usually do not refer to the situation as having risk (only uncertainty), as shown in Figure \(1\).
Table 1.1 Examples of Consequences That Represent Risks
States of the World—Uncertainty | Consequences—Risk
Could or could not get caught driving under the influence of alcohol | Loss of respect by peers (non-numerical); higher car insurance rates or cancellation of auto insurance at the extreme.
Potential variety in interest rates over time | Numerical variation in money returned from investment.
Various levels of real estate foreclosures | Losses from financial instruments linked to mortgage defaults or some domino effect such as the one that starts this chapter.
Smoking cigarettes at various numbers per day | Bad health changes (such as cancer and heart disease) shortening length and quality of life; inability to contract with life insurance companies at favorable rates.
Power plant and automobile emission of greenhouse gases (CO2) | Global warming, melting of ice caps, rising of oceans, increase in intensity of weather events, displacement of populations; possible extinction or mutations in some populations.
In general, we widely believe in an a priori (i.e., before the event) relationship between negative risk and profitability: in a competitive economic market, we must take on a larger possibility of negative risk if we are to achieve a higher return on an investment. Every opportunity involves both risk and return.
The Role of Risk in Decision Making
In a world of uncertainty, we regard risk as encompassing both the potential for gains and the negative prospect of losses. See Figure \(2\), a Venn diagram to help you visualize risk-reward outcomes. For the enterprise and for individuals, risk is a component to be considered within a general objective of maximizing the value associated with risk; alternatively, we wish to minimize the dangers associated with financial collapse or other adverse consequences. The right circle of the figure represents mitigation of adverse consequences such as failures, while the left circle represents the opportunities for gains when risks are undertaken. As with most Venn diagrams, the two circles intersect to create the set of opportunities for which people take on risk (Circle 1) for reward (Circle 2).
Identify the overlapping area as the set in which we both minimize risk and maximize value.
Figure \(2\) will help you conceptualize the impact of risk. Risk permeates the spectrum of decision making, from goals of value maximization to goals of insolvency minimization (in game-theory terms, maximin). On one side of the continuum, we seek to add value from the opportunities presented by uncertainty (and its consequences); on the other side, we maintain a tight focus on minimizing the pure losses that might accompany insolvency or bankruptcy. The 2008 financial crisis illustrates the consequences of exploiting the opportunities presented by risk while ignoring the requisite adverse consequences associated with insolvency. Ignoring risk represents mismanagement of risk in the opportunity-seeking context and can bring complete calamity and total loss in the pure loss-avoidance context.
We will discuss this trade-off in more depth later in the book. To date, managing risks in the context of loss minimization has been more successful than managing risks under an objective of value maximization. Catastrophic consequences that involve risk of loss and insolvency in natural disaster contexts are modeled using complex and innovative statistical techniques. On the other hand, risk management within the context of maximizing value has not yet adequately confronted the potential for catastrophic consequences. The potential for catastrophic human-made financial risk is most dramatically illustrated by the fall 2008 financial crisis: no catastrophic models were developed to counter managers’ value-maximization objective, nor did regulators impose risk constraints on the catastrophic potential of the various financial derivative instruments.
Definitions of Risk
We previously noted that risk is a consequence of uncertainty—it isn’t uncertainty itself. To broadly cover all possible scenarios, we don’t specify exactly what type of “consequence of uncertainty” we are considering as risk. In the popular lexicon of the English language, the “consequence of uncertainty” is that the observed outcome deviates from what we had expected. Consequences, you will recall, can be positive or negative. If the deviation from what was expected is negative, we have the popular notion of risk: “risk” arises from the possibility of a negative outcome in an uncertain situation.
If we try to get an ex-post (i.e., after the fact) risk measure, we can measure risk as the perceived variability of future outcomes: actual outcomes may differ from expectations. Such variability of future outcomes corresponds to the economist’s notion of risk. Risk is intimately related to the “surprise an outcome presents.” Actual quantitative risk measures are the topic of "2: Risk Measurement and Metrics". A simple example arises from our day-to-day expectations. We expect to arrive on time at a particular destination, but a variety of obstacles may stop us from actually arriving on time; the obstacles may lie within our own behavior or be external to it. Some uncertainty exists as to whether such an obstacle will occur and cause a deviation from our expectation. As another example, when American Airlines had to ground all its MD-80 planes for government-required inspections, many of us had to cancel travel plans and couldn’t attend important planned meetings and celebrations. Air travel always carries the possibility that we will be grounded, which gives rise to uncertainty; in this case we experienced the negative event because it was externally imposed upon us, and we suffered a loss by deviating from our plans. Other deviations from expectations could include being in an accident rather than enjoying a fun outing. The possibility of lower-than-expected (negative) outcomes becomes central to the definition of risk, because such losses produce the negative quality associated with not knowing the future. We must then manage the negative consequences of the uncertain future. This is the essence of risk management.
Our perception of risk arises from our perception of and quantification of uncertainty. In scientific settings and in actuarial and financial contexts, risk is usually expressed in terms of the probability of occurrence of adverse events. In other fields, such as political risk assessment, risk may be very qualitative or subjective. This is also the subject of "2: Risk Measurement and Metrics".
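The idea that risk is the variability of outcomes around an expectation can be made concrete with a few lines of arithmetic. The sketch below uses made-up outcomes and probabilities (all numbers are hypothetical, chosen only for illustration) to compute an expected outcome and the standard deviation around it, the simplest of the quantitative measures developed in "2: Risk Measurement and Metrics".

```python
# Risk as variability of outcomes: a minimal numeric sketch.
# The outcomes and probabilities are hypothetical.

outcomes = [-20.0, 0.0, 10.0, 30.0]   # possible gains/losses
probs    = [0.1,   0.3, 0.4,  0.2]    # assumed probabilities (sum to 1)

# Expected (probability-weighted average) outcome.
expected = sum(p * x for p, x in zip(probs, outcomes))

# Variance: probability-weighted squared deviation from the expectation.
variance = sum(p * (x - expected) ** 2 for p, x in zip(probs, outcomes))
std_dev = variance ** 0.5

print(f"expected outcome: {expected:.2f}")
print(f"standard deviation (one risk measure): {std_dev:.2f}")
```

With these assumed numbers, the expected outcome is 8.0 and the standard deviation around it is 14.0; a larger standard deviation signals greater potential for surprise, and hence more risk.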
Key Takeaways
• Uncertainty is a precursor to risk.
• Risk is a consequence of uncertainty; risk can be emotional, financial, or reputational.
• The roles of Maximization of Value and Minimization of Losses form a continuum on which risk is anchored.
• One consequence of uncertainty is that actual outcomes may vary from what is expected and as such represents risk.
Discussion Questions
1. What is the relationship between uncertainty and risk?
2. What roles contribute to the definition of risk?
3. What examples fit under uncertainties and consequences? Which are the risks?
4. What is the formal definition of risk?
5. What examples can you cite of quantitative consequences of uncertainty and a qualitative or emotional consequence of uncertainty?
Learning Objectives
• In this section, you will learn that people’s attitudes toward risk affect their decision making.
• You will learn about the three major types of “risk attitudes.”
An in-depth exploration into individual and firms’ attitudes toward risk appears in "3: Risk Attitudes - Expected Utility Theory and Demand for Hedging". Here we touch upon this important subject, since it is key to understanding behavior associated with risk management activities. The following box illustrates risk as a psychological process. Different people have different attitudes toward the risk-return tradeoff. People are risk averse when they shy away from risks and prefer to have as much security and certainty as is reasonably affordable in order to lower their discomfort level. They would be willing to pay extra to have the security of knowing that unpleasant risks would be removed from their lives. Economists and risk management professionals consider most people to be risk averse. So, why do people invest in the stock market where they confront the possibility of losing everything? Perhaps they are also seeking the highest value possible for their pensions and savings and believe that losses may not be pervasive—very much unlike the situation in the fall of 2008.
A risk seeker, on the other hand, is not simply the person who hopes to maximize the value of retirement investments by investing in the stock market. Much like a gambler, a risk seeker is someone who will enter into an endeavor (such as blackjack card games or slot machine gambling) as long as a positive long-run return on the money is possible, however unlikely.
Finally, an entity is said to be risk neutral when its risk preference lies in between these two extremes. Risk-neutral individuals will not pay extra to have the risk transferred to someone else, nor will they pay to engage in a risky endeavor. To them, money is money: they don’t pay for insurance, nor will they gamble. Economists consider most widely held or publicly traded corporations to make decisions in a risk-neutral manner, since their shareholders have the ability to diversify away risk—to take actions that seemingly are not related or have opposite effects, or to invest in many possible unrelated products or entities such that the impact of any one event decreases the overall risk. Risks that the corporation might otherwise choose to transfer can thus be diversified away by its shareholders. In the fall of 2008, everyone felt like a gambler, which emphasizes just how fluidly risk lies on a continuum like that in Figure 1.3.1. Financial theories and research pay attention to the nature of firms’ behavior in their pursuit of value maximization; most theories agree that firms work within risk limits to ensure they do not “go broke.” In the following box we provide a brief discussion of people’s attitudes toward risk. A more elaborate discussion can be found in "3: Risk Attitudes - Expected Utility Theory and Demand for Hedging".
Feelings Associated with Risk
Early in our lives, while protected by our parents, we enjoy security. But imagine yourself as your parents (if you can) during the first years of your life. A game called “Risk Balls” was created to illustrate tangibly how we handle and transfer risk.Etti G. Baranoff, “The Risk Balls Game: Transforming Risk and Insurance Into Tangible Concept,” Risk Management & Insurance Review 4, no. 2 (2001): 51–59. See, for example, Figure \(1\) below. The balls represent risks, such as dying prematurely, losing a home to fire, or losing one’s ability to earn an income because of illness or injury. Risk balls bring the abstract and fortuitous (accidental or governed by chance) nature of risk into a more tangible context. If you held these balls, you would want to dispose of them as soon as you possibly could. One way to dispose of risks (represented by these risk balls) is by transferring the risk to insurance companies or other firms that specialize in accepting risks. We will cover the benefits of transferring risk in many chapters of this text.
Right now, we focus on the risk itself. What do you actually feel when you hold the risk balls? Most likely, your answer would be, “insecurity and uneasiness.” We associate risks with fears. A person who is risk averse—that is, a “normal person” who shies away from risk and prefers to have as much security and certainty as possible—would wish to lower the level of fear. Professionals consider most of us risk averse. We sleep better at night when we can transfer risk to the capital market, which usually takes the form of an insurance company or the community at large.
As risk-averse individuals, we will often pay in excess of the expected cost just to achieve some certainty about the future. When we pay an insurance premium, for example, we forgo wealth in exchange for an insurer’s promise to pay covered losses. Some risk-transfer professionals refer to premiums as an exchange of a certain loss (the premium) for uncertain losses that may cause us to lose sleep. One important aspect of this kind of exchange: premiums are larger than expected losses. Those who are willing to pay only the average loss as a premium would be considered risk neutral. Someone who accepts risk at less than the average loss, perhaps even paying to add risk—such as through gambling—is a risk seeker.
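The claim that a risk-averse person will willingly pay more than the expected loss can be illustrated with a short expected-utility calculation, a preview of the formal treatment in "3: Risk Attitudes - Expected Utility Theory and Demand for Hedging". In the sketch below, the wealth level, loss size, loss probability, and the logarithmic utility function are all assumptions chosen for simplicity, not figures from this text.

```python
import math

# Why a risk-averse person pays more than the expected loss:
# a sketch with hypothetical numbers and log utility, u(w) = ln(w).
wealth = 100_000.0
loss = 50_000.0
p = 0.01                   # assumed probability of suffering the loss

expected_loss = p * loss   # the premium a risk-neutral person would pay

# Expected utility if the person stays uninsured.
eu_uninsured = p * math.log(wealth - loss) + (1 - p) * math.log(wealth)

# The maximum premium pi satisfies ln(wealth - pi) = eu_uninsured,
# i.e., full insurance at pi leaves the person exactly indifferent.
max_premium = wealth - math.exp(eu_uninsured)

print(f"expected loss:        {expected_loss:,.2f}")
print(f"maximum premium:      {max_premium:,.2f}")
print(f"price of certainty:   {max_premium - expected_loss:,.2f}")
```

With these assumptions the expected loss is 500, yet the individual would pay up to roughly 691 for full insurance; the difference of about 191 is the price of certainty that risk aversion makes worth paying.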
Figure \(1\): Risk Balls
KEY TAKEAWAY
• Differentiate among the three risk attitudes that prevail in our lives—risk averse, risk neutral, and risk seeker.
Discussion Questions
1. Name three risk attitudes that people display.
2. How do those risk attitudes fit into the roles that lie behind the definition of risk?
Learning Objectives
• In this section, you will learn what a risk professional means by exposure.
• You will also learn several different ways to split risk exposures according to the risk types involved (pure versus speculative, systemic versus idiosyncratic, diversifiable versus nondiversifiable).
• You will learn how enterprise-wide risk approaches combine risk categories.
Most risk professionals define risk in terms of an expected deviation of an occurrence from what they expect—also known as anticipated variability. In common English usage, many people continue to use the word “risk” as a noun to describe the enterprise, property, person, or activity that will be exposed to losses. In contrast, most insurance industry contracts and education and training materials use the term exposure to describe the enterprise, property, person, or activity facing a potential loss. So a house built on the coast near Galveston, Texas, is called an “exposure unit” for its potential of loss due to a hurricane. Throughout this text, we will use the terms “exposure” and “risk” to denote those units that are exposed to losses.
Pure versus Speculative Risk Exposures
Some people say that Eskimos have a dozen or so words to name or describe snow. Likewise, professional people who study risk use several words to designate what others intuitively and popularly know as “risk.” Professionals note several different ideas for risk, depending on the particular aspect of the “consequences of uncertainty” that they wish to consider. Using different terminology to describe different aspects of risk allows risk professionals to reduce any confusion that might arise as they discuss risks.
As we noted in Table 1.2, risk professionals often differentiate between pure risk that features some chance of loss and no chance of gain (e.g., fire risk, flood risk, etc.) and those they refer to as speculative risk. Speculative risks feature a chance to either gain or lose (including investment risk, reputational risk, strategic risk, etc.). This distinction fits well into Figure 1.3.1. The right-hand side focuses on speculative risk. The left-hand side represents pure risk. Risk professionals find this distinction useful to differentiate between types of risk.
Some risks can be transferred to a third party—like an insurance company. These third parties can provide a useful “risk management solution.” Some situations, on the other hand, require risk transfers that use capital markets, known as hedging or securitizations. Hedging refers to activities that are taken to reduce or eliminate risks. Securitization is the packaging and transferring of insurance risks to the capital markets through the issuance of a financial security. Other risks are retained: risk retention is when a firm keeps its risk, in essence self-insuring against adverse contingencies out of its own cash flows. We explain these alternatives in "4: Evolving Risk Management - Fundamental Tools" and "5: The Evolution of Risk Management - Enterprise Risk Management". For example, firms might prefer to capture up-side return potential while mitigating the downside loss potential.
In the business environment, when evaluating the expected financial returns from the introduction of a new product (which represents speculative risk), other issues concerning product liability must be considered. Product liability refers to the possibility that a manufacturer may be liable for harm caused by use of its product, even if the manufacturer was reasonable in producing it.
Table 1.2 provides examples of the pure versus speculative risk dichotomy as a way to cross-classify risks. The examples provided in Table 1.2 are not always a perfect fit into the dichotomy, since each exposure might be regarded in alternative ways. Operational risks, for example, can be regarded as operations that can cause only loss or as operations that can also provide gain. However, when risks are defined more specifically, they can be categorized more clearly.
The simultaneous consideration of pure and speculative risks within the objectives continuum of Figure 1.3.1 is an approach to managing risk known as enterprise risk management (ERM). ERM is one of today’s key risk management approaches. It considers all risks simultaneously and manages risk in a holistic, enterprise-wide (and risk-wide) context. ERM was listed by the Harvard Business Review as one of the key breakthrough areas in its 2004 evaluation of strategic management approaches by top management.L. Buchanan, “Breakthrough Ideas for 2004,” Harvard Business Review 2 (2004): 13–16. In today’s environment, identifying, evaluating, and mitigating all risks confronted by the entity is a key focus. Firms that are evaluated by credit rating organizations such as Moody’s or Standard & Poor’s are required to show their activities in the areas of enterprise risk management. As you will see in later chapters, the risk manager in a business is no longer buried in the trenches of the enterprise; risk managers are part of the executive team and are essential to achieving the main objectives of the enterprise. A picture of the enterprise risk map of life insurers is shown later in Figure \(1\).
Table 1.2 Examples of Pure versus Speculative Risk Exposures
Pure Risk—Loss or No Loss Only
• Physical damage risk to property (at the enterprise level), such as that caused by fire, flood, or weather damage
• Liability risk exposure (such as products liability, premises liability, employment practice liability)
• Innovational or technical obsolescence risk
• Operational risk: mistakes in process or procedure that cause losses
• Mortality and morbidity risk at the individual level
• Intellectual property violation risks
• Environmental risks: water, air, hazardous-chemical, and other pollution; depletion of resources; irreversible destruction of food chains
• Natural disaster damage: floods, earthquakes, windstorms
• Man-made destructive risks: nuclear risks, wars, unemployment, population changes, political risks
• Mortality and morbidity risk at the societal and global level (as in pandemics, social security program exposure, nationalized health care systems, etc.)

Speculative Risk—Possible Gains or Losses
• Market risks: interest rate risk, foreign exchange risk, stock market risk
• Reputational risk
• Brand risk
• Credit risk (at the individual enterprise level)
• Product success risk
• Public relations risk
• Population changes
• Market for the product risk
• Regulatory change risk
• Political risk
• Accounting risk
• Longevity risk at the societal level
• Genetic testing and genetic engineering risk
• Investment risk
• Research and development risk
Within the class of pure risk exposures, it is common to further classify risks into personal, property, and liability exposures.
Personal Loss Exposures—Personal Pure Risk
Because the financial consequences of all risk exposures are ultimately borne by people (as individuals, stakeholders in corporations, or as taxpayers), it could be said that all exposures are personal. Some risks, however, have a more direct impact on people’s individual lives. Exposure to premature death, sickness, disability, unemployment, and dependent old age are examples of personal loss exposures when considered at the individual/personal level. An organization may also experience loss from these events when such events affect employees. For example, social support programs and employer-sponsored health or pension plan costs can be affected by natural or man-made changes. The categorization is often a matter of perspective. These events may be catastrophic or accidental.
Property Loss Exposures—Property Pure Risk
Property owners face the possibility of both direct and indirect (consequential) losses. If a car is damaged in a collision, the direct loss is the cost of repairs. If a firm experiences a fire in the warehouse, the direct cost is the cost of rebuilding and replacing inventory. Consequential or indirect losses are nonphysical losses such as loss of business. For example, a firm losing its clients because of street closure would be a consequential loss. Such losses include the time and effort required to arrange for repairs, the loss of use of the car or warehouse while repairs are being made, and the additional cost of replacement facilities or lost productivity. Property loss exposures are associated with both real property such as buildings and personal property such as automobiles and the contents of a building. A property is exposed to losses because of accidents or catastrophes such as floods or hurricanes.
Liability Loss Exposures—Liability Pure Risk
The legal system is designed to mitigate risks and is not intended to create new risks. However, it has the power of transferring the risk from your shoulders to mine. Under most legal systems, a party can be held responsible for the financial consequences of causing damage to others. One is exposed to the possibility of liability loss (loss caused by a third party who is considered at fault) by having to defend against a lawsuit when he or she has in some way hurt other people. The responsible party may become legally obligated to pay for injury to persons or damage to property. Liability risk may occur because of catastrophic loss exposure or because of accidental loss exposure. Product liability is an illustrative example: a firm is responsible for compensating persons injured by supplying a defective product, which causes damage to an individual or another firm.
Catastrophic Loss Exposure and Fundamental or Systemic Pure Risk
Catastrophic risk is a concentration of strong, positively correlated risk exposures, such as many homes in the same location. A loss that is catastrophic and includes a large number of exposures in a single location is considered a nonaccidental risk: all homes in the path will be damaged or destroyed when a flood occurs. Because the flood affects a large number of exposures at once, all these exposures are subject to what is called a fundamental risk. Generally these types of risks are too pervasive to be undertaken by insurers and affect the whole economy, as opposed to accidental risk for an individual; too many people or properties may be hurt or damaged in one location at once (and the insurer needs to worry about its own solvency). Hurricanes in Florida and on the southern and eastern shores of the United States, floods in the Midwestern states, earthquakes in the western states, and terrorism attacks are the types of loss exposures associated with fundamental risk. Fundamental risks are generally systemic and nondiversifiable.
Accidental Loss Exposure and Particular Pure Risk
Many pure risks arise due to accidental causes of loss, not due to man-made or intentional ones (such as making a bad investment). As opposed to fundamental losses, noncatastrophic accidental losses, such as those caused by fires, are considered particular risks. Often, when the potential losses are reasonably bounded, a risk-transfer mechanism, such as insurance, can be used to handle the financial consequences.
In summary, exposures are units that are exposed to possible losses. They can be people, businesses, properties, and nations that are at risk of experiencing losses. The term “exposures” is used to include all units subject to some potential loss.
Another possible categorization of exposures is as follows:
• Risks of nature
• Risks related to human nature (theft, burglary, embezzlement, fraud)
• Man-made risks
• Risks associated with data and knowledge
• Risks associated with the legal system (liability)—it does not create the risks but it may shift them to your arena
• Risks related to large systems: governments, armies, large business organizations, political groups
• Intellectual property
Pure and speculative risks are not the only way to dichotomize risks. Another breakdown is between catastrophic risks, such as floods and hurricanes, and accidental losses, such as those caused by fires. Another differentiation is between systemic or nondiversifiable risks and idiosyncratic or diversifiable risks; this is explained below.
Diversifiable and Nondiversifiable Risks
As noted above, another important dichotomy risk professionals use is between diversifiable and nondiversifiable risk. Diversifiable risks are those that can have their adverse consequences mitigated simply by having a well-diversified portfolio of risk exposures. For example, having some factories located in nonearthquake areas or hotels placed in numerous locations in the United States diversifies the risk. If one property is damaged, the others are not subject to the same geographical phenomenon causing the risks. A large number of relatively homogeneous independent exposure units pooled together in a portfolio can make the average, or per exposure, unit loss much more predictable, and since these exposure units are independent of each other, the per-unit consequences of the risk can then be significantly reduced, sometimes to the point of being ignorable. These will be further explored in a later chapter about the tools to mitigate risks. Diversification is the core of the modern portfolio theory in finance and in insurance. Risks, which are idiosyncratic (with particular characteristics that are not shared by all) in nature, are often viewed as being amenable to having their financial consequences reduced or eliminated by holding a well-diversified portfolio.
Systemic risks that are shared by all, on the other hand, such as global warming, or movements of the entire economy such as that precipitated by the credit crisis of fall 2008, are considered nondiversifiable. Every asset or exposure in the portfolio is affected. The negative effect does not go away by having more elements in the portfolio. This will be discussed in detail below and in later chapters. The field of risk management deals with both diversifiable and nondiversifiable risks. As the events of September 2008 have shown, contrary to some interpretations of financial theory, the idiosyncratic risks of some banks could not always be diversified away. These risks have shown they have the ability to come back to bite (and poison) the entire enterprise and others associated with them.
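The contrast between diversifiable and nondiversifiable risk can be seen in a small simulation. In the sketch below (all figures are hypothetical), each exposure unit loses 100 with probability 0.1, so the expected per-unit loss is 10 in every case; pooling many independent units shrinks the variability of the average per-unit loss, while pooling perfectly correlated units, where one event such as a flood hits every unit at once, does not.

```python
import random
import statistics

# Pooling independent vs. perfectly correlated exposures (hypothetical
# numbers): each unit loses 100 with probability 0.1.
random.seed(42)

def avg_loss_per_unit_std(n_units, correlated, trials=10_000):
    """Std. deviation of the average per-unit loss across simulated years."""
    results = []
    for _ in range(trials):
        if correlated:
            # Systemic: one event hits every unit in the pool at once.
            total = n_units * 100 if random.random() < 0.1 else 0
        else:
            # Idiosyncratic: each unit's loss is independent of the others.
            total = sum(100 for _ in range(n_units) if random.random() < 0.1)
        results.append(total / n_units)
    return statistics.stdev(results)

for n in (1, 100):
    print(f"n={n:>3}  independent std: {avg_loss_per_unit_std(n, False):6.2f}  "
          f"correlated std: {avg_loss_per_unit_std(n, True):6.2f}")
```

For independent exposures, the per-unit standard deviation falls roughly in proportion to one over the square root of n (from about 30 at n = 1 to about 3 at n = 100), while for correlated exposures it stays near 30 no matter how large the pool: diversification cannot wash out systemic risk.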
Table 1.3 provides examples of risk exposures by the categories of diversifiable and nondiversifiable risk exposures. Many of them are self-explanatory, but the most important distinction is whether the risk is unique or idiosyncratic to a firm or not. For example, the reputation of a firm is unique to the firm: destroying one’s reputation is not a systemic risk in the economy or the marketplace. On the other hand, market risk, such as devaluation of the dollar, is a systemic risk for all firms in the export or import businesses. The examples in Table 1.3 are not complete, and the student is invited to add as many examples as desired.
Table 1.3 Examples of Risk Exposures by the Diversifiable and Nondiversifiable Categories
Diversifiable Risk—Idiosyncratic Risk
• Reputational risk
• Brand risk
• Credit risk (at the individual enterprise level)
• Product risk
• Legal risk
• Physical damage risk (at the enterprise level), such as fire, flood, weather damage
• Liability risk (products liability, premises liability, employment practice liability)
• Innovational or technical obsolescence risk
• Operational risk
• Strategic risk
• Longevity risk at the individual level
• Mortality and morbidity risk at the individual level

Nondiversifiable Risk—Systemic Risk
• Market risk
• Regulatory risk
• Environmental risk
• Political risk
• Inflation and recession risk
• Accounting risk
• Longevity risk at the societal level
• Mortality and morbidity risk at the societal and global level (pandemics, social security program exposure, nationalized health care systems, etc.)
Enterprise Risks
As discussed above, the opportunities in the risks and the fear of losses encompass the holistic risk, or the enterprise risk, of an entity. The enterprise risks of life insurers are mapped as an example in Figure 1.4.1. (Source: Etti G. Baranoff and Thomas W. Sager, “Integrated Risk Management in Life Insurance Companies,” an award-winning paper, International Insurance Society Seminar, Chicago, July 2006, and a special edition of the Geneva Papers on Risk and Insurance.)
Since enterprise risk management is a key concept today, the enterprise risk map of life insurers is offered here as an example. Operational risks include public relations risks, environmental risks, and several others not detailed in the map in Figure 1.4.1. Because operational risks are so important, they usually include a long list of risks, from employment risks to the operation of hardware and software for information systems.
Risks in the Limelight
Our great successes in innovation are also at the heart of the greatest risks of our lives. An ongoing concern is the electronic risk (e-risk) generated by the extensive use of computers, e-commerce, and the Internet. These risks are extensive and the exposures are becoming more defined. The box below illustrates the newness and not-so-newness in our risks.
The Risks of E-exposures
Electronic risk, or e-risk, comes in many forms. Like any property, computers are vulnerable to theft and employee damage (accidental or malicious). Certain components are susceptible to harm from magnetic or electrical disturbance or extremes of temperature and humidity. More important than replaceable hardware or software is the data they store; theft of proprietary information costs companies billions of dollars. Most data theft is perpetrated by employees, but “netspionage”—electronic espionage by rival companies—is on the rise.
Companies that use the Internet commercially—who create and post content or sell services or merchandise—must follow the laws and regulations that traditional businesses do and are exposed to the same risks. An online newsletter or e-zine can be sued for libel, defamation, invasion of privacy, or misappropriation (e.g., reproducing a photograph without permission) under the same laws that apply to a print newspaper. Web site owners and companies conducting business over the Internet have three major exposures to protect: intellectual property (copyrights, patents, trade secrets); security (against viruses and hackers); and business continuity (in case of system crashes).
All of these losses are covered by insurance, right? Wrong. Some coverage is provided through commercial property and liability policies, but traditional insurance policies were not designed to include e-risks. In fact, standard policies specifically exclude digital risks (or provide minimal coverage). Commercial property policies cover physical damage to tangible assets—and computer data, software, programs, and networks are generally not counted as tangible property. (U.S. courts are still debating the issue.)
This coverage gap can be bridged either by buying a rider or supplemental coverage to the traditional policies or by purchasing special e-risk or e-commerce coverage. E-risk property policies cover damages to the insured’s computer system or Web site, including lost income because of a computer crash. An increasing number of insurers are offering e-commerce liability policies that offer protection in case the insured is sued for spreading a computer virus, infringing on property or intellectual rights, invading privacy, and so forth.
Cybercrime is just one of the e-risk-related challenges facing today’s risk managers. They are preparing for it as the world evolves faster around cyberspace, evidenced by record-breaking online sales during the 2005 Christmas season.
Sources: Harry Croydon, “Making Sense of Cyber-Exposures,” National Underwriter, Property & Casualty/Risk & Benefits Management Edition, 17 June 2002; Joanne Wojcik, “Insurers Cut E-Risks from Policies,” Business Insurance, 10 September 2001; Various media resources at the end of 2005 such as Wall Street Journal and local newspapers.
Today, there is no medium that is not discussing the risks that brought us to the calamity we are enduring during the current financial crisis. Thus, as opposed to the megacatastrophes of 2001 and 2005, our concentration is on the failure of risk management in the area of speculative risks (the opportunity in risks) and not as much on pure risk. A case in point is the scant media coverage of the devastation of Galveston Island by Hurricane Ike during the financial crisis of September 2008. The following box describes the risks of the first decade of the new millennium.
Risks in the New Millennium
While man-made and natural disasters are the stamps of this decade, another type of man-made disaster marks this period.Reprinted with permission from the author; Etti G. Baranoff, “Risk Management and Insurance During the Decade of September 11,” in The Day that Changed Everything? An Interdisciplinary Series of Edited Volumes on the Impact of 9/11, vol. 2. Innovative financial products without appropriate underwriting and risk management coupled with greed and lack of corporate controls brought us to the credit crisis of 2007 and 2008 and the deepest recession in a generation. The capital market has become an important player in the area of risk management with creative new financial instruments, such as Catastrophe Bonds and securitized instruments. However, the creativity and innovation also introduced new risky instruments, such as credit default swaps and mortgage-backed securities. Lack of careful underwriting of mortgages coupled with lack of understanding of the new creative “insurance” default swaps instruments and the resulting instability of the two largest remaining bond insurers are at the heart of the current credit crisis.
As such, within only one decade we see the escalation in new risk exposures at an accelerated rate. This decade can be named “the decade of extreme risks with inadequate risk management.” The late 1990s saw extreme risks with the stock market bubble without concrete financial theory. This was followed by the worst terrorist attack in a magnitude not experienced before on U.S. soil. The corporate corruption at extreme levels in corporations such as Enron just deepened the sense of extreme risks. The natural disasters of Katrina, Rita, and Wilma added to the extreme risks and were exacerbated by extraordinary mismanagement. Today, the extreme risks of mismanaged innovations in the financial markets combined with greed are stretching the field of risk management to new levels of governmental and private controls.
However, did the myopic concentration on terrorism risk derail the holistic view of risk management and preparedness? The aftermath of Katrina is a testimonial to the lack of risk management. The increase of awareness and usage of enterprise risk management (ERM) post–September 11 failed to encompass the already well-known risks of high-category hurricanes to the sustainability of the New Orleans levees. The newly created holistic Homeland Security agency, which houses FEMA, not only did not initiate steps to avoid the disaster, it also did not take the appropriate steps to reduce the suffering of those afflicted once the risk materialized. This outcome also points to the importance of having a committed stakeholder who is vested in the outcome and cares to lower and mitigate the risk. Since the insurance industry did not own the risk of flood, there was a gap in the risk management. The focus on terrorism risk could be regarded as a contributing factor to the neglect of natural disaster risk in New Orleans. The ground was fertile for mishandling the extreme hurricane catastrophes. Therefore, from such a viewpoint, it can be argued that September 11 derailed our comprehensive national risk management and contributed indirectly to the worsening of the effects of Hurricane Katrina.
Furthermore, in an era of financial technology and the creation of innovative models for predicting the most infrequent catastrophes, the innovation and growth in human capacity is at the root of the current credit crisis. While innovation allows firms such as Risk Management Solutions (RMS) and AIR Worldwide to provide models (see http://www.rms.com, www.iso.com/index.php?option=com_content&task=view&id=932&Itemid=587, and www.iso.com/index.php?option=com_content&task=view&id=930&Itemid=585) that predict potential man-made and natural catastrophes, financial technology also advanced the creation of financial instruments, such as credit default derivatives and mortgage-backed securities. The creation of these products provided “black boxes” understood by few and operated without appropriate risk management. Engineers, mathematicians, and quantitatively talented people moved from the low-paying jobs in their respective fields into Wall Street. They used their skills to create models and new products but lacked the business acumen and the safety-net understanding required to ensure the products’ sustainability. Management of large financial institutions globally enjoyed the new creativity and endorsed the adoption of the new products without clear understanding of their potential impact, or simply because of greed. This lack of risk management is at the heart of the credit crisis of 2008. No wonder the credit rating organizations are now adding ERM scores to their ratings of companies.
The following quote is a key to today’s risk management discipline: “Risk management has been a significant part of the insurance industry…, but in recent times it has developed a wider currency as an emerging management philosophy across the globe…. The challenge facing the risk management practitioner of the twenty-first century is not just breaking free of the mantra that risk management is all about insurance, and if we have insurance, then we have managed our risks, but rather being accepted as a provider of advice and service to the risk makers and the risk takers at all levels within the enterprise. It is the risk makers and the risk takers who must be the owners of risk and accountable for its effective management.”Laurent Condamin, Jean-Paul Louisot, and Patrick Maim, “Risk Quantification: Management, Diagnosis and Hedging” (Chichester, UK: John Wiley & Sons Ltd., 2006).
Key Takeaways
• You should be able to delineate the main categories of risks: pure versus speculative, diversifiable versus nondiversifiable, idiosyncratic versus systemic.
• You should also understand the general concept of enterprise-wide risk.
• Try to illustrate each cross classification of risk with examples.
• Can you discuss the risks of our decade?
Discussion Questions
1. Name the main categories of risks.
2. Provide examples of risk categories.
3. How would you classify the risks embedded in the financial crisis of fall 2008 within each cross-classification?
4. How does e-risk fit into the categories of risk?
Learning Objectives
• In this section you will learn the terminology used by risk professionals to note different risk concepts.
• You will learn about causes of losses—perils and the hazards, which are the items increasing the chance of loss.
As we mentioned earlier, in English, people often use the word “risk” to describe a loss. Examples include hurricane risk or fraud risk. To differentiate between loss and risk, risk management professionals prefer to use the term perils to refer to “the causes of loss.” If we wish to understand risk, we must first understand the terms “loss” and “peril.” We will use both terms throughout this text. Perils represent the immediate causes of loss. The environment is filled with perils such as floods, theft, death, sickness, accidents, fires, tornadoes, and lightning—or even contaminated milk served to Chinese babies. We include a list of some perils below. Many important risk transfer contracts (such as insurance contracts) use the word “peril” quite extensively to define inclusions and exclusions within contracts. We will also explain these definitions in a legal sense later in the textbook to help us determine terms such as “residual risk retained.”
Table 1.4 Types of Perils by Ability to Insure

Natural Perils
• Generally Insurable: Windstorm, Lightning, Natural combustion, Heart attacks
• Generally Difficult to Insure: Flood, Earthquake, Epidemic, Volcanic eruption, Frost

Human Perils
• Generally Insurable: Theft, Vandalism, Hunting accident, Negligence, Fire and smoke
• Generally Difficult to Insure: War, Radioactive contamination, Civil unrest, Terrorism, Global, E-commerce, Mold
Although professionals have attempted to categorize perils, doing so is difficult. We could talk about natural versus human perils. Natural perils are those over which people have little control, such as hurricanes, volcanoes, and lightning. Human perils, then, would include causes of loss that lie within individuals’ control, including suicide, terrorism, war, theft, defective products, environmental contamination, destruction of complex infrastructure, and electronic security breaches. Though some would include losses caused by the state of the economy as human perils, many professionals separate these into a third category labeled economic perils. Professionals also consider employee strikes, arson for profit, and similar situations to be economic perils.
We can also divide perils into insurable and noninsurable perils. Typically, noninsurable perils include those that may be considered catastrophic to an insurer. Such noninsurable perils may also encourage policyholders to cause losses. An insurer’s concern rests with the security of its own financial standing. For example, an insurer may decline to write a policy for perils that might threaten its own solvency (e.g., nuclear power plant liability) or for perils that might motivate insureds to cause a loss.
Hazards
Risk professionals refer to hazards as conditions that increase the chance or size of losses. Hazards may increase the probability of losses, their frequency, their severity, or both. Frequency refers to the number of losses during a specified period; severity refers to the average dollar value of a loss per occurrence. Professionals refer to certain conditions as being “hazardous.” For example, when summer humidity declines and temperature and wind velocity rise in heavily forested areas, the likelihood of fire increases. Conditions are such that a forest fire could start very easily and be difficult to contain. In this example, low humidity increases both loss probability and loss severity. The more hazardous the conditions, the greater the probability and/or severity of loss. Two kinds of hazards—physical and intangible—affect the probability and severity of losses.
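The frequency and severity definitions above can be turned into a short calculation. The sketch below uses invented claim numbers (the figures and the function name are illustrative assumptions): frequency is occurrences per period, severity is average dollars per occurrence, and their product is the expected loss per period.

```python
def frequency_severity(claims, n_periods):
    """Return (frequency, severity, expected loss per period).

    claims    -- list of individual loss amounts observed over the window
    n_periods -- number of periods (e.g., years) in the observation window
    """
    frequency = len(claims) / n_periods   # occurrences per period
    severity = sum(claims) / len(claims)  # average dollars per occurrence
    return frequency, severity, frequency * severity

# Hypothetical record: six claims observed over three years.
freq, sev, expected = frequency_severity([1200, 800, 5000, 400, 2600, 2000], 3)
print(freq, sev, expected)  # prints: 2.0 2000.0 4000.0
```

Here the exposure generates 2 claims per year averaging \$2,000 each, so the expected loss is \$4,000 per year; a hazard that raises either factor raises the expected loss.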
Physical Hazards
We refer to physical hazards as tangible environmental conditions that affect the frequency and/or severity of loss. Examples include slippery roads, which often increase the number of auto accidents; poorly lit stairwells, which add to the likelihood of slips and falls; and old wiring, which may increase the likelihood of a fire.
Physical hazards that affect property include location, construction, and use. Building locations affect their susceptibility to loss by fire, flood, earthquake, and other perils. A building located near a fire station and a good water supply has a lower chance that it will suffer a serious loss by fire than if it is in an isolated area with neither water nor firefighting service. Similarly, a company that has built a backup generator will have lower likelihood of a serious financial loss in the event of a power loss hazard.
Construction affects both the probability and severity of loss. While no building is fireproof, some construction types are less susceptible to loss from fire than others. But a building that is susceptible to one peril is not necessarily susceptible to all. For example, a frame building is more apt to burn than a brick building, but frame buildings may suffer less damage from an earthquake.
Use or occupancy may also create physical hazards. For example, buildings used to manufacture or store fireworks will have greater probability of loss by fire than do office buildings. Likewise, buildings used for dry cleaning (which uses volatile chemicals) will bear a greater physical hazard than do elementary schools. Cars used for business purposes may be exposed to greater chance of loss than a typical family car since businesses use vehicles more extensively and in more dangerous settings. Similarly, people have physical characteristics that affect loss. Some of us have brittle bones, weak immune systems, or vitamin deficiencies. Any of these characteristics could increase the probability or severity of health expenses.
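One simple way to see why these physical hazards matter financially is to compare expected losses, probability times severity, with and without a hazard present. The numbers and function name below are purely hypothetical assumptions for illustration.

```python
def expected_loss(probability, severity):
    """Expected loss per period = chance of the event times average dollar damage."""
    return probability * severity

# Hypothetical figures: a building near a fire station and water supply
# versus an isolated building with old wiring (a physical hazard that
# raises both the probability and the severity of a fire loss).
low_hazard = expected_loss(0.01, 50_000)    # roughly $500 per year
high_hazard = expected_loss(0.04, 120_000)  # roughly $4,800 per year
print(low_hazard, high_hazard)
```

Hazard management that cuts either term, for example installing sprinklers to lower severity, reduces the expected loss directly.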
Intangible Hazards
Here we distinguish physical hazards from intangible hazards: attitudes and nonphysical cultural conditions that can affect loss probabilities and severities. Their existence may also lead to physical hazards. Traditionally, authors of insurance texts categorize these conditions as moral and morale hazards, which are important concepts but do not cover the full range of nonphysical hazards. Even the distinction between moral and morale hazards is fuzzy.
Moral hazards are hazards that involve behavior that can be construed as negligence or that borders on criminality. They involve dishonesty on the part of people who take out insurance (called “insureds”). Risk transfer through insurance invites moral hazard by potentially encouraging those who transfer risks to cause losses intentionally for monetary gain. Generally, moral hazards exist when a person can gain from the occurrence of a loss. For example, an insured that will be reimbursed for the cost of a new stereo system following the loss of an old one has an incentive to cause loss. An insured business that is losing money may have arson as a moral hazard. Such incentives increase loss probabilities; as the name “moral” implies, moral hazard is a breach of morality (honesty).
Morale hazards, in contrast, do not involve dishonesty. Rather, morale hazards involve attitudes of carelessness and lack of concern. As such, morale hazards increase the chance that a loss will occur or increase the size of losses that do occur. Poor housekeeping (e.g., allowing trash to accumulate in attics or basements) or careless cigarette smoking are examples of morale hazards that increase the probability of fire losses. Often, such lack of concern occurs because a third party (such as an insurer) is available to pay for losses. A person or company that knows it is insured for a particular loss exposure may take less precaution to protect this exposure than otherwise. Nothing dishonest lurks in not locking your car or in not taking adequate care to reduce losses, so these do not represent breaches of morality. Both practices, however, increase the probability and potential severity of loss.
Many people unnecessarily and often unconsciously create morale hazards that can affect their health and life expectancy. Such hazards include excessive use of tobacco, drugs, and other harmful substances; poor eating, sleeping, and exercise habits; unnecessary exposure to falls, poisoning, electrocution, radiation, venomous stings and bites, and air pollution; and so forth.
Hazards are critical because our ability to reduce their effects will reduce both overall costs and variability. Hazard management, therefore, can be a highly effective risk management tool. At this point, many corporations around the world emphasize disaster control management to reduce the impact of biological or terrorist attacks. Safety inspections in airports are one example of disaster control management that intensified after September 11. See "Is Airport Security Worth It to You?" for a discussion of safety in airports.
Is Airport Security Worth It to You?
Following the September 11, 2001, terrorist attacks, the Federal Aviation Administration (now the Transportation Security Administration [TSA] under the U.S. Department of Homeland Security [DHS]) wrestled with a large question: how could a dozen or more hijackers armed with knives slip through security checkpoints at two major airports? Sadly, it wasn’t hard. Lawmakers and security experts had long complained about lax safety measures at airports, citing several studies over the years that had documented serious security lapses. “I think a major terrorist incident was bound to happen,” Paul Bracken, a Yale University professor who teaches national security issues and international business, told Wired magazine a day after the attacks. “I think this incident exposed airport security for what any frequent traveler knows it is—a complete joke. It’s effective in stopping people who may have a cigarette lighter or a metal belt buckle, but against people who want to hijack four planes simultaneously, it is a failure.”
Two days after the attacks, air space was reopened under extremely tight security measures, including placing armed security guards on flights; ending curbside check-in; banning sharp objects (at first, even tweezers, nail clippers, and eyelash curlers were confiscated); restricting boarding areas to ticket-holding passengers; and conducting extensive searches of carry-on bags.
In the years since the 2001 terrorist attacks, U.S. airport security procedures have undergone many changes, often in response to current events and national terrorism threat levels. Beginning in December 2005, the Transportation Security Administration (TSA) refocused its efforts to detect suspicious persons, items, and activities. The new measures called for increased random passenger screenings. They lifted restrictions on certain carry-on items. Overall, the changes were viewed as a relaxation of the extremely strict protocols that had been in place subsequent to the events of 9/11.
The TSA had to revise its airline security policy yet again shortly after the December 2005 adjustments. On August 10, 2006, British police apprehended over twenty suspects implicated in a plot to detonate liquid-based explosives on flights originating from the United Kingdom bound for several major U.S. cities. Following news of this aborted plot, the U.S. Terror Alert Level soared to red (denoting a severe threat level). As a result, the TSA quickly barred passengers from carrying on most liquids and other potentially explosives-concealing compounds to flights in U.S. airports. Beverages, gels, lotions, toothpastes, and semisolid cosmetics (such as lipstick) were thus expressly forbidden.
Less-burdensome modifications were made to the list of TSA-prohibited items not long after publication of the initial requirements. Nevertheless, compliance remains a controversial issue among elected officials and the public, who contend that the many changes are difficult to keep up with. Many contended that the changes represented too great a tradeoff of comfort or convenience for the illusion of safety. To many citizens, though, the 2006 terrorist plot served as a wake-up call, reminding a nation quietly settling into a state of complacency of the need for continued vigilance. Regardless of the merits of these viewpoints, air travel security will no doubt remain a hot topic in the years ahead as the economic, financial, regulatory, and sociological issues become increasingly complex.
Questions for Discussion
1. Discuss whether the government has the right to impose great cost to many in terms of lost time in using air travel, inconvenience, and affronts to some people’s privacy to protect a few individuals.
2. Do you see any morale or moral hazards associated with the homeland security monitoring and actively searching people and doing preflight background checks on individuals prior to boarding?
3. Discuss the issue of personal freedom versus national security as it relates to this case.
Sources: TSA’s press release at www.tsa.gov/public/display?theme=44&content=090005198018c27e. For more information regarding TSA, visit our Web site at http://www.TSA.gov; Dave Linkups, “Airports Vulnerable Despite Higher Level of Security,” Business Insurance, 6 May 2002; “U.S. Flyers Still at Risk,” National Underwriter Property & Casualty/Risk & Benefits Management Edition, 1 April 2002; Stephen Power, “Background Checks Await Fliers,” The Wall Street Journal, 7 June 2002. For media sources related to the 2006 terrorist plot, see http://en.Wikipedia.org/wiki/2006_transatlantic_aircraft_plot#References.
Key Takeaways
• You should be able to differentiate between different types of hazards.
• You should be able to differentiate between different types of perils.
• Can you differentiate between a hazard and a peril?
Discussion Questions
1. What are perils?
2. What are hazards?
3. Why do we not just call perils and hazards by the name “risk,” as is often done in common English conversations?
4. Discuss the perils and hazards in the box "Is Airport Security Worth It to You?".
1. What are underlying objectives for the definition of risk?
2. How does risk fit on the spectrum of certainty and uncertainty?
3. Provide the formal definition of risk.
4. What are three major categories of risk attitudes?
5. Explain the categories and risk and provide examples for each category.
6. What are exposures? Give examples of exposures.
7. What are perils? Give examples of perils.
8. What are hazards? Give examples of hazards.
9. In a particular situation, it may be difficult to distinguish between moral hazard and morale hazard. Why? Define both terms.
10. Some people with complete health insurance coverage visit doctors more often than required. Is this tendency a moral hazard, a morale hazard, or simple common sense? Explain.
11. Give examples of perils, exposures, and hazards for a university or college. Define each term.
12. Give examples of exposure for speculative risks in a company such as Google.
13. Inflation causes both pure and speculative risks in our society. Can you give some examples of each?
14. Define holistic risk and enterprise risk and give examples of each.
15. Describe the new risks facing society today. Give examples of risks in electronic commerce.
16. Read the box "The Risks of E-exposures" in this chapter. Can you help the risk managers identify all the risk exposures associated with e-commerce and the Internet?
17. Read the box "Is Airport Security Worth It to You?" in this chapter and respond to the discussion questions at the end. What additional risk exposures do you see that the article did not cover?
18. One medical practice that has been widely discussed in recent years involves defensive medicine, in which a doctor orders more medical tests and X-rays than she or he might have in the past—not because of the complexity of the case, but because the doctor fears being sued by the patient for medical malpractice. The extra tests may establish that the doctor did everything reasonable and prudent to diagnose and treat the patient.
1. What does this tell you about the burden of risk?
2. What impact does this burden place on you and your family in your everyday life?
3. Is the doctor wrong to do this, or is it a necessary precaution?
4. Is there some way to change this situation?
19. Thompson’s department store has a fleet of delivery trucks. The store also has a restaurant, a soda fountain, a babysitting service for parents shopping there, and an in-home appliance service program.
1. Name three perils associated with each of these operations.
2. For the pure risk situations you noted in part 1 of this exercise, name three hazards that could be controlled by the employees of the department store.
3. If you were manager of the store, would you want all these operations? Which—if any—would you eliminate? Explain.
20. Omer Laskwood, the major income earner for a family of four, was overheard saying to his friend Vince, “I don’t carry any life insurance because I’m young, and I know from statistics few people die at my age.”
1. What are your feelings about this statement?
2. How does Omer perceive risk relative to his situation?
3. What characteristic in this situation is more important than the likelihood of Mr. Laskwood dying?
4. Are there other risks Omer should consider?
21. The council members of Flatburg are very proud of the proposed new airport they are discussing at a council meeting. When it is completed, Flatburg will finally have regular commercial air service. Some type of fire protection is needed at the new airport, but a group of citizens is protesting that Flatburg cannot afford to purchase another fire engine. The airport could share the downtown fire station, or the firehouse could be moved to the airport five miles away. Someone suggested a compromise—move the facilities halfway. As the council members left their meeting that evening, they had questions regarding this problem.
1. What questions would you raise?
2. How would you handle this problem using the information discussed in this chapter?
In Chapter 1, "The Nature of Risk: Losses and Opportunities," we discussed how risk arises as a consequence of uncertainty. Recall also that risk is not the state of uncertainty itself. Risk and uncertainty are connected and yet are distinct concepts.
In this chapter, we will discuss the ways in which we measure risk and uncertainty. If we wish to understand and use the concepts of risk and uncertainty, we need to be able to measure these concepts’ outcomes. Psychological and economic research shows that emotions such as fear, dread, ambiguity avoidance, and feelings of emotional loss represent valid risks. Such feelings are thus relevant to decision making under uncertainty. Our focus here, however, will draw more on financial metrics rather than emotional or psychological measures of risk perception. In this chapter, we thus discuss measurable and quantifiable outcomes and how we can measure risk and uncertainty using numerical methods.
A “metric” in this context is a system of related measures that helps us quantify characteristics or qualities. Any individual or enterprise needs to be able to quantify risk before they can decide whether or not a particular risk is critical enough to commit resources to manage. If such resources have been committed, then we need measurements to see whether the risk management process or procedure has reduced risk. And all forms of enterprises, for financial profit or for social profit, must strive to reduce risk. Without risk metrics, enterprises cannot tell whether or not they have reached risk management objectives. Enterprises including businesses hold risk management to be as important as any other objective, including profitability. Without risk metrics to measure success, failure, or incremental improvement, we cannot judge progress in the control of risk.
Risk management provides a framework for assessing opportunities for profit, as well as for gauging threats of loss. Without measuring risk, we cannot ascertain which of the available alternatives the enterprise should take to optimize the risk-reward tradeoff. The risk-reward tradeoff is essentially a cost-benefit analysis taking uncertainty into account. In (economic) marginal analysis terms, we want to know how many additional units of risk we need to take on in order to get an additional unit of reward or profit. A firm, for example, wants to know how much capital it needs to keep from going insolvent if a bad risk is realized. This is particularly true in firms like insurance companies and banks, where the business opportunity they pursue is mainly based on taking calculated and judgment-based risks. Indeed, if they cannot measure risk, enterprises are stuck in the ancient world of being helpless to act in the face of uncertainty. Risk metrics allow us to measure risk, giving us an ability to control risk and simultaneously exploit opportunities as they arise. No one profits from establishing the existence of an uncertain state of nature. Instead, managers must measure and assess their enterprise’s degree of vulnerability (risk) and sensitivity to the various potential states of nature. After reading this chapter, you should be able to define several different risk metrics and be able to discuss when each metric is appropriate for a given situation.
We will discuss several risk measures here, each of which comes about from the progression of mathematical approaches to describing and evaluating risk. We emphasize from the start, however, that measuring risk using these risk metrics is only one step as we assess any opportunity-risk issue. Risk metrics cannot stand alone. We must also evaluate how appropriate each underlying model might be for the occasion. Further, we need to evaluate each question in terms of the risk level that each entity is willing to assume for the gain each hopes to receive. Firms must understand the assumptions behind worst-case or ruin scenarios, since most firms do not want to take on risks that “bet the house.” To this end, knowing the severity of losses that might be expected in the future (severity is the dollar value per claim) using forecasting models represents one aspect of quantifying risk. However, financial decision making requires that we evaluate severity levels based upon what an individual or a firm can comfortably endure (risk appetite). Further, we must evaluate the frequency with which a particular outcome will occur. As with the common English language usage of the term, frequency is the number of times the event is expected to occur in a specified period of time. The 2008 financial crisis provides an example: Poor risk management of the financial models used for creating mortgage-backed securities and credit default derivatives contributed to a worldwide crisis. Managers grossly underestimated both the frequency of losses and, in particular, their severity. We discuss risk assessment using risk metrics in the pages that follow.
As we noted in "1: The Nature of Risk - Losses and Opportunities", risk is a concept encompassing perils, hazards, exposures, and perception (with a strong emphasis on perception). It should come as no surprise that the metrics for measuring risk are also quite varied. The aspect of risk being considered in a particular situation dictates the risk measure used. If we are interested in default risk (the risk that a contracting party will be unable to live up to the terms of some financial contract, usually due to total ruin or bankruptcy), then one risk measure might be employed. If, on the other hand, we are interested in expected fluctuations of retained earnings for paying future losses, then we would likely use another risk measure. If we wish to know how much risk is generated by a risky undertaking that cannot be diversified away in the marketplace, then we would use yet another risk measure. Each risk measure has its place and appropriate application. One part of the art of risk management is to pick the appropriate risk measure for each situation.
In this chapter, we will cover the following:
1. Links
2. Quantification of uncertain outcomes via probability models
3. Measures of risk: putting it together
Links
The first step in developing any framework for measuring risk quantitatively involves creating a framework for addressing and studying uncertainty itself. Such a framework lies within the realm of probability. Since risk arises from uncertainty, measures of risk must also take uncertainty into account. The process of quantifying uncertainty, also known as probability theory, actually proved to be surprisingly difficult and took millennia to develop. Progress on this front required that we develop two fundamental ideas. The first is a way to quantify the uncertainty (probability) of potential states of the world. Second, we had to develop the notion that the outcomes of interest to human events, the risks, were subject to some kind of regularity that we could predict and that would remain stable over time. Developing and accepting these two notions represented path-breaking, seminal changes from previous mindsets. Until these steps were taken and accepted, any firm scientific foundation for developing probability and risk was impossible.
Solving risk problems requires that we compile a puzzle of the many personal and business risks. First, we need to obtain quantitative measures of each risk. Again, as in "1: The Nature of Risk - Losses and Opportunities", we repeat the Link puzzle in Figure \(1\). The point illustrated in Figure \(1\) is that we face many varied risk exposures, appropriate risk measures, and statistical techniques that we apply for different risks. However, most risks are interconnected. When taken together, they provide a holistic risk measure for the firm or a family. For some risks, measures are unsophisticated and easy to obtain, such as the risk of potential fires in a region. Sometimes trying to predict potential risks is much more complex, such as predicting one-hundred-year floods in various regions. For each type of peril and hazard, we may well have different techniques to measure the risks. Our need to realize that catastrophes can happen and our need to account for them are of paramount importance. The 2008–2009 financial crisis may well have occurred in part because the risk measures in use failed to account for the systemic collapses of the financial institutions. Institutions toppled mostly as a result of the collapse of the mortgage-backed securities and real estate markets. As we explore risk computations and measures throughout this chapter, you will learn terminology and understand how we use such measures. You will thus embark on a journey into the world of risk management. Some measures may seem simplistic. Other measures will show you how to use complex models that use the most sophisticated state-of-the-art mathematical and statistical technology. You’ll notice also that many computations would be impossible without the advent of powerful computers and large computer memory. Now, on to the journey.
Learning Objectives
• In this section, you will learn how to quantify the relative frequency of occurrences of uncertain events by using probability models.
• You will learn about the measures of frequency, severity, likelihood, statistical distributions, and expected values.
• You will use examples to compute these values.
As we consider uncertainty, we rely on the rigorous quantitative study of chance and the recognition of its empirical regularity in uncertain situations. The methods used to quantify the occurrence of uncertain events represent intellectual milestones. As we create models based upon probability and statistics, you will likely recognize that probability and statistics touch nearly every field of study today. As we have internalized the predictive regularity of repeated chance events, our entire worldview has changed. For example, we have convinced ourselves of the odds of getting heads in a coin flip so much that it’s hard to imagine otherwise. We’re used to seeing statements such as “average life of 1,000 hours” on a package of light bulbs. We understand such a phrase because we can think of the length of life of a light bulb as being uncertain but statistically predictable. We routinely hear such statements as “The chance of rain tomorrow is 20 percent.” It’s hard for us to imagine that only a few centuries ago people did not believe even in the existence of chance occurrences or random events or in accidents, much less explore any method of quantifying seemingly chance events. Until relatively recently, people believed that God controlled every minute detail of the universe. This belief rules out any kind of conceptualization of chance as a regular or predictable phenomenon. For example, until a few centuries ago, the cost of buying a life annuity that paid buyers $100 per month for life was the same for a thirty-year-old as it was for a seventy-year-old.
It didn’t matter that empirically, the “life expectancy” of a thirty-year-old was four times longer than that of a seventy-year-old. The government of William III of England, for example, offered annuities of 14 percent regardless of whether the annuitant was thirty or seventy years old (Karl Pearson, The History of Statistics in the 17th and 18th Centuries against the Changing Background of Intellectual, Scientific and Religious Thought [London: Charles Griffin & Co., 1978], 134). After all, people believed that a person’s particular time of death was “God’s will.” No one believed that the length of someone’s life could be judged or predicted statistically by any noticed or exhibited regularity across people. In spite of the advancements in mathematics and science since the beginning of civilization, remarkably, the development of measures of relative frequency of occurrence of uncertain events did not occur until the 1600s. This birth of the “modern” ideas of chance occurred when a problem was posed to the mathematician Blaise Pascal by a frequent gambler. As often occurs, the problem turned out to be less important in the long run than the solution developed to solve it. The problem posed was: If two people are gambling and the game is interrupted and discontinued before either one of the two has won, what is a fair way to split the pot of money on the table? Clearly, the person ahead at the time of the interruption had a better chance of winning the game and should receive the larger portion of the pot. However, the person behind could still come from behind and win; such a possibility should not be excluded. How should the pot be split fairly? Pascal formulated an approach to this problem and, in a series of letters with Pierre de Fermat, developed a solution that entailed writing down all possible outcomes that could occur and then counting the number of times the first gambler won.
The proportion of times that the first gambler won (calculated as the number of times the gambler won divided by the total number of possible outcomes) was taken to be the proportion of the pot that the first gambler could fairly claim. In the process of formulating this solution, Pascal and Fermat more generally developed a framework to quantify the relative frequency of uncertain outcomes, which is now known as probability. They created the mathematical notion of expected value of an uncertain event. They were the first to model the exhibited regularity of chance or uncertain events and apply it to solve a practical problem. In fact, their solution pointed to many other potential applications to problems in law, economics, and other fields. From Pascal and Fermat’s work, it became clear that to manage future risks under uncertainty, we need to have some idea about not only the possible outcomes or states of the world but also how likely each outcome is to occur. We need a model, or in other words, a symbolic representation of the possible outcomes and their likelihoods or relative frequencies.

A Historical Prelude to the Quantification of Uncertainty Via Probabilities

Historically, the development of measures of chance (probability) only began in the mid-1600s. Why in the Middle Ages, and not with the Greeks? The answer, in part, is that the Greeks and their predecessors did not have the mathematical concepts. Nor, more importantly, did the Greeks have the psychological perspective to even contemplate these notions, much less develop them into a cogent theory capable of reproduction and expansion. First, the Greeks did not have the mathematical notational system necessary to contemplate a formal approach to risk. They lacked, for example, a simple and complete symbolic system including a zero and an equal sign useful for computation, a contribution that was subsequently developed by the Arabs and later adopted by the Western world.
The use of Roman numerals might have been sufficient for counting, and perhaps for geometry, but it was certainly not conducive to complex calculations. The equal sign was not in common use until the late Middle Ages. Imagine doing calculations (even such simple computations as dividing fractions or solving an equation) in Roman numerals without an equal sign, a zero element, or a decimal point! But mathematicians and scientists had overcome these impediments long before the advent of probability. Why did risk analysis not emerge with the advent of a more complete numbering system, just as sophisticated calculations in astronomy, engineering, and physics did? The answer is more psychological than mathematical and goes to the heart of why we consider risk as both a psychological and a numerical concept in this book. To the Greeks (and to the millennia of others who followed them), the heavens, divinely created, were believed to be static and perfect and governed by regularity and rules of perfection—circles, spheres, the six perfect geometric solids, and so forth. The earthly sphere, on the other hand, was the source of imperfection and chaos. The Greeks believed that they would find no sense in studying the chaotic events of Earth. The ancient Greeks found the path to truth in contemplating the perfection of the heavens and other perfect, unspoiled, or uncorrupted entities. Why would a god (or gods) powerful enough to know and create everything intentionally create a world using a less than perfect model? The Greeks, and others who followed, believed that pure reasoning, not empirical observation, would lead to knowledge. Studying regularity in the chaotic earthly sphere was worse than a futile waste of time; it distracted attention from important contemplations actually likely to impart true knowledge. It took a radical change in mindset to start to contemplate regularity in events in the earthly domain.
We are all creatures of our age, and we could not pose the necessary questions to develop a theory of probability and risk until we shook off these shackles of the mind. Until the age of reason, when church reforms and a growing merchant class (who pragmatically examined and counted things empirically) created a tremendous growth in trade, we remained trapped in the old ways of thinking. As long as society was static and stationary, with villages this year being essentially the same as they were last year or a decade or century before, there was little need to pose or solve these problems. M. G. Kendall captures this succinctly when he notes that “mathematics never leads thought, but only expresses it.”* The Western world was simply not yet ready to try to quantify risk or event likelihood (probability) or to contemplate uncertainty. If all things are believed to be governed by an omnipotent god, then regularity is not to be trusted, perhaps it can even be considered deceptive, and variation is irrelevant and illusory, being merely reflective of God’s will. Moreover, the fact that things like dice and drawing of lots were simultaneously used by magicians, by gamblers, and by religious figures for divination did not provide any impetus toward looking for regularity in earthly endeavors.

* M. G. Kendall, “The Beginnings of a Probability Calculus,” in Studies in the History of Statistics and Probability, vol. 1, ed. E. S. Pearson and Sir Maurice Kendall (London: Charles Griffin & Co., 1970), 30.

Measurement Techniques for Frequency, Severity, and Probability Distribution Measures for Quantifying Uncertain Events

When we can see the pattern of the losses and/or gains experienced in the past, we hope that the same pattern will continue in the future. In some cases, we want to be able to modify the past results in a logical way, like inflating them for the time value of money discussed in "4: Evolving Risk Management - Fundamental Tools".
If the patterns of gains and losses continue, our predictions of future losses or gains will be informative. Similarly, we may develop a pattern of losses based on theoretical or physical constructs (such as hurricane forecasting models based on physics, or the likelihood of obtaining a head in a flip of a coin based on theoretical models of equal likelihood of a head and a tail). Likelihood is the notion of how often a certain event will occur. Inaccuracies in our ability to create a correct distribution arise from our inability to predict future outcomes accurately. The distribution is the display of the events on a map that tells us the likelihood that the event or events will occur. In some ways, it resembles a picture of the likelihood and regularity of the events that occur. Let’s now turn to creating models and measures of the outcomes and their frequency.

Measures of Frequency and Severity

Table 2.1 and Table 2.2 show the compilation of the number of claims and their dollar amounts for homes that burned during a five-year period in two different locations, labeled Location A and Location B. We have information about the total number of claims per year and the amount of the fire losses in dollars for each year. Each location has the same number of homes (1,000 homes). Each location has a total of 51 claims for the five-year period, an average (or mean) of 10.2 claims per year, which is the frequency. The average dollar amount of losses per claim for the whole period is also the same for each location, $6,166.67, which is the severity.
Table 2.1 Claims and Fire Losses for Group of Homes in Location A
Year Number of Fire Claims Fire Losses ($) Average Loss per Claim ($)
1 11 16,500.00 1,500.00
2 9 40,000.00 4,444.44
3 7 30,000.00 4,285.71
4 10 123,000.00 12,300.00
5 14 105,000.00 7,500.00
Total 51.00 314,500.00 6,166.67
Mean 10.20 62,900.00 6,166.67
Average Frequency = 10.20
Average Severity = 6,166.67 for the 5-year period
Table 2.2 Claims and Fire Losses ($) for Homes in Location B
Year Number of Fire Claims Fire Losses ($) Average Loss per Claim ($)
1 15 16,500.00 1,100.00
2 5 40,000.00 8,000.00
3 12 30,000.00 2,500.00
4 10 123,000.00 12,300.00
5 9 105,000.00 11,666.67
Total 51.00 314,500.00 6,166.67
Mean 10.20 62,900.00 6,166.67
Average frequency = 10.20
Average severity = 6,166.67 for the 5-year period
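The averages in Tables 2.1 and 2.2 can be reproduced with a short calculation. The sketch below (the function and variable names are ours, not from the text) computes frequency as the average number of claims per year and severity as total dollar losses divided by total claims:

```python
# Frequency and severity for the fire-claim data in Tables 2.1 and 2.2.
claims_a = [11, 9, 7, 10, 14]
losses_a = [16_500, 40_000, 30_000, 123_000, 105_000]

claims_b = [15, 5, 12, 10, 9]
losses_b = [16_500, 40_000, 30_000, 123_000, 105_000]

def frequency_and_severity(claims, losses):
    avg_frequency = sum(claims) / len(claims)   # claims per year
    avg_severity = sum(losses) / sum(claims)    # dollars lost per claim
    return avg_frequency, avg_severity

for name, claims, losses in [("A", claims_a, losses_a),
                             ("B", claims_b, losses_b)]:
    freq, sev = frequency_and_severity(claims, losses)
    print(f"Location {name}: average frequency = {freq:.2f}, "
          f"average severity = ${sev:,.2f}")
```

Both locations yield a frequency of 10.20 claims per year and a severity of $6,166.67 per claim, even though their year-by-year distributions differ.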
As shown in Table 2.1 and Table 2.2, the total number of fire claims for the two locations A and B is the same, as is the total dollar amount of losses shown. You might recall from earlier that the number of claims per year is called the frequency. The average frequency of claims for locations A and B is 10.2 per year. The size of the loss in terms of dollars lost per claim is called severity, as we noted previously. The average dollars lost per claim in each location is $6,166.67. The most important measures for risk managers when they address potential losses that arise from uncertainty are usually those associated with frequency and severity of losses during a specified period of time. The use of frequency and severity data is very important to both insurers and firm managers concerned with judging the risk of various endeavors. Risk managers try to employ activities (physical construction, backup systems, financial hedging, insurance, etc.) to decrease the frequency or severity (or both) of potential losses. In "4: Evolving Risk Management - Fundamental Tools", we will see frequency data and severity data represented. Typically, the risk manager will relate the number of incidents under investigation to a base, such as the number of employees if examining the frequency and severity of workplace injuries. In the examples in Table 2.1 and Table 2.2, the frequency is related to the base of 1,000 homes, and the severity to the number of fire claims in the five-year period. It is important to note that in these tables the precise distribution (frequencies and dollar losses) over the years for the claims arising in Location A is different from the distribution for Location B. This will be discussed later in this chapter. Next, we discuss the concept of frequency in terms of probability or likelihood.
Frequency and Probability

Returning to the quantification of the notion of uncertainty, we first observe that our intuitive usage of the word probability can have two different meanings or forms as related to statements of uncertain outcomes. This is exemplified by two different statements:See Patrick Brockett and Arnold Levine, Statistics, Probability and Their Applications (W. B. Saunders Publishing Co., 1984), 62.

1. “If I sail west from Europe, I have a 50 percent chance that I will fall off the edge of the earth.”
2. “If I flip a coin, I have a 50 percent chance that it will land on heads.”

Conceptually, these represent two distinct types of probability statements. The first is a statement about probability as a degree of belief about whether an event will occur and how firmly this belief is held. The second is a statement about how often a head would be expected to show up in repeated flips of a coin. The important difference is that the first statement’s validity or truth can be settled: we can clear up the statement’s veracity for all by sailing across the globe. The second statement, however, still remains unsettled. Even after the first coin flip, we still have a 50 percent chance that the next flip will result in a head. The second provides a different interpretation of “probability,” namely, as a relative frequency of occurrence in repeated trials. This relative frequency conceptualization of probability is most relevant for risk management. One wants to learn from past events about the likelihood of future occurrences. The discoverers of probability theory adopted the relative frequency approach to formalizing the likelihood of chance events. Pascal and Fermat ushered in a major conceptual breakthrough: the concept that, in repeated games of chance (or in many other situations encountered in nature) involving uncertainty, fixed relative frequencies of occurrence of the individual possible outcomes arose.
These relative frequencies were stable over time, and individuals could calculate them by simply counting the number of ways that the outcome could occur divided by the total number of equally likely possible outcomes. In addition, empirically the relative frequency of occurrence of events in a long sequence of repeated trials (e.g., repeated gambling games) corresponded with the theoretical calculation of the number of ways an event could occur divided by the total number of possible outcomes. This is the model of equally likely outcomes, or the relative frequency definition of probability. It was a very distinct departure from the previous conceptualization of uncertainty that had all events controlled by God with no humanly discernible pattern. In the Pascal-Fermat framework, prediction became a matter of counting that could be done by anyone. Probability and prediction had become a tool of the people! Figure $1$ provides an example representing all possible outcomes in the throw of two colored dice along with their associated probabilities. Figure $1$ lists the probabilities for the number of dots facing upward (2, 3, 4, etc.) in a roll of two colored dice. We can calculate the probability for any one of these numbers (2, 3, 4, etc.) by adding up the number of outcomes (rolls of two dice) that result in this number of dots facing up and dividing by the total number of possibilities. For example, there are thirty-six equally likely possibilities when we roll two dice (count them). The probability of rolling a 2 is 1/36 (we can only roll a 2 one way, namely, when both dice have a 1 facing up). The probability of rolling a 7 is $\frac{6}{36}$=$\frac{1}{6}$ (since a 7 can be rolled in any of six ways: 1 and 6, 2 and 5, 3 and 4, each in two orders). For any other choice of number of dots facing upward, we can get the probability by just adding the number of ways the event can occur and dividing by thirty-six.
The probability of rolling a 7 or an 11 on a throw of the dice, for instance, is $\frac{6+2}{36}$=$\frac{2}{9}$ (a 7 in six ways, plus an 11 in two ways: 5 and 6 in either order). The notions of “equally likely outcomes” and the calculation of probabilities as the ratio of “the number of ways in which an event could occur, divided by the total number of equally likely outcomes” are seminal and instructive. But they did not cover situations in which the number of possible outcomes was (at least conceptually) unbounded or infinite, or in which the outcomes were not equally likely.Nor was the logic of the notion of equally likely outcomes readily understood at the time. For example, the famous mathematician D’Alembert made the following mistake when calculating the probability of a head appearing in two flips of a coin (Karl Pearson, The History of Statistics in the 17th and 18th Centuries against the Changing Background of Intellectual, Scientific and Religious Thought [London: Charles Griffin & Co., 1978], 552). D’Alembert said the head could come up on the first flip, which would settle that matter, or a tail could come up on the first flip followed by either a head or a tail on the second flip. There are three outcomes, two of which have a head, and so he claimed the likelihood of getting a head in two flips is $\frac{2}{3}$. Evidently, he did not take the time to actually flip coins to see that the probability was $\frac{3}{4}$, since the possible equally likely outcomes are actually (H,T), (H,H), (T,H), (T,T), with three pairs of flips resulting in a head. The error is that the outcomes stated in D’Alembert’s solution are not equally likely using his outcomes H, (T,H), (T,T), so his denominator is wrong. The moral of this story is that postulated theoretical models should always be tested against empirical data whenever possible to uncover any possible errors. We needed an extension.
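The dice probabilities just described can be checked by simple enumeration, which is exactly the Pascal-Fermat counting idea. The sketch below (the function name is ours) lists all thirty-six equally likely ordered outcomes of two dice and counts the favorable ones:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely ordered outcomes of rolling two dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob_of_total(*totals):
    """Probability that the sum of the two dice is one of the given totals."""
    favorable = sum(1 for a, b in outcomes if a + b in totals)
    return Fraction(favorable, len(outcomes))

print(prob_of_total(2))       # 1/36: only (1, 1) sums to 2
print(prob_of_total(7))       # 1/6:  six ordered pairs sum to 7
print(prob_of_total(7, 11))   # 2/9:  (6 + 2) / 36
```

Using exact fractions makes the counting argument explicit: each probability is literally (favorable outcomes) / 36.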
Further, extending the theory to nonequally likely possible outcomes arose by noticing that the probability of an event—any event—occurring could be calculated as the relative frequency of the event occurring in a long run of trials in which the event may or may not occur. Thus, different events could have different, nonequal chances of occurring in a long repetition of scenarios involving the possible occurrences of the events. Table 2.3 provides an example of this. We can extend the theory yet further to a situation in which the number of possible outcomes is potentially infinite. But what about a situation in which no easily definable bound on the number of possible outcomes can be found? We can address this situation by again using the relative frequency interpretation of probability. When we have a continuum of possible outcomes (e.g., if an outcome is time, we can view it as a continuous variable), then a curve of relative frequency is created. Thus, the probability of an outcome falling between two numbers x and y is the area under the frequency curve between x and y. The total area under the curve is one, reflecting that it is 100 percent certain that some outcome will occur. The so-called normal distribution or bell-shaped curve from statistics provides us with an example of such a continuous probability distribution curve. The bell-shaped curve represents a situation wherein a continuum of possible outcomes arises. Figure $2$ provides such a bell-shaped curve for the profitability of implementing a new research and development project; it may result in a profit or a loss. To find the probability of any range of profitability values for this research and development project, we find the area under the curve in Figure $2$ between the desired range of profitability values. For example, the distribution in Figure $2$ was constructed to be a normal distribution with the hump over the point $30 million and a measure of spread of $23 million.
This spread represents the standard deviation, which we will discuss in the next section. We can calculate the area under the curve above $0, which will be the probability that we will make a profit by implementing the research and development project. We do this by reference to a normal distribution table of values available in any statistics book. The area under the curve is 0.904, meaning that we have approximately a 90 percent chance (probability of 0.9) that the project will result in a profit.
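The 0.904 figure can also be reproduced without a printed table by evaluating the normal cumulative distribution function directly. The sketch below (function names are ours) assumes, as in the text, a normal distribution with mean $30 million and standard deviation $23 million, and uses the identity Phi(z) = (1 + erf(z / sqrt(2))) / 2 so that no statistics library is required:

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """P(X <= x) for a normal random variable with mean mu and std dev sigma."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

mean, sd = 30.0, 23.0                       # in millions of dollars
p_profit = 1.0 - normal_cdf(0.0, mean, sd)  # area under the curve above $0
print(f"P(profit > 0) = {p_profit:.3f}")    # approximately 0.904
```

The result matches the 90 percent chance of profit quoted in the text.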
In practice, we build probability distribution tables or probability curves such as those in Figure $1$, Figure $2$, and Table 2.3 using estimates of the likelihood (probability) of various different states of nature based on either historical relative frequency of occurrence or theoretical data. For example, empirical data may come from repeated observations in similar situations such as with historically constructed life or mortality tables. Theoretical data may come from a physics or engineering assessment of failure likelihood for a bridge or nuclear power plant containment vessel. In some situations, however, we can determine the likelihoods subjectively or by expert opinion. For example, assessments of political overthrows of governments are used for pricing political risk insurance needed by corporations doing business in emerging markets. Regardless of the source of the likelihoods, we can obtain an assessment of the probabilities or relative frequencies of the future occurrence of each conceivable event. The resulting collection of possible events together with their respective probabilities of occurrence is called a probability distribution, an example of which is shown in Table 2.3.
Measures of Outcome Value: Severity of Loss, Value of Gain
We have developed a quantified measure of the likelihood of the various uncertain outcomes that a firm or individual might face—these are also called probabilities. We can now turn to address the consequences of the uncertainty. The consequences of uncertainty are most often a vital issue financially. The reason that uncertainty is unsettling is not the uncertainty itself but rather the various different outcomes that can impact strategic plans, profitability, quality of life, and other important aspects of our life or the viability of a company. Therefore, we need to assess how we are impacted in each state of the world. For each outcome, we associate a value reflecting how we are affected by being in this state of the world.
As an example, consider a retail firm entering a new market with a newly created product. They may make a lot of money by taking advantage of “first-mover” status. They may lose money if the product is not accepted sufficiently by the marketplace. In addition, although they have tried to anticipate any problems, they may be faced with potential product liability. While they naturally try to make their products as safe as possible, they have to be mindful of the potential liability because of their limited experience with the product. They may be able to assess the likelihood of a lawsuit as well as the consequences (losses) that might result from having to defend such lawsuits. It is the uncertainty of the consequences that makes this endeavor risky, and the potential for gain that motivates the company’s entry into the new market. How does one calculate these gains and losses? We already demonstrated some calculations in the examples above in Table 2.1 and Table 2.2 for the claims and fire losses for homes in locations A and B. These examples concentrated on the consequences of the uncertainty about fires. Another way to compute the same type of consequences is provided in the example in Table 2.3 for the probability distribution for this new market entry. We look for an assessment of the financial consequences of the entry into the market as well. This example looks at a few possible outcomes, not only the fire-losses outcome. These outcomes can have positive or negative consequences. Therefore, we use the opportunity terminology here rather than only the loss possibilities.
Table 2.3 Opportunity and Loss Assessment Consequences of New Product Market Entry
State of Nature Probability Assessment of Likelihood of State Financial Consequences of Being in This State (in Millions of Dollars)
Subject to a loss in a product liability lawsuit .01 −10.2
Market acceptance is limited and temporary .10 −.50
Some market acceptance but no great consumer demand .40 .10
Good market acceptance and sales performance .40 1
Great market demand and sales performance .09 8
As you can see, it’s not the uncertainty of the states themselves that causes decision makers to ponder the advisability of market entry of a new product. It’s the consequences of the different outcomes that cause deliberation. The firm could lose \$10.2 million or gain \$8 million. If we knew which state would materialize, the decision would be simple. We address the issue of how we combine the probability assessment with the value of the gain or loss for the purpose of assessing the risk (consequences of uncertainty) in the next section.
Combining Probability and Outcome Value Together to Get an Overall Assessment of the Impact of an Uncertain Endeavor
Early probability developers asked how we could combine the various probabilities and outcome values to obtain a single number reflecting the “value” of the multitude of different outcomes and their different consequences. They wanted a single number that summarized, in some way, the entire probability distribution. In the context of the gambling games of the time, when the outcomes were the amounts you won in each potential uncertain state of the world, they asserted that this value was the “fair value” of the gamble. We define fair value as the numerical average of the experience of all possible outcomes if you played the game over and over. This is also called the “expected value.” Expected value is calculated by multiplying each probability (or relative frequency) by its respective gain or loss and summing the products. (In some ways it is a shame that the term “expected value” has been used to describe this concept. A better term is “long-run average value” or “mean value,” since this particular value is not really to be expected in any concrete sense and may not even be a possible outcome—for example, the value calculated from Table 2.3 is 1.008, which is not one of the listed outcomes. Nevertheless, we are stuck with this terminology, and it does convey the right idea as long as we interpret it as the average value in a long series of repetitions of the scenario being evaluated.) It is also referred to as the mean value, or the average value. If X denotes the value that results in an uncertain situation, then the expected value (or average value or mean value) is often denoted by $E(X)$, sometimes also referred to by economists as $E(U)$—expected utility—or $E(G)$—expected gain. In the long run, the total experienced loss or gain divided by the number of repeated trials would be the sum of the probabilities times the experience in each state.
In Table 2.3 the expected value is $(.01)×(−10.2) + (.1)×(−.50) + (.4)×(.1) + (.4)×(1) + (.09)×(8) = 1.008$. Thus, we would say the expected outcome of the uncertain situation described in Table 2.3 is \$1.008 million, or \$1,008,000. Similarly, the expected value of the number of points on the toss of a pair of dice, calculated from the example in Figure $1$, is $2×\frac{1}{36}+3×\frac{2}{36}+4×\frac{3}{36}+5×\frac{4}{36}+6×\frac{5}{36}+7×\frac{6}{36}+8×\frac{5}{36}+9×\frac{4}{36}+10×\frac{3}{36}+11×\frac{2}{36}+12×\frac{1}{36}=7$. In uncertain economic situations involving possible financial gains or losses, the mean value (or average value, or expected value) is often used to express the expected returns. (Other commonly used measures of the profitability of an uncertain opportunity are the mode, the most likely value, and the median, the value with half the outcomes above it and half below it—the 50 percent mark.) The expected value represents the expected return from an endeavor; however, it does not express the risk involved in the uncertain scenario. We turn to this now.
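The two expected-value computations above can be sketched in Python, using the probabilities and consequences from Table 2.3 and the dice distribution described in the text:

```python
# Expected value of the Table 2.3 new-product distribution:
# multiply each state's probability by its financial consequence and sum.
outcomes = [(-10.2, 0.01), (-0.50, 0.10), (0.10, 0.40), (1.0, 0.40), (8.0, 0.09)]
expected_value = sum(p * v for v, p in outcomes)
print(round(expected_value, 3))  # 1.008 (millions of dollars)

# Expected number of points on a toss of two dice (exact arithmetic):
from fractions import Fraction
ways = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1}
ev_dice = sum(pts * Fraction(n, 36) for pts, n in ways.items())
print(ev_dice)  # 7
```

Using `Fraction` for the dice case keeps the arithmetic exact, so the long-run average comes out to exactly 7 rather than a rounded float.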
Relating back to Table 2.1 and Table 2.2, for locations A and B of fire claim losses, the expected value of losses is the severity of fire claims, \$6,166.67, and the expected number of claims is the frequency of occurrence, 10.2 claims per year.

Key Takeaways

In this section you learned about the quantification of uncertain outcomes via probability models. More specifically, you delved into methods of computing:

• Severity as a measure of the consequence of uncertainty—it is the expected value or average value of the loss that arises in different states of the world. Severity can be obtained by adding all the loss values in a sample and dividing by the total sample size.
• If we take a table of probabilities (a probability distribution), the expected value is obtained by multiplying the probability of each particular loss by its size and summing over all possibilities.
• Frequency is the expected number of occurrences of the loss that arises in different states of the world.
• Likelihood and the probability distribution represent the relative frequency of occurrence (the frequency of an event divided by the total frequency of all events) of the different events in uncertain situations.

Discussion Questions

1. A study of data losses incurred by companies due to hackers penetrating the Internet security of the firm found that 60 percent of the firms in the industry studied had experienced security breaches and that the average loss per security breach was \$15,000.
1. What is the probability that a firm will not have a security breach?
2. One firm had two breaches in one year and is contemplating spending money to decrease the likelihood of a breach. Assuming that the next year would be the same as this year in terms of security breaches, how much should the firm be willing to pay to eliminate security breaches (i.e., what is the expected value of their loss)?
2. The following is the experience of Insurer A for the last three years:
Year Number of Exposures Number of Collision Claims Collision Losses (\$)
1 10,000 375 350,000
2 10,000 330 250,000
3 10,000 420 400,000

1. What is the frequency of losses in year 1?
2. Calculate the probability of a loss in year 1.
3. Calculate the mean losses per year for the collision claims and losses.
4. Calculate the mean losses per exposure.
5. Calculate the mean losses per claim.
6. What is the frequency of the losses?
7. What is the severity of the losses?

3. The following is the experience of Insurer B for the last three years:

Year Number of Exposures Number of Collision Claims Collision Losses (\$)
1 20,000 975 650,000
2 20,000 730 850,000
3 20,000 820 900,000
1. Calculate the mean or average number of claims per year for the insurer over the three-year period.
2. Calculate the mean or average dollar value of collision losses per exposure for year 2.
3. Calculate the expected value (mean or average) of losses per claim over the three-year period.
4. For each of the three years, calculate the probability that an exposure unit will file a claim.
5. What is the average frequency of losses?
6. What is the average severity of the losses?
7. What is the standard deviation of the losses?
8. Calculate the coefficient of variation.
Learning Objectives
• In this section, you will learn how to compute several common measures of risk using various methods and statistical concepts.
Having developed the concept of probability to quantify the relative likelihood of an uncertain event, and having developed a measure of “expected value” for an uncertain event, we are now ready to try to quantify risk itself. The expected value (or mean value, or fair value) quantifying the potential outcome arising from an uncertain scenario in which probabilities have been assigned is a common input into the decision-making process concerning the advisability of taking certain actions, but it is not the only consideration. The financial return outcomes of various uncertain research and development projects might, for example, be almost identical except that their return distributions are shifted relative to one another. Such a situation is shown in Figure $1$. This figure describes the (continuous) distributions of anticipated profitability for each of three possible capital expenditures on uncertain research and development projects, labeled A, B, and C, respectively.
Intuitively, in economic terms a risk is a “surprise” outcome that is a consequence of uncertainty. It can be a positive surprise or a negative surprise, as we discussed in "1: The Nature of Risk - Losses and Opportunities".
Using the terms explained in the last section, we can regard risk as the deviation from the expected value. The more an observation deviates from what we expected, the more surprised we are likely to become if we should see it, and hence the more risky (in an economic sense) we deem the outcome to be. Intuitively, the more surprise we “expect” from a venture or a scenario, the riskier we judge this venture or scenario to be.
Looking back at Figure $1$, we might say that all three curves actually represent the same level of risk, in that each differs from its expected value (the mean, at the hump of the distribution) in identical ways. They differ only in their respective expected levels of profitability. Note that the uncertain scenarios “B” and “C” still describe risky situations, even though virtually all of the possible outcomes of these uncertain scenarios are in the positive profit range. The “risk” resides in the deviations from the expected value that might result (the surprise potential), whether on average the result is negative or positive. Look at the distribution labeled “A,” which describes a scenario where many more of the possible results are in the negative range (damages or losses). Economists don’t consider “A” to be any more risky (or more dangerous) than “B” or “C,” but simply less profitable. It is the deviation from the expected value that defines risk here. We can plan for negative as well as positive outcomes if we know what to expect. A certain negative value may be unfortunate, but it is not risky.
Some other uncertain situations or scenarios will have the same expected level of “profitability” but will differ in the amount of “surprise” they might present. For example, let’s assume that we have three potential corporate project investment opportunities. We expect that, over a decade, the average profitability in each opportunity will amount to \$30 million. The projects differ, however, in the level of uncertainty involved in this profitability assessment (see Figure $2$). In Opportunity A, the possible range of profitability is \$5–\$60 million, whereas Opportunity B has a larger range of possible profits, between −\$20 million and +\$90 million. The third opportunity still has an expected return of \$30 million, but now the range of values runs from −\$40 million to +\$100 million. You could make more from Opportunity C, but you could lose more as well. The deviation of the results around the expected value measures the level of “surprise” potential the uncertain situation or profit/loss scenario contains. The uncertain situation concerning the profitability of Opportunity B contains a larger potential surprise than A, since we might get a larger deviation from the expected value in B than in A. That’s why we consider Opportunity B more risky than A. Opportunity C is the riskiest of all, having the possibility of a giant \$100 million return with the downside potential of a \$40 million loss.
Our discussion above is based upon intuition rather than mathematics. To make it specific, we need to actually define quantitatively what we mean by the terms “a surprise” and “more surprised.” To this end, we must focus on the objective of the analysis. A sequence of throws of a pair of colored dice in which the red die always lands to the left of the green die may be surprising, but this surprise is irrelevant if the purpose of the dice throw is to play a game in which the number of dots facing up determines the pay off. We thus recognize that we must define risk in a context of the goal of the endeavor or study. If we are most concerned about the risk of insolvency, we may use one risk measure, while if we are interested in susceptibility of portfolio of assets to moderate interest rate changes, we may use another measure of risk. Context is everything. Let’s discuss several risk measures that are appropriate in different situations.
Some Common Measures of Risk
As we mentioned previously, intuitively, a risk measure should reflect the level of “surprise” potential intrinsic in the various outcomes of an uncertain situation or scenario. To this end, the literature proposes a variety of statistical measures for risk levels. All of these measures attempt to express the result variability for each relevant outcome in the uncertain situation. The following are some risk measures.
The Range
We can use the range of the distribution—that is, the distance between the highest possible outcome value to the lowest—as a rough risk measure. The range provides an idea about the “worst-case” dispersion of successive surprises. By taking the “best-case scenario minus the worst-case scenario” we define the potential breadth of outcomes that could arise in the uncertain situation.
As an example, consider the number of claims per year in Location A of Table 2.1. Table 2.1 shows a low of seven claims per year and a high of fourteen claims per year, for a range of seven claims per year. For Location B of Table 2.2, the number of claims ranges from a low of five in one year to a high of fifteen, which gives us a range of ten claims per year. Using the range measure of risk, we would say that Location A is less risky than Location B in this situation, especially since the average number of claims is the same (10.2) in each case and we have more variability or surprise potential in Location B. As another example, if we go back to the distribution of possible values in Table 2.3, the extremes vary from −\$10.2 million to +\$8 million, so the range is \$18.2 million.

This risk measure leaves the picture incomplete because it cannot distinguish in riskiness between two distributions whose possible outcomes are unbounded, nor does it take into account the frequency or probability of the extreme values. The lower value of −\$10.2 million in Table 2.3 occurs only 1 percent of the time, so it’s highly unlikely that you would get a value this small. The distribution could instead have had an extreme value of −\$100 million occurring with probability 0.0000000001, in which case the range would have reflected this possibility. It is extremely unlikely that you would ever experience such a one-in-ten-billion event, and usually you would not want your risk management activities or managerial actions to be dictated by it.

Deviation from a Central Value

A more sophisticated (and more traditional) way to measure risk considers not just the most extreme values of the distribution but all values and their respective occurrence probabilities. One way to do this is to average the deviations of the possible values of the distribution from a central value, such as the expected value $E(V)$ or mean value discussed earlier.
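The range calculation for the two locations can be sketched directly, using the yearly claim counts that appear in Tables 2.4 and 2.5 below:

```python
# Range = best case minus worst case, for the yearly claim counts of
# Locations A and B (the five-year samples from Tables 2.4 and 2.5).
claims_A = [11, 9, 7, 10, 14]
claims_B = [15, 5, 12, 10, 9]
range_A = max(claims_A) - min(claims_A)
range_B = max(claims_B) - min(claims_B)
print(range_A, range_B)  # 7 10
```

Both samples have the same mean (10.2 claims per year), yet the larger range for Location B flags it as the riskier book by this measure.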
We develop this idea further below.

Variance and Standard Deviation

Continuing the example from Table 2.1 and Table 2.2, we now ask what differentiates the claims distributions of Locations A and B, both of which possess the same expected frequency and severity. We have already seen that the range differs. We now examine how the two locations differ in terms of their deviation from the common mean or expected value—essentially, how much surprise we expect to see in observations from the distributions. One such measure of deviation or surprise is the expected squared distance of the various outcomes from their mean value. This is a weighted average of the squared distance of each possible value from the mean of all observations, where the weights are the probabilities of occurrence. Computationally, we do this by individually squaring the deviation of each possible outcome from the expected value, multiplying the result by its respective probability of occurring, and then summing up the resulting products. (Calculating the average signed deviation from the mean or expected value is a useless exercise, since the result will always be zero. Squaring each deviation gets rid of the algebraic sign and makes the sum positive and meaningful. One might alternatively take the absolute value of the deviations from the mean to obtain another measure called the absolute deviation, but this is usually not done because it results in a mathematically inconvenient formulation. We shall stick to the squared deviation and its variants here.) This produces a measure known as the variance.
Variance provides a very commonly used measure of risk in financial contexts and is one of the bases of the notion of efficient portfolio selection in finance and of the Capital Asset Pricing Model, which is used to show explicitly the trade-off between the risk and return of assets in a capital market. We first illustrate the calculation of the variance using the probability distribution shown in Table 2.3. We already calculated the expected value to be \$1.008 million, so we may calculate the variance as $(.01)×(−10.2−1.008)^2 + (.1)×(−.5−1.008)^2 + (.4)×(.1−1.008)^2 + (.4)×(1−1.008)^2 + (.09)×(8−1.008)^2 = 6.213$. Usually, variance is denoted with the Greek symbol sigma squared, $σ^2$, or simply V.
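Recomputing the probability-weighted sum of squared deviations for the Table 2.3 distribution confirms the arithmetic:

```python
# Variance of the Table 2.3 distribution: the probability-weighted
# average squared deviation from the mean (1.008).
outcomes = [(-10.2, 0.01), (-0.50, 0.10), (0.10, 0.40), (1.0, 0.40), (8.0, 0.09)]
mean = sum(p * v for v, p in outcomes)
variance = sum(p * (v - mean) ** 2 for v, p in outcomes)
print(round(variance, 3))  # 6.213
```

Because the consequences are in millions of dollars, this variance is in units of millions-of-dollars squared, which is why the square root (the standard deviation) is discussed next.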
As another example, Table 2.4 and Table 2.5 show the calculation of the variance for the two samples of claims given for Locations A and B in Table 2.1 and Table 2.2, respectively. In this case the years are all treated equally, so the variance is computed from the squared deviations of the five yearly observations from the mean (dividing their sum by n − 1 = 4, as is customary for a sample). We calculate the variance of the number of claims only.
Table 2.4 Variance and Standard Deviation of Fire Claims of Location A
Year Number of Fire Claims Difference between Observed Number of Claims and Mean Number of Claims Difference Squared
1 11 0.8 0.64
2 9 −1.2 1.44
3 7 −3.2 10.24
4 10 −0.2 0.04
5 14 3.8 14.44
Total 51 0 26.8
Mean 10.2
Variance = 26.8 / 4 = 6.70
Standard Deviation = √6.70 = 2.59
Table 2.5 Variance and Standard Deviation of Fire Claims of Location B
Year Number of Fire Claims Difference between Observed Number of Claims and Mean Number of Claims Difference Squared
1 15 4.8 23.04
2 5 −5.2 27.04
3 12 1.8 3.24
4 10 −0.2 0.04
5 9 −1.2 1.44
Total 51 0 54.8
Mean 10.2
Variance = 54.8 / 4 = 13.70
Standard Deviation = √13.70 = 3.70
A problem with the variance as a measure of risk is that by squaring the individual deviations from the mean, you end up with a measure that is in squared units (e.g., if the original losses are measured in dollars, then the variance is measured in dollars-squared). To get back to the original units of measurement we commonly take the square root and obtain a risk measure known as the standard deviation, denoted by the Greek letter sigma (σ). To provide a more meaningful measure of risk denominated in the same units as the original data, economists and risk professionals often use this square root of the variance—the standard deviation—as a measure of risk. It provides a value comparable with the original expected outcomes. Remember that variance uses squared differences; therefore, taking the square root returns the measure to its initial unit of measurement.
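The sample calculations in Tables 2.4 and 2.5 can be reproduced with Python's standard `statistics` module, whose `variance` and `stdev` functions use the same n − 1 divisor as the tables:

```python
# Reproducing Tables 2.4 and 2.5: statistics.variance and statistics.stdev
# divide the summed squared deviations by n - 1 (= 4 here), matching the
# tables' calculation.
import statistics

claims_A = [11, 9, 7, 10, 14]
claims_B = [15, 5, 12, 10, 9]
print(statistics.mean(claims_A), statistics.variance(claims_A),
      round(statistics.stdev(claims_A), 2))   # 10.2 6.7 2.59
print(statistics.mean(claims_B), statistics.variance(claims_B),
      round(statistics.stdev(claims_B), 2))   # 10.2 13.7 3.7
```

The identical means with different standard deviations (2.59 versus 3.70) restate the chapter's point: Location B carries more surprise potential, and hence more risk, per this measure.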
Value at Risk (VaR) and Maximal Probable Annual Loss (MPAL)

The risk can now be communicated with the statement: Under normal market conditions, the most the investment security portfolio will lose in value over a five-day period is about \$3,275,000, with a confidence level of 99 percent (Philippe Jorion, Value at Risk: The New Benchmark for Managing Financial Risk, 2nd ed. [McGraw Hill, 2001], ch. 1). In the context of the pure risk exposures discussed in "1: The Nature of Risk - Losses and Opportunities", the equivalent notion to VaR is the Maximal Probable Annual Loss (MPAL). As with the VaR measure, it looks at a probability distribution (in this case, of losses over a year) and picks the selected lower percentile value as the MPAL. For example, if the loss distribution is given by Figure 2.1.2 and the 95 percent level of confidence is selected, then the MPAL is the same as the 95 percent VaR value. In insurance contexts one often encounters the term MPAL, whereas in finance one often encounters the term VaR. Their calculation is the same, and their interpretation as a measure of risk is the same.

We also note that debate rages about perceived weaknesses in using VaR as a risk measure in finance: “In short, VaR models do not provide an accurate measure of the losses that occur in extreme events. You simply cannot depict the full texture and range of your market risks with VaR alone” (Gleason, chapter 12). In addition, the VaR gives the size of loss that would be exceeded only 1 percent of the time, but it does not specify the size of the shortfall that the company would be expected to make up by a distress liquidation of assets should such a large loss occur. Another measure, called the expected shortfall, is used for this. The interested reader is referred to Brockett and Ai (Patrick L. Brockett and Jing Ai, “Enterprise Risk Management (ERM),” in Encyclopedia of Quantitative Risk Assessment and Analysis, ed. E. Melnick and B. Everitt [Chichester, UK: John Wiley & Sons Ltd., 2008], 559–66) for this calculation.
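A percentile-based VaR/MPAL calculation can be sketched as follows. The loss sample below is simulated from a normal distribution purely for illustration; the portfolio mean, volatility, and sample size are all assumptions, standing in for whatever loss model a firm actually uses:

```python
# Percentile-based VaR sketch on a hypothetical simulated loss sample.
# (All numbers here are illustrative assumptions, not from the text.)
import random

random.seed(42)
losses = [random.gauss(1_000_000, 900_000) for _ in range(100_000)]

def value_at_risk(sample, confidence=0.99):
    """Loss level that is exceeded only (1 - confidence) of the time."""
    ranked = sorted(sample)
    return ranked[int(confidence * len(ranked)) - 1]

var_99 = value_at_risk(losses)          # roughly mean + 2.33 std deviations
mpal_95 = value_at_risk(losses, 0.95)   # a 95 percent MPAL uses the same formula
```

As the text notes, the VaR and MPAL calculations are identical; only the name, the horizon, and the chosen confidence level differ.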
CAPM’s Beta Measure of Nondiversifiable Portfolio Risk

Some risk exposures affect many assets of a firm at the same time. In finance, for example, movements in the market as a whole or in the entire economy can affect the value of many individual stocks (and firms) simultaneously. We saw this dramatically illustrated in the financial crisis of 2008–2009, when the entire stock market went down and dragged many stocks (and firms) down with it, some more than others. In "1: The Nature of Risk - Losses and Opportunities" we referred to this type of risk as systematic, fundamental, or nondiversifiable risk. For a firm (or individual) holding a large, well-diversified portfolio of assets, the total negative financial impact of any single idiosyncratic risk on the value of the portfolio is minimal, since it constitutes only a small fraction of the total wealth. Therefore, asset-specific idiosyncratic risk is generally ignored when deciding how much additional risk is involved in adding an asset to an already well-diversified portfolio. The question is how to disentangle the systematic from the nonsystematic risk embedded in any asset. Finance professors Jack Treynor, William Sharpe, John Lintner, and Jan Mossin worked independently to develop a model called the Capital Asset Pricing Model (CAPM). From this model we can get a measure of how the return on an asset systematically varies with variations in the market, and consequently a measure of systematic risk. The idea is similar to the old adage that a rising tide lifts all ships: a rising (or falling) market or economy raises (or lowers) all assets to a greater or lesser degree, depending on their covariation with the market. This covariation with the market is fundamental to obtaining a measure of systematic risk. We develop it now.
Essentially, the CAPM model assumes that investors in assets expect to be compensated for both the time value of money and the systematic or nondiversifiable risk they bear. In this regard, the return on an asset A, $R_A$, is assumed to be equal to the return on an absolutely safe or risk-free investment, $r_f$ (the time-value-of-money part), plus a risk premium, which measures the compensation for the systematic risk they are bearing. To measure the amount of this systematic risk, we first look at the correlation between the returns on the asset and the returns on a market portfolio of all assets. The assumption is that the market portfolio changes with changes in the economy as a whole, so systematic changes in the economy are reflected by changes in the level of the market portfolio. The variation of the asset returns with respect to the market returns is assumed to be linear, so the general framework is expressed as $R_A= r_f+β_A×(R_m − r_f)+ ε,$ where ε denotes a random term that is unrelated to the market return. Thus the term $β_A×(R_m − r_f)$ represents a systematic return and ε represents a firm-specific or idiosyncratic nonsystematic component of return. Notice that upon taking variances, we have $σ_A^{2}= β_A^2×σ_m^2+ σ_ε^2$, so the first term is called the systematic variance and the second term the idiosyncratic or firm-specific variance. The idea behind the CAPM is that investors are compensated for the systematic risk and not the idiosyncratic risk, since the idiosyncratic risk should be diversifiable by investors who hold a large diversified portfolio of assets, while the systematic or market risk affects them all. In terms of expected values, we often write the equation as $E[R_A]= r_f+ β_A×(E[R_m]− r_f),$ which is the so-called CAPM model. In this regard the expected rate of return on asset A, $E[R_A]$, is the risk-free return, $r_f$, plus a market risk premium equal to $β_A×(E[R_m] − r_f)$.
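A quick numeric sketch of the CAPM pricing equation, with illustrative inputs (these numbers are assumptions, not taken from the text):

```python
# CAPM pricing equation E[R_A] = r_f + beta_A * (E[R_m] - r_f),
# evaluated with hypothetical inputs for illustration.
r_f = 0.03       # risk-free rate (assumed)
beta_a = 1.2     # asset's systematic risk (assumed)
e_rm = 0.08      # expected market return (assumed)
expected_return = r_f + beta_a * (e_rm - r_f)
print(round(expected_return, 3))  # 0.09
```

With a beta above one, the asset's required return (9 percent) exceeds the market's expected 8 percent, because the investor must be compensated for bearing more than a market-average amount of systematic risk.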
The coefficient $β_A$ is called the market risk or systematic risk of asset A. By running a linear regression of the returns experienced on asset A against those experienced on a market portfolio (such as the Dow Jones Industrial stock portfolio), together with the risk-free asset return (such as the U.S. T-Bill rate of return), one can estimate the risk measure $β_A$. A regression is a statistical technique that creates a trend based on the data; an actual linear regression to compute future frequency and severity based on a trend is used in "4: Evolving Risk Management - Fundamental Tools" for risk management analysis. Statistics texts (see Patrick Brockett and Arnold Levine, Statistics, Probability and Their Applications [W. B. Saunders Publishing Co., 1984]) show that $β_A = \tfrac{COV(R_A, R_m)}{σ_m^2},$ where $COV(R_A,R_m)$ is the covariance of the return on the asset with the return on the market, defined by $COV(R_A, R_m) = E[(R_A−E(R_A)) × (R_m−E(R_m))],$ that is, the average value of the product of the deviation of the asset return from its expected value and the deviation of the market return from its expected value. In terms of the correlation coefficient $ρ_{Am}$ between the return on the asset and the market, we have $β_A= ρ_{Am}×(\frac{σ_A}{σ_m})$, so we can also think of beta as scaling the asset volatility by the market volatility and the correlation of the asset with the market. The $β$ (beta) term in the above equations attempts to quantify the risk associated with market fluctuations or swings in the market. A beta of 1 means that the asset return is expected to move in conjunction with the market; that is, a 5 percent move (measured in terms of standard deviation units of the market) in the market will result in a 5 percent move in the asset (measured in terms of standard deviation units of the asset).
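The covariance-over-variance formula for beta can be sketched directly; the two return series below are invented for illustration only:

```python
# Estimating beta as cov(R_A, R_m) / var(R_m) from paired return
# observations (both return series are hypothetical).
def beta(asset_returns, market_returns):
    n = len(asset_returns)
    mean_a = sum(asset_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((a - mean_a) * (m - mean_m)
              for a, m in zip(asset_returns, market_returns)) / n
    var_m = sum((m - mean_m) ** 2 for m in market_returns) / n
    return cov / var_m

market = [0.02, -0.01, 0.03, 0.00, -0.02, 0.04]
asset  = [0.03, -0.02, 0.05, 0.01, -0.03, 0.06]  # amplifies market moves
print(round(beta(asset, market), 2))  # 1.54
```

The resulting beta above one says this hypothetical asset swings more than the market, consistent with the interpretation in the following paragraph.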
A beta less than one indicates that the asset is less volatile than the market: when the market goes up (or down) by 5 percent, the asset will go up (or down) by less than 5 percent. A beta greater than one means that the asset price is expected to move more rapidly than the market, so if the market goes up (or down) by 5 percent, the asset will go up (or down) by more than 5 percent. A beta of zero indicates that the return on the asset does not correlate with the returns on the market.

Key Takeaways

• Risk measures quantify the amount of surprise potential contained in a probability distribution.
• Measures such as the range, the Value at Risk (VaR), and the Maximal Probable Annual Loss (MPAL) focus on the extremes of the distributions and are appropriate measures of risk when interest is focused on solvency or on making sure that enough capital is set aside to handle any realized extreme losses.
• Measures such as the variance, standard deviation, and semivariance are useful when looking at average deviations from what is expected, for the purpose of planning for expected deviations from expected results.
• The market risk measure from the Capital Asset Pricing Model is useful when assessing systematic financial risk, or the additional risk involved in adding an asset to an already existing diversified portfolio.

Discussion Questions

1. Compare the relative risk of Insurer A to Insurer B in the following questions.
1. Which insurer carries more risk in losses and which carries more claims risk? Explain.
2. Compare the severity and frequency of the insurers as well.
2. The experience of Insurer A for the last three years as given in Problem 2 was the following:

Year Number of Exposures Number of Collision Claims Collision Losses (\$)
1 10,000 375 350,000
2 10,000 330 250,000
3 10,000 420 400,000
1. What is the range of collision losses per year?
2. What is the standard deviation of the losses per year?
3. Calculate the coefficient of variation of the losses per year.
4. Calculate the variance of the number of claims per year.
3. The experience of Insurer B for the last three years as given in Problem 3 was the following:
Year Number of Exposures Number of Collision Claims Collision Losses (\$)
1 20,000 975 650,000
2 20,000 730 850,000
3 20,000 820 900,000
1. What is the range of collision losses?
2. Calculate the variance in the number of collision claims per year.
3. What is the standard deviation of the collision losses?
4. Calculate the coefficient of collision variation.
5. Comparing the results of Insurer A and Insurer B, which insurer has a riskier book of business in terms of the range of possible losses they might experience?
6. Comparing the results of Insurer A and Insurer B, which insurer has a riskier book of business in terms of the standard deviation in the collision losses they might experience?
1. The Texas Department of Insurance publishes data on all the insurance claims closed during a given year. For the thirteen years from 1990 to 2002 the following table lists the percentage of medical malpractice claims closed in each year for which the injury actually occurred in the same year.
Year % of injuries in the year that are closed in that year
1990 0.32
1991 1.33
1992 0.86
1993 0.54
1994 0.69
1995 0.74
1996 0.76
1997 1.39
1998 1.43
1999 0.55
2000 0.66
2001 0.72
2002 1.06
Calculate the average percentage of claims that close in the same year as the injury occurs.
2. From the same Texas Department of Insurance data on closed claims for medical malpractice liability insurance referred to in Problem 1, we can estimate the number of claims in each year of injury that will be closed in the next 16 years. We obtain the following data. Here the estimated dollars per claim for each year have been adjusted to 2007 dollars to account for inflation, so the values are all compatible. Texas was said to have had a “medical malpractice liability crisis” starting in about 1998 and continuing until the legislature passed tort reforms effective in September 2003, which put caps on certain noneconomic damage awards. During this period premiums increased greatly and doctors left high-risk specialties such as emergency room service and delivering babies, and left high-risk geographical areas as well causing shortages in doctors in certain locations. The data from 1994 until 2001 is the following:
Injury year Estimated # claims Estimated \$ per claim
1994 1021 \$415,326.26
1995 1087 \$448,871.57
1996 1184 \$477,333.66
1997 1291 \$490,215.19
1998 1191 \$516,696.63
1999 1098 \$587,233.93
2000 1055 \$536,983.82
2001 1110 \$403,504.39
1. Calculate the mean or average number of claims per year for medical malpractice insurance in Texas over the four-year period 1994–1997.
2. Calculate the mean or average number of claims per year for medical malpractice insurance in Texas over the four-year period 1998–2001.
3. Calculate the mean or average dollar value per claim per year for medical malpractice insurance in Texas over the four-year period 1994–1997 (in 2007 dollars).
4. Calculate the mean or average dollar value per claim per year for medical malpractice insurance in Texas over the four-year period 1998–2001 (in 2007 dollars).
5. Looking at your results from (a) to (d), do you think there is any evidence to support the conclusion that costs were rising for insurers, justifying the rise in premiums?
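The four subperiod averages can be computed directly from the table; a minimal sketch (data transcribed from the table above):

```python
# Estimated claim counts and dollars per claim (2007 dollars) for
# Texas medical malpractice, by injury year (from the table above).
claims = {1994: 1021, 1995: 1087, 1996: 1184, 1997: 1291,
          1998: 1191, 1999: 1098, 2000: 1055, 2001: 1110}
dollars = {1994: 415326.26, 1995: 448871.57, 1996: 477333.66, 1997: 490215.19,
           1998: 516696.63, 1999: 587233.93, 2000: 536983.82, 2001: 403504.39}

def mean_over(data, years):
    """Arithmetic mean of data values over the given injury years."""
    return sum(data[y] for y in years) / len(years)

early, late = range(1994, 1998), range(1998, 2002)
print(mean_over(claims, early))             # -> 1145.75 claims per year
print(mean_over(claims, late))              # -> 1113.5 claims per year
print(round(mean_over(dollars, early), 2))  # -> 457936.67 per claim
print(round(mean_over(dollars, late), 2))   # -> 511104.69 per claim
```

Average claim frequency fell slightly between the two periods, but the average dollar value per claim rose by more than $50,000, which bears on the question in part 5.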
3. Referring back to the Texas Department of Insurance data on closed claims for medical malpractice liability insurance presented in Problem 2, we wish to see whether medical malpractice was riskier to the insurer during the 1998–2001 period than it was in the 1994–1997 period. The data from 1994 until 2001 were:
Injury year Estimated # claims Estimated \$ per claim
1994 1021 \$415,326.26
1995 1087 \$448,871.57
1996 1184 \$477,333.66
1997 1291 \$490,215.19
1998 1191 \$516,696.63
1999 1098 \$587,233.93
2000 1055 \$536,983.82
2001 1110 \$403,504.39
1. Calculate the standard deviation in the estimated payment per claim for medical malpractice insurance in Texas over the four-year period 1994–1997.
2. Calculate the standard deviation in the estimated payment per claim for medical malpractice insurance in Texas over the four-year period 1998–2001.
3. Which time period was more risky (in terms of the standard deviation in payments per claim)?
4. Using the results of (c) above, do you think that medical malpractice insurers’ raising of rates during the period 1998–2001 was justified on the basis of the additional risk they assumed?
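One way to carry out the standard-deviation comparison is sketched below. It uses the sample standard deviation (n − 1 in the denominator); the population formula gives smaller numbers but the same conclusion about which period was riskier.

```python
import statistics

# Estimated dollars per claim (2007 dollars) by injury year,
# transcribed from the table above.
dollars = {1994: 415326.26, 1995: 448871.57, 1996: 477333.66, 1997: 490215.19,
           1998: 516696.63, 1999: 587233.93, 2000: 536983.82, 2001: 403504.39}

early = [dollars[y] for y in range(1994, 1998)]
late = [dollars[y] for y in range(1998, 2002)]

# statistics.stdev is the sample standard deviation (n - 1 denominator).
sd_early = statistics.stdev(early)
sd_late = statistics.stdev(late)
print(round(sd_early, 2), round(sd_late, 2))
# sd_early is about $33,000; sd_late is about $78,000, so the
# 1998-2001 book shows far more variability in payment per claim.
```

By this measure, 1998–2001 was the riskier period for insurers, even though the average payment per claim rose only moderately.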
Whenever we look into risks, risk measures, and risk management, we must always view these in a greater context. In this chapter, we focus on the risk within the “satisfaction” value maximization for individuals and firms. The value here is measured economically. So, how do economists measure the value of satisfaction or happiness? Can we even measure satisfaction or happiness? Whatever the philosophical debate on the topic might be, economists have tried to measure the level of satisfaction. (At one time, economists measured satisfaction in a unit called “utils” and discussed the highest number of utils as “bliss points”!) What economists have succeeded in doing is comparing the levels of satisfaction an individual achieves when confronted with two or more choices. For example, we suppose that everyone likes to eat gourmet food at five-star hotels, drink French wine, vacation in exotic places, and drive luxury cars. For an economist, all these goods are assumed to provide satisfaction, some more than others. So while eating a meal at home gives us pleasure, eating exotic food at an upscale restaurant gives us an even higher level of satisfaction.
The problem with the quantity and quality of goods consumed is that we can find no common unit of measurement. That prevents economists from comparing levels of satisfaction from consumption of commodities that are different as apples are different from oranges. So does drinking tea give us the same type of satisfaction as eating cake? Or snorkeling as much as surfing?
To get around the problem of comparing values of satisfaction from noncomparable items, we express the levels of satisfaction as a function of wealth. And indeed, we can understand intuitively that the level of wealth is linked directly to the quantity and quality of consumption a person can achieve; the quality and level of consumption a person achieves is limited by the individual’s budget. Economists consider that greater wealth can generate greater satisfaction. Therefore, a person with a greater level of wealth is deemed to be happier, under the condition of everything else being equal between two individuals. (Economists are fond of the phrase “ceteris paribus,” which means all else the same; we can only vary one component of human behavior at a time.) We can thus link each person’s satisfaction level indirectly to that person’s wealth: the higher the person’s wealth, the greater his or her satisfaction level is likely to be.
Economists use the term “utils” to gauge a person’s satisfaction level. As a unit of measure, utils are similar to “ohms” as a measure of resistance in electrical engineering, except that utils cannot be measured with wires attached to a person’s head!
This notion that an individual derives satisfaction from wealth seems to work more often than not in economic situations. The economic theory that links the level of satisfaction to a person’s wealth level, and thus to consumption levels, is called utility theory. Its basis revolves around individuals’ preferences, but we must use caution as we apply utility theory. (Utility theory is used to compare two or more options; by its very nature it is an “ordinal” theory, which rank-orders choices, rather than a “cardinal” theory, which can attach a number even to a single outcome where no choices are involved.)
In this chapter, we will study utility theory. Since utility theory is designed to measure satisfaction, and since every individual always tries to maximize satisfaction, it is reasonable to expect (under utility theory) that each person tries to maximize his or her own utility.
Then we will extend utility to one of its logical extensions as applied to uncertain situations: expected utility (EU henceforth). So while utility theory deals with situations in which there is no uncertainty, the EU theory deals with choices individuals make when the outcomes they face are uncertain. As we shall see, if individuals maximize utility under certainty, they will also attempt to maximize EU under uncertainty.
However, individuals’ unabashed EU maximization is not always the case. Other models of human behavior describe situations in which an individual’s observed choices depart from the decision rule of maximizing EU. So why would a mother jump into a river to save her child, even if she does not know how to swim? Economists still confront these and other such questions. They have provided only limited answers to such questions thus far.
Hence, we will touch upon some uncertainty-laden situations wherein individuals’ observed behavior departs from the EU maximization principle. Systematic departures in behavior from the EU principle stem from “biases” that people exhibit, and we shall discuss some of these biases. Such rationales of observed behavior under uncertainty are termed “behavioral” explanations, rather than “rational” explanations—explanations that explore EU behavior of which economists are so fond.
In this chapter, we will apply the EU theory to individuals’ hedging decisions/purchase of insurance. Let’s start by asking, Why would anyone buy insurance? When most people face that question, they respond in one of three ways. One set says that insurance provides peace of mind (which we can equate to a level of satisfaction). Others respond more bluntly and argue that if it were not for regulation they’d never buy insurance. The second reply is one received mostly from younger adults. Still others posit that insurance is a “waste of money,” since they pay premiums up front and insurance never pays up in the absence of losses. To all those who argue based upon the third response, one might say, would they rather have a loss for the sake of recovering their premiums? We look to EU theory for some answers, and we will find that even if governments did not make purchase of insurance mandatory, the product would still have existed. Risk-averse individuals would always demand insurance for the peace of mind it confers.
Thus we will briefly touch upon the ways that insurance is useful, followed by a discussion of how some information problems affect the insurance industry more than any other industry. “Information asymmetry” problems arise, wherein one economic agent in a contract is better informed than the other party to the same contract. The study of information asymmetries has become a full-time occupation for some economics researchers. Notably, professors George A. Akerlof, A. Michael Spence, and Joseph E. Stiglitz were awarded the Nobel Prize in Economics in 2001 for their analyses of information asymmetry problems.
Links
Preferences are not absolute but rather depend upon market conditions, cultures, peer groups, and surrounding events. Individuals’ preferences nestle within these parameters. Therefore, we can never talk in absolute terms when we talk about satisfaction and preferences. The 2008 crisis, which continued into 2009, provides a good example of how people’s preferences can change very quickly. When people sat around in celebration of 2009 New Year’s Eve, conversation centered on hopes for “making a living” and having some means of income. These same people talked about trips around the world at the end of 2007. Happiness and preferences are a dynamic topic, depending upon individuals’ stage of life and the economic state of the world. Under each new condition, new preferences arise that fall under the static utility theory discussed below. Economists have researched “happiness,” and continuing study is very important to economists. (An academic example is the following study: Yew-Kwang Ng, “A Case for Happiness, Cardinalism, and Interpersonal Comparability,” Economic Journal 107 (1997): 1848–58. Ng contends that “modern economists are strongly biased in favour of preference (in contrast to happiness), ordinalism, and against interpersonal comparison. I wish to argue for the opposite.” More popular treatments have appeared in Forbes magazine, which has published several short pieces on happiness research: nothing especially rigorous, but a pleasant enough read.)
Learning Objectives
• In this section we discuss economists’ utility theory.
• You will learn about assumptions that underlie individual preferences, which can then be mapped onto a utility “function,” reflecting the satisfaction level associated with individuals’ preferences.
• Further, we will explore how individuals maximize utility (or satisfaction).
Utility theory bases its beliefs upon individuals’ preferences. It is a theory postulated in economics to explain the behavior of individuals based on the premise that people can consistently rank order their choices depending upon their preferences. Each individual will show different preferences, which appear to be hard-wired within each individual. We can thus state that individuals’ preferences are intrinsic. Any theory that proposes to capture preferences is, by necessity, an abstraction based on certain assumptions. Utility theory is a positive theory that seeks to explain individuals’ observed behavior and choices. (The distinction between normative and positive aspects of a theory is very important in the discipline of economics. Some people argue that economic theories should be normative, which means they should be prescriptive and tell people what to do. Others argue, often successfully, that economic theories are designed to be explanations of the observed behavior of agents in the market, hence positive in that sense.) A positive theory contrasts with a normative theory, one that dictates that people should behave in the manner prescribed by it. Because the theory is positive, it is only after observing the choices that individuals make that we can draw inferences about their preferences. When we place certain restrictions on those preferences, we can represent them analytically using a utility function, a mathematical formulation that ranks the preferences of the individual in terms of the satisfaction different consumption bundles provide. Thus, under the assumptions of utility theory, we can assume that people behave as if they had a utility function and acted according to it. Therefore, the fact that a person does not know his or her utility function, or even denies its existence, does not contradict the theory. Economists have used experiments to decipher individuals’ utility functions and the behavior that underlies individuals’ utility.
To begin, assume that an individual faces a set of consumption “bundles.” We assume that individuals have clear preferences that enable them to “rank order” all bundles based on desirability, that is, the level of satisfaction each bundle shall provide to each individual. This rank ordering based on preferences tells us the theory itself has ordinal utility—it is designed to study relative satisfaction levels. As we noted earlier, absolute satisfaction depends upon conditions; thus, the theory by default cannot have cardinal utility, or utility that can represent the absolute level of satisfaction. To make this theory concrete, imagine that consumption bundles comprise food and clothing for a week in all different combinations, that is, food for half a week, clothing for half a week, and all other possible combinations.
The utility theory then makes the following assumptions:
1. Completeness: Individuals can rank order all possible bundles. Rank ordering implies that the theory assumes that, no matter how many combinations of consumption bundles are placed in front of the individual, each individual can always rank them in some order based on preferences. This, in turn, means that individuals can somehow compare any bundle with any other bundle and rank them in order of the satisfaction each bundle provides. So in our example, half a week of food and clothing can be compared to one week of food alone, one week of clothing alone, or any such combination. Mathematically, this property wherein an individual’s preferences enable him or her to compare any given bundle with any other bundle is called the completeness property of preferences.
2. More-is-better: Assume an individual prefers consumption bundle A of goods to bundle B. Then he is offered another bundle that contains more of everything in bundle A; that is, the new bundle is represented by αA, where α > 1. The more-is-better assumption says that individuals prefer αA to A, which in turn is preferred to B. For our example, if one week of food is preferred to one week of clothing, then two weeks of food is a preferred package to one week of food. Mathematically, the more-is-better assumption is called the monotonicity assumption on preferences. One can always argue that this assumption breaks down frequently. It is not difficult to imagine that a person whose stomach is full would turn down additional food. However, this situation is easily resolved. Suppose the individual is given the option of disposing of the additional food to another person or charity of his or her choice. In this case, the person will still prefer more food even if he or she has eaten enough. Thus under the monotonicity assumption, a hidden property allows costless disposal of excess quantities of any bundle.
3. Mix-is-better: Suppose an individual is indifferent to the choice between one week of clothing alone and one week of food. Thus, either choice by itself is not preferred over the other. The “mix-is-better” assumption about preferences says that a mix of the two, say half-week of food mixed with half-week of clothing, will be preferred to both stand-alone choices. Thus, a glass of milk mixed with Milo (Nestlè’s drink mix), will be preferred to milk or Milo alone. The mix-is-better assumption is called the “convexity” assumption on preferences, that is, preferences are convex.
4. Rationality: This is the most important and controversial assumption that underlies all of utility theory. Under the assumption of rationality, individuals’ preferences avoid any kind of circularity; that is, if bundle A is preferred to B, and bundle B is preferred to C, then A is also preferred to C. Under no circumstances will the individual prefer C to A. You can likely see why this assumption is controversial. It assumes that the innate preferences (rank orderings of bundles of goods) are fixed, regardless of the context and time.
If one thinks of preference orderings as comparative relationships, then it becomes simpler to construct examples where this assumption is violated. Consider the relation “beats,” as in team A beats team B in college football. Such relationships are easy to see, but they need not be transitive: if the University of Florida beats Ohio State, and Ohio State beats Georgia Tech, it does not follow that Florida will beat Georgia Tech. Despite the restrictive nature of the assumption, it is a critical one. In mathematics, it is called the assumption of transitivity of preferences.
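The transitivity requirement can be checked mechanically for any finite set of pairwise preferences. A small illustrative sketch (the items and the “beats” pairs are hypothetical examples, not data from the text):

```python
from itertools import permutations

def is_transitive(prefers):
    """Check a strict preference relation, given as a set of (a, b)
    pairs meaning 'a is preferred to b', for transitivity."""
    items = {x for pair in prefers for x in pair}
    for a, b, c in permutations(items, 3):
        # Transitivity demands: a > b and b > c together imply a > c.
        if (a, b) in prefers and (b, c) in prefers and (a, c) not in prefers:
            return False
    return True

# A consistent ranking A > B > C satisfies transitivity...
print(is_transitive({("A", "B"), ("B", "C"), ("A", "C")}))  # -> True

# ...but a football-style "beats" relation need not: Florida beats
# Ohio State and Ohio State beats Georgia Tech, yet nothing links
# Florida and Georgia Tech.
print(is_transitive({("Florida", "OhioState"), ("OhioState", "GaTech")}))  # -> False
```

Utility theory assumes the first kind of relation; the second kind cannot be represented by a utility function.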
Whenever these four assumptions are satisfied, the preferences of the individual can be represented by a well-behaved utility function. (The assumption of convexity of preferences is not required for a utility function representation of an individual’s preferences to exist, but it is necessary if we want that function to be well behaved.) Note that the assumptions lead to “a” function, not “the” function: the utility function that represents a given individual’s preferences is not unique. This non-uniqueness explains why any comparison of utility functions across people may be a futile exercise (and the notion of cardinal utility misleading). Nonetheless, utility functions are valuable tools for representing the preferences of an individual, provided the four assumptions stated above are satisfied. For the remainder of the chapter we will assume that the preferences of any individual can always be represented by a well-behaved utility function. As we mentioned earlier, well-behaved utility depends upon the amount of wealth the person owns.
Utility theory rests upon the idea that people behave as if they make decisions by assigning imaginary utility values to the original monetary values. The decision maker sees different levels of monetary values, translates these values into different, hypothetical terms (“utils”), processes the decision in utility terms (not in wealth terms), and translates the result back to monetary terms. So while we observe inputs to and results of the decision in monetary terms, the decision itself is made in utility terms. And given that utility denotes levels of satisfaction, individuals behave as if they maximize the utility, not the level of observed dollar amounts.
While this may seem counterintuitive, let’s look at an example that will enable us to appreciate this distinction better. More importantly, it demonstrates why utility maximization, rather than wealth maximization, is a viable objective. The example is called the “St. Petersburg paradox.” But before we turn to that example, we need to review some preliminaries of uncertainty: probability and statistics.
Key Takeaways
• In economics, utility theory governs individual decision making. The student must understand an intuitive explanation for the assumptions: completeness, monotonicity, mix-is-better, and rationality (also called transitivity).
• Finally, students should be able to discuss and distinguish between the various assumptions underlying the utility function.
Discussion Questions
1. Utility theory is a preference-based approach that provides a rank ordering of choices. Explain this statement.
2. List and describe in your own words the four axioms/assumptions that lead to the existence of a utility function.
3. What is a “util” and what does it measure? | textbooks/biz/Finance/Risk_Management_for_Enterprises_and_Individuals/03%3A_Risk_Attitudes_-_Expected_Utility_Theory_and_Demand_for_Hedging/3.02%3A_Utility_Theory.txt |
Learning Objectives
• In this section we discuss the notion of uncertainty. Mathematical preliminaries discussed in this section form the basis for analysis of individual decision making in uncertain situations.
• The student should pick up the tools of this section, as we will apply them later.
As we learned in the chapters "1: The Nature of Risk - Losses and Opportunities" and "2: Risk Measurement and Metrics", risk and uncertainty depend upon one another. The origins of the distinction go back to Frank Knight (see Jochen Runde, “Clarifying Frank Knight’s Discussion of the Meaning of Risk and Uncertainty,” Cambridge Journal of Economics 22, no. 5 (1998): 539–46), who distinguished between risk and uncertainty, arguing that measurable uncertainty is risk. In this section, since we focus only on measurable uncertainty, we will not distinguish between risk and uncertainty and will use the two terms interchangeably.
As we described in "2: Risk Measurement and Metrics", the study of uncertainty originated in games of chance. So when we play games of dice, we are dealing with outcomes that are inherently uncertain. The branch of science of uncertain outcomes is probability and statistics. Notice that the analysis of probability and statistics applies only if outcomes are uncertain. When a student registers for a class but does not attend any lectures nor does any assigned work or test, only one outcome is possible: a failing grade. On the other hand, if the student attends all classes and scores 100 percent on all tests and assignments, then too only one outcome is possible, an “A” grade. In these extreme situations, no uncertainty arises with the outcomes. But between these two extremes lies the world of uncertainty. Students often do research on the instructor and try to get a “feel” for the chance that they will make a particular grade if they register for an instructor’s course.
Even though we covered some of this discussion of probability and uncertainty in "2: Risk Measurement and Metrics", we repeat it here for reinforcement. Figuring out the chance, in mathematical terms, is the same as calculating the probability of an event. To compute a probability empirically, we repeat an experiment with uncertain outcomes (called a random experiment) and count the number of times the event of interest happens, say n, in the N trials of the experiment. The empirical probability of the event then equals n/N. So, if one keeps a log of the number of times a computer crashes in a day and records it for 365 days, the probability of the computer crashing on a day will be the sum of all of computer crashes on a daily basis (including zeroes for days it does not crash at all) divided by 365.
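The empirical probability n/N can be illustrated with a simulated random experiment; a minimal sketch (the simulated fair coin and the trial count are arbitrary choices, not from the text):

```python
import random

# A random experiment: toss a (simulated) fair coin N times, count
# the number n of times heads occurs, and estimate the probability
# of heads as n / N.
random.seed(12345)  # fixed seed so the experiment is reproducible

N = 100_000
n = sum(1 for _ in range(N) if random.random() < 0.5)  # count of heads
empirical_prob = n / N
print(empirical_prob)  # close to, but rarely exactly, 0.5
```

As N grows, the empirical estimate settles near the mathematical probability of 0.5, which is the sense in which repeated trials reveal the underlying probability.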
For some problems, the probability can be calculated using mathematical deduction. In these cases, we can figure out the probability of getting a head on a coin toss, two aces when two cards are randomly chosen from a deck of 52 cards, and so on (see the example of the dice in "2: Risk Measurement and Metrics"). We don’t have to conduct a random experiment to actually compute the mathematical probability, as is the case with empirical probability.
Finally, as strongly suggested before, subjective probability is based on a person’s beliefs and experiences, as opposed to empirical or mathematical probability. It may also depend upon a person’s state of mind. Since beliefs may not always be rational, studying behavior using subjective probabilities belongs to the realm of behavioral economics rather than traditional rationality-based economics.
So consider a lottery (a game of chance) wherein several outcomes are possible with defined probabilities. Typically, outcomes in a lottery consist of monetary prizes. Returning to our dice example of "2: Risk Measurement and Metrics", let’s say that when a six-faced die is rolled, the payoffs associated with the outcomes are \$1 if a 1 turns up, \$2 for a 2, …, and \$6 for a 6. Now if this game is played once, one and only one amount can be won: \$1, \$2, and so on. However, if the same game is played many times, what is the amount that one can expect to win? Mathematically, the answer to any such question is very straightforward and is given by the expected value of the game. In a game of chance, if $W_1, W_2, \dots, W_N$ are the N possible outcomes with probabilities $\pi_1, \pi_2, \dots, \pi_N$, then the expected value of the game (G) is $E(G)= \sum_{i=1}^{N} \pi_i W_i .$ The computation can be extended to expected values of any uncertain situation, say losses, provided we know the outcome numbers and their associated probabilities. The probabilities sum to 1, that is, $\sum_{i=1}^{N} \pi_i = \pi_1 + \dots + \pi_N = 1.$ While the computation of expected value is important, equally important is the notion behind expected values. Note that we said that when it comes to the outcome of a single game, only one amount can be won, either \$1, \$2, …, or \$6. But if the game is played over and over again, then one can expect to win $E(G)= \frac{1}{6}(1)+ \frac{1}{6}(2)+ \dots + \frac{1}{6}(6) = \$3.50$ per game. Often, as in this case, the expected value is not one of the possible outcomes of the distribution; in other words, the probability of getting \$3.50 in the above lottery is zero. Therefore, the concept of expected value is a long-run concept, and the hidden assumption is that the lottery is played many times. Second, the expected value is a sum of the products of two numbers, the outcomes and their associated probabilities.
If the probability of a large outcome is very high then the expected value will also be high, and vice versa.
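The die lottery’s expected value can be computed directly from the definition above:

```python
# Die lottery: outcome i (i = 1..6) pays $i with probability 1/6.
outcomes = [1, 2, 3, 4, 5, 6]
probabilities = [1 / 6] * 6

# E(G) is the sum of payoff-times-probability over all outcomes.
expected_value = sum(p * w for p, w in zip(probabilities, outcomes))
print(round(expected_value, 2))  # -> 3.5, a payoff the game never actually produces
```

Note that $3.50 is not itself a possible outcome of a single roll, illustrating that expected value is a long-run average rather than a prediction for any one play.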
The expected value of the game is employed when one designs a fair game. A fair game, actuarially speaking, is one in which the cost of playing the game equals the expected winnings of the game, so that the net value of the game equals zero. We might expect that people would be willing to play all fair-value games, but in practice this is not the case. Most people will not pay \$500 for a lucky outcome based on a coin toss, even if the expected gains equal \$500. No game illustrates this point better than the St. Petersburg paradox.
The paradox lies in a proposed game wherein a coin is tossed until a head comes up; that is when the game ends. The payoff from the game is the following: if a head appears on the first toss, then \$2 is paid to the player; if it first appears on the second toss, then \$4 is paid; if on the third toss, then \$8; and so on, so that if the head first appears on the nth toss, the payout is $2^n$ dollars. The question is, how much would an individual pay to play this game?
Let us try and apply the fair value principle to this game, so that the cost an individual is willing to bear should equal the fair value of the game. The expected value of the game $E(G)$ is calculated below.
The game can go on indefinitely, since a head may never come up in the first million or billion trials. However, let us look at the expected payoff from the game. If a head appears on the first try, the probability of that happening is $\frac{1}{2}$, and the payout is \$2. If it happens on the second try, the first toss yielded a tail (T) and the second a head (H). The probability of the TH combination is $\frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$, and the payoff is \$4. If H first turns up on the third attempt, the sequence of outcomes is TTH, the probability of that occurring is $\frac{1}{2} \times \frac{1}{2} \times \frac{1}{2} = \frac{1}{8}$, and the payoff is \$8. We can continue this inductive analysis ad infinitum. Since the expected value is the sum of all products of outcomes and their corresponding probabilities,
$E(G)= \frac{1}{2} \times 2 + \frac{1}{4} \times 4 + \frac{1}{8} \times 8 + \dots = 1 + 1 + 1 + \dots = \infty.$
It is evident that while the expected value of the game is infinite, not even the Bill Gateses and Warren Buffets of the world will give even a thousand dollars to play this game, let alone billions.
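A few lines of code make the divergence concrete: every term of the expected-value sum equals 1, so truncating the series after n terms gives exactly n dollars.

```python
# St. Petersburg game: the first head on toss n pays $2**n and occurs
# with probability (1/2)**n, so each term of the E(G) sum equals 1.
def partial_expected_value(n_terms):
    return sum((0.5 ** n) * (2 ** n) for n in range(1, n_terms + 1))

print(partial_expected_value(10))   # -> 10.0
print(partial_expected_value(100))  # -> 100.0; the sum grows without bound
```

No matter where the series is cut off, adding more terms keeps raising the expected value, which is the source of the paradox.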
Daniel Bernoulli was the first one to provide a solution to this paradox in the eighteenth century. His solution was that individuals do not look at the expected wealth when they bid a lottery price, but the expected utility of the lottery is the key. Thus, while the expected wealth from the lottery may be infinite, the expected utility it provides may be finite. Bernoulli termed this as the “moral value” of the game. Mathematically, Bernoulli’s idea can be expressed with a utility function, which provides a representation of the satisfaction level the lottery provides.
Bernoulli used $U(W)=\ln(W)$ to represent the utility that this lottery provides to an individual, where W is the payoff associated with each event (H, TH, TTH, and so on). The expected utility from the game is then given by
$E(U)= \sum_{i=1}^{\infty} \pi_i U(W_i) = \frac{1}{2} \ln(2) + \frac{1}{4} \ln(4) + \dots = \sum_{i=1}^{\infty} \frac{1}{2^i} \ln(2^i),$
which can be shown to equal $2\ln(2) \approx 1.39$ after some algebraic manipulation. Since the expected utility that this lottery provides is finite (even though the expected wealth is infinite), individuals will be willing to pay only a finite cost to play this lottery.
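Truncating the expected-utility series numerically confirms Bernoulli’s finite value:

```python
import math

# Expected utility of the St. Petersburg game under U(W) = ln(W):
# E(U) = sum over n of (1/2)**n * ln(2**n) = ln(2) * sum of n / 2**n,
# and sum of n / 2**n converges to 2, so E(U) = 2 * ln(2).
expected_utility = sum((0.5 ** n) * (n * math.log(2)) for n in range(1, 200))
print(round(expected_utility, 2))  # -> 1.39
```

Unlike the expected-value series, whose partial sums grow without bound, these partial sums converge rapidly, which is why a finite price for the game makes sense in utility terms.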
The next logical question to ask is, what if the utility function were not the natural log of wealth, as Bernoulli assumed, but something else? What is it about the natural log function that leads to a finite expected utility? This brings us to the issue of expected utility and its central place in decision making under uncertainty in economics.
Key Takeaways
• Students should be able to explain probability as a measure of uncertainty in their own words.
• Moreover, the student should also be able to explain that any expected value is the sum of product of probabilities and outcomes and be able to compute expected values.
Discussion Questions
1. Define probability. In how many ways can one come up with a probability estimate of an event? Describe.
2. Explain the need for utility functions using St. Petersburg paradox as an example.
3. Suppose a six-faced fair die with numbers 1–6 is rolled. What is the number you expect to obtain?
4. What is an actuarially fair game? | textbooks/biz/Finance/Risk_Management_for_Enterprises_and_Individuals/03%3A_Risk_Attitudes_-_Expected_Utility_Theory_and_Demand_for_Hedging/3.03%3A_Uncertainty%2C_Expected_Value%2C_and_Fair_Games.txt |
Learning Objectives
• In this section the student learns that an individual’s objective is to maximize expected utility when making decisions under uncertainty.
• We also learn that people are risk averse, risk neutral, or risk seeking (loving).
We saw earlier that in a certain world, people like to maximize utility. In a world of uncertainty, it seems intuitive that individuals would maximize expected utility, a construct used to explain the level of satisfaction a person gets when faced with uncertain choices. While the intuition is straightforward, proving it axiomatically was a very challenging task. John von Neumann and Oskar Morgenstern (1944) advocated an approach that leads us to a formal mathematical representation of maximization of expected utility.
We have also seen that a utility function representation exists if the four assumptions discussed above hold. Messrs. von Neumann and Morgenstern added two more assumptions and came up with an expected utility function that exists if these axioms hold. While a discussion of these assumptions (called the continuity and independence assumptions) is beyond the scope of the text, it suffices to say that the expected utility function has the form
$E(U)= \sum_{i=1}^{n} \pi_i u_i$
where u is a function that attaches a number $u_i$ measuring the level of satisfaction associated with each outcome i. u is called the Bernoulli function, while $E(U)$ is the von Neumann-Morgenstern expected utility function.
Again, note that the expected utility function is not unique; several functions can model the preferences of the same individual over a given set of uncertain choices or games. What matters is that such a function (which reflects an individual's preferences over uncertain games) exists. The expected utility theory then says that if the axioms provided by von Neumann and Morgenstern are satisfied, individuals behave as if they were trying to maximize expected utility.
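The formula above can be sketched as a small helper. This is an illustrative sketch (the function name and example numbers are ours), using $u(W)=\sqrt{W}$, the Bernoulli function used later in this section:

```python
import math

def expected_utility(probs, wealths, u):
    """von Neumann-Morgenstern expected utility: E(U) = sum_i pi_i * u(W_i)."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u(w) for p, w in zip(probs, wealths))

# Bernoulli function u(W) = sqrt(W), a risk averter's utility used in this section
u = math.sqrt

# A 50/50 gamble over final wealth levels of $20 and $8
print(expected_utility([0.5, 0.5], [20, 8], u))  # ≈ 3.650
```

The same helper works for any Bernoulli function, which is the sense in which the expected utility representation is not unique.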
The most important insight of the theory is that the expected value of the dollar outcomes may provide a ranking of choices different from that given by expected utility. The expected utility theory then says a person should choose the option (a game of chance or lottery) that maximizes expected utility rather than expected wealth. That the expected utility ranking differs from the expected wealth ranking is best explained using the example below.
Let us think about an individual whose utility function is given by $u(W)=\sqrt{W}$ and who has an initial endowment of \$10. This person faces the following three lotteries, based on a coin toss:

Table 3.1 Utility Function with Initial Endowment of \$10

| Outcome (Probability) | Payoff Lottery 1 | Payoff Lottery 2 | Payoff Lottery 3 |
|---|---|---|---|
| H (0.5) | 10 | 20 | 30 |
| T (0.5) | −2 | −5 | −10 |
| $E(G)$ | 4 | 7.5 | 10 |
We can calculate the expected payoff of each lottery by taking the product of the probability and the payoff associated with each outcome and summing this product over all outcomes. The ranking of the lotteries based on expected dollar winnings is lottery 3, then 2, then 1. But now consider how the same person ranks these lotteries based on expected utility.
We compute expected utility by taking the product of the probability and the associated utility corresponding to each outcome for all lotteries. When the payoff is \$10, the final wealth equals the initial endowment (\$10) plus the winnings (\$10) = \$20. The utility of this final wealth is given by $\sqrt{20}=4.472$. The completed utility table is shown below.

Table 3.2 Lottery Rankings by Expected Utility

| Outcome (Probability) | Utility Lottery 1 | Utility Lottery 2 | Utility Lottery 3 |
|---|---|---|---|
| H (0.5) | 4.472 | 5.477 | 6.324 |
| T (0.5) | 2.828 | 2.236 | 0 |
| $E(U)$ | 3.650 | 3.856 | 3.162 |

The expected utility ranks the lotteries in the order 2, 1, 3. So the expected utility maximization principle leads to choices that differ from the expected wealth choices. The example shows that the ranking of games of chance differs when one applies the expected utility ($E(U)$) theory rather than the expected gain ($E(G)$) principle. This leads us to the insight that if two lotteries provide the same $E(G)$, the expected gain principle will rank both lotteries equally, while the $E(U)$ theory may lead to unique rankings of the two lotteries. What happens when the $E(U)$ theory leads to the same ranking? The theory says the person is indifferent between the two lotteries.

Risk Types and Their Utility Function Representations

What characteristic of games of chance can lead to the same $E(G)$ but different $E(U)$? The characteristic is the "risk" associated with each game. (At this juncture, we only care about the notion of risk that captures the inherent variability in outcomes, that is, the uncertainty associated with each lottery.) The $E(U)$ theory then predicts that individuals' risk "attitudes" toward each lottery may lead to different rankings between lotteries. Moreover, the theory is "robust" in the sense that it also allows attitudes toward risk to vary from one individual to the next. As we shall now see, the $E(U)$ theory does enable us to capture the different risk attitudes of individuals.
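The rankings in Tables 3.1 and 3.2 can be reproduced numerically (a sketch; the variable names are illustrative, with the endowment and payoffs taken from the tables):

```python
import math

endowment = 10
lotteries = {1: [10, -2], 2: [20, -5], 3: [30, -10]}  # payoffs for H and T, each prob 0.5

def expected_gain(payoffs):
    return 0.5 * payoffs[0] + 0.5 * payoffs[1]

def expected_utility(payoffs):
    # final wealth = endowment + payoff; Bernoulli function u(W) = sqrt(W)
    return 0.5 * math.sqrt(endowment + payoffs[0]) + 0.5 * math.sqrt(endowment + payoffs[1])

by_gain = sorted(lotteries, key=lambda k: expected_gain(lotteries[k]), reverse=True)
by_util = sorted(lotteries, key=lambda k: expected_utility(lotteries[k]), reverse=True)
print(by_gain)  # [3, 2, 1]: ranking by expected dollar winnings
print(by_util)  # [2, 1, 3]: ranking by expected utility
```

The two sort orders differ, which is precisely the point: the same person ranks the same lotteries differently under $E(G)$ and $E(U)$.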
Technically, the difference in risk attitudes across individuals is called "heterogeneity of risk preferences" among economic agents. From the $E(U)$ theory perspective, we can categorize all economic agents into one of three categories, as noted in "1: The Nature of Risk - Losses and Opportunities":

• Risk averse
• Risk neutral
• Risk seeking (or loving)

We will explore how $E(U)$ captures these attitudes and the meaning of each risk attitude next.

Consider the $E(U)$ function given by $E(U)= \displaystyle \sum_{i=1}^{n} \pi_i U(W_i)$. Let the preferences be such that the addition to utility one gets from an additional dollar at lower levels of wealth is always greater than the additional utility of an extra dollar at higher levels of wealth. So, let us say that when a person has zero wealth (no money), the person has zero utility. Now if the person receives a dollar, his utility jumps to 1 util. If this person is given an additional dollar, then as per the monotonicity (more-is-better) assumption, his utility will go up. Let us say that it goes up to 1.414 utils, so that the increase in utility is only 0.414 utils, while earlier it was a whole unit (1 util). At \$2 of wealth, if the individual receives another dollar, his utility again rises to a new level, but only to 1.732 utils, an increase of 0.318 units (1.732 − 1.414). This is utility increasing at a decreasing rate for each additional unit of wealth. Figure $1$ shows a graph of this utility function.

The first thing we notice from Figure $1$ is its concavity, which means that if one draws a chord connecting any two points on the curve, the chord will lie strictly below the curve. Moreover, the utility is always increasing, although at a decreasing rate. This feature of this particular utility function is called diminishing marginal utility.
Marginal utility at any given wealth level is nothing but the slope of the utility function at that wealth level. Mathematically, the property that utility increases at a decreasing rate can be written as a combination of restrictions on the first and second derivatives (rate of change of slope) of the utility function: $u^\prime(W)>0$, $u^{\prime\prime}(W)<0$. Some functions that satisfy this property are $u(W)=\sqrt{W}$, $\ln(W)$, and $-e^{-aW}$. The functional form depicted in Figure $1$ is $\ln(W)$. The question we ask ourselves now is whether such an individual, whose utility function has the shape in Figure $1$, will be willing to pay the actuarially fair premium (AFP), which equals expected winnings, to play a game of chance. Suppose he is offered the game with the payoffs of Lottery 1 in Table 3.1, based on the toss of a coin. The AFP for the game is \$4. Suppose that a person named Terry bears this cost upfront and wins; then his final wealth is $\$10 - \$4 + \$10 = \$16$ (original wealth minus the cost of the game, plus the winnings of \$10). Otherwise, it equals $\$10 - \$4 - \$2 = \$4$ (original wealth minus the cost of the game, minus the loss of \$2) in case he loses. Let the utility function of this individual be given by $\sqrt{W}$. Then expected utility when the game costs AFP equals $0.5\sqrt{16} + 0.5\sqrt{4} = 3$ utils. On the other hand, suppose Terry doesn't play the game; his utility remains at $\sqrt{10} = 3.162$. Since the utility is higher when Terry doesn't play the game, we conclude that any individual whose preferences are depicted by Figure $1$ will forgo a game of chance if its cost equals the AFP. This is an important result for a concave utility function as shown in Figure $1$.
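Terry's decision can be checked in a few lines (a sketch assuming the square-root utility from the text; the variable names are ours):

```python
import math

w0, afp = 10, 4            # initial wealth and actuarially fair premium (expected winnings)
win, loss = 10, -2         # coin-toss payoffs, each with probability 0.5

# Expected utility of playing the game at its AFP, with u(W) = sqrt(W)
eu_play = 0.5 * math.sqrt(w0 - afp + win) + 0.5 * math.sqrt(w0 - afp + loss)
eu_pass = math.sqrt(w0)    # utility of simply keeping the $10

print(eu_play)  # 0.5*sqrt(16) + 0.5*sqrt(4) = 3.0
print(eu_pass)  # sqrt(10) ≈ 3.162, so the risk averter forgoes the game
```

Because `eu_pass > eu_play`, a concave-utility individual refuses the actuarially fair game, exactly as the text concludes.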
Such a person will need incentives to be willing to play the game. The incentive could come as a price reduction for playing the lottery, or as a premium that compensates the individual for bearing risk. If Terry already faces a risk, he will pay an amount greater than the actuarially fair value to reduce or eliminate the risk. Thus, it works both ways: consumers demand a premium above the AFP to take on risk, and insurance companies charge individuals premiums for risk transfer via insurance.
An individual, let's name him Johann, has preferences that are characterized by those shown in Figure $1$ (i.e., by a concave or diminishing marginal utility function). Johann is a risk-averse person. We have seen that a risk-averse person refuses to play an actuarially fair game. Such risk aversion also provides a natural incentive for Johann to demand (or, equivalently, pay) a risk premium above the AFP to take on (or, equivalently, get rid of) risk. Perhaps you will recall that "1: The Nature of Risk - Losses and Opportunities" introduced a more mathematical measure of risk aversion. In an experimental study, Holt and Laury (2002) find that a majority of their subjects made "safe choices," that is, displayed risk aversion. Since real-life situations can be riskier than laboratory settings, we can safely assume that a majority of people are risk averse most of the time. What about the remainder of the population?
We know that most of us do not behave as risk-averse people all the time. In the late 1990s, the stock market was considered to be a "bubble," and many people invested in the stock market despite the preferences they had exhibited before this time. At the time, Federal Reserve Board Chairman Alan Greenspan introduced the term "irrational exuberance" in a speech given at the American Enterprise Institute. The phrase has become a regular way to describe people's deviations from normal preferences. Such behavior was also repeated in the early to mid-2000s with a real estate bubble. People without the rational means to buy homes bought them and took "nonconventional risks," which led to the 2008–2009 financial and credit crisis and a major recession (perhaps even a depression) as President Obama took office in January 2009. We can regard external market conditions and the "herd mentality" as significant contributors to changes in rational risk-aversion traits.
An individual may go skydiving or hang gliding, or otherwise participate in high-risk behavior. Our question is, can the expected utility theory capture that behavior as well? Indeed it can, and that brings us to risk-seeking behavior and its characterization in $E(U)$ theory. Since risk-seeking behavior exhibits preferences that seem to be the opposite of risk aversion, the mathematical functional representation may likewise show opposite behavior. For a risk-loving person, the utility function will show the shape given in Figure $2$. It shows that the greater the level of wealth of the individual, the higher is the increase in utility when an additional dollar is given to the person. We call this feature of the function, in which utility is always increasing at an increasing rate, increasing marginal utility. It turns out that all convex utility functions look like Figure $2$: the curve lies strictly below the chord joining any two points on the curve. Examples of mathematical functions with this convex shape include $u(W)=W^2$ and $e^W$.
A risk-seeking individual will always choose to play a gamble at its AFP. For example, let us assume that the individual's preferences are given by $u(W)=W^2$. As before, the individual owns \$10 and has to decide whether or not to play a lottery based on a coin toss. The payoff is \$10 if a head turns up and −\$2 if a tail turns up. We have seen earlier (in Table 3.1) that the AFP for playing this lottery is \$4.
The expected utility calculation is as follows. After bearing the cost of the lottery upfront, the wealth is \$6. If heads turns up, the final wealth becomes \$16 ($6 + 10$). In case tails turns face-up, the final wealth equals \$4 ($6 - 2$). The expected utility if the individual plays the lottery is $E(U)=0.5\times 16^2+0.5\times 4^2=136$ utils.
On the other hand, if an individual named Ray decides not to play the lottery, then the $E(U)= 10^2=100$. Since the $E(U)$ is higher if Ray plays the lottery at its AFP, he will play the lottery. As a matter of fact, this is the mind-set of gamblers. This is why we see so many people at the slot machines in gambling houses.
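Ray's calculation can be replicated the same way (a sketch with the convex utility $u(W)=W^2$ from the text; the variable names are ours):

```python
w0, afp = 10, 4      # initial wealth and actuarially fair premium
win, loss = 10, -2   # coin-toss payoffs, each with probability 0.5

# Convex utility u(W) = W**2 characterizes a risk seeker
eu_play = 0.5 * (w0 - afp + win) ** 2 + 0.5 * (w0 - afp + loss) ** 2
eu_pass = w0 ** 2

print(eu_play)  # 0.5*16**2 + 0.5*4**2 = 136.0
print(eu_pass)  # 10**2 = 100, so the risk seeker plays the lottery at its AFP
```

Swapping the square root for the square flips the decision: the same coin toss that a risk averter refuses is one a risk seeker accepts.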
The contrast between the choices made by risk-averse individuals and risk-seeking individuals is starkly clear in the above example. (Mathematically speaking, for a risk-averse person we have $E(U[W]) \leq U[E(W)]$; similarly, for a risk-seeking person we have $E(U[W]) \geq U[E(W)]$. This result is called Jensen's inequality.) To summarize, a risk-seeking individual always plays the lottery at its AFP, while a risk-averse person always forgoes it. Their concave (Figure $1$) versus convex (Figure $2$) utility functions, and the implications of those shapes, lie at the heart of their decision making.
Finally, we come to the third risk attitude type, wherein an individual is indifferent between playing a lottery and not playing it. Such an individual is called risk neutral. The preferences of such an individual can be captured in $E(U)$ theory by a linear utility function of the form $u(W)=aW$, where $a$ is a real number greater than zero. Such an individual gains a constant marginal utility of wealth; that is, each additional dollar adds the same utility to the person regardless of whether the individual is endowed with \$10 or \$10,000. The utility function of such an individual is depicted in Figure $3$.
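The indifference of a risk-neutral agent can be checked with the same coin-toss lottery (a sketch; the value of $a$ is an arbitrary positive number we picked for illustration):

```python
a = 2.0                         # any a > 0; linear utility u(W) = a*W
w0, afp, win, loss = 10, 4, 10, -2

eu_play = 0.5 * a * (w0 - afp + win) + 0.5 * a * (w0 - afp + loss)
eu_pass = a * w0

print(eu_play)  # a * (0.5*16 + 0.5*4) = a*10 = 20.0
print(eu_pass)  # a * 10 = 20.0, so the agent is indifferent
```

With linear utility, expected utility is just a positive multiple of expected wealth, so any lottery priced at its AFP leaves the agent exactly indifferent.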
Key Takeaways
• This section lays the foundation for the analysis of individuals' behavior under uncertainty. The student should be able to describe it as such.
• The student should be able to compute expected gains and expected utilities.
• Finally, and most importantly, the concavity and convexity of the utility function is key to distinguishing between risk-averse and risk-seeking individuals.
Discussion Questions
1. Discuss the von Neumann-Morgenstern expected utility function and discuss how it differs from expected gains.
2. You are told that $U(W)= W^2$ is a utility function with diminishing marginal utility. Is it correct? Discuss, using definition of diminishing marginal utility.
3. An individual has a utility function given by $U(W)=W$ and initial wealth of \$100. If he plays a costless lottery in which he can win or lose \$10 at the flip of a coin, compute his expected utility. What is the expected gain? Will such a person be categorized as risk neutral?
4. Discuss the three risk types with respect to their shapes, technical/mathematical formulation, and economic interpretation.
Learning Objectives
• In this section the student learns that an individual’s behavior cannot always be characterized within an expected utility framework. Biases and other behavioral aspects make individuals deviate from the behavior predicted by the E(U) theory.
Why do some people jump into the river to save their loved ones, even if they cannot swim? Why would mothers give away all their food to their children? Why do we have a herd mentality in which many individuals invest in the stock market at times of bubbles, as in the latter part of the 1990s? These are examples of aspects of human behavior that E(U) theory fails to capture. Undoubtedly, an emotional component helps explain the few examples given above. Of course, students can provide many more examples. The realm of academic study that deals with departures from E(U) maximization behavior is called behavioral economics.
While expected utility theory provides a valuable tool for analyzing how rational people should make decisions under uncertainty, the observed behavior may not always bear it out. Daniel Kahneman and Amos Tversky (1974) were the first to provide evidence that E(U) theory doesn’t provide a complete description of how people actually decide under uncertain conditions. The authors conducted experiments that demonstrate this variance from the E(U) theory, and these experiments have withstood the test of time. It turns out that individual behavior under some circumstances violates the axioms of rational choice of E(U) theory.
Kahneman and Tversky (1981) provide the following example: Suppose the country is going to be struck by the avian influenza (bird flu) pandemic. Two programs are available to tackle the pandemic, A and B. Two sets of physicians, X and Y, are set with the task of containing the disease. Each group has the outcomes that the two programs will generate. However, the outcomes have different phrasing for each group. Group X is told about the efficacy of the programs in the following words:
• Program A: If adopted, it will save exactly 200 out of 600 patients.
• Program B: If adopted, the probability that 600 people will be saved is $\frac{1}{3}$, while the probability that no one will be saved is $\frac{2}{3}$.
Seventy-six percent of the doctors in group X chose to administer program A.
Group Y, on the other hand, is told about the efficacy of the programs in these words:
• Program A: If adopted, exactly 400 out of 600 patients will die.
• Program B: If adopted, the probability that nobody will die is $\frac{1}{3}$, while the probability that all 600 will die is $\frac{2}{3}$.
Only 13 percent of the doctors in this group chose to administer program A.
The only difference between the two sets presented to groups X and Y is the description of the outcomes. Every outcome to group X is defined in terms of “saving lives,” while for group Y it is in terms of how many will “die.” Doctors, being who they are, have a bias toward “saving” lives, naturally.
This experiment has been repeated several times with different subjects, and the outcome has always been the same, even if the numbers differ. Other experiments with different groups of people also showed that the way alternatives are worded results in different choices among groups. The coding of alternatives that makes individuals vary from $E(U)$ maximizing behavior is called the framing effect.
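A quick expected-value check makes the framing point concrete: the two programs are statistically identical under both wordings, so an $E(U)$ maximizer should not change choices between groups X and Y (a sketch; the arithmetic follows the numbers in the example):

```python
total = 600

# "Lives saved" framing (group X)
prog_a_saved = 200                 # Program A saves exactly 200
prog_b_saved = 600 / 3             # Program B: 1/3 chance all 600 saved, 2/3 chance none

# "Deaths" framing (group Y)
prog_a_deaths = 400                # Program A: exactly 400 die
prog_b_deaths = 2 * 600 / 3        # Program B: 1/3 chance nobody dies, 2/3 chance all die

print(prog_a_saved, prog_b_saved)                    # 200 200.0
print(total - prog_a_deaths, total - prog_b_deaths)  # 200 200.0, same programs reworded
```

Every cell works out to 200 expected survivors, which is why the 76 percent versus 13 percent split can only come from the wording, not the outcomes.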
In order to explain these deviations from $E(U)$, Kahneman and Tversky suggest that individuals use a value function to assess alternatives. This is a mathematical formulation that seeks to explain observed behavior without making any assumption about preferences. The nature of the value function is such that it is much steeper in losses than in gains. The authors insist that it is a purely descriptive device and is not derived from axioms like the E(U) theory. In the language of mathematics we say the value function is convex in losses and concave in gains. For the same concept, economists will say that the function is risk seeking in losses and risk averse in gains. A Kahneman and Tversky value function is shown in Figure $1$.
Figure $1$ shows the asymmetric nature of the value function. A loss of \$200 causes the individual to feel that more value is lost than is gained from an equivalent gain of \$200. To see this, notice that on the losses side (the negative x-axis) the graph falls more steeply than the graph rises on the gains side (the positive x-axis). And this is true regardless of the person's initial level of wealth.
The implications of this type of value function for marketers and sellers are enormous. Note that the value function is convex in losses; say the value lost is $-\sqrt{L}$ when $\$L$ is lost. Now if there are two consecutive losses of \$2 and \$3, the total value lost feels like $V(\text{lost}) = -\sqrt{2} - \sqrt{3} = -1.414 - 1.732 = -3.146$. On the other hand, if the losses are combined, then the total loss is \$5, and the value lost feels like $-\sqrt{5} = -2.236$. Thus, when losses are combined, the total value lost feels less painful than when the losses are segregated and reported separately.
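The combine-versus-segregate arithmetic can be checked numerically. This sketch uses the square-root form of the value function implied by the text's numbers (the symmetric functional form and the name `value` are ours; Kahneman and Tversky's actual function is steeper in losses):

```python
import math

def value(x):
    """Square-root value function: concave in gains, convex in losses."""
    return math.sqrt(x) if x >= 0 else -math.sqrt(-x)

# Losses of $2 and $3: segregated versus combined
print(value(-2) + value(-3))  # -sqrt(2) - sqrt(3) ≈ -3.146 (feels worse)
print(value(-5))              # -sqrt(5) ≈ -2.236, so combine losses

# Gains of $2 and $3: segregated versus combined
print(value(2) + value(3))    # sqrt(2) + sqrt(3) ≈ 3.146, so segregate gains
print(value(5))               # sqrt(5) ≈ 2.236
```

The asymmetry of the shapes drives the marketing advice: report losses together and gains separately.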
We can carry out a similar analysis on the Kahneman and Tversky function when there is a gain. Note that the value function is concave in gains, say $V(W)=\sqrt{W}$. Now if we have two consecutive gains of \$2 and \$3, the total value gained feels like $V(\text{gain}) = \sqrt{2} + \sqrt{3} = 1.414 + 1.732 = 3.146$. On the other hand, if we combine the gains, then total gains equal \$5, and the value gained feels like $\sqrt{5} = 2.236$. Thus, when gains are segregated, the sum of the values of the gains turns out to be higher than the value of the sum of the gains. So the idea would be to report combined losses while segregating gains.

Since the individual feels differently about losses and gains, the analysis of the value function tells us that to offset a small loss, we require a larger gain. So small losses can be combined with larger gains, and the individual still feels "happier," since the net effect will be that of a gain. However, if losses are too large, then combining them with small gains would result in a net loss, and the individual would feel that value has been lost. In this case, it's better to segregate the losses from the gains and report them separately. Such a course of action will provide a consolation to the individual of the type: "At least there are some gains, even if we suffer a big loss."

Framing effects are not the only reason why people deviate from the behavior predicted by E(U) theory. We discuss some other reasons next, though the list is not exhaustive; a complete study is outside the scope of the text.

1. Overweighting and underweighting of probabilities. Recall that $E(U)$ is the sum of products of two sets of numbers: first, the utility one receives in each state of the world and second, the probabilities with which each state could occur. However, most of the time probabilities are assigned not objectively, but subjectively.
For example, before Hurricane Katrina in 2005, individuals in New Orleans would assign a very small probability to flooding of the type experienced in the aftermath of Katrina. However, after the event, the subjective probability estimates of flooding rose considerably among the same set of individuals. Humans tend to give more weight to events of the recent past than to the entire history. We could attribute such a bias to limited memory, individuals' myopic view, or simply the easy availability of more recent information. We call this bias to work with whatever information is easily available an availability bias.

But people deviate from E(U) theory for more reasons than simply weighting the recent past more heavily and ignoring the overall history. Individuals also exhibit experience bias. Since all of us are shaped somewhat by our own experiences, we tend to assign more weight to the states of the world that we have experienced and less to others. Similarly, we might assign a very low weight to a bad event occurring in our lives, even to the extent of convincing ourselves that such a thing could never happen to us. That is why we see women avoiding mammograms and men avoiding colonoscopies. On the other hand, we might attach a higher-than-objective probability to good things happening to us. No matter what the underlying cause is, availability or experience, we know empirically that probability weights are adjusted subjectively by individuals. Consequently, their observed behavior deviates from E(U) theory.

2. Anchoring bias. Often individuals base their subjective assessments of outcomes on an initial "guesstimate." Such a guess may not have any reasonable relationship to the outcomes being studied. In an experimental study reported by Kahneman and Tversky in Science (1974), the authors point this out. The authors call this anchoring bias; it has the effect of biasing the probability estimates of individuals.
The experiment they conducted ran as follows: First, each individual under study had to spin a wheel of fortune with numbers ranging from zero to one hundred. Then, the authors asked the individual whether the percentage of African nations in the United Nations (UN) was lower or higher than the number on the wheel. Finally, the individuals had to provide an estimate of the percentage of African nations in the UN. The authors observed that those who spun a 10 or lower had a median estimate of 25 percent, while those who spun 65 or higher provided a median estimate of 45 percent. Notice that the number obtained on the wheel had no correlation with the question being asked; it was a randomly generated number. However, it had the effect of making people anchor their answers around the initial number that they had obtained. Kahneman and Tversky also found that even when the payoffs to the subjects were raised to encourage correct estimates, the anchoring effect was still evident.

3. Failure to ignore sunk costs. This is the most common reason why we observe departures from E(U) theory. Suppose a person goes to the theater to watch a movie and discovers that he lost \$10 on the way. Another person who had bought an online ticket for \$10 finds he lost the ticket on the way. The decision problem is: "Should these people spend another \$10 to watch the movie?" In experiments presenting exactly these choices, the results show that the second group is more likely to go home without watching the movie, while the first group will overwhelmingly (88 percent) go ahead and watch the movie.
Why do we observe this behavior? The two situations are exactly alike. Each group lost \$10. But in a world of mental accounting, the second group has already spent the money on the movie, so this group mentally assumes a cost of \$20 for the movie. However, the first group had lost \$10 that was not marked toward a specific expense. The second group fails to treat the lost ticket, worth \$10, as a sunk cost, that is, money spent that cannot be recovered. What should matter under E(U) theory is only the value of the movie, which is \$10; whether the ticket or cash was lost is immaterial. Systematic accounting for sunk costs (which economists tell us we should ignore) causes departures from rational behavior under E(U) theory.

The failure to ignore sunk costs can cause individuals to continue to invest in ventures that are already losing money. Thus, somebody who bought shares at \$1,000 that now trade at \$500 will continue to hold on to them. Under rational expectations, the \$1,000 is sunk and should be ignored; what matters is the value of the shares now. Mental accounting tells the shareholder that the value of the shares is still \$1,000, so the individual does not sell the shares at \$500. Eventually, in the economists' long run, the shareholder may have to sell them for \$200 and lose a lot more. People regard such a loss in value as a paper loss rather than a real loss, and individuals may regard a real loss as a greater pain than a paper loss.
By no means is the list above complete. Other kinds of cognitive biases intervene that can lead to behavior deviating from E(U) theory. But we must notice one thing about E(U) theory versus the value function approach. The E(U) theory is an axiomatic approach to the study of human behavior. If those axioms hold, it can actually predict behavior. On the other hand, the value function approach is designed only to describe what actually happens, rather than what should happen.
Key Takeaways
• Students should be able to describe the reasons why observed behavior is different from the predicted behavior under E(U) theory.
• They should also be able to discuss the nature of the value function and how it differs from the utility function.
Discussion Questions
1. Describe the Kahneman and Tversky value function. What evidence do they offer to back it up?
2. Are shapes other than the ones given by utility functions and value function possible? Provide examples and discuss the implications of the shapes.
3. Discuss similarities and dissimilarities among availability bias, experience bias, and failure to ignore sunk costs.
Learning Objectives
• In this section we focus on risk aversion and the price of hedging risk. We discuss the actuarially fair premium (AFP) and the risk premium.
• Students will learn how these principles are applied to pricing of insurance (one mechanism to hedge individual risks) and the decision to purchase insurance.
From now on, we will restrict ourselves to the \(E(U)\) theory since we can predict behavior with it. We are interested in the predictions about human behavior, rather than just a description of it.
The risk averter's utility function (as we saw earlier in Figure 3.4.1) is concave to the origin. Such a person will never play a lottery at its actuarially fair premium, that is, the expected loss in wealth to the individual. Conversely, such a person will always pay at least an actuarially fair premium to get rid of the entire risk.
Suppose Ty is a student who gets a monthly allowance of \$200 (initial wealth \(W_0\)) from his parents. He might lose \$100 on any given day with probability 0.5, or lose nothing with 50 percent chance. Consequently, the expected loss (\(E(L)\)) to Ty equals \(0.5(\$0) + 0.5(\$100) = \$50\). In other words, Ty's expected final wealth \(E (FW) = 0.5(\$200 − \$0) + 0.5(\$200 − \$100) = W_0 − E(L) = \$150\). The question is how much Ty would be willing to pay to hedge his expected loss of \$50. We will assume that Ty's utility function is given by \(U(W)=\sqrt{W}\), a risk averter's utility function.
To apply the expected utility theory to answer the question above, we solve the problem in stages. In the first step, we find out Ty’s expected utility when he does not purchase insurance and show it on Figure \(1\) (a). In the second step, we figure out if he will buy insurance at actuarially fair prices and use Figure \(1\) (b) to show it. Finally, we compute Ty’s utility when he pays a premium P to get rid of the risk of a loss. P represents the maximum premium Ty is willing to pay. This is featured in Figure \(1\) (c). At this premium, Ty is exactly indifferent between buying insurance or remaining uninsured. What is P?
• Step 1: Expected utility, no insurance.
In case Ty does not buy insurance, he retains all the uncertainty. Thus, he will have an expected final wealth of \$150 as calculated above. What is his expected utility?
The expected utility is calculated as a weighted sum of the utilities in the two states, loss and no loss. Therefore, \(E(U)= 0.5\sqrt{200−0} + 0.5\sqrt{200−100} = 12.071\). Figure \(1\) (a) shows the point of \(E(U)\) for Ty when he does not buy insurance. His expected wealth is given by \$150 on the x-axis and expected utility by 12.071 on the y-axis. When we plot this point on the chart, it lies at D, on the chord joining the two points A and B. A and B on the utility curve correspond to the utility levels when a loss is possible (\(W_1= 100\)) and no loss (\(W_2= 200\)), respectively. In case Ty does not hedge, his expected utility equals 12.071.
• What is the actuarially fair premium for Ty? Note that the actuarially fair premium (AFP) equals the expected loss = \$50. Thus the AFP is the distance between \(W_0\) and \(E(FW)\) in Figure \(1\) (a).
• Step 2: Utility with insurance at AFP.
Now, suppose an insurance company offers insurance to Ty at a \$50 premium (AFP). Will Ty buy it? Note that when Ty buys insurance at AFP and he does not have a loss, his final wealth is \$150 (initial wealth [\$200] − AFP [\$50]). In case he does suffer a loss, his final wealth = initial wealth (\$200) − AFP (\$50) − loss (\$100) + indemnity (\$100) = \$150. Thus, after the purchase of insurance at AFP, Ty's final wealth stays at \$150 regardless of a loss. That is why Ty has purchased a certain wealth of \$150 by paying an AFP of \$50. His utility is now given by \(\sqrt{150}=12.247\). This point is represented by C in Figure \(1\) (b). Since C lies strictly above D, Ty will always purchase full insurance at AFP. The noteworthy feature for risk-averse individuals can now be succinctly stated: a risk-averse person will always hedge the risk completely at a cost that equals the expected loss. This cost is the actuarially fair premium (AFP). Alternatively, we can say that a risk-averse person always prefers certainty to uncertainty if uncertainty can be hedged away at its actuarially fair price.
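Steps 1 and 2 can be verified in a few lines (a sketch with Ty's numbers and \(U(W)=\sqrt{W}\); the variable names are ours):

```python
import math

w0, loss, p_loss = 200, 100, 0.5
afp = p_loss * loss  # actuarially fair premium = expected loss = $50

# Step 1: expected utility with no insurance
eu_uninsured = p_loss * math.sqrt(w0 - loss) + (1 - p_loss) * math.sqrt(w0)

# Step 2: utility with full insurance at AFP; wealth is certain at $150
u_insured = math.sqrt(w0 - afp)

print(eu_uninsured)  # 0.5*sqrt(100) + 0.5*sqrt(200) ≈ 12.071 (point D)
print(u_insured)     # sqrt(150) ≈ 12.247 (point C), so Ty buys full insurance at AFP
```

Since the certain utility (C) exceeds the expected utility of remaining exposed (D), the risk averter insures fully at the actuarially fair price.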
However, the most interesting part is that a risk-averse individual like Ty will pay more than the AFP to get rid of the risk.
• Step 3: Utility with insurance at a price greater than AFP.
In case the actual premium equals AFP (or expected loss for Ty), it implies the insurance company does not have its own costs/profits. This is an unrealistic scenario. In practice, the premiums must be higher than AFP. The question is how much higher can they be for Ty to still be interested?
To answer this question, we need to answer the question, what is the maximum premium Ty would be willing to pay? The maximum premium P is determined by the point of indifference between no insurance and insurance at price P.
If Ty bears a cost of P, his wealth stands at \(\$200 − P\). And this wealth is certain for the same reasons as in step 2. If Ty does not incur a loss, his wealth remains \(\$200−P\). In case he does incur a loss then he gets indemnified by the insurance company. Thus, regardless of outcome his certain wealth is \(\$200 − P\).
To compute the point of indifference, we should equate the utility when Ty purchases insurance at P to the expected utility in the no-insurance case. Note \(E(U)\) in the no-insurance case in step 1 equals 12.071. After buying insurance at P, Ty’s certain utility is \(\sqrt{200-P}\). So we solve the equation \(\sqrt{200-P}=12.071\) and get P = \$54.29.
Let us see the above calculation on a graph, Figure \(1\) (c). Ty tells himself, “As long as the premium P is such that I am above the \(E(U)\) line when I do not purchase insurance, I would be willing to pay it.” So starting from the initial wealth \(W_0\), we deduct P, up to the point that the utility of final wealth equals the expected utility given by the point \(E(U)\) on the y-axis. This point is given by \(W_2 = W_0 − P\).
The Total Premium (TP) = P comprises two parts: the AFP, which is the distance between initial wealth \(W_0\) and \(E(FW)\) (= \(E(L)\)), and the distance between \(E(FW)\) and \(W_2\). This second distance is called the risk premium (RP, shown as the length ED in Figure \(1\) [c]), and in Ty’s case above, it equals \(\$54.29 − \$50 = \$4.29\).
The premium over and above the AFP that a risk-averse person is willing to pay to get rid of the risk is called the risk premium. Insurance companies are aware of this behavior of risk-averse individuals. However, in the example above, any insurance company that charges a premium greater than \$54.29 will not be able to sell insurance to Ty.
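Because utility here is \(\sqrt{W}\), the indifference condition \(\sqrt{200-P}=E(U)\) can be inverted directly, giving \(P=200-E(U)^2\). A short sketch (variable names are ours):

```python
import math

w0, loss, p_loss = 200.0, 100.0, 0.5

# Expected utility without insurance, as computed in step 1.
eu_uninsured = p_loss * math.sqrt(w0 - loss) + (1 - p_loss) * math.sqrt(w0)

afp = p_loss * loss                   # actuarially fair premium = expected loss
max_premium = w0 - eu_uninsured ** 2  # invert sqrt(w0 - P) = E(U)
risk_premium = max_premium - afp      # what Ty will pay above the AFP

print(round(max_premium, 2))   # 54.29
print(round(risk_premium, 2))  # 4.29
```

Any premium above \$54.29 pushes Ty's certain utility below his uninsured expected utility, so he declines coverage.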
Thus, we see that individuals’ risk aversion is a key component in insurance pricing. The greater the degree of risk aversion, the higher the risk premium an individual will be willing to pay. But the insurance price has to be such that the premium charged turns out to be less than or equal to the maximum premium the person is willing to pay. Otherwise, the individual will never buy full insurance.
Thus, risk aversion is a necessary condition for transfer of risks. Since insurance is one mechanism through which a risk-averse person transfers risk, risk aversion is of paramount importance to insurance demand.
The degree of risk aversion is only one aspect that affects insurance prices. Insurance prices also reflect other important components. To study them, we now turn to the role that information plays in the markets: in particular, how information and information asymmetries affect the insurance market.
Key Takeaways
• In this section, students learned that risk aversion is the key to understanding why insurance and other risk hedges exist.
• The student should be able to express the demand for hedging and the conditions under which a risk-averse individual might refuse to transfer risk.
Discussion Questions
1. What shape does a risk-averse person’s utility curve take? What role does risk aversion play in market demand for insurance products?
2. Distinguish between risk premium and AFP. Show the two on a graph.
3. Under what conditions will a risk-averse person refuse an insurance offer?
Learning Objectives
• Students learn the critical role that information plays in markets. In particular, we discuss two major information economics problems: moral hazard and adverse selection. Students will understand how these two problems affect insurance availability and affordability (prices).
We all know about the used-car market and the market for “lemons.” Akerlof (1970) was the first to analyze how information asymmetry can cause problems in any market. This is a problem encountered when one party knows more than the other party in the contract. In particular, it addresses how information differences between buyers and the sellers (information asymmetry) can cause market failure. These differences are the underlying causes of adverse selection, a situation under which a person with higher risk chooses to hedge the risk, preferably without paying more for the greater risk. Adverse selection refers to a particular kind of information asymmetry problem, namely, hidden information.
A second kind of information asymmetry lies in the hidden action, wherein one party’s actions are not observable by the counterparty to the contract. Economists study this issue as one of moral hazard.
Adverse Selection
Consider the used-car market. While the sellers of used cars know the quality of their cars, the buyers do not know the exact quality (imagine a world with no blue book information available). From the buyer’s point of view, the car may be a lemon. Under such circumstances, the buyer’s offer price reflects the average quality of the cars in the market.
When sellers approach a market in which average prices are offered, sellers who know that their cars are of better quality do not sell their cars. (This example can be applied to the mortgage and housing crisis in 2008: sellers who knew that their houses were worth more preferred to hold on to them instead of lowering the price just to make a sale.) When they withdraw their cars from the market, the average quality of the cars for sale goes down, and buyers’ offer prices are revised downward in response. As a result, the next tier of better-quality car sellers withdraws from the market. As this cycle continues, only lemons remain in the market, and the market for used cars fails. As a result of an information asymmetry, the bad-quality product drives the good-quality ones from the market. This phenomenon is called adverse selection.
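The unraveling described above can be made concrete with a toy simulation. The dollar qualities below are hypothetical, chosen only to show the cycle: buyers bid the average quality of the cars still for sale, and every better-than-average seller withdraws.

```python
# Hypothetical seller valuations (known to sellers, hidden from buyers).
qualities = [1000, 2000, 3000, 4000, 5000]

market = sorted(qualities)
while market:
    offer = sum(market) / len(market)            # buyers offer the average quality
    staying = [q for q in market if q <= offer]  # above-average sellers exit
    if staying == market:                        # no one else withdraws: stable
        break
    market = staying

print(market)  # only the lowest-quality "lemons" remain
```

Each round of withdrawals lowers the average, which triggers the next round, until only the worst car is left for sale.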
It’s easy to demonstrate adverse selection in health insurance. Imagine two individuals, one who is healthy and the other who is not. Both approach an insurance company to buy health insurance policies. Assume for a moment that the two individuals are alike in all respects but their health condition. Insurers can’t observe applicants’ health status; this is private information. If an insurer can find no way to figure out the health status, what would it do?
Suppose the insurer’s price schedule reads, “Charge a \$10 monthly premium to the healthy one, and \$25 to the unhealthy one.” If the insurer cannot distinguish the health status of each applicant, it would charge each an average premium ( \(\frac{\$10+\$25}{2}=\$17.50\) ). If insurers charge an average premium, the healthy individual will decide to retain the health risk and remain uninsured. In such a case, the insurance company is left with only unhealthy policyholders. Note that these less-healthy people happily purchase insurance: while their actuarially fair cost is \$25, they are getting coverage for \$17.50. In the long run, however, the claims from these individuals exceed the premiums collected from them. Eventually, the insurance company may become insolvent and go bankrupt. Adverse selection thus causes bankruptcy and market failure. What is the solution to this problem? The easiest is to charge \$25 to all individuals regardless of their health status. In a monopolistic market with only one supplier and no competition this might work, but not in a competitive market. Even in a close-to-competitive market, the effect of adverse selection is to increase prices.

How can one mitigate the extent of adverse selection and its effects? The solution lies in reducing the level of information asymmetry. Thus we find that insurers ask a lot of questions to determine the risk types of individuals. In the used-car market, the buyers do the same: specialized agencies provide used-car information, some auto companies certify their cars, and buyers receive warranty offers when they buy used cars. Insurance agents ask questions and classify individuals according to risk type. In addition, leaders in the insurance market have developed another solution to adverse selection problems. It comes in the form of risk sharing, which also means partial insurance.
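Before turning to those remedies in more detail, the pooled-premium arithmetic above is simple enough to verify directly; a sketch with the same hypothetical \$10 and \$25 fair prices:

```python
fair_healthy, fair_unhealthy = 10.0, 25.0

# Unable to tell applicants apart, the insurer charges the average.
pooled_premium = (fair_healthy + fair_unhealthy) / 2
print(pooled_premium)  # 17.5

# The healthy applicant opts out (17.50 > 10.00), so only the unhealthy
# applicant buys, and expected claims exceed the premium collected.
insurer_margin = pooled_premium - fair_unhealthy
print(insurer_margin)  # -7.5 per policy, per month
```

The negative margin is the seed of the insolvency described above: pooling with hidden risk types leaves the insurer holding only the policies it underpriced.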
Under partial insurance, companies offer products with deductibles (the initial part of the loss absorbed by the person who incurs the loss) and coinsurance, where individuals share in the losses with the insurance companies. It has been shown that high-risk individuals prefer full insurance, while low-risk individuals choose partial insurance (high deductibles and coinsurance levels). Insurance companies also offer policies where the premium is adjusted at a later date based on the claim experience of the policyholder during the period.

Moral Hazard

Adverse selection refers to a particular kind of information asymmetry problem, namely, hidden information. A second kind of information asymmetry lies in hidden action, where the actions of one party to the contract are not clear to the other. Economists study these problems under a category called the moral hazard problem. The simplest way to understand the problem of “observability” (or clarity of action) is to imagine an owner of a store who hires a manager. The store owner may not be available to monitor the manager’s actions continuously and at all times, for example, how the manager behaves with customers. This inability of the principal (owner) to observe the actions of the agent (manager) falls under the class of problems called the principal-agent problem. The complete set of principal-agent problems comprises all situations in which the agent maximizes his own utility at the expense of the principal. Such behavior is contrary to the principal-agent relationship, which assumes that the agent acts on behalf of the principal (in the principal’s interest).

Extension of this problem to the two parties of an insurance contract is straightforward. Let us say that the insurance company has to decide whether to sell an auto insurance policy to Wonku, who is a risk-averse person with a utility function given by \(U(W)=\sqrt{W}\). Wonku’s driving record is excellent, so he can claim to be a good risk for the insurance company.
However, Wonku can also choose to be either a careful driver or a not-so-careful driver. If he drives with care, he incurs a cost. To exemplify, let us assume that Wonku drives a car carrying a market value of \$10,000. The only other asset he owns is the \$3,000 in his checking account. Thus, he has a total initial wealth of \$13,000. If he drives carefully, he incurs a cost of \$3,000. Assume he faces the following loss distributions when he drives with or without care.

Table 3.3 Loss Distribution

Drives with Care Drives without Care
Probability Loss Probability Loss
0.25 10,000 0.75 10,000
0.75 0 0.25 0

Table 3.3 shows that when he has an accident, his car is a total loss. The probabilities of “loss” and “no loss” are reversed when he decides to drive without care. The \(E(L)\) equals \$2,500 in case he drives with care and \$7,500 in case he does not.

Wonku’s problem has four parts: whether to drive with or without care, (I) when he has no insurance and (II) when he has insurance.

We consider Case I, when he carries no insurance. Table 3.4 shows the expected utility of driving with and without care. Since care costs \$3,000, his initial wealth is reduced to \$10,000 when driving with care; otherwise, it stays at \$13,000. The utility distribution for Wonku is shown in Table 3.4.
Table 3.4 Utility Distribution without Insurance
Drives with Care Drives without Care
Probability U (Final Wealth) Probability U (Final Wealth)
0.25 0 0.75 54.77
0.75 100 0.25 114.02
When he drives with care and has an accident, his final wealth \((FW)=\$13,000-\$3,000-\$10,000=\$0\), and the utility \(=\sqrt{0}=0\). In case he does not have an accident and drives with care, his final wealth \((FW)=\$13,000-\$3,000-\$0=\$10,000\) (note that the cost of care, \$3,000, is still subtracted from the initial wealth) and the utility \(=\sqrt{10,000}=100\). Hence, the \(E(U)\) of driving with care \(=0.25\times 0+0.75\times 100=75\). Let’s go through the remaining case the same way.

• When Wonku drives without care, he does not incur the cost of care, so his initial wealth is \$13,000. If he is involved in an accident, his final wealth \((FW)=\$13,000-\$10,000=\$3,000\), and the utility \(=\sqrt{3,000}=54.77\). Otherwise, his final wealth \((FW)=\$13,000-\$0=\$13,000\) and the utility \(=\sqrt{13,000}=114.02\). Computing the expected utility the same way as in the paragraph above, we get \(E(U)=0.75\times 54.77+0.25\times 114.02=69.58\).
• In Case I, when Wonku does not carry insurance, he will drive carefully since his expected utility is higher when he exercises due care. His utility is 75 versus 69.58.
• In Case II we assume that Wonku decides to carry insurance and claims to the insurance company that he is a careful driver. Let us assume that his insurance policy is priced based on this claim. Assuming the insurance company’s profit and expense loading factor equals 10 percent of the AFP (actuarially fair premium), the premium demanded is \(\$2,750=\$2,500\times(1+0.10)\). Wonku needs to decide whether or not to drive with care.
• We analyze the decision based on \(E(U)\) as in Case I. The wealth after purchase of insurance equals \$10,250. The utility in cases of driving with care or without care is shown in Table 3.5 below.

Table 3.5 Utility Distribution with Insurance

Drives with Care Drives without Care
Probability U(FW) Probability U(FW)
0.25 85.15 0.75 101.24
0.75 85.15 0.25 101.24

Notice that after the purchase of insurance, Wonku has eliminated the uncertainty. So if he has an accident, the insurance company indemnifies him with \$10,000. Thus, when Wonku has insurance, the following are the possibilities:
• He is driving with care
• And his car gets totaled, his final wealth \(=\$10,250-\$3,000-\$10,000+\$10,000=\$7,250\), and the associated utility \(=\sqrt{7,250}=85.15\).
• And no loss occurs, his final wealth \(=\$10,250-\$3,000=\$7,250\).
So the expected utility for Wonku = 85.15 when he drives with care.
• He does not drive with care
• And his car gets totaled, his final wealth \(=\$10,250-\$10,000+\$10,000=\$10,250\), and the associated utility \(=\sqrt{10,250}=101.24\).
• And no loss occurs, his final wealth = \$10,250 and utility = 101.24.
So the expected utility for Wonku = 101.24 when he drives without care after purchasing insurance.
The net result is that he switches to driving without care.
Wonku’s behavior thus changes from driving with care to driving without care after purchasing insurance. Why do we get this result? In this example, the cost of insurance is cheaper than the cost of care. Insurance companies can charge a price greater than the cost of care up to a maximum of what Wonku is willing to pay. However, in the event of asymmetric information, the insurance company will not know the cost of care. Thus, inexpensive insurance distorts the incentives and individuals switch to riskier behavior ex post.
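Wonku's four expected utilities can be reproduced in a few lines. This is a sketch assuming the \(\sqrt{W}\) utility of the example; function and variable names are ours.

```python
import math

def eu(p_accident, wealth_if_accident, wealth_if_not):
    """Expected utility over the two accident states, sqrt utility."""
    return (p_accident * math.sqrt(wealth_if_accident)
            + (1 - p_accident) * math.sqrt(wealth_if_not))

w0, car, care_cost, premium = 13_000, 10_000, 3_000, 2_750

# Case I: no insurance (Table 3.4).
eu_care_no_ins = eu(0.25, w0 - care_cost - car, w0 - care_cost)  # 75.0
eu_nocare_no_ins = eu(0.75, w0 - car, w0)                        # ~69.58

# Case II: insured with full indemnity (Table 3.5); wealth is certain
# at $10,250 less any cost of care.
eu_care_ins = math.sqrt(w0 - premium - care_cost)                # ~85.15
eu_nocare_ins = math.sqrt(w0 - premium)                          # ~101.24

print(eu_care_no_ins > eu_nocare_no_ins)  # True: careful when uninsured
print(eu_nocare_ins > eu_care_ins)        # True: careless once insured
```

The two comparisons reproduce the switch in behavior: care dominates without insurance, but once the loss is fully indemnified, skipping the \$3,000 cost of care dominates.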
In this moral hazard example, the probabilities of having a loss are affected, not the loss amounts. In practice, both will be affected. At its limit, when moral hazard reaches a point where the intention is to cheat the insurance company, it manifests itself in fraudulent behavior.
How can we solve this problem? An ideal solution would be continuous monitoring, which is prohibitively expensive and may not even be legal for privacy reasons. Alternatively, insurance companies try to gather as much information as possible to arrive at an estimate of the cost of care or lack of it. More information also leads to an estimate of the likelihood that individuals will switch to riskier behavior afterward. So questions about marital status, college degree, and other personal information might be asked. Insurance companies will undertake a process called risk classification. We discuss this important process later in the text.
So far we have learned how individuals’ risk aversion and information asymmetry explain behavior associated with hedging. But do these reasons also hold when we study why corporations hedge their risks? We provide the answer to this question next.
Key Takeaways
• Students should be able to define information asymmetry problems, in particular moral hazard and adverse selection.
• They must also be able to discuss in detail the effects these phenomena have on insurance prices and risk transfer markets in general.
• Students should spend some effort to understand computations, which are so important if they wish to fully understand the effects that these computations have on actuarial science. Insurance companies make their decisions primarily on the basis of such calculations.
Discussion Questions
1. What information asymmetry problems arise in economics? Distinguish between moral hazard and adverse selection. Give an original example of each.
2. What effects can information asymmetry have in markets?
3. Is risk aversion a necessary condition for moral hazard or adverse selection to exist? Provide reasons.
4. What can be done to mitigate the effect of moral hazard and adverse selection in markets/insurance markets?
Learning Objectives
• Why should corporations hedge? Financial theory tells us that in a perfect world, corporations are risk neutral. Students can learn in this section the reasons why large companies hedge risk, and, in particular, why they buy insurance.
Financial theory tells us that corporations are risk neutral. This is because only systematic risk matters, while a particular company can diversify the idiosyncratic risk away. (Systematic risk is the risk that everyone has to share, each according to his or her capacity. Idiosyncratic risk, on the other hand, falls only on a small section of the population. While systematic risk cannot be transferred to anyone outside since it encompasses all agents, idiosyncratic risk can be transferred for a price. That is why idiosyncratic risk is called diversifiable, and systematic risk is not. The economy-wide recession that unfolded in 2008 is a systematic risk by which everyone is affected.) If we think about a large company held by a large number of small shareholders like us, then we’d prefer that the company not hedge its risks. In fact, if we wanted to hedge those risks, we could do it ourselves. We hold a particular company’s shares because we are looking for those particular risks.
Look back at Figure 3.3.3. Since firms are risk neutral, their value function is the straight line that appears in the figure. Thus corporations will hedge risk only at the AFP; otherwise they will not. But we know that insurance companies cannot really sell policies at AFP, since they also have to cover their costs and profits. Yet we find that corporations still buy these hedging instruments at prices greater than AFP. Therefore, to find a rationale for corporations’ hedging behavior, we have to move beyond individual-level risk-averse utility functions.
The following are several reasons for companies’ hedging behavior:
1. Managers hedge because they are undiversified: Small shareholders like us can diversify our risks, but managers cannot. They invest their income from labor as well as their personal assets in the firm. Therefore, while owners (principals) are diversified, managers (agents) are not. Since managers are risk averse and they control the company directly, they hedge.
2. Managers want to lower expected bankruptcy costs: If a company goes bankrupt, then bankruptcy supervisors investigate and retain a part of the company’s assets. The wealth gets transferred to third parties and constitutes a loss of assets to the rightful owners. Imagine a fire that destroys the plant. If the company wants to avoid bankruptcy, it might want to rebuild it. If rebuilding is financed through debt financing, the cost of debt is going to be very high because the company may not have any collateral to offer. In this case, having fire insurance can serve as collateral as well as compensate the firm when it suffers a loss due to fire.
3. Risk bearers may be in a better position to bear the risk: Companies may not be diversified, in terms of either product or geography. They may not have access to broader capital markets because of small size. Companies may transfer risk to better risk bearers that are diversified and have better and broader access to capital markets.
4. Hedging can increase debt capacity: Financial theory tells us about an optimal capital structure for every company. This means that each company has an optimal mix of debt and equity financing. The amount of debt determines the financial risk to a company. With hedging, the firm can transfer the risk outside the firm. With lower risk, the firm can undertake a greater amount of debt, thus changing the optimal capital structure.
5. Lowering of tax liability: Since insurance premiums are tax deductible for some corporate insurance policies, companies can lower the expected taxes by purchasing insurance.
6. Other reasons: We can cite some other reasons why corporations hedge. Regulated companies are found to hedge more than unregulated ones, probably because law limits the level of risk taking. Laws might require companies to purchase some insurance mandatorily. For example, firms might need aircraft liability insurance, third-party coverage for autos, and workers compensation. Firms may also purchase insurance to signal credit worthiness (e.g., construction coverage for commercial builders). Thus, the decision to hedge can reduce certain kinds of information asymmetry problems as well.
We know that corporations hedge their risks, either through insurance or through other financial contracts. Firms can use forwards and futures, other derivatives, and option contracts to hedge their risk. The latter are not pure hedges, and firms can use them to take on more risks instead of transferring them outside the firm. Forwards and futures, derivatives, and option contracts present the firm with double-edged swords. Still, because of their complex nature, corporations are in a better position to use them than individuals, who mostly use insurance contracts to transfer their risk.
Key Takeaways
• The student should be able to distinguish between individual demand and corporate demand for risk hedging.
• The student should be able to understand and express reasons for corporate hedging.
Discussion Questions
1. Which risks matter for corporations: systematic or idiosyncratic? Why?
2. Why can’t the rationale used to explain hedging at the individual level be applied to companies?
3. Describe the reasons why companies hedge their risks. Provide examples.
4. What is an optimal capital structure?
1. What is risk? How is it philosophically different from uncertainty?
2. What is asymmetric information? Explain how it leads to market failures in an otherwise perfectly competitive market.
3. Explain the difference between moral hazard and adverse selection. Can one exist without the other?
4. What externalities are caused in the insurance market by moral hazard and adverse selection? How are they overcome in practice?
5. Do risk-averse individuals outnumber risk-seeking ones? Give an intuitive explanation.
6. Provide examples that appear to violate expected utility theory and risk aversion.
7. Give two examples that tell how the framing of alternatives affects peoples’ choices under uncertainty.
8. Suppose you are a personal financial planner managing the portfolio of your mother. In a recession like the one in 2008, there are enormous losses and very few gains to the assets in the portfolio you suggested to your mother. Given the material covered in this chapter, suggest a few marketing strategies to minimize the pain of bad news to your mother.
9. Distinguish, through examples, between sunk cost, availability bias, and anchoring effect as reasons for departure from the expected utility paradigm.
10. Suppose Yuan Yuan wants to purchase a house for investment purposes. She will rent it out after buying it. She has two choices. Either buy it in an average location where the lifetime rent from the property will be \$700,000 with certainty or buy it in an upscale location. However, in the upscale neighborhood there is a 60 percent chance that the lifetime income will equal \$1 million and a 40 percent chance it will equal only \$250,000. If she has a utility function given by \(U(W)=\sqrt{W}\), where would she prefer to buy the house?
11. What is the expected value when a six-sided fair die is tossed?
12. Suppose Yijia’s utility function is given by \(LN(W)\) and her initial wealth is \$500,000. If there is a 0.01 percent chance that a liability lawsuit will reduce her wealth to \$50,000, how much premium will she be willing to pay to get rid of the risk?
13. Your professor of economics tells you, “The additional benefit that a person derives from a given increase of his stock of a thing decreases with every increase in the stock he already has.” What type of risk attitude does such a person have?
14. Ms. Frangipani prefers Pepsi to Coke on a rainy day; Coke to Pepsi on a sunny one. On one sunny day at the CNN center in Atlanta, when faced with a choice between Pepsi, Coke, and Lipton iced tea, she decides to have a Pepsi. Should the presence of iced teas in the basket of choices affect her decision? Does she violate principles of utility maximization? If yes, which assumptions does she violate? If not, then argue how her choices are consistent with the utility theory.
15. Explain why a risk-averse person will purchase insurance for the following scenario: Lose \$20,000 with 5 percent chance or lose \$0 with 95 percent probability. The premium for the policy is \$1,000.
16. Imagine that you face the following pair of concurrent decisions. First examine both decisions, then indicate the options you prefer:
• Decision (i) Choose between
1. a sure gain of \$240,
2. 25 percent chance to gain \$1,000, and 75 percent chance to gain nothing.
• Decision (ii) Choose between:
1. a sure loss of \$750,
2. 75 percent chance to lose \$1,000 and 25 percent chance to lose nothing.
• Indicate which option you would choose in each of the decisions and why. (This problem has been adapted from D. Kahneman and D. Lovallo, “Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk Taking,” Management Science 39, no. 1 (1993): 17–31.)
1. Consider the following two lotteries:
a. Gain of \$100 with probability 0.75; no gain (\$0 gain) with probability 0.25
b. Gain of \$1,000 with probability 0.05; no gain (\$0 gain) with probability 0.95
Which of these lotteries will you prefer to play?
Now, assume somebody promises you sure sums of money so as to induce you to not play the lotteries. What is the sure sum of money you will be willing to accept in the case of each lottery, a or b? Is your decision “rational”?
2. Partial insurance (challenging problem): This problem is designed to illustrate why partial insurance (i.e., a policy that includes deductibles and coinsurance) may be optimal for a risk-averse individual.
Suppose Marco has an initial wealth of \$1,000 and a utility function given by \(U(W)=\sqrt{W}\). He faces the following loss distribution:
Prob Loss
0.9 0
0.1 500
1. If the price per unit of insurance is \$0.10 per dollar of loss, show that Marco will purchase full insurance (i.e., quantity for which insurance is purchased = \$500).
2. If the price per unit of insurance is \$0.11 per dollar of loss, show that Marco will purchase less than full insurance (i.e., the quantity for which insurance is purchased is less than \$500). Hint: Compute \(E(U)\) for full coverage of the \$500 loss and also for coverage of an amount less than \$500. See that when he insures strictly less than \$500, the \(E(U)\) is higher.
3. Otgo has a current wealth of \$500 and a lottery ticket that pays \$50 with probability 0.25; otherwise, it pays nothing. If her utility function is given by \(U(W)= W^2\), what is the minimum amount she is willing to sell the ticket for?
4. Suppose a coin is tossed twice in a row. The payoffs associated with the outcomes are
Outcome Win (+) or loss (−)
H-H +15
H-T +9
T-H −6
T-T −12
If the coin is unbiased, what is the fair value of the gamble?
5. If you apply the principle of framing to put a favorable spin to events in your life, how would you value the following gains or losses?
1. A win of \$100 followed by a loss of \$20
2. A win of \$20 followed by a loss of \$100
3. A win of \$50 followed by a win of \$60
4. A loss of \$50 followed by a win of \$60
6. Explain in detail what happens to an insurer that charges the same premium to teenage drivers as it does to the rest of its customers.
7. Corporations are risk neutral, yet they hedge. Why?
In the prior chapters, we discussed risks from many aspects. With this chapter we begin the discussion of risk management and its methods that are so vital to businesses and to individuals. Today’s unprecedented global financial crisis, following the man-made and natural megacatastrophes, underscores the urgency for studying risk management and its tools. Information technology, globalization, and innovation in financial technologies have all led to a term called “enterprise risk management” (ERM). As you learned from the definition of risk in "1: The Nature of Risk - Losses and Opportunities" (see Figure 1.3.1), ERM includes managing pure opportunity and speculative risks. In this chapter, we discuss how firms use ERM to further their goals. This chapter and "5: The Evolution of Risk Management - Enterprise Risk Management" that follows evolve into a more thorough discussion of ERM. While employing new innovations, we should emphasize that the first step to understanding risk management is to learn the basics of the fundamental risk management processes. In a broad sense, they include the processes of identifying, assessing, measuring, and evaluating alternative ways to mitigate risks.
The steps that we follow to identify all of the entity’s risks involve measuring the frequency and severity of losses, as we discussed in "1: The Nature of Risk - Losses and Opportunities" and computed in "2: Risk Measurement and Metrics". The measurements are essential to create the risk map that profiles all the risks identified as important to a business. The risk map is a visual tool used to consider alternatives of the risk management tool set. A risk map forms a grid of frequency and severity intersection points of each identified and measured risk. In this and the next chapter we undertake the task of finding risk management solutions to the risks identified in the risk map. Following is the anthrax story, which occurred right after September 11. It was an unusual risk of high severity and low frequency. The alternative tools for financial solutions to each particular risk are shown in the risk management matrix, which provides fundamental possible solutions to risks of high and low severity and frequency. These possible solutions relate to external and internal conditions and are not absolutes. In times of low insurance prices, the likelihood of using risk transfer is greater than in times of high rates. The risk management process also includes cost-benefit analysis.
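As a rough illustration of the risk management matrix described above, the frequency-severity grid can be modeled as a simple lookup. The thresholds and remedy labels below are our illustrative assumptions, not the book's prescriptions:

```python
def risk_matrix_cell(frequency: float, severity: float,
                     freq_cut: float = 0.5, sev_cut: float = 0.5) -> str:
    """Map a (frequency, severity) pair, each scaled 0-1, to a quadrant remedy."""
    high_f, high_s = frequency >= freq_cut, severity >= sev_cut
    if high_f and high_s:
        return "avoid"     # e.g., exit the exposure entirely
    if high_s:
        return "transfer"  # e.g., insure rare but severe losses
    if high_f:
        return "reduce"    # loss control and prevention
    return "retain"        # absorb small, infrequent losses

# A low-frequency, high-severity exposure such as the anthrax scare:
print(risk_matrix_cell(frequency=0.05, severity=0.95))  # transfer
```

Under these assumed thresholds, a low-frequency but severe exposure lands in the transfer quadrant, which matches the intuition that such risks are candidates for insurance.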
The anthrax story was an unusual risk of high severity and low frequency. It illustrates a case of risk management of a scary risk and the dilemma of how best to counteract the risks.
How to Handle the Risk Management of a Low-Frequency but Scary Risk Exposure: The Anthrax Scare
The date staring up from the desk calendar reads June 1, 2002, so why is the Capitol Hill office executive assistant opening Christmas cards? The anthrax scare after September 11, 2001, explains these late actions. For six weeks after an anthrax-contaminated letter was received in Senate Majority Leader Tom Daschle’s office, all Capitol Hill mail delivery was stopped. As startling as that sounds, mail delivery is of small concern to the many public and private entities that suffered loss due to the terrorism-related issues of anthrax. The biological agent scare, both real and imagined, created unique issues for businesses and insurers alike, because the agent can be lethal with even minimal exposure.
Who is responsible for the clean-up costs related to bioterrorism? Who is liable for the exposure to humans within the contaminated facility? Who covers the cost of a shutdown of a business for decontamination? What is a risk manager to do?
Senator Charles Grassley (R-Iowa), member of the Senate Finance Committee at the time, estimated that the clean-up project cost for the Hart Senate Office Building would exceed \$23 million. Manhattan Eye, Ear, and Throat Hospital closed its doors in late October 2001 after a supply-room worker contracted and later died from pulmonary anthrax. The hospital—a small, thirty-bed facility—reopened November 6, 2001, announcing that the anthrax scare closure had cost the facility an estimated \$700,000 in revenue.
These examples illustrate the necessity of holistic risk management and the effective use of risk mapping to identify any possible risk, even those that may remotely affect the firm. Even if their companies aren’t being directly targeted, risk managers must incorporate disaster management plans to deal with indirect atrocities that slow or abort the firms’ operations. For example, an import/export business must protect against extended halts in overseas commercial air traffic. A mail-order-catalog retailer must protect against long-term mail delays. Evacuation of a workplace for employees due to mold infestation or biochemical exposure must now be added to disaster recovery plans that are part of loss-control programs. Risk managers take responsibility for such programs.
After a temporary closure, reopened facilities still give cause for concern. Staffers at the Hart Senate Office Building got the green light to return to work on January 22, 2002, after the anthrax remediation process was completed. Immediately, staffers began reporting illnesses. By March, 255 of the building’s employees had complained of symptoms that included headaches, rashes, and eye or throat irritation, possibly from the chemicals used to kill the anthrax. Was the decision to reopen the facility too hasty?
Sources: “U.S. Lawmakers Complain About Old Mail After Anthrax Scare,” Dow Jones Newswires, May 8, 2002; David Pilla, “Anthrax Scare Raises New Liability Issues for Insurers,” A.M. Best Newswire, October 16, 2001; Sheila R. Cherry, “Health Questions Linger at Hart,” Insight on the News, April 15, 2002, p. 16; Cinda Becker, “N.Y. Hospital Reopens; Anthrax Scare Costs Facility \$700,000,” Modern Healthcare, November 12, 2001, p. 8.
Today’s risk managers explore all risks together and consider correlations between risks and their management. Some risks interact positively with other risks, and the occurrence of one can trigger the other—a flood can cause fires, or an earthquake that destroys a supplier can interrupt business on the other side of the country. As we discussed in "1: The Nature of Risk - Losses and Opportunities", economic systemic risks can impact many facets of a corporation, as the financial crisis of 2008 demonstrated.
In our technological and information age, every person involved in finding solutions to lower the adverse impact of risks uses risk management information systems (RMIS), which are databases that provide information with which to compute the frequency and severity of losses, explore difficult-to-identify risks, and provide forecasts and cost-benefit analyses.
This chapter therefore includes the following:
1. Links
2. The risk management function
3. Projected frequency and severity, cost-benefit analysis, and capital budgeting
4. Risk management alternatives: the risk management matrix
5. Comparing to current risk-handling methods
Links
Now that we understand the notion and measurement of risks from "1: The Nature of Risk - Losses and Opportunities" and "2: Risk Measurement and Metrics", and the attitudes toward risk in "3: Risk Attitudes - Expected Utility Theory and Demand for Hedging", we are ready to begin learning about the actual process of risk management. Within the goals of the firm discussed in "1: The Nature of Risk - Losses and Opportunities", we now delve into how risk managers conduct their jobs and what they need to know about the marketplace to succeed in reducing and eliminating risks. Holistic risk management is connected to our complete package of risks shown in Figure \(1\). To complete the puzzle, we have to
1. identify all the risks,
2. assess the risks,
3. find risk management solutions to each risk, and
4. evaluate the results.
Risk management decisions depend on the nature of the identified risks, the forecasted frequency and severity of losses, cost-benefit analysis, and using the risk management matrix in context of external market conditions. As you will see later in this chapter, risk managers may decide to transfer the risk to insurance companies. In such cases, final decisions can’t be separated from the market conditions at the time of purchase. Therefore, we must understand the nature of underwriting cycles, which are the business cycles of the insurance industry during which insurance prices rise and fall (explained in "6: The Insurance Solution and Institutions"). When insurance prices are high, risk management decisions differ from those made during times of low insurance prices. Since insurance prices are cyclical, different decisions are called for at different times for the same assessed risks.
Risk managers also need to understand the nature of insurance well enough to be aware of which risks are uninsurable. Overall, in this Links section, shown in Figure \(1\), we can complete our puzzle only when we have mitigated all risks in a smart risk management process.
Learning Objectives
• In this section you will learn about the big picture of all risk management steps.
Traditionally, a firm’s risk management function ensured that the pure risks of losses were managed appropriately. The risk manager was charged with the responsibility for specific risks only. Most activities involved providing adequate insurance and implementing loss-control techniques so that the firm’s employees and property remained safe. Thus, risk managers sought to reduce the firm’s costs of pure risks and to initiate safety and disaster management.
Typically, the traditional risk management position has reported to the corporate treasurer. Handling risks by self-insuring (retaining risks within the firm) and paying claims in-house requires additional personnel within the risk management function. In a small company or sole proprietorship, the owner usually performs the risk management function, establishing policy and making decisions. In fact, each of us manages our own risks, whether we have studied risk management or not. Every time we lock our house or car, check the wiring system for problems, or pay an insurance premium, we are performing the same functions as a risk manager. Risk managers use agents or brokers to make smart insurance and risk management decisions (agents and brokers are discussed in "7: Insurance Operations").
The traditional risk manager’s role has evolved, and corporations have begun to embrace enterprise risk management in which all risks are part of the process: pure, opportunity, and speculative risks. With this evolution, firms created the new post of chief risk officer (CRO). The role of CROs expanded the traditional role by integrating the firm’s silos, or separate risks, into a holistic framework. Risks cannot be segregated—they interact and affect one another.
In addition to insurance and loss control, risk managers or CROs use specialized tools to keep cash flow in-house, which we will discuss in "6: The Insurance Solution and Institutions" and "7: Insurance Operations". Captives are separate insurance entities under the corporate structure—mostly for the exclusive use of the firm itself. CROs oversee the increasing reliance on capital market instruments to hedge risk. They also address the entire risk map—a visual tool used to consider alternatives of the risk management tool set—in the realm of nonpure risks. For example, a cereal manufacturer, dependent upon a steady supply of grain used in production, may decide to enter into fixed-price long-term contractual arrangements with its suppliers to avoid the risk of price fluctuations. The CRO or the financial risk managers take responsibility for these trades. They also create the risk management guideline for the firm that usually includes the following:
• Writing a mission statement for risk management in the organization
• Communicating with every section of the business to promote safe behavior
• Identifying risk management policy and processes
• Pinpointing all risk exposures (what “keeps employees awake at night”)
• Assessing risk management and financing alternatives as well as external conditions in the insurance markets
• Allocating costs
• Negotiating insurance terms
• Adjusting claims in self-insuring firms
• Keeping accurate records
Writing risk management manuals sets up the process of identification, monitoring, assessment, evaluation, and adjustment.
In larger organizations, the risk manager or CRO has differing authority depending upon the policy that top management has adopted. Policy statements generally outline the dimensions of such authority. Risk managers may be authorized to make decisions in routine matters but restricted to making only recommendations in others. For example, the risk manager may recommend that the costs of employee injuries be retained rather than insured, but a final decision of such magnitude would be made by top management.
The Risk Management Process
A typical risk management function includes the steps listed above: identifying risks, assessing them, forecasting the future frequency and severity of losses, finding risk mitigation solutions, creating plans, conducting cost-benefit analyses, and implementing programs for loss control and insurance. For each property risk exposure, for example, the risk manager would adopt the following or similar processes:
• Finding all properties that are exposed to losses (such as real property like land, buildings, and other structures; tangible property like furniture and computers; and intangible personal property like trademarks)
• Evaluating the potential causes of loss that can affect the firms’ property, including natural disasters (such as windstorms, floods, and earthquakes); accidental causes (such as fires, explosions, and the collapse of roofs under snow); and many other causes noted in "1: The Nature of Risk - Losses and Opportunities";
• Evaluating property value by different methods, such as book value, market value, reproduction cost, and replacement cost
• Evaluating the firm’s legal interest in each of the properties—whether each property is owned or leased
• Identifying the actual loss exposure in each property using loss histories (frequency and severity), accounting records, personal inspections, flow charts, and questionnaires
• Computing the frequency and severity of losses for each of the property risk exposures based on loss data
• Forecasting future losses for each property risk exposure
• Creating a specific risk map for all property risk exposures based on forecasted frequency and severity
• Developing risk management alternative tools (such as loss-control techniques) based upon cost-benefit analysis or insurance
• Comparing the existing solutions to potential solutions (traditional and nontraditional)—uses of risk maps
• Communicating the solutions with the whole organization by creating reporting techniques, feedback, and a path for ongoing execution of the whole process
The process is very similar to any other business process.
Key Takeaways
• The modern firm ensures that the risk management function is embedded throughout the whole organization.
• The risk management process follows a logical sequence, just as any business process does.
• The main steps in the risk management process are identifying risks, measuring risks, creating a map, finding alternative solutions to managing the risk, and evaluating programs once they are put into place.
Discussion Questions
1. What are the steps in the pure risk management process?
2. Imagine that the step of evaluation of the risks did not account for related risks. What would be the result for the risk manager?
3. In the allocation of costs, does the CRO need to understand the holistic risk map of the whole company? Explain your answer with an example.
Learning Objectives
• In this section you will learn how to identify risks and create a risk map to communicate the importance of each risk on a severity and frequency grid.
Risk management policy statements are the primary tools to communicate risk management objectives. Forward-thinking firms have used risk management policy statements for many years to guide discussion of the risk management process. Other tools used to relay objectives may include company mission statements, risk management manuals (which provide specific guidelines for detailed risk management problems, such as how to deal with the death or disability of a key executive), and even the risk manager’s job description. Effective risk management objectives coincide with those of the organization generally, and both must be communicated consistently. Advertisements, employee training programs, and other public activities also can communicate an organization’s philosophies and objectives.
Identifying Risks
The process of identifying all of a firm’s risks and their values is a very detailed process. It is of extreme importance to ensure that the business is not ignoring anything that can destroy it. To illustrate how the process takes shape, imagine a business such as Thompson’s department store, which has a fleet of delivery trucks, a restaurant, a coffee shop, and a babysitting service for parents who are shopping. The risk manager, talking to each employee in the store, usually would ask for a list of all the perils and hazards (discussed in "1: The Nature of Risk - Losses and Opportunities") that can expose the operation to losses.
A simple analysis of this department store risk exposure nicely illustrates risk identification, which is a critical element of risk management. For the coffee shop and restaurant, the risks include food poisoning, kitchen fire, and injuries to customers who slip. Spilled coffee can damage store merchandise. For the babysitting service, the store may be liable for any injury to infants while they are fed or at play, including injuries caused by other children. In addition to worrying about employees’ possible injuries while at work or damage to merchandise from mistreatment, the store risk manager would usually worry about the condition of the floors as a potential hazard, especially when wet. Most risk managers work with the architectural schematics of the building and learn about evacuation routes in case of fires. The location of the building is also critical to identification of risks. If the department store is in a flood-prone area, the risks are different than if the store were located in the mountains. The process involves every company stakeholder. Understanding the supply chain of movement of merchandise is part of the plan as well. If suppliers have losses, risk managers need to know about the risk associated with such delays. This example is a short illustration of the enormous task of risk identification.
Today’s CRO also reviews the firm’s financial statements to ensure its financial viability given the financial risks, asset risks, and product risks the firm undertakes. We elaborate more on this aspect with examples in "5: The Evolution of Risk Management - Enterprise Risk Management".
Risk Profiling
Discovering all risks, assessing them, and understanding their relationships to one another is critical to learning an organization’s tolerance for risk. This step comes after a separate and thorough review of each risk. Holistic risk mapping is the outcome of risk profiling, a process that evaluates all the risks of the organization and measures the frequency and severity of each risk. Different kinds of organizations pose very different types of risk exposures, and risk evaluations can differ vastly among industries. Boeing, for example, has a tremendous wrongful death exposure resulting from plane crashes. Intellectual property piracy and property rights issues could have a big impact upon the operations of an organization like Microsoft.
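The profiling step can be sketched in code. The records, field names, and dollar amounts below are hypothetical, used only to show how loss data might be grouped by risk category into the frequency and severity inputs a risk map needs.

```python
# A minimal sketch of risk profiling (all data and names are hypothetical):
# group loss records by risk category and compute each category's annual
# claim frequency and average severity, the raw inputs to a risk map.
from collections import defaultdict

# (risk category, year, loss amount) records for an imagined department store
loss_records = [
    ("slip and fall", 2007, 4_000), ("slip and fall", 2007, 2_500),
    ("slip and fall", 2008, 3_200), ("kitchen fire", 2008, 85_000),
]

by_risk = defaultdict(list)
for category, year, amount in loss_records:
    by_risk[category].append(amount)

n_years = len({year for _, year, _ in loss_records})
profile = {
    category: {
        "frequency": len(amounts) / n_years,      # claims per year
        "severity": sum(amounts) / len(amounts),  # average loss per claim
    }
    for category, amounts in by_risk.items()
}
```

Each (frequency, severity) pair would then be plotted as one point on the risk map grid.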
Risk Mapping: Creating the Model
The results of risk profiling can be graphically displayed and developed into a model. One such model is risk mapping. (Etti G. Baranoff, “Mapping the Evolution of Risk Management,” Contingencies, July/August 2004: 22–27.) Risk mapping involves charting entire spectrums of risk, not individual risk “silos” from each separate business unit. Risk mapping becomes useful both in identifying risks and in choosing approaches to mitigate them. Such a map presents a cumulative picture of all the risks in one risk management solution chart. Different facets of risk could include
• workers’ compensation claims,
• earthquake or tornado exposure,
• credit risk,
• mold,
• terrorism,
• theft,
• environmental effects,
• intellectual property piracy, and
• a host of other concerns.
A risk map puts the risks a company faces into a visual medium to see how risks are clustered and to understand the relationships among risks. The risks are displayed on a severity and frequency grid after each risk is assessed. Risk maps can be useful tools for explaining and communicating various risks to management and employees. One map might be created to chart what risks are most significant to a particular company. This chart would be used to prioritize risk across the enterprise. Another map might show the risk reduction after risk management action is adopted, as we will show later in this chapter. (Lee Ann Gjertsen, “‘Risk Mapping’ Helps RM’s Chart Solutions,” National Underwriter, Property & Casualty/Risk & Benefits Management Edition, June 7, 1999.)
Figure \(1\) presents an example of a holistic risk map for an organization examining the dynamics of frequency and severity as they relate to each risk. By assigning the probability of occurrence against the estimate of future magnitude of possible loss, risk managers can form foundations upon which a corporation can focus on risk areas in need of actions. The possible actions—including risk avoidance, loss control, and insurance (loss transfer)—provide alternative solutions during the discussion of the risk management matrix in this chapter.
Note that risk maps plot intersection points between measures of frequency (on the x-axis) and severity (on the y-axis). Each point represents the relationship between the frequency and the severity of the exposure for each risk measured.
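The plotting logic can be illustrated with a short sketch. The cutoffs, risk names, and exposure values below are assumptions for illustration only; a real map would use the firm's measured data and its own risk-tolerance boundaries.

```python
# A hedged illustration: assign each measured risk to one of the four
# frequency/severity quadrants of the classical risk management matrix.
# All names, values, and cutoffs here are hypothetical.

def quadrant(frequency, severity, freq_cutoff, sev_cutoff):
    f = "high" if frequency >= freq_cutoff else "low"
    s = "high" if severity >= sev_cutoff else "low"
    return f"{f} frequency / {s} severity"

# Illustrative exposures: (expected events per year, dollar loss per event)
risks = {
    "tornado": (0.05, 160_000_000),         # rare but catastrophic
    "foreign exchange": (900, 5_000_000),   # many transactions, large swings
    "workers' compensation": (300, 2_000),  # frequent, small claims
    "IT system failure": (2, 10_000),       # rare, modest downtime cost
}

risk_map = {
    name: quadrant(freq, sev, freq_cutoff=100, sev_cutoff=1_000_000)
    for name, (freq, sev) in risks.items()
}
```

Grouping risks this way is what lets the matrix suggest a handling strategy per quadrant, for example retaining low-frequency/low-severity risks while transferring low-frequency/high-severity ones to an insurer.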
Risk Identification and Estimates of Frequency and Severity
Strategies for risk mapping will vary from organization to organization. Company objectives arise out of the firm’s risk appetite and culture. These objectives help determine the organization’s risk tolerance level (see "3: Risk Attitudes - Expected Utility Theory and Demand for Hedging"). As in the separate risk management process for each risk exposure, the first step in mapping risk is to identify the firm’s loss exposures and estimate and forecast the frequency and severity of each potential risk. Figure \(1\) displays (for illustration purposes only) quantified trended estimates of loss frequency and severity that risk managers use as inputs into the risk map for a hypothetical small import/export business, Notable Notions. The risk map graph is divided into the four quadrants of the classical risk management matrix (which we discuss in detail later in this chapter). As we will see, such matrices provide a critical part of the way to provide risk management solutions to each risk.
Plotting the Risk Map
Several sample risks are plotted in Notable Notions’ holistic risk map. (The exercise is abridged for demonstrative purposes; an actual holistic risk mapping model would include many more risk intersection points plotted along the frequency/severity x- and y-axes.) This model can be used to help establish a risk-tolerance boundary and determine priority for risks facing the organization. Graphically, risk across the enterprise comes from four basic risk categories:
1. natural and man-made risks (grouped together under the hazard risks),
2. financial risks,
3. business risks, and
4. operational risks.
Natural and man-made risks include unforeseen events that arise outside of the normal operating environment. The risk map denotes that the frequency of natural and man-made risks is very low, but the potential severity is very high—for example, a tornado, valued at approximately \$160 million. This risk is similar to earthquake, mold exposure, and even terrorism, all of which also fall into the low-frequency/high-severity quadrant. In the aftermath of Hurricane Katrina, the New Orleans floods, and September 11, 2001, most corporations have reprioritized possible losses related to huge man-made and natural catastrophes. For example, more than 1,200 World Bank employees were sent home and barred from corporate headquarters for several days following an anthrax scare in the mailroom. (Associated Press Newswire, May 22, 2002.) This possibility exposes firms to large potential losses associated with an unexpected interruption to normal business operations. See the box in the introduction to this chapter, "How to Handle the Risk Management of a Low-Frequency but Scary Risk Exposure: The Anthrax Scare".
Financial risks arise from changing market conditions involving
• prices,
• volatility,
• liquidity,
• credit risk,
• foreign exchange risk, and
• general market recession (as in the third and fourth quarter of 2008).
The credit crisis that arose in the third and fourth quarters of 2008 affected most businesses as economies around the world slowed down and consumers retrenched and lowered their spending. Thus, risk factors that may provide opportunities as well as potential losses, such as interest rates and foreign exchange rates, are embedded in the risk map. We can display the opportunities along with possible losses (as we show in "5: The Evolution of Risk Management - Enterprise Risk Management" in Figure 5.1.1).
In our example, we can say that because of its global customer base, Notable Notions has a tremendous amount of exposure to exchange rate risk, which may provide opportunities as well as risks. In such cases, there is no frequency of loss, and the opportunity risk is not part of the risk map shown in Figure \(1\). If Notable Notions were a highly leveraged company (meaning that the firm had taken many loans to finance its operations), the company would be at risk of being unable to operate and pay salaries if credit lines dried up. However, since it is a conservative company with cash reserves for its operations, Notable Notions’ risk map denotes the high number (frequency) of transactions in addition to the high dollar exposure (severity) associated with adverse foreign exchange rate movements. The credit risk for loans did not even make the map, since there is no frequency of loss in the company’s database. Methods used to control the risks and lower the frequency and severity of financial risks are discussed in "5: The Evolution of Risk Management - Enterprise Risk Management".
One example of business risks is reputation risk, which is plotted in the high-frequency/high-severity quadrant. Only recently have we identified reputation risk in map models. Not only do manufacturers such as Coca-Cola rely on their high brand-name identification, but so do smaller companies (like Notable Notions) whose customers rely on stellar business practices. One hiccup in the distribution chain causing nondelivery or inconsistent quality in an order can damage a company’s reputation and lead to canceled contracts. The downside of reputation damage is potentially significant and has a long recovery period. Companies and their risk managers currently rate loss of good reputation as one of the greatest corporate threats to the success or failure of their organization. (“Risk Whistle: Reputation Risk,” Swiss Re publication, www.swissre.com.) A case in point is the impact on Martha Stewart’s reputation after she was linked to an insider trading scandal involving the biotech firm ImClone. (Geeta Anand, Jerry Arkon, and Chris Adams, “ImClone’s Ex-CEO Arrested, Charged with Insider Trading,” Wall Street Journal, June 13, 2002, 1.) The day after the story was reported in the Wall Street Journal, the stock price of Martha Stewart Living Omnimedia declined almost 20 percent, costing Stewart herself nearly \$200 million.
Operational risks are those that relate to the ongoing day-to-day business activities of the organization. Here we reflect IT system failure exposure (which we will discuss in detail later in this chapter). On the figure above, this risk appears in the lower-left quadrant, low severity/low frequency. Hard data shows low downtime related to IT system failure. (It is likely that this risk was originally more severe and has been reduced by backup systems and disaster recovery plans.) In the case of a nontechnology firm such as Notable Notions, electronic risk exposure and intellectual property risk are also plotted in the low-frequency/low-severity quadrant.
A pure risk (like workers’ compensation) falls in the lower-right quadrant for Notable Notions. The organization experiences a high-frequency but low-severity outcome for workers’ compensation claims. Good internal record-keeping helps to track the experience data for Notable Notions and allows for an appropriate mitigation strategy.
The location of each of the remaining data points on Figure \(1\) reflects an additional risk exposure for Notable Notions.
Once a company or CRO has reviewed all these risks together, Notable Notions can create a cohesive and consistent holistic risk management strategy. Risk managers can also review a variety of effects that may not be apparent when exposures are isolated. Small problems in one department may cause big ones in another, and small risks that have high frequency in each department can become exponentially more severe in the aggregate. We will explore property and liability risks more in "9: Fundamental Doctrines Affecting Insurance Contracts" and "10: Structure and Analysis of Insurance Contracts".
Key Takeaways
• Communication is key in the risk management process, and various media are in use, such as policy statements and manuals.
• The identification process includes profiling and risk mapping.
Discussion Questions
1. Design a brief risk management policy statement for a small child-care company. Remember to include the most important objectives.
2. For the same child-care company, create a risk identification list and plot the risks on a risk map.
3. Identify the nature of each risk on the risk map in terms of hazard risk, financial risk, business risk, and operational risks.
4. For the child-care company, do you see any speculative or opportunity risks? Explain.
Learning Objectives
• In this section we focus on an example of how to compute the frequency and severity of losses (learned in "2: Risk Measurement and Metrics").
• We also forecast these measures and conduct a cost-benefit analysis for loss control.
Dana, the risk manager at Energy Fitness Centers, identified the risks of workers’ injury on the job and collected the statistics of claims and losses since 2003. Dana computed the frequency and severity using her own data in order to use the data in her risk map for one risk only. When we focus on one risk only, we work with the risk management matrix. This matrix provides alternative financial action to undertake for each frequency/severity combination (described later in this chapter). Dana’s computations of the frequency and severity appear in Table 4.1. Forecasting, on the other hand, appears in Table 4.2 and Figure \(1\). Forecasting involves projecting the frequency and severity of losses into the future based on current data and statistical assumptions.
Table 4.1 Workers’ Compensation Loss History of Energy Fitness Centers—Frequency and Severity

| Year | Number of WC Claims | WC Losses | Average Loss per Claim |
|---|---|---|---|
| 2003 | 2,300 | \$3,124,560 | \$1,359 |
| 2004 | 1,900 | \$1,950,000 | \$1,026 |
| 2005 | 2,100 | \$2,525,000 | \$1,202 |
| 2006 | 1,900 | \$2,345,623 | \$1,235 |
| 2007 | 2,200 | \$2,560,200 | \$1,164 |
| 2008 | 1,700 | \$1,907,604 | \$1,122 |
| Total | 12,100 | \$14,412,987 | |
| Mean | 2,017 (frequency for the whole period) | \$2,402,165 (severity for the whole period) | \$1,191 |

(See "2: Risk Measurement and Metrics" for the computation.)
Table 4.2 Workers’ Compensation Frequency and Severity of Energy Fitness Centers—Actual and Trended

| Year | WC Frequency | Linear Trend Frequency | WC Average Claim | Linear Trend Severity |
|---|---|---|---|---|
| 2003 | 2,300 | 2,181 | \$1,359 | \$1,225 |
| 2004 | 1,900 | 2,115 | \$1,026 | \$1,226 |
| 2005 | 2,100 | 2,050 | \$1,202 | \$1,227 |
| 2006 | 1,900 | 1,984 | \$1,235 | \$1,228 |
| 2007 | 2,200 | 1,918 | \$1,422 | \$1,229 |
| 2008 | 1,700 | 1,852 | \$1,122 | \$1,230 |
| 2009 | Estimated | 1,786.67 | Estimated | \$1,231.53 |
Dana installed various loss-control tools during the period under study. The risk reduction investments appear to be paying off. Her analysis of the results indicated that the annual frequency trend has decreased (see the negative slope for the frequency in Figure 4.3.1). The company’s success in decreasing loss severity doesn’t appear in such dramatic terms. Nevertheless, Dana feels encouraged that her efforts helped level off the severity. The slope of the annual severity (losses per claim) trend line is 1.09 per year and hence almost level, as shown in Figure 4.3.1. (See the "4.7: Appendix - Forecasting" to this chapter for an explanation of the forecasting computation.)
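The trend columns of Table 4.2 come from an ordinary least-squares line fitted to the six annual observations and extended one year ahead. A sketch of that computation follows; the fitting routine below is our own shorthand for the approach the appendix describes.

```python
# Least-squares trend fit, with x = 1..n for years 2003-2008 and a forecast
# at x = 7 (the 2009 estimate in Table 4.2).
def linear_trend(ys):
    """Return intercept a and slope b for y = a + b*x fit by least squares."""
    n = len(ys)
    xs = range(1, n + 1)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
    return y_bar - b * x_bar, b

frequency = [2300, 1900, 2100, 1900, 2200, 1700]  # WC claim counts, 2003-2008
severity = [1359, 1026, 1202, 1235, 1422, 1122]   # average claim, Table 4.2

a_f, b_f = linear_trend(frequency)  # b_f is negative: frequency is falling
a_s, b_s = linear_trend(severity)   # b_s is slightly positive: nearly level
forecast_2009 = a_f + b_f * 7       # about 1,786.67, as in Table 4.2
```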
Capital Budgeting: Cost-Benefit Analysis for Loss-Control Efforts
With the ammunition of reducing the frequency of losses, Dana is planning to continue her loss-control efforts. Her next step is to convince management to invest in a new innovation in security belts for the employees. These belts have proven records of reducing the severity of WC claims in other facilities. In this example, we show her cost-benefit analysis—analysis that examines the cost of the belts and compares the expense to the expected reduction in losses or savings in premiums for insurance. If the benefit of cost reduction exceeds the expense for the belts, Dana will be able to prove her point. In terms of the actual analysis, she has to bring the future reduction in losses to today’s value of the dollar by looking at the present value of the reduction in premiums. If the present value of premium savings is greater than the cost of the belts, we will have a positive net present value (NPV) and management will have a clear incentive to approve this loss-control expense.
With the help of her broker, Dana plans to show her managers that, by lowering the frequency and severity of losses, the workers’ compensation rates for insurance can be lowered by as much as 20–25 percent. This 20–25 percent is actually a true savings or benefit for the cost-benefit analysis. Dana undertook to conduct cash flow analysis for purchasing the new innovative safety belts project. A cash flow analysis looks at the amount of cash that will be saved and brings it into today’s present value. Table 4.3 provides the decrease in premium anticipated when the belts are used as a loss-control technique.
The cash outlay required to purchase the innovative belts is \$50,000 today. The savings in premiums for the next few years are expected to be \$20,000 in the first year, \$25,000 in the second year, and \$30,000 in the third year. Dana would like to show her managers this premium savings over a three-year time horizon. Table 4.3 shows the cash flow analysis that Dana used, at a 6 percent rate of return. At 6 percent, the NPV is \(\$66,310 − \$50,000 = \$16,310\). You are invited to calculate the NPV at different interest rates. Would the NPV be greater at 10 percent? (The student will find that it is lower, because a higher discount rate reduces the present value of the future premium savings.)
Table 4.3 Net Present Value (NPV) of Workers’ Compensation Premiums Savings for Energy Fitness Centers When Purchasing Innovative Safety Belts for \$50,000
End of Year Savings on Premiums Present Value of \$1 (at 6 percent) Present Value of Premium Savings
1 \$20,000 0.943 \$18,860
2 \$25,000 0.890 \$22,250
3 \$30,000 0.840 \$25,200
Total present value of all premium savings \$66,310
Net present value = \(\$66,310 − \$50,000= \$16,310> 0\)
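The discounting in Table 4.3 can be checked with a few lines of code. The sketch below (a minimal illustration, not part of the original text) discounts each year's premium savings and subtracts the \$50,000 outlay; the 6 percent result matches the table (within rounding, since the table uses three-decimal present-value factors), and rerunning it at 10 percent confirms that a higher discount rate lowers the NPV.

```python
# Net present value of the premium savings from Table 4.3.
# The $50,000 outlay occurs today; savings arrive at the end of years 1-3.
def npv(rate, outlay, savings):
    """Discount each year's savings at `rate` and subtract the outlay."""
    pv = sum(s / (1 + rate) ** t for t, s in enumerate(savings, start=1))
    return pv - outlay

savings = [20_000, 25_000, 30_000]

npv_6 = npv(0.06, 50_000, savings)   # ~ $16,310, as in Table 4.3
npv_10 = npv(0.10, 50_000, savings)  # lower, as the text argues

print(round(npv_6, 2), round(npv_10, 2))
```

Running the same function at several rates is an easy way to answer the "would the NPV be greater at 10 percent?" question posed above.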
Risk Management Information System
Risk managers rely upon data and analysis techniques to assess and evaluate and thus to make informed decisions. One of the risk managers’ primary tasks—as you see from the activities of Dana at Energy Fitness Centers—is to develop the appropriate data systems to allow them to quantify the organization’s loss history, including
• types of losses,
• amounts,
• circumstances surrounding them,
• dates, and
• other relevant facts.
We call such computerized quantifications a risk management information system, or RMIS. An RMIS gives risk managers the ability to slice and dice the data in any way that may help them assess and evaluate the risks their companies face. The loss history helps to establish probability distributions and trend analysis. When risk managers use good data and analysis to make risk reduction decisions, they must always include consideration of financial concepts (such as the time value of money), as shown above.
The key to good decision making lies in the risk managers' ability to analyze the large amounts of data collected. A firm's data warehousing (a system of housing large sets of data for strategic analysis and operations) of risk data allows decision makers to evaluate multiple dimensions of risk as well as overall risk. Reporting techniques can be virtually unlimited in perspectives. For example, risk managers can sort data by location, by region, by division, and so forth. Because risk solutions are only as good as their underlying assumptions, an RMIS also supports modeling of data to assist in the risk exposure measurement process. Self-administered retained coverages have experienced explosive growth across all industries. The boom has meant that systems now include customized Web-based reporting capabilities. The technological advances that go along with RMIS allow all decision makers to maximize a firm's risk/reward tradeoff through data analysis.
KEY TAKEAWAY
• In this section you learned how to trend the frequency and severity measures for use in the risk map. When these data are available, the risk manager is able to conduct cost-benefit analysis comparing the benefit of adopting a loss-control measure to its cost.
Discussion Questions
1. Following is the loss data for slip-and-fall shoppers’ medical claims of the grocery store chain Derelex for the years 2004–2008.
1. Calculate the severity and frequency of the losses.
2. Forecast the severity and frequency for next year using the appendix to this chapter.
3. If a new mat can lower the severity of slips and falls by 50 percent three years from now, what will be the projected severity in three years if the mats are used?
4. What should the mats cost today to break even? Use cost-benefit analysis at 6 percent.
Year Number of Slip and Fall Claims Slip-and-Fall Losses
2004 1,100 \$1,650,000
2005 900 \$4,000,000
2006 700 \$3,000,000
2007 1,000 \$12,300,000
2008 1,400 \$10,500,000
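As a starting point for part 1 of the question above, severity is the average loss per claim (total losses divided by the number of claims), and frequency here is simply the annual claim count. The hedged sketch below computes the severity series for the Derelex data; the variable names are illustrative only.

```python
# Annual severity (loss per claim) for the Derelex slip-and-fall data.
claims = {2004: 1_100, 2005: 900, 2006: 700, 2007: 1_000, 2008: 1_400}
losses = {2004: 1_650_000, 2005: 4_000_000, 2006: 3_000_000,
          2007: 12_300_000, 2008: 10_500_000}

# Severity = total losses / number of claims, year by year.
severity = {year: losses[year] / claims[year] for year in claims}
for year, sev in sorted(severity.items()):
    print(year, round(sev, 2))
```

The resulting series (for example, \$1,500 per claim in 2004) is the input for the trend-line forecasting described in the appendix to this chapter.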
Learning Objectives
• In this section you will learn about the alternatives available for managing risks based on the frequency and severity of the risks.
• We also address the risk manager’s alternatives—transferring the risk, avoiding it, and managing it internally with loss controls.
Once they are evaluated and forecasted, loss frequency and loss severity are used as the vertical and horizontal axes in the risk management matrix for one specific risk exposure. Note that such a matrix differs from the risk map described below (which includes all important risks a firm is exposed to). The risk management matrix includes, on one axis, categories of relative frequency (high and low) and, on the other, categories of relative severity (high and low). The simplest of these matrices is one with just four cells, as shown in the pure risk solutions in Table 4.4. While this matrix takes into account only two variables, in reality other variables—the financial condition of the firm, the size of the firm, and external market conditions, to name a few—are very important in the decision.Etti G. Baranoff, “Determinants in Risk-Financing Choices: The Case of Workers’ Compensation for Public School Districts,” Journal of Risk and Insurance, June 2000.
Table 4.4 The Traditional Risk Management Matrix (for One Risk)
Pure Risk Solutions
Low Frequency of Losses High Frequency of Losses
Low Severity of Losses Retention—self-insurance Retention with loss control—risk reduction
High Severity of Losses Transfer—insurance Avoidance
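The four cells of Table 4.4 can be encoded as a simple lookup keyed on the frequency and severity categories. This sketch is only an illustration of the matrix logic; as noted above, real decisions also weigh other variables such as firm size, financial condition, and market conditions.

```python
# The traditional risk management matrix (Table 4.4) as a lookup table.
# Keys are (frequency, severity) category pairs; values are the suggested tools.
MATRIX = {
    ("low", "low"):   "retention (self-insurance)",
    ("high", "low"):  "retention with loss control (risk reduction)",
    ("low", "high"):  "transfer (insurance)",
    ("high", "high"): "avoidance",
}

def suggest(frequency, severity):
    """Return the matrix cell for one risk's frequency/severity categories."""
    return MATRIX[(frequency, severity)]

print(suggest("low", "high"))  # transfer (insurance)
```

For example, a low-frequency, high-severity exposure such as earthquake damage maps to the transfer (insurance) cell discussed below.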
The Risk Management Decision—Return to the Example
Dana, the risk manager of Energy Fitness Centers, also uses a risk management matrix to decide whether or not to recommend any additional loss-control devices. Using the data in Table 4.3 and Figure 4.4.1, Dana compared the forecasted frequency and severity of the workers' compensation results to the data of her peer group, which she obtained from the Risk and Insurance Management Society (RIMS) and her broker. In comparison, her loss frequency is higher than the median for similarly sized fitness centers. Yet, to her surprise, EFC's loss severity is lower than the median. Based on the risk management matrix, she should suggest to management that they retain some risks and use loss control, as she already had been doing. Her cost-benefit analysis from above helps reinforce her decision. Therefore, with both the cost-benefit analysis and the method of managing the risk suggested by the matrix, she has enough ammunition to convince management to buy the additional belts as a method to reduce losses.
To understand the risk management matrix alternatives, we now concentrate on each of the cells in the matrix.
Risk Transfer—Insurance
The lower-left corner of the risk management matrix represents situations involving low frequency and high severity. Here we find transfer of risk—that is, displacement of risk to a third, unrelated party—to an insurance company. We discuss insurance—both its nature and its operations—at length in "6: The Insurance Solution and Institutions" and "7: Insurance Operations". In essence, risk transference involves paying someone else to bear some or all of the risk of certain financial losses that cannot be avoided, assumed, or reduced to acceptable levels. Some risks may be transferred through the formation of a corporation with limited liability for its stockholders. Others may be transferred by contractual arrangements, including insurance.
Corporations—A Firm
The owner or owners of a firm face serious potential losses. They are responsible to pay debts and other financial obligations when such liabilities exceed the firm’s assets. If the firm is organized as a sole proprietorship, the proprietor faces this risk. His or her personal assets are not separable from those of the firm because the firm is not a separate legal entity. The proprietor has unlimited liability for the firm’s obligations. General partners in a partnership occupy a similar situation, each partner being liable without limit for the debts of the firm.
Because a corporation is a separate legal entity, investors who wish to limit possible losses connected with a particular venture may create a corporation and transfer such risks to it. This does not prevent losses from occurring, but the burden is transferred to the corporation. The owners suffer indirectly, of course, but their loss is limited to their investment in the corporation. A huge liability claim for damages may take all the assets of the corporation, but the stockholders’ personal assets beyond their stock in this particular corporation are not exposed to loss. Such a method of risk transfer sometimes is used to compartmentalize the risks of a large venture by incorporating separate firms to handle various segments of the total operation. In this way, a large firm may transfer parts of its risks to separate smaller subsidiaries, thus placing limits on possible losses to the parent company owners. Courts, however, may not approve of this method of transferring the liability associated with dangerous business activities. For example, a large firm may be held legally liable for damages caused by a small subsidiary formed to manufacture a substance that proves dangerous to employees and/or the environment.
Contractual Arrangements
Some risks are transferred by a guarantee included in the contract of sale. A noteworthy example is the warranty provided a car buyer. When automobiles were first manufactured, the purchaser bore the burden of all defects that developed during use. Somewhat later, automobile manufacturers agreed to replace defective parts at no cost, but the buyer was required to pay for any labor involved. Currently, manufacturers typically not only replace defective parts but also pay for labor, within certain constraints. The owner has, in effect, transferred a large part of the risk of purchasing a new automobile back to the manufacturer. The buyer, of course, is still subject to the inconvenience of having repairs made, but he or she does not have to pay for them.
Other types of contractual arrangements that transfer risk include leases and rental agreements, hold-harmless clauses“A Hold Harmless Agreement is usually used where the Promisor’s actions could lead to a claim or liability to the Promisee. For example, the buyer of land wants to inspect the property prior to close of escrow, and needs to conduct tests and studies on the property. In this case, the buyer would promise to indemnify the current property owner from any claims resulting from the buyer’s inspection (i.e., injury to a third party because the buyer is drilling a hole; to pay for a mechanic’s lien because the buyer hired a termite inspector, etc.). Another example is where a property owner allows a caterer to use its property to cater an event. In this example, the Catering Company (the “Promisor”) agrees to indemnify the property owner for any claims arising from the Catering Company’s use of the property.” From Legaldocs, a division of U.S.A. Law Publications, Inc., www.legaldocs.com/docs/holdha_1.mv. and surety bonds.A surety bond is a three-party instrument between a surety, the contractor, and the project owner. The agreement binds the contractor to comply with the terms and conditions of a contract. If the contractor is unable to successfully perform the contract, the surety assumes the contractor’s responsibilities and ensures that the project is completed. Perhaps the most important arrangement for the transfer of risk important to our study is insurance.
Insurance is a common form of planned risk transfer as a financing technique for individuals and most organizations. The insurance industry has grown tremendously in industrialized countries, developing sophisticated products, employing millions of people, and investing billions of dollars. Because of its core importance in risk management, insurance is the centerpiece in most risk management activities.
Risk Assumption
The upper-left corner of the matrix in Table 4.4, representing both low frequency and low severity, shows retention of risk. When an organization uses a highly formalized method of retention of a risk, it is said the organization has self-insured the risk. The company bears the risk and is willing to withstand the financial losses from claims, if any. It is important to note that the extent to which risk retention is feasible depends upon the accuracy of loss predictions and the arrangements made for loss payment. Retention is especially attractive to large organizations. Many large corporations use captives, which are a form of self-insurance. When a business creates a subsidiary to handle the risk exposures, the business creates a captive. As noted above, broadly defined, a captive insurance company is one that provides risk management protection to its parent company and other affiliated organizations. The captive is controlled by its parent company. We will provide a more detailed explanation of captives in "6: The Insurance Solution and Institutions". If the parent can use funds more productively (that is, can earn a higher after-tax return on investment), the formation of a captive may be wise. The risk manager must assess the importance of the insurer’s claims adjusting and other services (including underwriting) when evaluating whether to create or rent a captive.
Risk managers of smaller businesses can become part of a risk retention group.President Reagan signed into law the Liability Risk Retention Act in October 1986 (an amendment to the Product Liability Risk Retention Act of 1981). The act permits formation of retention groups (a special form of captive) with fewer restrictions than existed before. The retention groups are similar to association captives. The act permits formation of such groups in the U.S. under more favorable conditions than have existed generally for association captives. The act may be particularly helpful to small businesses that could not feasibly self-insure on their own but can do so within a designated group. How extensively risk retention groups will be used is yet to be seen. As of the writing of this text there are efforts to amend the act. A risk retention group provides risk management and retention to a few players in the same industry who are too small to act on their own. In this way, risk retention groups are similar to group self-insurance. We discuss them further in "6: The Insurance Solution and Institutions".
Risk Reduction
Moving to the upper-right corner of the risk management matrix in Table 4.4, the quadrant characterized by high frequency and low severity, we find retention with loss control. If frequency is significant, risk managers may find efforts to prevent losses useful. If losses are of low value, they may be easily paid out of the organization's or individual's own funds. Risk retention usually finances highly frequent, predictable losses more cost effectively. An example might be losses due to wear and tear on equipment. Such losses are predictable and of a manageable, low annual value. We described loss control in the case of the fitness center above.
Loss prevention efforts seek to reduce the probability of a loss occurring. Managers use loss reduction efforts to lessen loss severity. If you want to ski in spite of the hazards involved, you may take instruction to improve your skills and reduce the likelihood that you fall down a hill or crash into a tree. At the same time, you may engage in a physical fitness program to toughen your body to withstand spills without serious injury. Using both loss prevention and reduction techniques, you attempt to lower both the probability and severity of loss.
Loss prevention seeks to reduce losses to the minimum compatible with a reasonable level of human activity and expense. At any given time, economic constraints place limits on what may be done, although what is considered too costly at one time may be readily accepted at a later date. Thus, during one era, little effort may have been made to prevent injury to employees, because employees were regarded as expendable. The general notion today, however, is that such injuries are prevented because they have become too expensive. Change was made to adapt to the prevailing ideals concerning the value of human life and the social responsibility of business.
Risk Avoidance
In the lower-right corner of the matrix in Table 4.4, at the intersection of high frequency and high severity, we find avoidance. Managers seek to avoid any situation falling in this category if possible. An example might be a firm that is considering construction of a building in Key West, Florida. Flooding and hurricane risk would be high, with significant damage possibilities.
Of course, we cannot always avoid risks. When Texas school districts were faced with high severity and frequency of losses in workers’ compensation, schools could not close their doors to avoid the problem. Instead, the school districts opted to self-insure, that is, retain the risk up to a certain loss limit.Etti G. Baranoff, “Determinants in Risk-Financing Choices: The Case of Workers’ Compensation for Public School Districts,” Journal of Risk and Insurance, June 2000.
Not all avoidance necessarily results in “no loss.” While seeking to avoid one loss potential, many efforts may create another. Some people choose to travel by car instead of plane because of their fear of flying. While they have successfully avoided the possibility of being a passenger in an airplane accident, they have increased their probability of being in an automobile accident. Per mile traveled, automobile deaths are far more frequent than aircraft fatalities. By choosing cars over planes, these people actually raise their probability of injury.
Key Takeaways
• One of the most important tools in risk management is the risk management matrix, which uses the projected frequency and severity of losses for one risk only.
• Within a framework of similar companies, the risk manager can tell when it is most appropriate to use risk transfer, risk reduction, risk retention, or risk avoidance.
Discussion Questions
1. Using the basic risk management matrix, explain the following:
1. When would you buy insurance?
2. When would you avoid the risk?
3. When would you retain the risk?
4. When would you use loss control?
2. Give examples for the following risk exposures:
1. High-frequency and high-severity loss exposures
2. Low-frequency and high-severity loss exposures
3. Low-frequency and low-severity loss exposures
4. High-frequency and low-severity loss exposures
Learning Objectives
• In this section we return to the risk map and compare the map created for risk identification with one reflecting the risk management tools the business already uses.
• If the solution the firm uses does not fit within the solutions suggested by the risk management matrix, the business has to reevaluate its methods of managing the risks.
At this point, the risk manager of Notable Notions can see the potential impact of its risks and its best risk management strategies. The next step in the risk mapping technique is to create separate graphs that show how the firm is currently handling each risk. Each of the risks is now graphed according to whether the risk is uninsured, retained, partially insured or hedged (a financial technique to lower the risk by using financial instruments, as discussed in "6: The Insurance Solution and Institutions"), or insured. Figure \(1\) is the new risk map reflecting the current risk management handling.
When the two maps, the one in Figure 4.3.1 and the one in Figure \(1\), are overlaid, it can be clearly seen that some of the risk strategies suggested in Table 4.4 differ from current risk handling as shown in Figure \(1\). For example, a broker convinced the risk manager to purchase an expensive policy for e-risk. The risk map shows that for Notable Notions, e-risk is low severity and low frequency and thus should remain uninsured. By overlaying the two risk maps, the risk manager can see where current risk handling may not be appropriate.
The Effect of Risk Handling Methods
We can create another map to show the maximum severity that remains after a particular risk management strategy, such as insurance, is applied. This occurs when insurance companies offer only low limits of coverage. For example, if the potential severity of Notable Notions' earthquake risk is \$140 million but coverage is offered only up to \$100 million, the retained risk falls to a level of \$40 million.
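The leftover-severity arithmetic is simple enough to express in one function. The sketch below is an illustration only (the function name is ours, not the text's): the firm retains whatever part of the potential severity exceeds the policy limit.

```python
# Severity retained by the firm after an insurance policy limit caps recovery.
def retained_severity(potential_severity, policy_limit):
    """Loss amount left with the firm once insurance pays up to its limit."""
    return max(0, potential_severity - policy_limit)

# Notable Notions' earthquake example: $140M potential, $100M limit.
print(retained_severity(140_000_000, 100_000_000))  # 40,000,000
```

If the limit exceeds the potential severity, nothing is retained, which is why fully insured risks drop off the leftover-risk map entirely.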
Using holistic risk mapping methodology provides a clear, easy-to-read presentation of a firm's overall risk spectrum, or the level of risk that remains after all risk mitigation strategies are put in place. It allows a firm to discern among those exposures that, after all mitigation efforts, are still
1. unbearable,
2. difficult to bear, and
3. relatively unimportant.
In summary, risk mapping has five main objectives:
1. To aid in the identification of risks and their interrelations
2. To provide a mechanism to see clearly what risk management strategy would be the best to undertake
3. To compare and evaluate the firm’s current risk handling and to aid in selecting appropriate strategies
4. To show the leftover risks after all risk mitigation strategies are put in place
5. To easily communicate risk management strategy to both management and employees
Ongoing Monitoring
The process of risk management is continuous, requiring constant monitoring of the program to be certain that (1) the decisions implemented were correct and have been implemented appropriately and that (2) the underlying problems have not changed so much as to require revised plans for managing them. When either of these conditions exists, the process returns to the step of identifying the risks and risk management tools and the cycle repeats. In this way, risk management can be considered a systems process, one in never-ending motion.
Key Takeaways
• In this section we return to the risk map and compare the risk map created for the identification purpose to that created for the risk management tools already used by the business. This is part of the decision making using the highly regarded risk management matrix tool.
• If the projected frequency and severity indicate different risk management solutions, the overlay of the maps can immediately clarify any discrepancies. Corrective actions can be taken and the ongoing monitoring continues.
Discussion Questions
1. Use the risk map you designed for the small child-care company above. Create a risk management matrix for the same risks identified in the risk map of question 1.
2. Overlay the two risk maps to see if the current risk management tools fit in with what is required under the risk management matrix.
3. Propose corrective measures, if any.
4. What would be the suggestions for ongoing risk management for the child-care company? | textbooks/biz/Finance/Risk_Management_for_Enterprises_and_Individuals/04%3A_Evolving_Risk_Management_-_Fundamental_Tools/4.06%3A_Comparisons_to_Current_Risk-Handling_Methods.txt |
Forecasting of Frequency and Severity
When insurers or risk managers use frequency and severity to project the future, they use trending techniques that apply to the loss distributions known to them.Forecasting is part of the Associate Risk Manager designation under the Risk Assessment course using the book: Baranoff Etti, Scott Harrington, and Greg Niehaus, Risk Assessment (Malvern, PA: American Institute for Chartered Property Casualty Underwriters/Insurance Institute of America, 2005). Regressions are the most commonly used tools to predict future losses and claims based on the past. In this textbook, we introduce linear regression using the data featured in "2: Risk Measurement and Metrics". The scientific notations for the regressions are discussed later in this appendix.
Table 4.5 Linear Regression Trend of Claims and Losses of A
Year Actual Fire Claims Linear Trend For Claims Actual Fire Losses Linear Trend For Losses
1 11 8.80 \$16,500 \$10,900.00
2 9 9.50 \$40,000 \$36,900.00
3 7 10.20 \$30,000 \$62,900.00
4 10 10.90 \$123,000 \$88,900.00
5 14 11.60 \$105,000 \$114,900.00
Using Linear Regression
Linear regression attempts to explain the relationship among observed values by applying a straight line fit to the data. The linear regression model postulates that
\[Y= b+mX+e\]
where the “residual” e is a random variable with mean zero. The coefficients b and m are determined by the condition that the sum of the squared residuals is as small as possible. For our purposes, we do not discuss the error term. We use the frequency and severity data of A for 5 years. Here, we provide the scientific notation that is behind Figure \(1\) and Figure \(2\).
In order to determine the intercept of the line on the y-axis and the slope, we use m (slope) and b (y-intercept) in the equation.
Given a set of data with n data points, the slope (m) and the y-intercept (b) are determined using:
\[m = \frac{n\Sigma(xy) - \Sigma x \, \Sigma y}{n\Sigma(x^2) - (\Sigma x)^2}\]
\[b = \frac{\Sigma y - m\Sigma x}{n}\]
The graph is provided by Chris D. Odom, with permission.
Most commonly, practitioners use various software applications to obtain the trends. The student is invited to experiment with Microsoft Excel spreadsheets. Table 4.6 provides the formulas and calculations for the intercept and slope of the claims to construct the trend line.
Table 4.6 Method of Calculating the Trend Line for the Claims
(1) (2) (3) = (1) × (2) (4) = (1)²
Year (X) Claims (Y) XY X²
1 11 11.00 1
2 9 18.00 4
3 7 21.00 9
4 10 40.00 16
5 14 70.00 25
Total (n = 5) 15 51 160 55
Slope: \(m = \frac{n\Sigma(xy)-\Sigma x\,\Sigma y}{n\Sigma(x^2)-(\Sigma x)^2} = \frac{(5\times 160)-(15\times 51)}{(5\times 55)-(15\times 15)} = 0.7\)
Intercept: \(b = \frac{\Sigma y - m\Sigma x}{n} = \frac{51-(0.7\times 15)}{5} = 8.1\)
Future Forecasts using the Slopes and Intercepts for A:
• Future claims = \(Intercept + Slope × (X)\)
• In year 6, the forecast of the number of claims is projected to be: \(8.1 + (0.7 × 6) = 12.3\) claims
• Future losses = \(Intercept + Slope × (X)\)
• In year 6, the forecast of the losses in dollars is projected to be: \(−15,100 + (26,000 × 6) = \$140,900\) in losses
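The slope and intercept computed in Table 4.6, and the year-6 forecasts above, can be reproduced directly from the least-squares formulas. The sketch below applies them to the claim and loss data of Table 4.5; it is a minimal illustration of the appendix arithmetic, not a substitute for a statistics package.

```python
# Least-squares fit: m = (nΣxy − ΣxΣy) / (nΣx² − (Σx)²), b = (Σy − mΣx) / n,
# applied to the five years of claim and loss data in Table 4.5.
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

years = [1, 2, 3, 4, 5]
claims = [11, 9, 7, 10, 14]
losses = [16_500, 40_000, 30_000, 123_000, 105_000]

m_c, b_c = fit_line(years, claims)   # ≈ 0.7 and 8.1, as in Table 4.6
m_l, b_l = fit_line(years, losses)   # ≈ 26,000 and −15,100

print(b_c + m_c * 6)  # forecast claims in year 6, ≈ 12.3
print(b_l + m_l * 6)  # forecast losses in year 6, ≈ $140,900
```

Spreadsheet functions such as SLOPE and INTERCEPT in Microsoft Excel implement the same formulas, which is why practitioners usually rely on software for these trends.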
The in-depth statistical explanation of the linear regression model is beyond the scope of this course. Interested students are invited to explore statistical models in elementary statistics textbooks. This first exposure to the world of forecasting, however, is critical to a student seeking further study in the fields of insurance and risk management.
1. What are the adverse consequences of risk? Give examples of each.
2. What is a common process of risk management for property exposure of a firm?
3. How was the traditional process of risk management expanded?
4. The liability of those who own a corporation is limited to their investment, while proprietors and general partners have unlimited liability for the obligations of their business. Explain what relevance this has for risk management.
5. What are the three objectives of risk mapping? Explain one way a chief risk officer would use a risk map model.
6. Define the terms loss prevention and loss reduction. Provide examples of each.
7. What are the types of risks that are included in an enterprise risk analysis?
8. What has helped to expand risk management into enterprise risk management?
9. Following is the loss data for slip-and-fall shoppers’ medical claims of the fashion designer LOLA for the years 2004–2008.
1. Calculate the severity and frequency of the losses.
2. Forecast the severity and frequency for next year using the appendix to this chapter.
3. What would be the risk management solution if Lola’s results are above the median of severity and frequency for the industry of the geographical location?
Year Number of Slip-and-Fall Claims Slip-and-Fall Losses
2004 700 \$2,650,000
2005 1,000 \$6,000,000
2006 700 \$7,000,000
2007 900 \$12,300,000
2008 1,400 \$10,500,000
10. Brooks Trucking, which provides trucking services over a twelve-state area from its home base in Cincinnati, has never had a risk management program. Shawana Lee, Brooks Trucking’s financial vice-president, has a philosophy that “lightning can’t strike twice in the same place.” Because of this, she does not believe in trying to practice loss prevention or loss reduction.
1. If you were appointed its risk manager, how would you identify the pure-risk exposures facing Brooks?
2. Do you agree or disagree with Shawana? Why?
11. Devin Davis is an independent oil driller in Oklahoma. He feels that the most important risk he has is small property damages to his drilling rig, because he constantly has small, minor damage to the rig while it is being operated or taken to new locations.
1. Do you agree or disagree with Devin?
2. Which is more important, frequency of loss or severity of loss? Explain.
12. Rinaldo’s is a high-end jeweler with one retail location on Fifth Avenue in New York City. The majority of sales are sophisticated pieces that sell for \$5,000 or more and are Rinaldo’s own artistic creations using precious metals and stones. The raw materials are purchased primarily in Africa (gold, platinum, and diamonds) and South America (silver). Owing to a large amount of international marketing efforts, Internet and catalog sales represent over 45 percent of the total \$300 million in annual sales revenue. To accommodate his customers, Rinaldo will accept both the U.S. dollar and other foreign currencies as a form of payment. Acting as an enterprise risk manager consultant, create a risk map model to identify Rinaldo’s risks across the four basic categories of business/strategic risk, operational risk, financial risk, and hazard risk. | textbooks/biz/Finance/Risk_Management_for_Enterprises_and_Individuals/04%3A_Evolving_Risk_Management_-_Fundamental_Tools/4.08%3A_Evolving_Risk_Management-_Fundamental_Tools%28Exercises%29.txt |
In the first three chapters, we provided information to help you understand and measure risks, as well as to evaluate risk attitudes and risk behavior. "4: Evolving Risk Management - Fundamental Tools" concentrated on risk management and methods for identifying, measuring, and managing risks. In this chapter we elaborate further on the management of risk, placing greater emphasis on the opportunities that risk represents. We emphasize prudent opportunities rather than actions motivated by greed. When trying to identify the main causes of the 2008–2009 credit crisis, the lack of risk management and prudent behavior emerges as a key factor. However, even companies that were not part of the debacle are paying the price, as the whole economy suffers a lack of credit and consumers' entrenchment. Consumers are less inclined to buy something that they don't consider a necessity. As such, even firms with prudent and well-organized risk management are currently seeing huge devaluations of their stocks.See explanation at www.wikipedia.org; see also "Executive Suite: Textron CEO Zeroes in on Six Sigma," USA Today, updated January 28, 2008.
In many corporations, the head of the ERM effort is the chief risk officer, or CRO. In other cases, the whole executive team handles risk management decisions with specific coordinators. Many large corporations adopted Six Sigma, a business strategy for improving processes and efficiency, and embedded enterprise risk management within that model of operation. The ERM function at Textron follows the latter model. Textron’s stock fell from \$72 in January 2008 to \$15 in December 2008. Let’s recall that ERM includes every aspect of risk within the corporation, including labor negotiation risks, innovation risks, lack-of-foresight risks, ignoring-market-conditions risks, and self-interest and greed risks, among others. Take the case of the three U.S. auto manufacturers—GM, Chrysler, and Ford. Their holistic risks include far more than insuring buildings and automobiles or covering workers’ compensation. They must look at the complete picture of how to ensure survival in a competitive and technologically innovative world. The following is a brief examination of the risk factors that contributed to the near-bankrupt condition of the U.S. automakers: Paul Ingrassia, “How Detroit Drove into a Ditch: The Financial Crisis Has Brought the U.S. Auto Industry to a Breaking Point, but the Trouble Began Long Ago,” Wall Street Journal, October 25, 2008.
• Lack of foresight in developing fuel-efficient automobiles with enduring, sustainable value.
• Overemphasis on the demand of the moment rather than on smart projections of potential catastrophes affecting fuel prices, such as hurricanes Katrina, Wilma, and Ike.
• Failure to account for the increase in worldwide fuel demand.
• Inability to compete on quality control and manufacturing costs because of labor unions’ high wage demands, which shut down individual initiative and smart thinking; everything was negotiated rather than decided through sound business processes.
• Allowing top management to stagnate into luxury and overspending, such as the personal planes in which executives flew to Washington to negotiate bailouts.
• The credit crisis of 2008, which escalated the demise and compounded an already mismanaged industry that did not respond to consumers’ needs.
Had risk management been a top priority for the automobile companies, perhaps they would have faced a different attitude when approaching U.S. taxpayers for bailouts. ERM needs to be part of the mind-set of every company stakeholder. When one arm of the company pulls for its own gains without considering the total value it delivers to stakeholders, the result, no doubt, will be disastrous. The players need to dance together under the paradigm that every action has the potential to lead to catastrophic results. The risk of each action needs to be clear, and risk mitigation must be assured.
This chapter includes the following:
1. Links
2. Enterprise risk management within firm goals
3. Risk management and the firm’s financial statement—opportunities within the ERM
4. Risk management using the capital markets
Links
While "4: Evolving Risk Management - Fundamental Tools" enumerated all risks, we emphasized the loss side more acutely, since avoiding losses represents the essence of traditional risk management. With the advent of ERM, however, the risks that represent opportunities for gain are clearly just as important. The question is always “How do we evaluate activities in terms of losses and gains within the firm’s main goal of value maximization?” Therefore, we are going to look at maps that examine both sides—gains and losses—as they appear in Figure \(1\). We operate on both the negative and positive sides of the ERM map and look into opportunity risks. We expand our puzzle to incorporate the firm’s goals, and we introduce more sophisticated tools to ensure that you are equipped to work with all elements of risk management so that firms can sustain themselves.
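The two-sided view of the ERM map can be made concrete with a small sketch: each activity carries both a probability-weighted downside (potential loss) and a probability-weighted upside (potential gain), and the sign of the net expected value determines which side of the map it occupies. All activity names and figures below are hypothetical, invented purely for illustration.

```python
# Minimal sketch of a two-sided ERM map: each activity has both a
# downside (loss) and an upside (gain), each probability-weighted.
# All names and numbers are hypothetical, for illustration only.

activities = [
    # (name, prob_of_loss, loss_amount, prob_of_gain, gain_amount)
    ("new product line",     0.30, 2_000_000, 0.50, 5_000_000),
    ("uninsured warehouse",  0.05, 4_000_000, 0.00, 0),
    ("currency hedge",       0.10,   250_000, 0.40,   400_000),
]

def expected_value(p_loss, loss, p_gain, gain):
    """Net expected value: upside minus downside, probability-weighted."""
    return p_gain * gain - p_loss * loss

# Positive net expected value places the activity on the opportunity
# side of the map; nonpositive places it on the pure-loss side.
erm_map = {name: ("opportunity" if expected_value(pl, l, pg, g) > 0
                  else "pure loss exposure")
           for name, pl, l, pg, g in activities}

print(erm_map)
```

A real ERM analysis would of course use richer distributions than single point estimates, but even this toy classification shows why a map that records only losses understates the value of risk-taking activities.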
Let us emphasize that, in light of the financial crisis of 2008–2009, ERM is a needed mind-set for all disciplines; the tools are what ERM-oriented managers can pull out of their tool kits. As an illustration, we use the life insurance industry as a key to understanding the links. We provide a more complete picture of ERM in Figure \(2\).
Part C illustrates the interaction between parts A and B.
Source: Etti G. Baranoff and Thomas W. Sager, “Integrated Risk Management in Life Insurance Companies,” a \$10,000 award-winning paper, International Insurance Society Seminar, Chicago, July 2006, and in a special edition of the Geneva Papers on Risk and Insurance Issues and Practice.